OpenAI launches safety and security committee

OpenAI’s board has established a safety and security committee to evaluate its operations as it trains its next AI model.

In a statement released on Tuesday, OpenAI announced the creation of a new safety committee, which will be responsible for evaluating and strengthening the organisation’s procedures and security measures. The committee’s first objective is a thorough, 90-day assessment of OpenAI’s current systems and processes.

This analysis will help identify areas needing improvement and guide the creation of further safeguards to ensure the ethical and responsible advancement of AI technologies.

By prioritising safety and taking preventive steps against possible risks, OpenAI is demonstrating its commitment to responsible innovation and to limiting its technology’s potential negative consequences. The safety committee’s conclusions and recommendations will be crucial in directing OpenAI’s future work.

OpenAI Addresses Safety Concerns

This comes amidst global safety concerns as AI models become increasingly powerful at generating text and images.

The committee will be led by OpenAI board members Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. OpenAI says the committee will advise the full board on critical safety and security decisions relating to the company’s projects and operations. The company’s recent rapid advances in AI have raised concerns about how it handles the technology’s potential risks.

The committee’s analysis and recommendations are timely because the company has begun training a new model that could be more capable than GPT-4 and GPT-4o.

In addition, after 90 days, the safety and security committee will share its recommendations with the full Board.

Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.

According to OpenAI, the committee also includes the company’s technical and policy experts: Matt Knight, Head of Security; John Schulman, Head of Alignment Science; Aleksander Madry, Head of Preparedness; and Chief Scientist Jakub Pachocki.

The company announced that, to support this work, it will hire and consult additional safety, security, and technical experts, including former cybersecurity officials Rob Joyce, who serves as a security advisor to OpenAI, and John Carlin.

Background Information on OpenAI

The new safety committee was created shortly after OpenAI dissolved a team tasked with ensuring the safety of potential future ultra-capable artificial intelligence systems. That team was disbanded following the departure of its two leaders, Ilya Sutskever, chief scientist and co-founder of OpenAI, and Jan Leike, who had established it under their joint direction less than a year earlier.

Known as the superalignment team, the group concentrated on long-term risks from superhuman AI. Following his resignation, Leike noted that his team had been “having trouble” obtaining computing resources inside OpenAI.
