On Tuesday, OpenAI announced a new safety committee and confirmed that it has begun training its next major AI model. The move responds to internal and external calls for stronger safety measures as the company advances its AI capabilities.
Why It Matters
OpenAI has recently experienced a series of key departures, with employees voicing concerns that the company is not committing enough resources to the long-term safety of its AI projects. The formation of the new safety committee aims to address these issues and bolster confidence in OpenAI’s commitment to secure and ethical AI development.
The New Safety Committee
OpenAI’s newly formed safety and security committee will be led by board chair Bret Taylor and also includes board members Adam D'Angelo and Nicole Seligman, as well as CEO Sam Altman. Additional key members include:
Aleksander Mądry: Head of Preparedness
Lilian Weng: Head of Safety Systems
John Schulman: Co-founder and Head of Alignment Science
Matt Knight: Security Chief
Jakub Pachocki: Chief Scientist
The committee's first task is to evaluate and enhance OpenAI’s safety protocols and safeguards over the next 90 days. The company plans to consult with outside safety and security experts, such as former cybersecurity official Rob Joyce and former top DOJ official John Carlin. The committee will be advisory, making recommendations to the board.
Advancing AI Models
In addition to safety measures, OpenAI confirmed that it has begun training its next large language model. This announcement follows hints from both Microsoft and OpenAI that training was already underway. OpenAI CTO Mira Murati previously stated that a significant update to the underlying model, the successor to GPT-4, would be revealed later this year.
At Microsoft’s Build conference, Microsoft CTO Kevin Scott suggested that the new model would be significantly larger than GPT-4, comparing it to a whale, whereas GPT-4 was likened to an orca.
Internal Dynamics
The formation of the safety committee follows the resignations of key figures, including co-founder Ilya Sutskever and Jan Leike, who were leading the long-term safety initiative known as "superalignment." Leike criticized OpenAI for not adequately supporting his team's work. Policy researcher Gretchen Krueger also announced her departure, echoing similar concerns.
Ensuring Security and Trust
OpenAI is working to reassure stakeholders of its dedication to security and ethical AI practices. Co-founder John Schulman has taken on an expanded role as head of alignment science, overseeing both immediate safety measures and long-term superalignment research. The restructuring is intended to ensure that future AI systems with superhuman capabilities adhere to human values and norms.
OpenAI plans to consolidate its safety and alignment work within its research unit and increase investment in this critical area. The company remains committed to addressing valid criticisms and fulfilling its promises to regulatory bodies and stakeholders.
The Road Ahead
OpenAI’s moves to form a safety committee and begin training its next major AI model signal its commitment to responsible AI development. By addressing internal criticisms and strengthening safety protocols, the company aims to lead the AI industry with innovations that prioritize both advancement and security.
As the company continues to evolve its AI models, the world will watch to see how effectively it balances innovation with ethical considerations and safety standards.