OpenAI has announced the formation of a new “Safety and Security Committee” to evaluate the company’s processes and safeguards as it begins training its most advanced model yet on the path to Artificial General Intelligence (AGI).
The four-member committee, led by board chair Bret Taylor and including CEO Sam Altman and directors Adam D’Angelo and Nicole Seligman, will spend the next 90 days rigorously stress-testing OpenAI’s current safety protocols and guardrails. It will be assisted by senior OpenAI technical leads, including Chief Scientist Jakub Pachocki and the Heads of Security, Safety Systems, Preparedness, and Alignment Science. The committee will also consult external experts, including former cybersecurity officials Rob Joyce and John Carlin.
After the 90-day review period, the committee will present its recommendations to OpenAI’s full board before the company shares adopted measures publicly.
This safety-focused pivot comes as OpenAI reveals it has “recently begun training its next frontier model,” which it anticipates will elevate its capabilities to “the next level” on the path toward the long-sought prize of AGI: a system with human-level intelligence, able to reason, learn, and create across multiple domains.
While OpenAI says it’s “proud to build and release models that are industry-leading on both capabilities and safety,” many AI ethics experts insist current safeguards are vastly inadequate for the looming risks of advanced AI systems like AGI.
Critics argue that a short internal review cannot suffice and that OpenAI must rebuild public trust that it genuinely prioritizes safety. The creation of a highly advanced AGI system could prove pivotal in determining the long-term trajectory of intelligent systems, leaving zero margin for error in aligning such a system with human ethics and values.
For the next few months, all eyes will be on OpenAI’s 90-day evaluation and the safety recommendations and measures that emerge from it. The company’s decisions over this period could prove among the most consequential in modern history, shaping whether AGI becomes a blessing for humankind or a catastrophe.