
In a landmark move for artificial intelligence governance, leading AI companies OpenAI and Anthropic have reached an agreement with the US AI Safety Institute to conduct pre-release testing of their future AI models. This collaboration marks a significant step towards ensuring the safety and reliability of advanced AI systems at a national level.

The agreement, announced on August 29, 2024, grants the US AI Safety Institute access to major new models from both companies before and after their public release. This unprecedented level of cooperation between private AI developers and a government body aims to evaluate capabilities, identify potential risks, and develop mitigation strategies for emerging AI technologies.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” said Sam Altman, CEO of OpenAI, in a post on X, emphasizing the company’s commitment to responsible AI development.

The US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), was established following President Biden’s executive order on AI in October 2023. This initiative reflects growing concerns about the rapid advancement of AI technology and the need for robust safety measures.

As AI continues to evolve and impact various sectors, this agreement sets a precedent for national-level oversight and could influence similar approaches in other countries. It also addresses calls from AI researchers and developers for increased transparency and accountability in the industry.

The move comes in the wake of an open letter published in June 2024 by current and former OpenAI employees. The letter highlighted potential problems with rapid AI advancement and a lack of oversight, stating, “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.” The signatories also noted that AI companies “currently have only weak obligations to share some of this information with governments, and none with civil society,” and cannot be “relied upon to share it voluntarily.”

This collaboration between AI giants and government regulators directly addresses these concerns and offers a balanced approach to fostering responsible AI development while maintaining technological progress.
