
As concerns over the ethics and responsible use of AI technologies grow globally, regulatory bodies and state and federal agencies are exploring ways to address them. Yet regulation poses challenges of its own.

On Sunday, California Governor Gavin Newsom vetoed a bill that would have established the nation’s first safety measures for large artificial intelligence (AI) models.

Senate Bill 1047 aimed to introduce safeguards against AI-related risks such as cyberattacks on critical infrastructure, weapons development, and automated crime. It would have required companies to test their models and publicly disclose their safety protocols, to prevent AI from being exploited for threats such as disabling the electric grid or aiding the creation of chemical weapons, risks that experts warn could become more likely as the technology advances.

These scenarios are not far-fetched: the popular generative AI chatbot ChatGPT was recently manipulated into providing instructions for making explosives.

The bill targeted only the largest systems: those that require massive computing power and cost more than $100 million to develop.

Supporters of the bill, including Elon Musk and the AI research company Anthropic, argued that it would have introduced much-needed transparency and accountability for large-scale AI models. Opponents such as OpenAI, Google, and Meta countered that the bill would “kill California tech” and discourage AI developers from investing in large models or sharing open-source software.

In his veto statement, the governor gave several reasons for his decision not to sign the bill. He said that its focus on large, expensive AI models could create a false sense of security, since smaller models could pose similar risks. In his view, AI regulation should be based on actual risk rather than on model size or cost, because smaller, specialized models could be just as dangerous, if not more so.

He also argued that this narrow focus could stifle innovation without addressing the real threats posed by AI technology, and he reaffirmed his commitment to maintaining California’s reputation as a global leader in AI development, noting that the state is home to 32 of the world’s 50 leading AI companies.

The bill’s author, Senator Scott Wiener, expressed disappointment over the decision, calling it a setback for those advocating for oversight of large corporations that make critical decisions that impact public safety.

He stated, “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public.”

The rejection of this bill illustrates the ongoing struggle to balance AI safety and innovation. While developers should be free to advance AI technology, prioritizing its responsible use and safeguarding public safety must remain paramount.
