As artificial intelligence rapidly evolves from narrow, task-specific systems to generative models and potentially toward Artificial General Intelligence (AGI), concerns about its safety and security are growing. These worries have led to numerous calls for stricter AI laws, regulations, and policies. In the United States, California has taken a bold step with the introduction of Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
On May 21, the California Senate passed SB 1047, authored by state Sen. Scott Wiener. The bill aims to regulate the development and use of advanced AI models, ensuring safety and security while promoting responsible innovation. It is slated for a vote in the California Assembly this August.
The bill requires developers of powerful AI models to implement safety measures, certify compliance, and report incidents to a new government body, the Frontier Model Division. SB 1047 also regulates computing clusters used for AI development, provides whistleblower protections, and establishes a second oversight body, the Board of Frontier Models. It creates a public AI research platform called CalCompute and sets up a funding mechanism for enforcement. The Attorney General or Labor Commissioner can take legal action against those who violate its provisions.
Despite its well-intentioned goals, however, the bill has sparked intense debate within the AI community, drawing criticism from AI companies, investors, founders, and open-source communities who question both its purpose and its likely effects.
Dr. Fei-Fei Li, often referred to as the “Godmother of AI,” warns that the legislation could have significant unintended consequences not just for California but for the entire country. Along with many other critics of the bill, she argues that it could unfairly punish developers, especially smaller ones and entrepreneurs, by holding them liable for any misuse of their AI models. This, she believes, could stifle innovation by forcing developers to act defensively and limit their creative output.
Dr. Li also objects to the bill’s requirement that certain AI models include a “kill switch,” a provision she and other critics argue could cripple open-source development, which has been crucial to technological advances across many sectors. She further criticizes the bill for failing to address more immediate harms of AI, such as bias and deepfakes.
Sen. Scott Wiener, in a conversation with Vox, countered that the proposed legislation simply requires developers to conduct the safety testing they already claim to do or plan to do. If that testing reveals significant catastrophic risks, developers must implement mitigations to reduce them. The senator emphasizes that effective AI policy must encourage innovation while setting appropriate restrictions where the risks demand them.
The senator emphasizes that the bill’s liability provisions are similar to existing laws in other industries, where negligence must be proven. He also highlights an amendment that exempts developers from responsibility for shutting down models once they’re no longer in their possession, addressing concerns about open-source development.
Anjney Midha, a general partner at Andreessen Horowitz, argues that AI regulations should focus on misuse and malicious users, not on the underlying models or infrastructure. He sees this as the fundamental flaw in the proposed bill, which attempts to regulate the models themselves. Midha suggests that current legislative efforts are overly concerned with theoretical AI safety issues when the real focus should be on AI security.
The debate surrounding SB 1047 highlights the challenge of crafting AI regulations that encourage innovation while addressing potential risks. It demonstrates the importance of collaboration between legislators, public bodies, and tech companies in creating effective and sustainable rules. As AI continues to advance, it is crucial to develop laws and regulations that address the root causes of AI misuse and insecurity, an effort that may ultimately require global coordination to protect society while promoting technological progress. The outcome of California’s AI bill could serve as an example for future AI regulation worldwide.