California Blocks Landmark AI Safety Bill
The Governor of California, Gavin Newsom, has vetoed a landmark Artificial Intelligence (AI) safety bill, claiming it could stifle innovation and prompt AI developers to move out of the state.
The bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), had faced strong opposition from a number of major technology companies.
The proposed legislation would have imposed some of the first regulations on AI in the US.
In his veto message, Newsom argued that the bill would curb innovation and drive AI developers out of California.
Senator Scott Wiener, who authored the bill, said the veto allows companies to continue developing an "extremely powerful technology" without any government oversight.
The bill would have required the most advanced AI models to undergo safety testing.
It would have forced developers to ensure their technology included a so-called "kill switch". This would allow organisations to isolate and effectively switch off an AI system if it became a threat.
It would also have made official oversight compulsory for the development of so-called frontier models, the most powerful AI systems.
The bill "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said.
"Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it," he added.
Newsom also announced plans to protect the public from the risks of AI and asked leading experts to help develop safeguards for the technology.
Newsom has also recently signed 17 bills, including legislation aimed at cracking down on misinformation and so-called deepfakes: images, video, or audio content created using generative AI.
Dr. Kjell Carlsson, Head of AI Strategy at Domino Data Lab, said:
“With Gov. Newsom’s veto of California's SB 1047, enterprise leaders should seize this opportunity to proactively address AI risks and protect their AI initiatives now. Rather than wait for regulation to dictate safety measures, organisations should enact robust AI governance practices across the entire AI lifecycle: establishing controls over access to data, infrastructure and models; rigorously testing and validating models; and ensuring output auditability and reproducibility.
“By embedding these governance practices into AI workflows from the start, companies can protect against misuse, mitigate risks, and show accountability, putting them ahead of potential future regulations (and competitors). Together, these actions prepare enterprises for inevitable compliance, build trust with stakeholders, and foster a culture of responsible AI adoption that drives business impact.
“Enterprises should also actively advocate for one federal regulatory approach that addresses real-world AI threats without stifling innovation. SB 1047’s failure highlights the dangers of fragmented, state-level regulations that often fail to address current risks and create a costly compliance landscape.
“Companies should use this regulatory pause to engage with policymakers and promote a unified, adaptable federal framework that evolves with AI technology. By supporting regulations focusing on actual threats, such as fraud, misinformation, and misuse by bad actors, AI leaders can help shape a regulatory environment that balances innovation with safety.
“In doing so, they can avoid the pitfalls of reactive compliance, instead contributing to building new standards to protect society while allowing the US to maintain its AI leadership,” he said.
Advanced AI Firms
California is home to many of the world's most advanced AI companies, including ChatGPT maker OpenAI.
OpenAI itself warned that the law would threaten AI’s growth, arguing that frontier AI should be regulated at the federal level rather than state by state.
The state's role as a hub for many of the world's largest tech firms means that any bill regulating the sector would have a major national and global impact on the industry.
BBC | Domino | Office of the Governor | Governor Gavin Newsom | Tech Informed
Firstpost | AP News