California's Controversial AI Bill Will Soon Be Law
A contentious bill to regulate the Artificial Intelligence (AI) industry, SB-1047, has been passed by the California State Assembly's Appropriations Committee.
It is expected to pass the full California legislature by the end of this month before going to the Democratic Governor, Gavin Newsom, for signature into law.
The most controversial part of the debate is the question of legal responsibility: if an AI system causes harm, who takes the blame - the company that built the system, or the person who used it? That question runs through the political debate over SB-1047, and through the larger question of how to regulate the technology.
A version of this debate surfaced recently when X released the second generation of its AI model, Grok, which has an image generation feature similar to OpenAI’s DALL-E. X is known for its lax approach to content moderation, and the latest version of Grok has faced similar criticism over how its model was trained.
The bill’s supporters say it will create controls to prevent rapidly advancing AI models from causing disastrous incidents, such as shutting down critical infrastructure. Their main concern is that the technology is developing faster than its human creators can control.
California’s AI Act is particularly important because SB-1047 would set a precedent for state-level rules across the US governing developers working on generative AI.
The key points of the proposed legislation are:
- Create safety and security protocols for covered AI models.
- Ensure such models could be shut down completely.
- Prevent the distribution of models capable of what the act defines as “critical harm.”
- Retain an auditor to ensure compliance with the act.
These issues are not new. In the 1990s, Internet service providers like Prodigy and CompuServe faced lawsuits over potentially libellous material that their users had posted. Section 230 of the US Communications Decency Act of 1996 responded by protecting freedom of expression online: it shields intermediaries from civil liability for third-party content, specifying that technology companies, in most cases, cannot be held legally liable for what their users post.
Technology companies would love to see a kind of Section 230 for AI, making them immune from liability for what their users do with their AI tools. However, the California bill takes the opposite approach, placing responsibility on the technology companies to assure the government that their products won’t be used to cause harm.
SB-1047 does have some widely accepted provisions, such as adding legal protections for whistleblowers at AI companies, and studying the feasibility of building a public AI cloud that startups and researchers could use. More controversially, it requires makers of large AI models to notify the government when they train a model that exceeds a certain computing threshold and costs more than $100 million.
It allows the California attorney general to seek an injunction against companies that release models the AG considers unsafe. It also requires that large models have a “kill switch” allowing developers to shut them down in the event of danger.
State of California | Platformer | The Verge | TechRepublic | LA Times | Wikipedia
Image: Ideogram
You Might Also Read:
UK vs. US: The Artificial Intelligence Landscapes Compared:
DIRECTORY OF SUPPLIERS - AI Security & Governance: