## Why California’s New AI Safety Law Succeeded Where SB 1047 Failed
California’s journey to regulate AI has been marked by both ambition and pragmatism. The stark contrast between the fate of SB 1047 and the success of a more recent, targeted AI safety law highlights key lessons in legislative strategy and industry engagement.
**SB 1047’s Overreach:** Senator Scott Wiener’s SB 1047 was designed to be a groundbreaking piece of legislation, imposing stringent pre-deployment safety testing, “kill switch” requirements, and developer liability on frontier AI models. While laudable in its intent to mitigate catastrophic risks, its broad scope and prescriptive mandates drew significant backlash. Critics, including tech industry leaders and some AI experts, argued it was overly ambitious and technically infeasible, would stifle innovation, and could drive AI development out of California. Its “regulate everything at once” approach proved too contentious: although the bill passed the legislature, Governor Newsom vetoed it in September 2024.
**The Path to Practical Success:** In contrast, successful AI safety legislation has generally adopted a more incremental and targeted approach. These laws typically focus on specific, identifiable risks in high-impact areas, such as the use of AI in government, critical infrastructure, or to prevent discriminatory outcomes. Instead of demanding a universal “kill switch,” they often emphasize:
* **Transparency and Disclosure:** Requiring developers to share information about model capabilities and limitations.
* **Risk Assessment:** Mandating assessments for AI systems used in sensitive applications.
* **Accountability:** Establishing clear lines of responsibility for AI deployment and its impacts.
* **Specific Use Cases:** Addressing the risks of AI in a limited, manageable context rather than regulating the entire technology.
This pragmatic strategy produces legislation that is more palatable to industry and easier to implement, while still demonstrating a commitment to responsible innovation. By moving from a broad, pre-emptive “regulate everything” posture to a focused “manage specific risks” approach, California has found a more effective path to embedding AI safety into law.
