California Governor Gavin Newsom's veto of SB 1047 in September 2024 has reignited the debate over how best to regulate artificial intelligence (AI) in the state and beyond. The bill, which aimed to hold AI developers accountable for severe harm caused by their technologies, faced significant pushback from tech firms and some lawmakers who argued it could stifle innovation [d64b1fc8].
In a recent commentary, Martin Casado, a general partner at Andreessen Horowitz, advocates for an evidence-based approach to AI policy rather than one driven by existential fears. He argues that policy should target the marginal risks AI actually introduces, not impose broad, fear-based rules that could hinder technological progress [ad3d0321]. Drawing on historical examples, Casado notes that spurious regulations can weaken security, and he points out that AI has already proven safe and beneficial across a range of fields [ad3d0321].
Following the veto, Newsom indicated he intends to consult with experts to develop a more nuanced regulatory framework. He described the legislation, which sought to establish developer liability for catastrophic AI-caused events, as 'well-intentioned' but overly stringent, suggesting it could create a 'false sense of security' regarding AI safety [d64b1fc8].
Senator Scott Wiener, the bill's author, expressed disappointment, calling the veto a 'missed opportunity' for accountability in the rapidly evolving AI landscape. The bill's supporters, including notable figures like Elon Musk, believed it was essential for ensuring that AI technologies would not cause severe harm [d64b1fc8].
Despite vetoing SB 1047, Newsom did sign SB 896, which regulates AI use by state agencies, signaling a commitment to addressing AI safety through other legislative means [d64b1fc8]. With hundreds of AI bills pending across U.S. statehouses, the debate over how best to balance innovation with safety continues [ad3d0321].