Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, was a failed[1] 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist".[2] Specifically, the bill would have applied to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations.[3] SB 1047 would have applied to all AI companies doing business in California; the location of the company would not matter.[4] The bill would have created protections for whistleblowers[5] and required developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also have established CalCompute, a University of California public cloud computing cluster for startups, researchers and community groups.
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | |
|---|---|
| Legislature | California State Legislature |
| Full name | Safe and Secure Innovation for Frontier Artificial Intelligence Models Act |
| Introduced | February 7, 2024 |
| Assembly voted | August 28, 2024 (48–16) |
| Senate voted | August 29, 2024 (30–9) |
| Sponsor(s) | Scott Wiener |
| Governor | Gavin Newsom |
| Bill | SB 1047 |
| Website | Bill Text |
| Status | Not passed (vetoed by Governor on September 29, 2024) |
Background
The rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, led some researchers and members of the public to become concerned about existential risks associated with increasingly powerful AI systems.[6][7] For example, hundreds of tech executives and AI researchers signed a statement in May 2023 that called for the mitigation of extinction risk from AI to be treated as a "global priority" alongside "pandemics and nuclear war".[8] However, the plausibility of this threat is still widely debated.[9]
Strong regulation of AI has been criticized for purportedly causing regulatory capture by large AI companies like OpenAI, a phenomenon in which regulation advances the interests of larger companies at the expense of smaller competitors and the public in general.[7] Other advocates of AI regulation aim to prevent bias and privacy violations, rather than existential risks.[7] For example, some experts who view existential concerns as overblown and unrealistic regard them as a distraction from near-term harms of AI like discriminatory automated decision making.[10]
In the face of existential concerns, technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit.[11][12]
In 2023, not long before the bill was proposed, Governor Newsom of California and President Biden issued executive orders on artificial intelligence.[13][14][15] State Senator Wiener said SB 1047 draws heavily on the Biden executive order, and is motivated by the absence of unified federal legislation on AI safety.[16] Historically, California has passed regulation on several tech issues itself, including consumer privacy and net neutrality, in the absence of action by Congress.[17][18]
History
Proposal and voting
The bill was originally drafted by Dan Hendrycks, co-founder of the Center for AI Safety, who has previously argued that evolutionary pressures on AI could lead to "a pathway towards being supplanted as the Earth's dominant species."[19][20] The center issued a statement in May 2023 co-signed by hundreds of AI researchers and business leaders stating that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[21]
State Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023.[22][23][24] On February 7, 2024, Wiener introduced SB 1047.[25][26]
On May 21, SB 1047 passed the Senate 32–1.[27][28] Wiener significantly amended the bill on August 15, 2024, in response to industry advice.[29] The amendments added clarifications and removed both the proposed "Frontier Model Division" and the penalty of perjury.[30][31]
On August 28, the bill passed the State Assembly 48–16. Due to the amendments, the bill was then voted on again by the Senate, passing 30–9.[32][33]
Veto by governor
On September 29, Governor Gavin Newsom vetoed the bill.[34] The deadline for California lawmakers to override Newsom's veto (November 30, 2024) has since passed.[1]
Newsom cited concerns that the bill's regulatory framework targeted only large AI models on the basis of their computational scale, without taking into account whether the models are deployed in high-risk environments.[35][36] He emphasized that this approach could create a false sense of security, overlooking smaller models that might present equally significant risks.[35][37] He acknowledged the need for AI safety protocols[35][38] but stressed the importance of adaptability in regulation as AI technology continues to evolve rapidly.[35][39]
Governor Newsom also committed to working with technology experts, federal partners, and research institutions, including the Carnegie Endowment for International Peace, led by former California Supreme Court Justice Mariano-Florentino Cuéllar, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by Fei-Fei Li. He announced plans to collaborate with these entities to advance responsible AI development, aiming to protect the public while fostering innovation.[35][40]
Provisions
SB 1047 would have covered AI models with training compute over 10²⁶ integer or floating-point operations and a training cost of over $100 million.[3][41] If a covered model were fine-tuned at a cost of more than $10 million, the resulting model would also have been covered.[31]
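As a rough illustration of how these thresholds would have combined, the coverage test can be sketched as a simple predicate. This is a minimal sketch in Python; the constant names, function names, and example figures are illustrative assumptions, not language from the bill itself:

```python
# Minimal sketch of SB 1047's coverage thresholds as summarized above.
# The numeric thresholds mirror the bill summary; all identifiers and
# example figures are hypothetical, for illustration only.

TRAINING_OPS_THRESHOLD = 1e26     # integer or floating-point operations
TRAINING_COST_THRESHOLD = 100e6   # US dollars
FINETUNE_COST_THRESHOLD = 10e6    # US dollars

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """A model is covered only if it exceeds both the compute and cost thresholds."""
    return (training_ops > TRAINING_OPS_THRESHOLD
            and training_cost_usd > TRAINING_COST_THRESHOLD)

def is_covered_derivative(base_is_covered: bool, finetune_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered above the $10 million threshold."""
    return base_is_covered and finetune_cost_usd > FINETUNE_COST_THRESHOLD

# Example: a hypothetical frontier model and an expensive fine-tune of it.
base = is_covered_model(training_ops=2e26, training_cost_usd=150e6)
print(base)                               # True: exceeds both thresholds
print(is_covered_derivative(base, 12e6))  # True: fine-tune cost exceeds $10M
```

Note that both the compute and cost conditions must hold for a base model to be covered, which is why critics and proponents alike focused on whether the dollar and FLOP thresholds tracked actual risk.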
Developers of covered models and their derivatives would have been required to submit a certification, subject to auditing, before training. The certification would have attested to the mitigation of "reasonable" risk of "critical harms" from the covered model and its derivatives, including post-training modifications. Required safeguards included the ability to shut down the model,[5] which was variously described as a "kill switch"[42] and a "circuit breaker".[43] Whistleblower provisions would have protected employees who report safety problems and incidents.[5]
The bill would have defined critical harms with respect to four categories:[2][44]
- Creation or use of a chemical, biological, radiological, or nuclear weapon[45]
- Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
- Autonomous crimes causing mass casualties or at least $500 million of damage
- Other harms of comparable severity
Additionally, SB 1047 would have created a public cloud computing cluster called CalCompute, associated with the University of California, to support startups, researchers, and community groups that lack large-scale computing resources.[30]
Compliance and supervision
SB 1047 would have required developers, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit of compliance with the bill's requirements.[30] The Government Operations Agency would have reviewed the results of safety tests and incidents and issued guidance, standards, and best practices.[30] The bill would also have created a Board of Frontier Models, composed of nine members, to supervise the Government Operations Agency's application of the bill.[30]
Reception
Subjects of debate
Proponents of the bill described its provisions as simple and narrowly focused, with Sen. Scott Wiener describing it as a "light-touch, basic safety bill".[46] This was disputed by critics, who described the bill's language as vague and criticized it as consolidating power in the largest AI companies at the expense of smaller ones.[46] Proponents, in turn, argued that the bill only applied to models trained using more than 10²⁶ FLOPS at a cost of over $100 million, or fine-tuned at a cost of more than $10 million, and that the threshold could be increased if needed.[47]
The penalty of perjury was also a subject of debate, and was eventually removed through an amendment. The scope of the "kill switch" requirement was also reduced, following concerns from open-source developers. The use of the term "reasonable assurance" in the bill was also controversial, and it was eventually amended to "reasonable care". Critics then argued that "reasonable care" imposed an excessive burden by requiring confidence that models could not be used to cause catastrophic harm; proponents claimed that the standard did not require certainty and that it already applied to AI developers under existing law.[47]
Support and opposition
Individual supporters of the bill included Turing Award recipients Yoshua Bengio[48] and Geoffrey Hinton,[49] Elon Musk,[50] Bill de Blasio,[51] Kevin Esvelt,[52] Dan Hendrycks,[53] Vitalik Buterin,[54] OpenAI whistleblowers Daniel Kokotajlo[45] and William Saunders,[55] Lawrence Lessig,[56] Sneha Revanur,[57] Stuart Russell,[56] Jan Leike,[58] actors Mark Ruffalo, Sean Astin, and Rosie Perez,[59] Scott Aaronson,[60] and Max Tegmark.[61] Over 120 Hollywood celebrities, including Mark Hamill, Jane Fonda, and J. J. Abrams, also signed a statement in support of the bill.[62] Max Tegmark likened the bill's focus on holding companies responsible for the harms caused by their models to the FDA requiring clinical trials before a company can release a drug to the market.[61]
Organizations sponsoring the bill included the Center for AI Safety, Economic Security California and Encode Justice.[63] The labor union SAG-AFTRA and two women's groups, the National Organization for Women and Fund Her, sent support letters to Governor Newsom.[64] The Los Angeles Times editorial board also wrote in support of the bill.[65]
Individual opponents of the bill included Andrew Ng, Fei-Fei Li,[66] Russell Wald,[67] Ion Stoica, Jeremy Howard, Turing Award recipient Yann LeCun, and U.S. Congressmembers Nancy Pelosi, Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa.[7][68][69] Andrew Ng called for more targeted regulatory approaches, such as the targeting of deepfake pornography, the watermarking of generated materials, and investment in red teaming and other security measures.[70]
Researchers at the University of California and Caltech also wrote open letters in opposition.[68]
Industry
The bill was opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress,[a] the Computer & Communications Industry Association[b] and TechNet.[c][3] Companies including Meta[74] and OpenAI[75] were opposed to or raised concerns about the bill, while Google,[74] Microsoft and Anthropic[61] proposed substantial amendments.[4] However, Anthropic announced its support for an amended version of the bill, while noting that some aspects still seemed concerning or ambiguous to it.[76] Several startup-founder and venture-capital organizations opposed the bill, including Y Combinator,[77][78] Andreessen Horowitz,[79][80][81] Context Fund[82][83] and Alliance for the Future.[84]
After the bill was amended, Anthropic CEO Dario Amodei wrote that "the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."[85] xAI CEO Elon Musk also supported the bill.[86] On September 9, 2024, at least 113 current and former employees of AI companies OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter to Governor Newsom in support of SB 1047.[87][88]
Open source developers
Critics expressed concern that the bill would impose liability on open-source developers who use or improve existing freely available models. Yann LeCun, Meta's chief AI scientist, suggested the bill would kill open-source AI models.[70] There were concerns in the open-source community that, due to the threat of legal liability, companies like Meta might choose not to make models (for example, Llama) freely available.[89][90] The AI Alliance, among other open-source organizations, wrote in opposition to the bill.[68] In contrast, Creative Commons co-founder Lawrence Lessig wrote that SB 1047 would make open-source AI models safer and more popular with developers, since both harm and liability for that harm would be less likely.[43]
Public opinion polls
The Artificial Intelligence Policy Institute, a pro-regulation AI think tank,[91][92] ran three polls of California respondents on whether they supported or opposed SB 1047.[93][94][95][96][97][98] The third poll asked the question "Some policy makers are proposing a law in California, Senate Bill 1047, which would require that companies that develop advanced AI conduct safety tests and create liability for AI model developers if their models cause catastrophic harm and they did not take appropriate precautions."[99] The options were "Support", "Oppose", and "Not Sure".[93][94] Their poll results were 53.8–64.2% support in July,[93][94] 60.1–69.9% support in early August,[95][96] and 65.8–74.2% support in late August.[97][98]
The California Chamber of Commerce, by contrast, conducted its own poll, which found that 28% of respondents supported the bill, 46% opposed it, and 26% were neutral. The framing of the question, however, has been described as "badly biased".[92] The summary of the bill in its question was "Lawmakers in Sacramento have proposed a new state law—SB 1047—that would create a new California state regulatory agency to determine how AI models can be developed. This new law would require small startup companies to potentially pay tens of millions of dollars in fines if they don't implement orders from state bureaucrats. Some say burdensome regulations like SB 1047 would potentially lead companies to move out of state or out of the country, taking investment and jobs away from California."[100]
A YouGov poll commissioned by the Economic Security Project, which co-sponsored the bill, found that 78% of registered voters across the United States supported SB 1047, and 80% thought that Governor Newsom should sign the bill.[101] Their question was "The California legislature passed a bill recently to regulate artificial intelligence, or AI, and since so many AI companies are based there, it could have national impacts. The bill would require California companies developing the next generation of most powerful AI systems to test for safety risks before releasing them. If testing shows that the AI system could be used to cause catastrophic harm to society, such as disrupting the financial system, shutting down the power grid, or creating biological weapons, the company must add reasonable safeguards to prevent these risks. If the company fails to test or adopt reasonable safeguards, they could be held accountable by the Attorney General of California."[101]
A David Binder Research poll commissioned by the Center for AI Safety, a group focused on mitigating societal-scale risk and a sponsor of the bill, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations.[102][103][104][105] Their question was "The proposal would require California companies developing the next generation of most powerful AI systems to test for safety risks before releasing them. If testing shows that the AI system could be used to cause catastrophic harm to society, such as disrupting the financial system, shutting down the power grid or creating biological weapons, the company must add reasonable safeguards to prevent these risks. If the company fails to test or adopt reasonable safeguards, they could be held accountable by the Attorney General of California."[102]