
Transparency in Frontier Artificial Intelligence Act

California law From Wikipedia, the free encyclopedia


The Transparency in Frontier Artificial Intelligence Act, also referred to as SB-53, is a 2025 California law that mandates increased transparency from companies building artificial intelligence.[1] SB-53 focuses primarily on assessing and reducing potential catastrophic risks from AI, and is the first bill addressing such risks to be passed into law in the United States.[2]


The bill requires companies to create publicly accessible documents assessing potential "catastrophic risk[s]"[3] from their AI models, and to publish documentation on how each model incorporates national and international safety standards. SB-53 also establishes whistleblower protections and procedures for alerting the government to a "critical safety incident".[4][5]


History

SB-53 was preceded in 2024 by the unsuccessful Safe and Secure Innovation for Frontier Artificial Intelligence Models Act ("SB-1047"), a proposed bill authored by Senator Scott Wiener which was vetoed by Governor Gavin Newsom.[6] Afterwards, Newsom created a "Joint California AI Policy Working Group" to provide recommendations for AI regulation, which guided the drafting of SB-53.[4]

Senator Scott Wiener introduced the bill on January 7, 2025, and after a series of amendments, SB-53 passed the Senate 29-8 on September 13. Governor Gavin Newsom approved the bill on September 25, passing it into law.


Provisions


SB-53 applies primarily to companies making at least $500 million in annual gross revenue. It defines a "frontier model" as any AI model trained using more than 10²⁶ floating-point operations (FLOPs), including compute used for fine-tuning, and including unreleased internal models. Both the financial and computational thresholds must be met before most of the law applies, although the thresholds can be lowered or otherwise updated by the California Department of Technology in an annual review starting in 2027. Most of the bill's provisions focus on "catastrophic risks" from AI, defined as incidents in which a model contributes to more than 50 deaths or serious injuries, or causes more than $1 billion in economic damage through AI-assisted acts such as cyberattacks or the creation of biological weapons.[2]

The bill requires companies to provide publicly accessible safety frameworks for frontier AI models, describing how the company tests for catastrophic risk from its AI and how it implements protections against such risks. This includes addressing the possibility that the AI may attempt to circumvent internal guardrails or oversight mechanisms; any such incident must be reported to the California Office of Emergency Services (OES). Certain safety incidents, such as dangerously deceptive model behavior, physical injury, or death, must be reported to OES within 15 days; if the incident poses imminent physical risk, it must be reported immediately. The company must follow its published framework, and if any changes are made, the framework must be updated within 30 days and justification for the changes must be made public.

Additionally, all companies (even smaller ones) are required to publish basic information about newly released models (such as terms of service, supported languages, and intended use), although only large companies (those making over $500 million in annual revenue) need to publish full safety frameworks.

SB-53 also establishes various whistleblower protections for covered employees. Large companies must maintain anonymous whistleblowing channels that protect employees from retaliation for reporting risks to state or federal authorities, provided the employees have reasonable cause to believe that their employer is substantially risking public health and safety.[2]


References

