Key point: California enacts first-in-the-nation law focused on regulating frontier artificial intelligence models.
On September 29, 2025, California Governor Gavin Newsom signed SB 53 — the Transparency in Frontier Artificial Intelligence Act (TFAIA) — into law. As explained in the Senate floor analysis, the law “requires large artificial intelligence (AI) developers . . . to publish safety frameworks, disclose specified transparency reports, and report critical safety incidents to the Office of Emergency Services (OES).” The law also “creates enhanced whistleblower protections for employees reporting AI safety violations and establishes a consortium to design a framework for ‘CalCompute,’ a public cloud platform to expand safe and equitable AI research.” The law was hailed by both Newsom and its primary sponsor, Senator Scott Wiener, as striking a proper balance between innovation and placing sensible guardrails on frontier AI models.
Below, we provide a brief summary of some of the law’s more notable provisions.
Background
TFAIA is the successor to last year’s SB 1047 (the Safe and Secure Innovation for Frontier AI Models Act). SB 1047 passed the California legislature but was vetoed by Newsom over fears that it would negatively impact California’s AI economy. Prior to the veto, the bill caused significant controversy in Silicon Valley. Alongside his veto, the governor announced a working group to consider further legislative efforts, and that group released its report in March 2025. TFAIA passed both legislative chambers by wide margins — 59 to 7 in the Assembly and 29 to 8 in the Senate — and was quickly signed into law by Newsom.
Scope of Law
TFAIA applies to “large frontier developers.” As a starting point, the law defines frontier developers as persons who have trained, are training, or have initiated the training of a foundation AI model using, or intending to use, a quantity of computing power greater than 10^26 integer or floating-point operations in its overall development and modification process. In turn, a “large frontier developer” is a frontier developer who, together with its affiliates, had annual gross revenue of more than $500 million in the preceding year.
The computing power and annual gross revenue thresholds likely limit the law’s applicability to global-scale tech corporations. This stance can be inferred from TFAIA’s acknowledgment of the “wide range of benefits for Californians and the California economy” that AI development brings and its statement that “foundation models developed by smaller companies may [raise a need for] additional legislation . . . at that time.” TFAIA also states that it aims to regulate “catastrophic risk,” defined as a risk that will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in property damage, arising from a single incident.
Transparency Obligations
AI Framework
The law establishes a set of obligations for large frontier developers, requiring them to adopt and disclose safety protocols designed to reduce the chance of catastrophic risks. Specifically, large frontier developers must publish a detailed “frontier AI framework” on their website, which must include, among other things, risk assessments for catastrophic scenarios, steps taken to mitigate those risks, internal governance structures, cybersecurity measures to protect unreleased model weights, and plans for incident response. Moreover, the framework must explain how the large frontier developer incorporates national and international standards and applies industry best practices. Developers are also expected to use third-party evaluators to test whether their mitigation strategies are effective.
The act requires more than a one-time disclosure of this framework: large frontier developers must update their frameworks annually, and any changes must be made public within 30 days.
The law also prohibits large frontier developers from making false or misleading claims regarding catastrophic risk or their adherence to their published framework. Further, the law permits large frontier developers to make redactions from their public disclosures that are necessary to protect their trade secrets, cybersecurity interests, public safety, or national security. However, developers must keep unredacted versions of these records for at least five years.
Transparency Report
Whenever a new or substantially altered frontier model is deployed, the developer must release a transparency report outlining the model’s uses, limitations, and restrictions, as well as its catastrophic risk assessment.
Reporting Obligations
Beyond the publication requirements, TFAIA mandates regular communication with government authorities. Large frontier developers are required to submit summaries of their catastrophic risk assessments to the OES every three months or according to a designated schedule.
Critical Safety Reporting
Under TFAIA, the OES will establish a mechanism through which a frontier developer must report a critical safety incident within 15 days of discovering it. If a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer must disclose that incident to a government authority within 24 hours. Additionally, the OES will establish a mechanism for a large frontier developer to confidentially submit assessments of potential catastrophic risk.
Recommendations for Updates
TFAIA requires annual reviews to allow the statute to keep pace with technological development. Starting January 1, 2027, the Department of Technology must annually assess technological developments, scientific research, and international standards and recommend updates to the definitions of “frontier model,” “frontier developer,” and “large frontier developer.” Also beginning January 1, 2027, the California attorney general must submit an annual report to the legislature and governor covering anonymized and aggregated information from covered whistleblowers.
Enforcement
The state attorney general can seek a civil penalty of up to $1 million per violation if a large frontier developer (1) fails to publish or transmit a compliant document required to be published or transmitted by the law, (2) makes a false or misleading statement prohibited by the law, (3) fails to report an incident as required by the law, or (4) fails to comply with its own frontier AI framework.
CalCompute
TFAIA establishes a consortium to design a framework for “CalCompute,” a public cloud computing cluster aimed at advancing safe and equitable AI. Subject to an appropriation, CalCompute is to include a fully owned cloud platform along with the expertise necessary to operate and support it. By January 1, 2027, the consortium must submit a comprehensive report to the legislature detailing the framework, costs, governance, and workforce recommendations for CalCompute’s creation and operation.
Whistleblower Protections
TFAIA prohibits frontier developers from preventing employees responsible for assessing, managing, or addressing the risk of critical safety incidents from disclosing information they reasonably believe shows that the frontier developer’s activities pose a substantial danger to public health or safety, or violate the law, and from retaliating against employees who make such disclosures. Frontier developers must also provide clear written notice to all of their employees of these whistleblower rights at least once a year. Large frontier developers face an additional compliance requirement: they must establish a reasonable internal process through which a whistleblower may submit good-faith information on such risks to the large frontier developer’s officers and directors, who must review it at least once per quarter. A whistleblower is further protected by being allowed to bring a civil action under the law against a frontier developer for injunctive relief and, if successful, to recover attorneys’ fees.
*Courtney Le also contributed to this article. She is not licensed to practice law in any jurisdiction; bar admission pending.