Anthropic is making its catastrophic-risk management practices public, detailing exactly how it plans to comply with California’s SB 53. The move effectively turns years of voluntary safety commitments into a mandatory regulatory floor, and the company is now using the state law as a blueprint for federal action.
Anthropic, one of the leading developers of frontier AI models, has published its comprehensive plan for meeting the requirements of California’s new safety and transparency law, the Transparency in Frontier Artificial Intelligence Act (SB 53).
The law, which takes effect on January 1, 2026, establishes the nation’s first mandatory safety and transparency standards specifically targeting catastrophic risks posed by the most powerful AI systems. Anthropic’s new document, the Frontier Compliance Framework (FCF), details how the company assesses and mitigates threats ranging from cyber offense and bioweapons to the existential risks of AI sabotage and loss of control.
While the company has long operated under its voluntary Responsible Scaling Policy (RSP), a detailed internal guide for managing extreme risks, the FCF marks a significant shift: it formalizes those practices into a public, legally binding compliance document.
Anthropic endorsed SB 53, arguing that mandatory transparency is necessary to ensure that safety commitments aren’t quietly abandoned as competition heats up or models become more capable. The FCF outlines a tiered system for evaluating model capabilities against specific risk categories and explains the company’s approach to protecting proprietary model weights and responding to safety incidents.
For the industry, this is a crucial moment. What was once considered a “best practice” among the largest labs, such as publishing detailed System Cards summarizing testing and risk assessments, is now a legal requirement in the world’s fourth-largest economy.
SB 53 compliance as a federal blueprint
The immediate goal for Anthropic is SB 53 compliance, but the long-term strategy is clearly national. The company is leveraging its state-level compliance framework to push for a consistent federal standard, arguing that a patchwork of state regulations creates instability for developers.
Anthropic’s proposed federal framework mirrors the core tenets of SB 53, emphasizing public visibility without locking developers into specific technical approaches that might quickly become outdated.
Anthropic’s core demands for a national standard are threefold: requiring covered developers to publish a secure development framework (like the FCF) detailing how they mitigate chemical, biological, and nuclear risks as well as risks from misalignment; mandating the public release of System Cards upon deployment; and enshrining robust whistleblower protections for employees who raise concerns about compliance violations.
Crucially, Anthropic argues that these requirements should apply only to the largest frontier developers building the most capable models. This limitation is designed to prevent unnecessary regulatory burdens from stifling the startup ecosystem and smaller developers whose models pose a lower risk of catastrophic harm.
The company’s stance is clear: SB 53 sets the necessary floor for transparency, ensuring that safety commitments are enforceable. But for safety to be truly effective and consistent, Congress needs to adopt a national framework that enshrines these practices, ensuring that developers across the country operate under the same high standards. As AI systems grow more powerful, the public deserves visibility into the safeguards in place, and Anthropic is betting that mandatory transparency is the only way to guarantee it.