Anthropic Publishes Compliance Framework for California's AI Transparency Law

Anthropic has released its Frontier Compliance Framework to meet California's new AI transparency law (SB 53), which mandates frontier AI developers disclose how they assess and manage catastrophic risks. The company advocates for federal legislation to establish consistent nationwide standards for AI safety and transparency.

Anthropic · Dec 19, 2025

California's Transparency in Frontier AI Act (SB 53) takes effect on January 1, establishing the nation's first safety and transparency requirements for frontier AI developers with respect to catastrophic risks.

Although Anthropic would prefer federal regulation, the company supported SB 53 because it believes frontier AI developers should be transparent about how they assess and manage risk. The law strikes a balance: it mandates robust safety practices, incident reporting, and whistleblower protections, while preserving implementation flexibility and exempting smaller companies from undue regulatory burden.

Anthropic's Frontier Compliance Framework Details

A central requirement of the law is that frontier AI developers publish frameworks detailing how they assess and manage catastrophic risk. Anthropic's Frontier Compliance Framework (FCF) is now publicly available.

The FCF outlines Anthropic's approach to evaluating and mitigating risks related to cyber offense, chemical, biological, radiological, and nuclear threats, alongside AI sabotage and control loss concerns for frontier models. The document presents Anthropic's tiered evaluation system for model capabilities across risk categories and describes mitigation strategies. Model weight protection and safety incident response protocols are also included.

The FCF builds upon practices Anthropic has implemented over several years. Since 2023, Anthropic's Responsible Scaling Policy (RSP) has guided the company's extreme risk management approach for advanced AI systems, influencing development and deployment decisions. Anthropic also publishes comprehensive system cards with new model releases, detailing capabilities, safety evaluations, and risk assessments. While other laboratories have voluntarily implemented comparable practices, the new California law makes such transparency mandatory for developers of the most powerful AI systems.

The FCF will function as Anthropic's compliance framework for SB 53 and additional regulatory requirements going forward. The RSP will continue as Anthropic's voluntary safety policy, representing what the company considers best practices as AI advances, even when exceeding or differing from existing regulatory requirements.

The Case for Federal Standards

SB 53's entry into force is a significant milestone. By formalizing transparency practices that responsible laboratories already follow voluntarily, the law prevents those commitments from being quietly discarded as models grow more capable or competition intensifies. A federal AI transparency framework is now essential to establish nationwide consistency.

Anthropic proposed a framework for federal AI transparency legislation earlier this year, one that prioritizes public visibility into safety practices without mandating specific technical approaches that could quickly become obsolete. The framework's fundamental principles include:

  • Mandatory public secure development framework: Covered developers must publish frameworks explaining their serious risk assessment and mitigation approaches, encompassing chemical, biological, radiological, and nuclear threats, plus risks from misaligned model autonomy.

  • Deployment system card publication: Developers should publicly disclose documentation detailing testing and evaluation procedures, results, and mitigations at model deployment and after substantial modifications.

  • Whistleblower safeguards: Legal violations should explicitly include laboratories providing false compliance information about their frameworks or penalizing employees who report violations.

  • Adaptive transparency standards: Effective AI transparency frameworks should set baseline standards that enhance security and public safety while accommodating the rapidly changing landscape of AI development. Requirements must remain minimal and adaptable so they can evolve alongside emerging best practices.

  • Restricted scope to major model developers: Requirements should target only established frontier developers creating the most capable models, avoiding unnecessary burdens on startups and smaller developers whose models pose minimal catastrophic harm risks.

As AI systems become increasingly powerful, public visibility into development processes and protective measures becomes essential. Anthropic anticipates collaborating with Congress and the administration to establish a national transparency framework that maintains safety while supporting America's AI leadership position.