California's AI Playbook Is Becoming the Reference Point - With States Writing Their Own Edits
Federal action on AI remains stalled. States aren't waiting. California has moved first with a cluster of AI laws, and other jurisdictions are picking up the core ideas while adjusting them to their own politics and enforcement preferences.
The result: a patchwork that pulls from California's model but doesn't mirror it. For in-house counsel and law firm teams, the work now is to track where the rules align, where they diverge, and how enforcement will actually hit your clients or business.
What California Passed - And Why It Matters
SB 53, the Transparency in Frontier Artificial Intelligence Act, targets the most advanced systems. It requires safety protocols, red-teaming, incident reporting, and whistleblower protections before high-risk models reach broad deployment. One major developer, Anthropic, publicly supported it after earlier debates over scope.
Two companion moves focus on minors and data. SB 243 adds safeguards for AI chatbots that interact with children. AB 1043 brings stricter age verification to certain platforms. And AB 566 puts the onus on browsers to offer clear opt-out paths for data collection - a signal that infrastructure-layer players are squarely in scope.
Where Other States Are Borrowing - And Breaking Away
New York is advancing bills that echo SB 53's safety documentation and incident reporting requirements, and it's probing how agencies use AI in benefits decisions. Colorado has taken a stricter stance on discrimination risk, passing the Colorado AI Act and delaying implementation to give industry time to adjust.
Even Texas moved. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) sets up an AI Council and limits certain business and government uses, but it consolidates enforcement in the attorney general's office and creates no private right of action. Red-state bills often pull ideas first floated in California while easing private litigation exposure.
Separately, several states are addressing voice, likeness, and elections. Tennessee's ELVIS Act tackles unauthorized AI-generated voice and image use. Wisconsin and Texas require disclosures for AI in political communications, a theme California pushed early through deepfake transparency efforts.
Expect More States To Act
Industry and policy leaders see momentum building. One state policy manager noted that New York is starting to require safety protocols and risk-mitigation disclosures similar to California's approach. Education-focused founders expect more jurisdictions to set standards for child-directed tools and AI used as mental health substitutes, following early limits adopted in Illinois, Nevada, and Utah.
Litigation Pressure Points
California's assertive posture invites challenges. AB 566 forces browser-level opt-outs that could affect users nationwide, raising extraterritoriality and dormant Commerce Clause questions. Other bills pushed up against First Amendment boundaries, which likely contributed to vetoes - including AB 1064, which aimed to shield minors from harmful AI companions but swept so broadly it could have effectively kept them off AI tools altogether.
These fights will determine how far a single state can push practices that spill across borders. Expect filings that test preemption, speech, and due process theories, along with disputes over centralized state enforcement versus private rights of action.
Federal Outlook
Congress is still debating, but a key signal came this summer: the U.S. Senate voted 99-1 to remove a proposed five-year pause on state AI enforcement from the "One Big Beautiful Bill Act." Translation: states are clear to keep moving, and there's no near-term federal umbrella to simplify compliance.
Practical Implications For Legal Teams
- Map your exposure: inventory internal and vendor AI by capability (frontier-level vs. narrower systems), user group (including minors), and deployment context (consumer, enterprise, government). A minimal inventory sketch follows this list.
- Build core artifacts early: safety policies, red-teaming protocols, incident response plans, and model cards or system summaries that can satisfy SB 53-style requests.
- Tighten data governance: implement user-friendly opt-outs, log consent, and be ready for browser-level expectations under AB 566. Confirm how opt-out signals propagate across products; see the signal-handling sketch after this list.
- Child-directed safeguards: verify age gates where required, apply stricter content filters for minors, and document escalation processes for sensitive use cases (education, wellness, counseling).
- Bias and discrimination: if operating in Colorado or selling into it, align with the Colorado AI Act's risk controls and documentation obligations for high-risk uses.
- Elections and synthetic media: prepare disclosures for political content, watermarking or provenance where feasible, and rapid takedown paths for deceptive media claims.
- Voice and likeness: implement consent, takedown, and licensing workflows to reduce ELVIS Act exposure and similar state claims.
- Contract for compliance: push vendors to warrant compliance with SB 53-like obligations, share safety documentation, and notify you of incidents within defined timeframes.
- Tune for enforcement model: states like Texas centralize enforcement with the AG; California allows more private action in areas like child protections. Adjust playbooks accordingly.
- Board oversight and reporting: brief on multi-state risk, likely litigation vectors, and staged compliance roadmaps that cover California, Colorado, New York, and Texas first.
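To make the exposure-mapping item concrete, here is a minimal sketch of how a team might structure that inventory. The class names, fields, and categories are illustrative assumptions chosen for this example, not a statutory taxonomy.

```python
from dataclasses import dataclass, field
from enum import Enum


class Capability(Enum):
    FRONTIER = "frontier"   # where SB 53-style obligations are most likely to attach
    NARROW = "narrow"       # task-specific or lower-risk systems


class DeploymentContext(Enum):
    CONSUMER = "consumer"
    ENTERPRISE = "enterprise"
    GOVERNMENT = "government"


@dataclass
class AISystemRecord:
    """One row in the AI inventory used to map multi-state exposure."""
    name: str
    vendor: str                          # "internal" for home-grown systems
    capability: Capability
    context: DeploymentContext
    serves_minors: bool                  # flags SB 243 / AB 1043-style review
    states_deployed: list[str] = field(default_factory=list)

    def first_tier_exposure(self) -> list[str]:
        """Return the first-tier jurisdictions this record touches."""
        first_tier = {"CA", "CO", "NY", "TX"}
        return sorted(first_tier.intersection(self.states_deployed))


# Example: a vendor chatbot offered to consumers, including minors.
chatbot = AISystemRecord(
    name="support-chatbot",
    vendor="Acme AI",
    capability=Capability.NARROW,
    context=DeploymentContext.CONSUMER,
    serves_minors=True,
    states_deployed=["CA", "TX", "FL"],
)
print(chatbot.first_tier_exposure())  # ['CA', 'TX']
```

Even a lightweight record like this makes it easier to answer the first question a regulator or plaintiff will ask: which systems touch which states and which users.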
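For the data-governance item, one common way a browser-level opt-out reaches a service today is the Global Privacy Control signal, sent as the Sec-GPC request header. The sketch below is an illustration of honoring and propagating that signal, not a statement of what AB 566 technically requires, and the record_opt_out and suppress_data_sale_and_sharing helpers are hypothetical stand-ins for a real consent store and downstream pipelines.

```python
from typing import Mapping


def has_opt_out_signal(headers: Mapping[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    The GPC proposal sends "Sec-GPC: 1" when the user enables the opt-out
    preference. Real frameworks normalize header casing; this plain dict
    lookup assumes the key appears exactly as "Sec-GPC".
    """
    return headers.get("Sec-GPC", "").strip() == "1"


def handle_request(headers: Mapping[str, str], user_id: str) -> None:
    """Sketch of propagating the signal into consent logging and suppression."""
    if has_opt_out_signal(headers):
        # Hypothetical helpers: wire these to your actual consent store and
        # ad/analytics pipelines so the signal reaches every product.
        record_opt_out(user_id=user_id, source="Sec-GPC")
        suppress_data_sale_and_sharing(user_id=user_id)


def record_opt_out(user_id: str, source: str) -> None:
    print(f"logged opt-out for {user_id} via {source}")


def suppress_data_sale_and_sharing(user_id: str) -> None:
    print(f"suppressing sale/sharing for {user_id}")


# Example: a request from a browser with the opt-out setting enabled.
handle_request({"Sec-GPC": "1"}, user_id="u-123")
```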
Key Bills To Track
- California SB 53 (Transparency in Frontier Artificial Intelligence Act) - safety protocols, incident reporting, whistleblower protections (text via California Legislative Information)
- Colorado AI Act (SB24-205) - discrimination risk controls and delayed implementation (text via the Colorado General Assembly)
Bottom Line
California set the early frame. Other states are adopting the parts they like and rewriting the rest. Without federal preemption, this patchwork is your new normal.
If your company builds, buys, or deploys AI, treat California, Colorado, New York, and Texas as first-tier jurisdictions for policy, contracts, and audits. Get your documentation in order before the subpoenas and RFIs arrive.
If your legal team needs cross-functional training to keep pace with these requirements, you can find curated AI courses by role here: Complete AI Training - Courses by Job