Under Big Tech pressure, Newsom vetoes California child AI protections; fight moves to the ballot

Newsom vetoed AB 1064, signaling California will favor narrower AI rules over bans for minors amid tech lobbying. Legal teams should expect disclosures, age checks, and audits.

Published on: Nov 09, 2025

California's AI Safety Standoff: What Gov. Newsom's Veto Means for Legal Teams

Tech companies poured millions into Sacramento, warned of relocations, and got results. Gov. Gavin Newsom vetoed AB 1064, a bill aimed at curbing harmful conduct by companion chatbots used by minors, citing concerns that the measure would sweep too broadly and block youth access to AI.

The move signals a hard pivot: California wants growth in AI while keeping new restrictions narrow. For in-house counsel, the takeaways are clear: expect narrower, disclosure- and process-oriented rules to advance, while broader conduct bans face steep political headwinds unless voters step in.

What AB 1064 Would Have Done

AB 1064 would have barred companion chatbot operators from offering these systems to minors unless they were not "foreseeably capable" of harmful conduct, including encouraging self-harm. The governor agreed with the goal but called the guardrail too blunt, arguing it could cut off minors from useful tools and education.

His message: "We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether."

Why the Veto Happened: Pressure and Policy

Industry groups went on offense. Ads warned the bill would slow innovation and hurt students. Lobbyists emphasized that companies could take jobs and capital to other states if California overreached.

Robert Boykin of TechNet summed up the industry position: California should "strike a better balance between protecting consumers and enabling responsible technological growth." Child safety groups called the veto a direct result of tech pressure, and promised to bring the issue to voters via ballot initiative.

The Lobbying Math

From January to September, the California Chamber of Commerce spent $11.48 million lobbying on multiple bills. Meta spent $4.13 million, including $3.1 million paid to the California Chamber. Google spent $2.39 million. Amazon, Uber, and DoorDash each crossed $1 million. TechNet's spend was around $800,000.

These numbers were designed to send a message: regulate aggressively and risk an exit. Lawmakers heard it.

AG Bonta, OpenAI, and Charity Oversight

Attorney General Rob Bonta signaled no opposition to OpenAI's restructuring, which gives its nonprofit parent a stake in the for-profit public benefit corporation and clears a path to a potential listing. Bonta cited a commitment from OpenAI to stay in California and emphasized safety priorities.

The AG's office oversees charitable assets and trusts used for public benefit. This review posture matters for any hybrid nonprofit/for-profit AI structure seeking to defend charitable status and governance choices. For reference: California DOJ Charitable Trusts.

What Passed, What Didn't

Signed:

  • AB 56: Platforms must display labels warning minors about social media's mental health harms.
  • SB 53: More transparency about AI safety risks and stronger whistleblower protections.
  • SB 243: Requires chatbot operators to implement procedures to prevent suicide or self-harm content, though advocacy groups say late changes weakened it.

Vetoed:

  • AB 1064: Companion chatbot guardrails for minors, rejected as too broad.
  • SB 7 ("No Robo Bosses Act"): Would have required notice before deploying automated decision systems in employment decisions. The governor called it overly broad, a disappointment to workplace fairness advocates who see AI misuse risks rising.

Ballot Initiative Threat-and Opportunity

Expect the fight to move outside the Capitol. Child safety advocates filed a statewide ballot initiative to restore guardrails similar to AB 1064. Bipartisan interest is high amid lawsuits alleging chatbots contributed to teen suicides.

If the initiative qualifies and passes, compliance timelines and enforcement authority could shift quickly. Counsel should stress-test policies now rather than scramble later.

Legal Risk Profile You Should Prepare For

  • Product design duty of care: The "foreseeably capable" standard from AB 1064 isn't law yet, but plaintiffs will cite it as a benchmark in negligence and unfair practices claims.
  • First Amendment and vagueness challenges: Any future restrictions on model outputs around self-harm and sensitive content will draw scrutiny on precision, overbreadth, and speaker-based rules.
  • Youth access vs. safety: Age assurance and parental controls will be a practical middle ground. Expect pressure for verifiable procedures rather than blanket bans.
  • Charitable and PBC scrutiny: Hybrid structures integrating nonprofit missions with for-profit entities will face ongoing AG oversight on governance, mission lock, and use of charitable assets.
  • Preemption and Section 230 questions: Claims tied to AI-generated content and recommendation logic will keep testing the boundaries of platform immunity and state police powers.

Compliance Moves to Make Now

  • Ship a youth-safety program: Age assurance, parental controls, default-safe modes for minors, crisis-language filters, and escalation paths for self-harm content, all documented and testable.
  • Clarify "companion" vs. "general purpose": Label use-cases, tune guardrails by context, and lock policies for chatbots marketed for emotional support.
  • Red-team for self-harm prompts: Maintain evidence of evaluations, variant testing, and patch cadence. Tie mitigations to documented risk thresholds.
  • Governance and audit trails: Keep model cards, change logs, and policy diffs. Whistleblower protections now matter under SB 53.
  • Employment AI readiness: Even with SB 7 vetoed, stand up a notice-and-appeal process for hiring and promotion tools. Bias testing and vendor contracts should reflect it.
  • Ballot initiative scenario planning: Map controls to likely statutory language, budget for implementation, and pre-draft public disclosures.
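Several of the moves above, red-teaming for self-harm prompts, crisis-language filtering, and auditable evidence of evaluations, can be combined into one lightweight harness. The sketch below is a minimal illustration, not a production safety system: the keyword screen, function names, and stubbed model are all hypothetical stand-ins, and a real program would use a trained crisis classifier, a versioned prompt corpus, and a ticketing system for escalation.

```python
import datetime
import hashlib
import json

# Hypothetical keyword screen standing in for a trained crisis-language classifier.
CRISIS_TERMS = {"self-harm", "suicide", "kill myself", "hurt myself"}

def crisis_screen(text: str) -> bool:
    """Flag text containing crisis language (toy keyword match for illustration)."""
    lowered = text.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def run_redteam(prompts, model_version, respond):
    """Run red-team prompts through a chatbot callable and return auditable records.

    Each record ties a timestamped result to a model version and a hashed prompt,
    the kind of evidence trail the compliance checklist calls for.
    """
    records = []
    for prompt in prompts:
        reply = respond(prompt)
        flagged = crisis_screen(reply)
        records.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
            "crisis_flagged": flagged,
            # In a real system, flagged replies would route to human review
            # and surface crisis resources to the user.
            "escalated": flagged,
        })
    return records

# Usage with a stubbed model that returns a safe refusal:
stub = lambda p: "I can't help with that. Please reach out to a crisis line for support."
log = run_redteam(["variant prompt about self-harm"], "chatbot-v1.2", stub)
print(json.dumps(log, indent=2))
```

Hashing prompts rather than storing them verbatim keeps the audit log shareable with auditors without re-circulating harmful test content; the model version field ties each evaluation run to a specific release, supporting the patch-cadence evidence described above.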

What to Watch Next

  • Ballot initiative language and signature pace.
  • New session drafts narrowing scope to procedure, transparency, and age assurance rather than categorical bans.
  • Litigation challenging weakened or vetoed provisions through unfair competition or wrongful death theories.
  • Further AG guidance on nonprofit/for-profit AI structures and charitable assets.


The bottom line: California is blessing disclosure-and-process frameworks while punting on broad conduct bans, unless voters impose them. Legal teams that build youth-safety controls, employment AI notices, and clean audit trails now will be ready for either path.

