Opt out or buy in? State AI labels put the decision back in your hands

States are moving first on AI labels: Utah and California now require clear disclosure. Counsel should add notices, offer a human option, and keep records to cut risk.

Published on: Oct 29, 2025

State AI Labeling Laws Are Here. What Counsel Needs To Do Now

Clients and consumers want to know when they're dealing with software instead of a person. Utah and California have moved first with laws that force disclosure of AI use, and more states are looking at similar rules. For legal teams, this isn't theoretical anymore: it's policy, product, procurement, and litigation risk all at once.

Supporters say labels give people a real choice. "If that person wants to know if it's human or not, they can ask. And the chatbot has to say," says Utah's commerce chief, Margaret Woolley Busse.

Snapshot of what's required

  • Utah: Businesses in state-regulated occupations must disclose AI use to customers. And if a user asks whether a chatbot is human, the bot must disclose that it's AI.
  • California (2019): The BOT Act requires bots that try to influence a sale or a vote to clearly disclose they're bots. See the statute text: SB 1001 (BOT Act).
  • California (2025 expansion): Police departments must specify when AI tools help write incident reports.
  • Local rules: San Francisco requires city departments to publicly report how and when they use AI.

Why this matters for your organization

  • Deception risk: Failing to disclose can invite consumer protection claims and AG scrutiny.
  • Consent: People want the option to decline AI. Absent clear notice, expect complaints and churn.
  • Discovery and records: AI-assisted outputs (like police reports or customer service logs) become evidence. You'll need provenance and versioning.
  • Procurement and vendor risk: If a vendor's tool is "quietly" using AI, your disclosures may be incomplete.
  • Employment and union issues: If AI touches scheduling, performance, or discipline, disclosures and bargaining duties may be triggered.
  • Bias and fairness: Labeling doesn't cure bias. But it makes audits and remediation plans easier to defend.

Immediate action plan for legal teams

  • Inventory AI use: Map every place AI touches customers, employees, or the public (chat, email features, report drafting, fraud, recommendations).
  • Define "AI" for your policy: Align internal definitions with Utah/California triggers to avoid gaps.
  • Stand up disclosure controls: Add clear "AI in use" notices in chat UIs, IVR, emails, onboarding flows, and receipts.
  • Enable on-demand identity: Ensure any bot can answer "Are you human?" with a truthful, direct response (a minimal sketch follows this list).
  • Offer a human path: Provide a visible handoff to a person and document the SLA.
  • Versioning and logs: Keep records of models, prompts, settings, and human edits tied to each customer or case interaction.
  • Revise contracts: Add AI-use disclosure clauses, audit rights, and notification duties for vendors and resellers.
  • Public sector teams: Post your AI inventory and use-cases; label any AI-assisted report per statute or local policy.
  • Train your staff: Create a one-page script for disclosures and a playbook for opt-out requests.
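
To make the on-demand identity and logging items concrete, here is a minimal TypeScript sketch of a chat handler that answers "Are you human?" truthfully and writes each exchange to an audit log. Every name here (handleMessage, auditLog, generateAiReply) is hypothetical, and the disclosure wording itself should come from counsel, not from code.

```typescript
// Minimal sketch of an "on-demand identity" check with an audit trail.
// All names are illustrative; adapt to your own stack.

interface InteractionRecord {
  timestamp: string;
  modelId: string;       // which model/version produced the reply
  userMessage: string;
  botReply: string;
  disclosureShown: boolean;
}

const auditLog: InteractionRecord[] = [];

// Rough intent check; a production system would use a trained classifier.
function isIdentityQuestion(message: string): boolean {
  return /are you (a )?(human|bot|real person|an? ai)/i.test(message);
}

function handleMessage(userMessage: string, modelId: string): string {
  let reply: string;
  let disclosureShown = false;

  if (isIdentityQuestion(userMessage)) {
    // Truthful, direct answer, as Utah's rule requires on request.
    reply = "I'm an AI assistant, not a human. Say 'agent' anytime to reach a person.";
    disclosureShown = true;
  } else {
    reply = generateAiReply(userMessage, modelId);
  }

  // Versioning and logs: tie model, messages, and disclosure state together
  // so the interaction can be reconstructed later in discovery.
  auditLog.push({
    timestamp: new Date().toISOString(),
    modelId,
    userMessage,
    botReply: reply,
    disclosureShown,
  });
  return reply;
}

// Placeholder for the real model call.
function generateAiReply(message: string, modelId: string): string {
  return `(AI reply from ${modelId})`;
}
```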

What good AI labels look like

  • Plain English: "This assistant uses AI. Ask anytime to speak with a human."
  • Contextual: In email: "Summary generated with AI. View full message."
  • Actionable: "Prefer no AI? Click here to switch to a human agent."
  • Logged: Record that the notice was shown and whether a user opted out (a minimal sketch follows this list).
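
Putting those four properties together, a label can be expressed as data plus a logged event. The sketch below is illustrative only; the types and function names are assumptions, not a standard API.

```typescript
// Sketch of a label that is plain, contextual, actionable, and logged.

type Channel = "chat" | "email" | "ivr";

interface AiLabel {
  text: string;          // plain-English notice
  optOutAction: string;  // the actionable part
}

const labels: Record<Channel, AiLabel> = {
  chat:  { text: "This assistant uses AI.",     optOutAction: "Ask anytime to speak with a human." },
  email: { text: "Summary generated with AI.",  optOutAction: "View full message." },
  ivr:   { text: "This call uses an AI agent.", optOutAction: "Say 'agent' to reach a person." },
};

interface LabelEvent {
  shownAt: string;
  channel: Channel;
  userId: string;
  optedOut: boolean;
}

const labelEvents: LabelEvent[] = [];

// Show the notice and record that it was shown; the record is what you
// will reach for if a regulator or plaintiff asks later.
function showLabel(channel: Channel, userId: string): string {
  labelEvents.push({ shownAt: new Date().toISOString(), channel, userId, optedOut: false });
  const { text, optOutAction } = labels[channel];
  return `${text} ${optOutAction}`;
}

// Call when the user takes the opt-out action so the record reflects it.
function recordOptOut(channel: Channel, userId: string): void {
  const event = [...labelEvents].reverse()
    .find(e => e.userId === userId && e.channel === channel);
  if (event) event.optedOut = true;
}
```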

Designing opt-outs that actually work

  • Preference center: Let users set "Always route to human" for certain channels (sketched after this list).
  • Per-interaction choice: A simple toggle at the start of chat or a "read original" option in AI-summarized emails.
  • Low friction: No account creation or long forms just to reach a person.
  • Respect downstream: Ensure vendors honor your users' choices.
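
As a rough illustration of the preference-center and per-interaction items above, the following sketch stores an "always route to human" flag per channel and lets a session-level toggle override it. The names and the vendor payload shape are invented for this example.

```typescript
// Sketch of a preference center with per-channel "always human" routing
// and a per-interaction override.

type Channel = "chat" | "email" | "phone";

interface RoutingPrefs {
  alwaysHuman: Partial<Record<Channel, boolean>>;
}

const prefs = new Map<string, RoutingPrefs>();

function setAlwaysHuman(userId: string, channel: Channel, value: boolean): void {
  const p = prefs.get(userId) ?? { alwaysHuman: {} };
  p.alwaysHuman[channel] = value;
  prefs.set(userId, p);
}

// A per-interaction choice wins; otherwise fall back to the stored preference.
function routeTo(userId: string, channel: Channel, sessionToggle?: "human" | "ai"): "human" | "ai" {
  if (sessionToggle) return sessionToggle;
  return prefs.get(userId)?.alwaysHuman[channel] ? "human" : "ai";
}

// Respect downstream: pass the resolved choice to vendors so their tools
// honor it too.
function vendorPayload(userId: string, channel: Channel) {
  return { userId, channel, routing: routeTo(userId, channel) };
}
```

A call like routeTo("u123", "chat", "human") honors the in-session toggle even when no stored preference exists, which keeps the per-interaction path low friction.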

Voices shaping the debate

Transparency advocates argue that AI "thrives in the shadows" and labeling is the first step toward accountability. The Electronic Frontier Foundation has long pushed for sunlight on government use of AI and automated tools; background here: EFF on AI.

On the other side, some industry voices warn that mandatory labels could chill adoption. As Daniel Castro of the Information Technology and Innovation Foundation (ITIF) notes, small businesses worry customers will bail if they see an AI tag, even for helpful features.

There's also a push from the federal level to limit a patchwork of state and city rules. White House AI leadership has criticized a "state regulatory frenzy," signaling possible preemption efforts ahead.

Reality check: Consumers do opt out

One homeschool teacher in Washington state recently ditched a longtime email provider after it started auto-summarizing messages with AI. Her concern was simple: who gets to decide what she reads, the sender or the software?

Her story is common. People want choice and consultation. Give them both, and you reduce complaints while keeping AI where it actually helps.

Risk pointers for counsel

  • Marketing claims: Avoid implying a human wrote something when it's AI-assisted. Disclose consistently across channels.
  • Security: Labeling doesn't reveal secrets. You can disclose "AI in use" without exposing models or prompts.
  • Records retention: Treat AI outputs like any other business record subject to retention and litigation hold.
  • Accessibility: Ensure disclosures are readable by screen readers and available in key languages.
  • Testing: A/B test labels for comprehension, not just clicks. Retain screenshots and test plans.

What to watch next

  • More state bills: Expect copy-paste drafts that vary on definitions and penalties.
  • Compelled speech challenges: Companies may test whether certain label mandates go too far.
  • Sector rules: Finance, health, and education may see stricter AI-disclosure standards layered on top of state laws.

Tools and training

If your teams need a simple way to get up to speed on AI concepts and safe use, see these curated pathways: AI courses by job function.

For statutory background on chatbot disclosure, start with California's BOT Act: SB 1001. For civil society perspective on transparency, EFF's overview is a solid primer: EFF: AI and Automation.

Bottom line

Labeling is moving from "nice to have" to mandated. Build disclosures, give people a human path, keep records, and lock this into contracts. Do that, and you'll reduce regulatory risk while keeping useful AI in play.

