AG Nessel Pushes Back on Potential Federal Ban of State AI Laws
Michigan Attorney General Dana Nessel, alongside a bipartisan group of 35 other attorneys general, urged Congress to reject any move to block state AI laws. Reporting indicates a federal ban could be inserted into a military funding bill. A similar effort failed over the summer after states objected.
The core argument: with no comprehensive federal AI framework in place, states need the freedom to protect residents from harmful AI use. Nessel emphasized that limiting state authority would weaken consumer protection, public safety, and election integrity.
Why this matters for legal teams
AGs say AI delivers value in health care and public safety, but its misuse is already creating risk at scale. Recent cases highlight AI's role in deceptive "grandparent" scams, inappropriate interactions with minors, and content that can encourage self-harm.
Michigan law already restricts the use of AI in political campaigns and creates both a civil cause of action and a criminal offense for distributing pornographic deepfakes. Other states have acted on AI-generated voter misinformation, spam robocalls, deceptive marketing, data privacy, and algorithmic pricing.
The legal stakes: preemption, enforcement, and liability
A federal rider that preempts state AI laws would reshape risk across consumer protection, privacy, elections, and advertising. Depending on scope and any savings clauses, it could limit AG enforcement, narrow private causes of action, and shift reliance to federal agencies.
For in-house counsel and compliance leaders, the difference between a federal floor (minimum standards) and a federal ceiling (preemptive prohibition) is material. It affects multistate programs, contract drafting, litigation posture, and how you structure disclosures and controls.
If a state-law ban passes: potential consequences
- Reduced ability for AGs to act quickly against novel AI abuses; more dependence on slower federal rulemaking or case-by-case actions.
- Fewer state-level claims in consumer suits; defendants may remove cases or seek dismissals based on preemption.
- Election-related AI rules could be curtailed right before high-stakes cycles, raising disinformation exposure.
- Greater uniformity for companies, but at the cost of fewer safety backstops for residents.
What counsel should do now
- Map AI use across products, marketing, customer service, and operations; document model inputs, outputs, and human oversight (a minimal inventory sketch follows this list).
- Strengthen claims substantiation for AI-enabled features; keep records that support performance, accuracy, and safety representations.
- Update disclosures for synthetic media, automated decisioning, and content authenticity; implement deepfake and bot labeling where required.
- Review election-adjacent workflows (political ads, synthetic voices, image/video tools) for state restrictions that are already in force.
- Tighten robocall/robotext compliance, consent capture, and opt-out flows; vet vendors that use AI for outreach or personalization.
- Push vendors for transparency on model provenance, training data, safety testing, and downstream use; add audit and indemnity clauses.
- Run tabletop exercises for AI incidents: harmful outputs, misinformation, or child-safety issues; pre-draft escalation paths.
- Track the federal bill text and any preemption/savings clauses; prepare comment letters and enforcement-readiness plans for either outcome.
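To make the inventory and disclosure items above concrete, here is a minimal sketch of how a team might track AI use in code. The record fields, the 180-day review window, and the example system are illustrative assumptions, not a prescribed schema or legal standard.

```python
"""Minimal sketch of an AI-use inventory; all fields and values are illustrative assumptions."""

from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class AIUseRecord:
    """One AI system or feature in use across the business."""
    system: str                       # internal name of the tool or feature
    business_function: str            # e.g., marketing, customer service
    vendor_or_model: str              # provenance of the underlying model
    data_inputs: list[str]            # categories of data fed to the model
    human_oversight: str              # who reviews outputs, and how
    synthetic_media_disclosure: bool  # is AI-generated content labeled?
    applicable_rules: list[str] = field(default_factory=list)  # rules being tracked
    last_reviewed: date = date.today()


def flag_for_review(records: list[AIUseRecord], max_age_days: int = 180) -> list[str]:
    """Return systems missing disclosures or overdue for compliance review."""
    cutoff = date.today() - timedelta(days=max_age_days)
    flags = []
    for r in records:
        if not r.synthetic_media_disclosure:
            flags.append(f"{r.system}: no synthetic-media disclosure recorded")
        if r.last_reviewed < cutoff:
            flags.append(f"{r.system}: review older than {max_age_days} days")
    return flags


if __name__ == "__main__":
    # Hypothetical example entry; real inventories would cover every AI touchpoint.
    inventory = [
        AIUseRecord(
            system="outbound-voice-assistant",
            business_function="customer outreach",
            vendor_or_model="third-party LLM (vendor-hosted)",
            data_inputs=["contact lists", "call scripts"],
            human_oversight="scripts approved by legal before use",
            synthetic_media_disclosure=False,
            applicable_rules=["state robocall rules", "deepfake labeling laws"],
            last_reviewed=date(2024, 1, 15),
        ),
    ]
    for issue in flag_for_review(inventory):
        print(issue)
```

A spreadsheet serves the same purpose; the point is a single, current record of where AI touches customers and which controls, disclosures, and state rules apply to each use.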
For reference, see the NIST AI Risk Management Framework for control selection and governance, and recent FTC guidance on AI advertising and deception.
If your team needs practical upskilling to support AI policy and review workflows, explore curated programs by job role: Complete AI Training - Courses by Job.
Jurisdictions joining Michigan's letter
Attorneys general from the following jurisdictions joined Nessel's letter to Congress:
- American Samoa
- Arizona
- California
- Connecticut
- Delaware
- District of Columbia
- Hawaii
- Idaho
- Illinois
- Indiana
- Kansas
- Louisiana
- Maine
- Maryland
- Massachusetts
- Minnesota
- Mississippi
- Nevada
- New Hampshire
- New Jersey
- New Mexico
- New York
- North Carolina
- Northern Mariana Islands
- Ohio
- Oregon
- Pennsylvania
- Rhode Island
- South Carolina
- Tennessee
- Utah
- Vermont
- Virgin Islands
- Washington
- Wisconsin
Bottom line
States are pressing to keep their authority to police AI harms while Congress weighs federal action. Whether or not a preemption clause survives, treat AI risk as an active regulatory issue and keep your compliance, contracts, and incident playbooks ready.