xAI sues Colorado to block AI consumer protection law on constitutional grounds

xAI sued Colorado Thursday to block its AI bias law before it takes effect June 30. The suit argues developing AI is protected speech under the First Amendment and challenges the law on five other constitutional grounds.

Categorized in: AI News, Legal
Published on: Apr 12, 2026


xAI filed a federal lawsuit Thursday seeking to block Colorado's Consumer Protections for Artificial Intelligence (CPAI) law before it takes effect June 30. The company named Colorado Attorney General Philip Weiser as defendant, raising six constitutional claims centered on First Amendment and Equal Protection grounds.

The law requires developers of "high-risk" AI systems to exercise "reasonable care" to protect consumers from algorithmic discrimination. It defines high-risk systems as those that make or substantially factor into consequential decisions.

The First Amendment Argument

xAI argues that developing an AI model constitutes an "expressive act" protected by the First Amendment. The company contends that the CPAI effectively forces it to redesign systems by altering training data and system prompts to conform to the state's views on fairness and race.

The lawsuit cites Supreme Court rulings in 303 Creative v. Elenis and Moody v. NetChoice to argue that Colorado is mandating changes to expressive content, triggering "strict scrutiny" review. Under this standard, the government must show a law serves a compelling state interest using the least restrictive means available. xAI claims Colorado fails to meet this threshold.

Other Constitutional Challenges

xAI also argues the law's key terms, including "historical discrimination," are unconstitutionally vague. The company challenges an Equal Protection carve-out that exempts AI used to "increase diversity or redress historical discrimination," calling it a race-based double standard without compelling justification.

The lawsuit further contends the law violates the Dormant Commerce Clause, the constitutional doctrine that bars states from regulating commerce occurring wholly outside their borders, because it would reach AI interactions taking place entirely outside Colorado.

What the Law Requires

Developers of high-risk systems must make extensive public disclosures about how their systems are evaluated and what steps mitigate bias. They must notify the state attorney general within 90 days of discovering that a system has caused or is reasonably likely to cause "algorithmic discrimination."

Violations are treated as unfair trade practices under Colorado's Consumer Protection Act, carrying civil penalties of $20,000 per violation. The attorney general holds exclusive enforcement authority.

Background

Colorado Attorney General Weiser previously called the law "really problematic," according to media reports. His office declined to comment on the lawsuit.

The case joins other recent AI litigation. In January, Google and Character.AI settled a lawsuit related to a teenager's 2024 suicide. Last September, Anthropic agreed to a $1.5 billion settlement in a class-action lawsuit over pirated materials used to train its models.

Legal professionals should track this case closely, as the outcome will affect how states can regulate AI systems.

