DOJ joins xAI lawsuit to block Colorado's AI hiring law ahead of June 30 deadline

The DOJ joined Elon Musk's xAI in federal court this week to block Colorado's AI hiring law before its June 30 deadline. HR teams face real compliance risk: courts could let the law take effect even as litigation continues.

Categorized in: AI News, Legal
Published on: Apr 27, 2026

Colorado's AI Hiring Law Faces Federal Court Challenge as June 30 Deadline Approaches

The Trump administration's Department of Justice moved to block Colorado's first-in-the-nation AI hiring regulation this week, joining a lawsuit filed by Elon Musk's xAI. The law, Senate Bill 24-205, is scheduled to take effect June 30, leaving HR departments and their legal teams facing substantial compliance obligations on uncertain legal footing.

Colorado Governor Jared Polis signed the Anti-Discrimination in AI Act (ADAI) into law in May 2024. It became the first comprehensive state statute requiring both developers and deployers of high-risk AI systems to document their governance practices and report discriminatory outcomes to state authorities.

What the law covers

Under the statute, a "high-risk AI system" includes any software that uses algorithmic processing to influence hiring decisions, performance evaluations, promotion recommendations, or benefits eligibility determinations. There is no minimum employee threshold: even small employers using such tools are subject to the law's requirements.

The law was originally set to take effect in February 2026 but was delayed to June 30 to allow time for legislative amendments. Colorado's legislature is scheduled to adjourn May 13, leaving a narrow window for changes.

The legal challenge

xAI filed its lawsuit April 9 in federal court in Denver, raising six constitutional claims centered on two main arguments. First, the company contends that building an AI model is protected speech under the First Amendment, and that forcing developers to redesign systems to avoid disparate outcomes amounts to government-compelled expression.

Second, xAI argued the law is unconstitutionally vague and burdens interstate commerce by regulating AI systems developed outside Colorado's borders.

The DOJ's April 24 intervention added a new dimension. Federal attorneys argued the law violates the Equal Protection Clause by requiring developers to make decisions based on protected characteristics like race and sex. They also challenged what they characterized as an asymmetrical carveout allowing discriminatory algorithms designed to advance diversity.

This marks the first time the DOJ has moved to block a state AI regulation in court. Colorado's law was the only state AI statute specifically named in President Trump's December 2025 executive order directing federal agencies to challenge state-level AI rules deemed to stifle innovation.

What employers must do before June 30 if the law takes effect

Unless a court blocks the law, HR departments face substantial compliance work. Legal experts advising Colorado employers recommend the following steps:

  • Audit AI tools. Identify any software used in hiring, performance management, promotion decisions, or benefits eligibility that involves algorithmic processing. Determine whether vendors classify tools as high-risk and what disclosures they have provided.
  • Build a documented governance program. Maintain a written risk-management program specifying the personnel, processes, and principles used to identify and mitigate algorithmic discrimination. The statute explicitly references the National Institute of Standards and Technology's AI Risk Management Framework as an acceptable model.
  • Complete written impact assessments. Document each high-risk AI system's purpose, data sources, testing methods, and risks. Annual updates are required.
  • Issue required notices. Publish a public-facing statement on the company website describing its use of high-risk AI. Provide direct notice to any Colorado resident who receives an adverse decision based on an AI-assisted process.
  • Report discriminatory outcomes. Notify the Colorado Attorney General if a deployed AI tool produces discriminatory results. Failure to report could trigger enforcement under the state's Consumer Protection Act.

The law creates no private right of action. Enforcement authority rests exclusively with the Colorado Attorney General's office. Employers who can demonstrate they followed the law's compliance steps benefit from a rebuttable presumption of reasonable care.

The broader context

Colorado is not alone in regulating AI hiring decisions. Illinois and Texas enacted similar laws effective January 2026. California regulatory agencies issued comparable requirements for certain employers. New York City passed an automated employment decision tool law in 2021.

The law attracted criticism before litigation began. LinkedIn co-founder Reid Hoffman called it "not a smart play." The U.S. Chamber of Commerce raised concerns about burden on small businesses and AI adoption. Governor Polis, while signing the bill, warned that its focus on unintentional disparate impact rather than intentional discrimination represented a significant departure from traditional civil rights law.

Legal analysts expect Colorado's outcome to influence AI regulation across multiple states and potentially at the federal level.

What HR leaders should do now

The lawsuit creates uncertainty, but it does not eliminate compliance risk. Courts could allow the law to take effect June 30 while litigation proceeds. An injunction is far from guaranteed.

Employment law specialists advise HR teams to proceed with compliance preparation while monitoring the case. Specific steps:

  • Do not assume the lawsuit will succeed. Legal challenges of this kind often take months or years to resolve. The law may take effect regardless of pending litigation.
  • Engage AI vendors now. Developers of high-risk AI systems have their own disclosure and reporting obligations. Request documentation confirming what risk assessments have been conducted and what data the system uses.
  • Document everything. A well-documented compliance effort will be the primary defense if the law takes effect and the company faces an enforcement inquiry.
  • Monitor Colorado's legislature. A Colorado AI Policy Working Group released a reform proposal in March that would roll back some of the law's most burdensome requirements, including mandatory annual reviews and attorney general reporting. A legislative fix remains possible before the May 13 adjournment, though the window is narrow.

For HR leaders and their legal teams, the core question Colorado's law raises will not disappear regardless of the federal court's ruling: whether AI systems that shape employment decisions produce fair, accountable results. The lawsuit may delay a legal reckoning. It will not prevent the broader one.

Case name: xAI v. Weiser, 1:26-cv-01515, U.S. District Court, District of Colorado.

For additional guidance on AI governance in employment decisions, see the AI Learning Path for Human Resources, which covers implementation of AI governance programs and management of high-risk systems in hiring and employment contexts.

