Federal AI Hiring Guidance Is Gone. Four States Are Writing Their Own Rules.
The EEOC removed its artificial intelligence employment guidance from its website in January 2025 and has not restored it. As of March 2026, the pages remain down. In the year since, four states enacted their own AI employment laws, each using a different legal standard to regulate how companies screen job applicants with AI tools.
The removal matters less than it appears. The underlying law did not change. Title VII of the Civil Rights Act and the Uniform Guidelines on Employee Selection Procedures still apply to AI hiring systems the same way they apply to any other screening method. What disappeared were the technical documents explaining how.
What Was Removed
On March 21, 2026, eeoc.gov/ai returned a 404 error. The Wayback Machine confirms the page existed with full content as recently as December 23, 2024. It contained technical assistance documents, enforcement statements, and guidance the EEOC had published since launching its Algorithmic Fairness Initiative in October 2021.
A second page, "Artificial Intelligence and the ADA," still appears on the EEOC website but functions as a shell. The detailed guidance documents it links to return errors. The removal appears to be part of the broader rescission of Biden-era policies that followed Executive Order 14179.
The Law Remains in Force
Title VII prohibits both disparate treatment and disparate impact in employment. It applies to AI tools regardless of whether the EEOC publishes guidance on the subject.
The Uniform Guidelines on Employee Selection Procedures, adopted jointly by the EEOC, Department of Labor, Department of Justice, and Office of Personnel Management in 1978, also remain in force. The guidelines require that any selection procedure producing an adverse impact be validated as job-related and consistent with business necessity. AI-driven screening tools fall within this definition.
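The adverse-impact test in the Uniform Guidelines is commonly operationalized with the four-fifths rule: a selection rate for any group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. A minimal sketch of that arithmetic follows; the group names and counts are hypothetical, and a ratio below 0.8 is a screening signal, not a legal conclusion.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants the screening tool passes through."""
    return selected / applicants

def impact_ratios(groups):
    """Ratio of each group's selection rate to the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is generally treated
    as evidence of adverse impact. `groups` maps a group label to a
    (selected, applicants) pair.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes: (selected, applicants) per group.
groups = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = impact_ratios(groups)
flagged = {g for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_b's ratio is 0.30 / 0.48 = 0.625
print(flagged)  # {'group_b'}
```

The same calculation sits at the core of the bias audits required by NYC Local Law 144 and of the "impact ratio results" an employer should be asking AI vendors to produce.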
The EEOC's Strategic Enforcement Plan for fiscal years 2024-2028 explicitly identifies "technology-related employment discrimination" as an enforcement priority. This plan can only be modified by a quorum vote of commissioners and remains unchanged on the agency's website.
Four States, Four Different Standards
California finalized regulations on automated decision systems, effective October 1, 2025. The state applies a disparate impact framework to any computational process that makes or facilitates employment decisions. Employers cannot use an AI system that discriminates through disparate treatment or disparate impact. The regulations extend liability to AI vendors as employer "agents." Employers must retain records for four years.
Illinois amended the Illinois Human Rights Act through legislation effective January 1, 2026. The standard is disparate impact - using AI "that has the effect of" discriminating on the basis of a protected class is a violation. Illinois provides a private right of action, allowing individuals to file complaints. Penalties can reach $70,000 per violation for repeat offenders.
Texas enacted the Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The standard differs fundamentally. It is unlawful to develop or deploy an AI system "with the intent to unlawfully discriminate," but the law explicitly states that "a disparate impact is not sufficient by itself to demonstrate an intent to discriminate." An employer could have an AI tool producing discriminatory outcomes and face no liability under state law if intent cannot be shown.
Colorado enacted a law effective June 30, 2026, using a "reasonable care" standard for deployers of high-risk AI systems making employment decisions. Deployers must implement risk management programs, complete impact assessments, and provide notices. The law creates an affirmative defense for businesses following the NIST AI Risk Management Framework.
An employer using AI hiring tools across these states must satisfy all four standards simultaneously, with no federal framework to unify the approach.
The Courts Are Moving Ahead
A federal court in California is testing whether an AI software vendor - not the employer that uses the tool, but the company that built it - can be held liable for discriminatory hiring outcomes.
In Mobley v. Workday, Inc., a plaintiff alleged that Workday's AI-powered applicant screening system discriminated against him on the basis of race, age, and disability across more than 100 job applications. In July 2024, the court denied Workday's motion to dismiss. In May 2025, the court granted conditional certification of age discrimination claims for a nationwide collective of applicants age 40 and older.
Workday disclosed that its software rejected 1.1 billion applications. The certified collective could include hundreds of millions of members. The EEOC filed an amicus brief supporting the plaintiff's legal theories in April 2024, before the agency removed its AI guidance.
Enforcement of existing AI hiring laws has been sparse. A December 2025 audit by the New York State Comptroller examined enforcement of NYC Local Law 144, the nation's first law requiring bias audits of AI hiring tools. The city received only two complaints in two years. When auditors reviewed 32 employer websites, they identified at least 17 instances of potential non-compliance that enforcement officials had missed.
What the EEOC Is Doing Now
EEOC Chair Andrea Lucas has publicly outlined enforcement priorities that include combating "unlawful DEI-motivated race and sex discrimination" and defending "the biological and binary reality of sex." AI-related employment discrimination does not appear among her stated priorities.
The Strategic Enforcement Plan listing AI as a priority remains in force. Whether it will translate into enforcement activity under current leadership is an open question the agency has not addressed publicly.
What Employers Should Do
The federal guidance vacuum is real. The legal obligations have not changed.
Document everything about every AI tool used in employment decisions. Record what the tool does, what data it uses, what outputs it produces, and how those outputs influence actual hiring decisions. That record is the foundation of compliance under every state standard and remains the baseline expectation under Title VII.
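One way to make that documentation concrete is a structured record kept per tool. The sketch below is purely illustrative: no statute prescribes this schema, and every field name, tool name, and vendor name is hypothetical.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AIToolRecord:
    """Illustrative audit record for one AI screening tool.

    The schema is an assumption for illustration; adapt fields to the
    state standards that actually apply to you.
    """
    tool_name: str
    vendor: str
    purpose: str            # what the tool does in the hiring process
    inputs: list            # data the tool consumes
    outputs: str            # what the tool produces
    human_review: str       # how outputs influence actual decisions
    last_bias_audit: date
    retention_until: date   # e.g., four years under the California rules

record = AIToolRecord(
    tool_name="ResumeRanker",          # hypothetical tool
    vendor="ExampleVendor Inc.",       # hypothetical vendor
    purpose="Ranks applicants for recruiter review",
    inputs=["resume text", "application answers"],
    outputs="1-100 fit score per applicant",
    human_review="Recruiters see scores but make the final cut",
    last_bias_audit=date(2026, 1, 15),
    retention_until=date(2030, 1, 15),
)
print(json.dumps(asdict(record), default=str, indent=2))
```

Serializing each record to JSON gives you a dated artifact you can hand to a regulator, a court, or an auditor without reconstructing decisions after the fact.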
Conduct vendor due diligence. The Mobley litigation underscores that the relationship between employers and AI vendors carries legal weight. Ask vendors what bias testing they perform, what demographic data they use, what their impact ratio results show, and what they will provide if a regulator or court asks. Get the answers in writing.
Know which state laws apply to you. California, Illinois, Texas, and Colorado each impose distinct obligations. Illinois allows individuals to file complaints. Texas requires proof of intent. California extends liability to vendors. Colorado creates an affirmative defense for following NIST. If you operate across state lines, you need a compliance program that addresses each state's standards specifically.
Use the NIST AI Risk Management Framework as a technical anchor. The framework is voluntary, not law - but it is the closest thing to a neutral federal technical standard that still exists. Colorado's law references it explicitly. Building your AI governance around a recognized framework gives you a defensible methodology regardless of jurisdiction.
Watch Mobley v. Workday. If the court holds that an AI vendor can be liable as an employer's "agent" for discriminatory outcomes, it will reshape how companies evaluate AI hiring tools and vendor relationships. The stakes are not theoretical.
The information gap left by the EEOC's removed guidance will not be filled by the federal government any time soon. What fills it instead is the combination of your documentation, your audit records, your vendor agreements, your state-law compliance materials, and your awareness that the legal obligations never went away - even when the explainer did.