Trump's AI EO: Preemption Push Doesn't Shift Employer Liability
The Dec. 11, 2025, executive order on artificial intelligence seeks a unified national policy and less regulatory fragmentation. It directs federal agencies to scrutinize and challenge state AI statutes that clash with federal priorities. That may change governance for AI developers and vendors. It does not change the civil rights laws that determine employer liability.
Key Takeaways
- The EO targets state-level AI regulations and encourages federal preemption efforts. It does not modify Title VII, the ADA, the ADEA, Section 1981, or analogous state civil rights laws.
- Employer exposure still turns on whether an employment practice discriminates, regardless of whether a human or an algorithm is involved.
- Treat AI-influenced decisions like any other selection procedure: job-relatedness, business necessity, and disparity analysis remain central.
What the EO Reaches, and What It Doesn't
The Administration signals an intent to coordinate a national approach to AI. Agencies are directed to identify state AI laws that conflict with federal objectives, the Department of Justice will lead related efforts, and the secretary of commerce will catalogue state requirements viewed as burdensome.
None of this alters the statutes that govern employment decisions. Title VII, the ADA, the ADEA, Section 1981, and state civil rights laws continue to apply because their mandates attach to the decision's effect, not the tool used to make it.
Civil Rights Law Still Anchors AI-Related Employment Risk
The threshold question is unchanged: does the practice produce unlawful discrimination? If an automated system functions as a selection procedure, the Uniform Guidelines on Employee Selection Procedures (UGESP) still guide the analysis: job-relatedness, business necessity, and adverse impact testing. Regulators continue to reference these principles when assessing tools that influence hiring, promotion, and other decisions.
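For illustration, the sketch below applies the four-fifths (80 percent) rule drawn from the Uniform Guidelines; the group labels and applicant counts are hypothetical, and a ratio below 0.80 flags a practice for closer review rather than establishing a violation.

```python
# Minimal sketch of the UGESP four-fifths (80%) rule.
# Group labels and counts are hypothetical; a ratio below 0.80
# is a screening flag, not a finding of discrimination.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who passed the screening step."""
    return selected / applicants

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical outcomes from an AI-assisted resume screen
rates = {
    "group_a": selection_rate(selected=48, applicants=100),  # 0.48
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}

for group, ratio in impact_ratios(rates).items():
    status = "review" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

The ratio is a starting point, not the whole analysis; regulators and courts also weigh statistical and practical significance.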
State antidiscrimination laws remain fully operative. Even if state AI-specific rules are narrowed or preempted, those independent civil rights obligations persist.
- Uniform Guidelines on Employee Selection Procedures (UGESP)
- EEOC: Applying Title VII to Automated Decision Tools
Two Tracks: Different Laws, Different Risks
AI-specific statutes (e.g., Colorado's AI Act or draft California rules) regulate how systems are built, deployed, audited, and disclosed. These are the focus of the EO's preemption message. By contrast, civil rights laws govern the legality of employment outcomes and remain untouched by the EO.
For employers, both tracks matter. Governance obligations reduce operational and disclosure risk; civil rights compliance manages litigation risk.
Courts Are Applying Familiar Principles to AI Tools
Litigation over automated hiring and screening tools is moving forward under established theories: disparate treatment, disparate impact, and vendor-agency theories. Courts have allowed several cases to proceed without any new statute. The EO does not slow this trend.
Why Reviewing AI-Assisted Decisions Still Matters
Courts and regulators evaluate actual effects. If an AI-influenced step yields disparities, expect questions about validation, business necessity, and alternatives. Proactive analysis helps frame those answers and supports continued use where justified.
Where appropriate, conduct reviews under privilege, retain documentation, and refresh analyses as tools, data, or job requirements change.
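To illustrate what a proactive disparity analysis can include beyond the ratio test, the sketch below runs a basic two-proportion z-test on hypothetical counts; courts have treated differences of roughly two to three standard deviations as notable, but real analyses belong with qualified experts and, where privilege matters, counsel.

```python
# Hedged sketch: two-proportion z-test on hypothetical pass/fail counts.
# Illustrative arithmetic only, not a litigation-ready analysis.
from math import sqrt

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)  # combined pass rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 48 of 100 pass in one group, 30 of 100 in another
z = two_proportion_z(48, 100, 30, 100)
print(f"z = {z:.2f}")  # ~2.61; beyond ~2 SDs commonly draws scrutiny
```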
Practical Guidance for Employers
- Map the influence points. Identify where automated tools affect sourcing, screening, ranking, interviews, promotions, or terminations.
- Apply traditional discrimination frameworks. Run adverse impact analyses, check for disparate treatment risks, and scrutinize vendor-proposed settings and features.
- Document job-relatedness. Keep clear records of criteria, validation evidence, and the business rationale linking inputs to essential job functions.
- Govern vendors like extensions of your team. Allocate responsibilities for testing, data quality, updates, and change control; require access to audit logs and model documentation where feasible.
- Monitor preemption efforts, but don't rely on them. Even if AI governance rules shift, civil rights exposure remains.
- Build adaptable review cycles. Set periodic testing, trigger re-reviews after model updates, and align HR, legal, compliance, and DEI stakeholders.
- Preserve evidence. Retain versions of models, configurations, prompts, training data summaries, and decision records consistent with recordkeeping obligations (a minimal record schema is sketched after this list).
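As a closing illustration of the evidence and review-cycle items above, here is a hypothetical sketch of a decision-record schema with a simple re-review trigger; the field names are assumptions for illustration, not a regulatory standard, and actual retention and PII handling should track counsel's guidance.

```python
# Hypothetical decision-record schema plus a re-review trigger.
# Field names are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    candidate_id: str           # internal identifier, not raw PII
    requisition_id: str         # the role the decision relates to
    model_name: str             # tool or vendor model in use
    model_version: str          # pin the exact version evaluated
    config_hash: str            # hash of settings/thresholds applied
    criteria: str               # inputs or prompts shaping the outcome
    outcome: str                # e.g., "advance", "reject", "flag"
    human_reviewer: str | None  # who, if anyone, confirmed or overrode
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def needs_re_review(prev: AIDecisionRecord, new: AIDecisionRecord) -> bool:
    """Trigger a fresh disparity analysis when the tool or settings change."""
    return (prev.model_version != new.model_version
            or prev.config_hash != new.config_hash)

r1 = AIDecisionRecord("cand-0042", "req-117", "vendor-screen", "2.3.0",
                      "sha256:ab12", "3+ yrs experience", "advance",
                      "recruiter-17")
r2 = AIDecisionRecord("cand-0043", "req-117", "vendor-screen", "2.4.0",
                      "sha256:cd34", "3+ yrs experience", "reject", None)
print(needs_re_review(r1, r2))  # True: the model version changed
```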
The Bottom Line
The EO may reduce state-federal friction over AI governance, but it leaves employment liability untouched. Title VII and analogous state statutes remain the core standard for AI-assisted decisions. Preemption debates may shift compliance burdens for developers and vendors; they do not change the legal test applied to your employment practices.