State AI laws take effect in the new year: what HR and Legal need to know
AI is now baked into hiring, pay, and performance decisions. That brings efficiency - and legal exposure. With new state rules kicking in and federal policy sending mixed signals, employers are caught between two moving targets.
Here's the short version: expect more disclosure, documentation, and governance. Build a system once, then apply it everywhere you operate.
Federal momentum vs. state rules
Recent White House actions aim to position the U.S. as an AI leader, including executive orders signed in July 2025 and an AI Action Plan. In Congress, a bipartisan bill would require employers to report AI-related layoffs.
States and cities are setting their own terms. That's where things get complicated for HR and Legal.
- New York City: Local Law 144, governing automated employment decision tools in hiring, in effect since 2023
- California: FEHA regulations on automated-decision systems in employment effective Oct. 1, 2025; separate state legislation targets model safety and whistleblower protections
- Texas: Responsible Artificial Intelligence Governance Act (TRAIGA) effective Jan. 1, 2026
- Illinois: Human Rights Act amendments governing AI in hiring and other employment decisions, effective Jan. 1, 2026
- Colorado: Colorado AI Act (algorithmic discrimination duties that cover hiring), effective June 2026
What varies by state (and why it matters)
California has two updates in play. The FEHA regulations govern how employers use automated-decision systems in hiring and other employment decisions. Separate state legislation is aimed at the AI models themselves, adding safety expectations for developers and whistleblower protection for employees who raise concerns.
Texas' TRAIGA is a different signal. It largely exempts AI used in employment and commercial contexts, requires that AI not be intended to cause physical harm or aid crime, and says disparate impact alone doesn't prove discriminatory intent - a notable shift from long-standing federal and state standards.
Illinois and Colorado add to the mix with new hiring-focused rules. NYC's law continues to set disclosure and process expectations that many employers emulate nationally for consistency.
The smart move: build to the strictest standard you face
Given the split across jurisdictions, aim your program at the most demanding requirements you face, then standardize. That keeps you from reworking policy every time you cross a state line.
- Disclosure: Tell candidates and employees when AI or automated tools are used in decisions that affect them.
- Risk assessment: Document the purpose, data, and risks for each tool and use case. Refresh on a schedule (a minimal tracking sketch follows this list).
- Opt-out and appeal paths: Offer human review and a clear way to challenge outcomes where required or prudent.
- Record retention: Keep logs, versions, prompts, training materials, and decision rationales per policy.
- Auditing: Test tools before rollout and on a cadence. Validate accuracy, fairness, explainability, and data security.
- Vendor management: Demand transparency, testing summaries, incident history, update notices, and indemnities.
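To make the documentation pieces concrete, here is a minimal sketch of a per-tool risk assessment record with a built-in review cadence. It is illustrative only: the RiskAssessment class, its field names, and the 180-day refresh window are assumptions for this example, set by internal policy rather than by any statute.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch: field names and the 180-day cadence are
# assumptions set by internal policy, not requirements of any statute.
@dataclass
class RiskAssessment:
    tool_name: str           # e.g., "resume-screener-v3"
    purpose: str             # the decision the tool supports
    data_sources: list[str]  # inputs the tool consumes
    known_risks: list[str]   # documented risks: bias, drift, privacy
    last_reviewed: date
    refresh_days: int = 180  # review cadence per internal policy

    def review_due(self, today: date | None = None) -> bool:
        """True once the assessment has aged past its refresh window."""
        today = today or date.today()
        return today - self.last_reviewed > timedelta(days=self.refresh_days)

# Hypothetical example record for one hiring tool.
assessment = RiskAssessment(
    tool_name="resume-screener-v3",
    purpose="rank applicants for recruiter review",
    data_sources=["resumes", "application forms"],
    known_risks=["proxy bias in education fields"],
    last_reviewed=date(2025, 9, 1),
)
if assessment.review_due():
    print(f"Schedule re-assessment for {assessment.tool_name}")
```

The same record can feed your retention logs, so an audit can show what was assessed, when, and against which risks.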
Set up internal AI governance that actually works
- Inventory: Map all AI and automated tools touching hiring, pay, promotions, and performance - including "shadow IT."
- Triage by risk: Classify use cases (high, medium, low) and start controls with high-risk decisions such as hiring, termination, and pay (see the triage sketch after this list).
- Policy and playbooks: Write a plain-language AI use policy, selection checklist, testing protocol, and incident response process.
- Training: Teach HR, recruiters, managers, and admins how to use tools, spot issues, and escalate.
- Notices and templates: Prepare candidate/employee notices, consent language (if used), and appeal instructions.
- Data rules: Set retention, deletion, and access controls for inputs, outputs, and model interaction logs.
- Budget: Fund compliance, audits, and outside counsel reviews where appropriate.
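As a companion to the inventory and triage steps, here is a hedged sketch of a simple triage rule applied to an inventory. The decision categories, risk tiers, and tool names are hypothetical; map them to your own policy.

```python
from enum import Enum

class Risk(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

# Hypothetical mapping: decisions that directly change someone's job
# or pay are high risk. Adjust the sets to match your own policy.
HIGH_RISK = {"hiring", "termination", "pay", "promotion"}
MEDIUM_RISK = {"scheduling", "performance_review"}

def triage(decision_type: str) -> Risk:
    """Classify a use case by the employment decision it touches."""
    if decision_type in HIGH_RISK:
        return Risk.HIGH
    if decision_type in MEDIUM_RISK:
        return Risk.MEDIUM
    return Risk.LOW

# Example inventory, including a "shadow IT" tool found outside
# official procurement channels.
inventory = [
    {"tool": "resume-screener-v3", "decision": "hiring"},
    {"tool": "shift-planner-ai", "decision": "scheduling"},
    {"tool": "draft-helper-extension", "decision": "drafting"},
]
for item in inventory:
    print(f'{item["tool"]}: {triage(item["decision"]).value} risk')
```

The point of encoding the rule is consistency: everyone who inventories a tool gets the same risk tier for the same decision type, which makes the resulting controls defensible.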
What to do now through mid-2026
- Q1 2026: Finalize inventory, publish your AI policy, and switch on disclosures for any hiring use cases.
- Q2 2026: Complete first-round assessments of high-risk tools. Confirm appeal and human review processes are live.
- By June 2026: Align hiring workflows with Colorado and other applicable state rules. Update vendor contracts to reflect testing, notice, and retention needs.
Pragmatic deployment beats hype
Limit AI to clear, high-ROI use cases. Budget for compliance. Decide where you'll lead - and where it's smarter to be a fast follower once standards stabilize.
And keep your stance flexible. This space keeps changing. What works today might be replaced next quarter.
Helpful resources
If you're upskilling HR and Legal teams on safe, compliant AI use, explore curated learning paths by role at Complete AI Training.
Note: This article provides general information and is not legal advice. Consult counsel for jurisdiction-specific guidance.