Artificial Intelligence Legislative Update: What In-House Counsel Needs to Track Now
The "One Big Beautiful Bill," signed into law on July 4, 2025, omitted the proposed 10-year moratorium on state AI regulation. With no comprehensive federal statute in place, states are moving fast and in different directions. That patchwork raises real compliance questions for companies building or deploying AI across their operations.
Below is a concise briefing on major state actions, a recent White House executive order, and practical steps to reduce risk while keeping programs on schedule.
White House Executive Order on AI
On December 11, 2025, the White House issued "Ensuring a National Policy Framework for Artificial Intelligence," aiming for a minimally burdensome national approach. The order directs an AI Litigation Task Force to challenge state AI laws that conflict with that policy. Unless and until federal law preempts state requirements, expect continued state-by-state variance and active enforcement at the state level.
Colorado Artificial Intelligence Act (effective June 30, 2026)
Colorado's statute is currently the most sweeping. It covers "developers" and "deployers" of AI systems operating in Colorado and focuses on preventing "algorithmic discrimination," especially in "consequential decisions" (education, employment, lending, insurance, healthcare, housing, legal services).
- Program requirements: AI risk management policy and program, plus AI impact assessments to identify and mitigate risks of algorithmic discrimination.
- Consumer disclosures: Notice of high-risk AI use; the principal reasons for an adverse consequential decision; an opportunity to correct inaccurate personal data and to appeal for human review.
- Public transparency: Website disclosures on types of high-risk AI, risk controls, and data sources/uses.
- SMB carveout: Certain requirements don't apply to deployers with fewer than 50 FTEs that meet specific conditions (e.g., no training on their own data; reliance on the developer's impact assessment).
- Enforcement: Colorado Attorney General.
Utah Artificial Intelligence Policy Act (sunsets July 2027)
Utah emphasizes transparency for generative AI. It requires clear disclosure when a user is interacting with GenAI rather than a human. Coverage includes "suppliers" in consumer transactions and licensed "regulated occupations" engaged in high-risk GenAI interactions.
- High-risk interactions: Collection of sensitive data (health, biometric, financial) or provision of financial, legal, medical, or mental health advice likely to be relied on for significant decisions.
- Mental health chatbots (effective May 7, 2025): Must be clearly identified as AI; may not share identifiable health information with third parties; advertising delivered through the chatbot must be clearly labeled.
- Enforcement: Utah Attorney General; penalties up to $2,500 per violation.
Texas Responsible Artificial Intelligence Governance Act (TRAIGA) (effective January 1, 2026)
TRAIGA applies to developers and deployers in Texas and prohibits AI systems that:
- Intentionally encourage or incite physical harm or criminal activity.
- Infringe or impair constitutional rights.
- Unlawfully discriminate against protected classes.
- Produce or distribute certain sexually explicit content or child sexual abuse material, including deepfakes involving minors.
Beyond these prohibitions, TRAIGA adds transparency and enforcement provisions:
- Transparency: Clear, conspicuous consumer disclosure of AI interaction (plain language; a hyperlink to a dedicated disclosure page is permitted).
- Enforcement: Texas Attorney General; up to $12,000 per curable violation and $200,000 per non-curable violation.
California ADMT Regulations under the CCPA (effective January 1, 2027)
California's Automated Decision-Making Technology (ADMT) rules cover for-profit businesses subject to CCPA thresholds. ADMT includes any technology that processes personal information and replaces or substantially replaces human decision-making, capturing many AI use cases.
- Pre-use notice for "significant decisions": Financial/lending, housing, education, employment/contracting/compensation, healthcare.
- Notice content: Purpose; how ADMT processes personal information; data categories analyzed; output type; opt-out right; access right; how decisions occur if a consumer opts out.
- Timing: Prominent notice at or before collection; acknowledge consumer requests within 10 business days; substantively respond within 45 calendar days.
- Risk assessments: Required before use and at least every 3 years; maintain for 5 years; address model logic/assumptions, outputs and use, anti-discrimination controls, and workforce training. Submit an attestation to the California Privacy Protection Agency.
AI Governance and Risk Management: What Good Looks Like
NIST's AI Risk Management Framework (AI RMF) offers a common, practical structure across industries: Govern, Map, Measure, Manage. Adopting its concepts helps align with state expectations around risk assessments, transparency, and controls, without locking you into a one-size-fits-all model.
Reference: NIST AI RMF and the California Privacy Protection Agency for ADMT updates.
Action Guide for Legal Teams
1) Map exposure and classify use cases
- Inventory all AI/ADMT systems by function, data processed, decision impact, and geography; a minimal inventory-record sketch follows this list.
- Flag "consequential" or "significant decision" use cases (Colorado/California) and high-risk GenAI interactions (Utah).
- Identify third-party models, APIs, and vendors; capture contractual dependencies and data flows.
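To make the inventory concrete, here is a minimal sketch of what a use-case record and a high-risk flag might look like, assuming a simple in-house Python tool; every field name, domain list, and flagging rule below is illustrative rather than drawn from any statute.

```python
# Illustrative inventory record for step 1; names and domains are assumptions.
from dataclasses import dataclass

# Domains echoing Colorado "consequential" / California "significant" decisions.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "lending", "insurance",
    "healthcare", "housing", "legal_services",
}

@dataclass
class AISystemRecord:
    name: str
    function: str                  # what the system does
    decision_domain: str           # e.g., "lending", "marketing"
    data_categories: list[str]     # personal data processed
    states: list[str]              # where it operates or affects consumers
    vendor: str | None = None      # third-party model or API, if any
    genai_user_facing: bool = False

    def is_high_risk(self) -> bool:
        """Flag use cases likely to trigger consequential-decision duties."""
        return self.decision_domain in CONSEQUENTIAL_DOMAINS

record = AISystemRecord(
    name="resume-screener",
    function="ranks job applicants",
    decision_domain="employment",
    data_categories=["employment_history", "education"],
    states=["CO", "CA", "TX"],
    vendor="third-party model API",
)
print(record.name, "high-risk:", record.is_high_risk())
```

Keeping the inventory in a structured form like this makes the later steps (assessments, notices, evidence trails) queryable rather than buried in spreadsheets.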
2) Build policy, assessments, and documentation
- Adopt an AI policy with roles, thresholds for "high-risk," escalation paths, and sign-off gates.
- Stand up risk/impact assessments that cover model purpose, logic, inputs, outputs, testing, monitoring, and bias controls.
- Calibrate retention schedules and evidence trails (e.g., assessments, testing logs, approvals) to meet audit and attestation needs.
3) Operationalize transparency and consumer rights
- Draft standard pre-use notices, opt-out workflows, and access responses aligned to California timing rules (see the deadline sketch after this list).
- Publish website disclosures for high-risk AI (Colorado) and AI interaction notices (Utah/Texas) in plain language.
- Set up human review and appeal paths for adverse consequential decisions; train staff handling escalations.
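One place code genuinely helps is deadline tracking. Below is a minimal sketch of computing the acknowledgment and response due dates from the California timing rules above (10 business days and 45 calendar days); the helper names are hypothetical and the business-day logic ignores holidays, so treat it as a starting point, not a compliance tool.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days, skipping weekends (holidays ignored)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday
            days -= 1
    return current

def admt_deadlines(received: date) -> dict[str, date]:
    """Hypothetical helper: due dates for a single ADMT consumer request."""
    return {
        "acknowledge_by": add_business_days(received, 10),  # 10 business days
        "respond_by": received + timedelta(days=45),        # 45 calendar days
    }

print(admt_deadlines(date(2027, 1, 4)))
```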
4) Embed bias, security, and data controls
- Bias: Define protected-class testing protocols, fairness thresholds, and re-training triggers (a simple disparate-impact check is sketched after this list).
- Security: Restrict training and prompts containing sensitive data; log and monitor model access and outputs.
- Data governance: Track sources, permissions, and license terms; segregate high-risk data; apply minimization.
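For the bias bullet above, one common starting test is a disparate-impact (selection-rate) ratio. The sketch below computes selection rates by group and compares the minimum-to-maximum ratio against the familiar four-fifths rule of thumb; the 0.8 threshold and the sample counts are illustrative, and your testing protocol may use different metrics entirely.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_decisions, total_decisions)."""
    return {g: fav / total for g, (fav, total) in outcomes.items()}

def disparate_impact_ok(outcomes: dict[str, tuple[int, int]],
                        threshold: float = 0.8) -> bool:
    """True if the lowest group's selection rate is within `threshold`
    of the highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()) >= threshold

# Illustrative numbers only: favorable decisions out of total, by group.
sample = {"group_a": (80, 100), "group_b": (56, 100)}
print(disparate_impact_ok(sample))  # 0.56 / 0.80 = 0.70 -> False: investigate
```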
5) Vendor and model governance
- Contract for disclosures (model purpose, training data provenance, evaluation results), change-management, and audit rights.
- Require timely sharing of impact assessments, incident reports, and material model updates.
- Add indemnities and clear allocation of responsibility for regulated uses and consumer claims.
6) Training and cross-functional readiness
- Train product, data science, HR, claims, underwriting, lending, and customer support on notice/opt-out, appeals, and recordkeeping.
- Run tabletop exercises for adverse decisions, discrimination allegations, or content misuse (e.g., deepfakes).
- Align internal review boards with legal, privacy, and security to approve or halt deployments.
7) Monitor, test, and iterate
- Schedule periodic evaluations for drift, bias, and performance degradation; document fixes (a minimal drift-check sketch follows this list).
- Re-assess risk at material changes in data, model, or use case; update notices and assessments accordingly.
- Track state rulemaking and AG guidance; adjust controls before effective dates.
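For periodic drift evaluations, one widely used signal is the Population Stability Index (PSI), which compares the distribution of model inputs or scores at deployment against the current period. The sketch below is a minimal PSI implementation; the bins, thresholds, and data are illustrative, and real monitoring would run on logged production distributions.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over matching histogram bins (each list sums to ~1.0)."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at deployment
current = [0.10, 0.20, 0.30, 0.40]   # distribution observed this period
score = psi(baseline, current)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {score:.3f}", "-> investigate" if score > 0.25 else "")
```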
Key Timelines and Enforcement Signals
- Texas TRAIGA: Effective January 1, 2026; high penalties for non-curable violations.
- Colorado AI Act: Effective June 30, 2026; AG enforcement; extensive disclosures and assessments.
- California ADMT: Effective January 1, 2027; strict notice, access, opt-out, and assessment duties with CPPA oversight.
- Utah AI Policy Act: Sunsets July 2027; maintain GenAI disclosures and mental health chatbot restrictions until then.
Practical Next Steps This Quarter
- Stand up a cross-functional AI review council and freeze new high-risk launches until assessments and notices are in place.
- Publish or update AI disclosure pages and consumer-facing notices; verify accessibility and clarity.
- Run a vendor paper chase: obtain model documentation, impact assessments, and update commitments.
- Build a central repository for assessments, approvals, and testing evidence to support AG or CPPA inquiries.
Need team upskilling on AI basics and risk controls?
Consider short, practical courses to align legal, product, and data teams on terminology and workflows: Latest AI courses.