Client Alert: New AI Laws Will Prompt Changes to How Companies Do Business
Regulators have moved from guidance to rules. New AI laws across the EU and key U.S. states will change how companies design products, run HR, communicate with customers, and manage vendors. Legal teams will be asked to translate broad principles into day-to-day guardrails, and to do it quickly.
This alert maps the practical shifts you should expect and the moves to make now to reduce risk without slowing the business.
Where the pressure is coming from
- European Union: The EU AI Act uses a risk-based model with outright bans for certain uses, strict duties for "high-risk" systems, transparency requirements for AI-generated content and chatbots, and governance expectations for model providers. Obligations phase in over the next 1-3 years, with meaningful penalties.
- Colorado: A comprehensive law sets duties for "developers" and "deployers," including risk management programs, impact assessments for consequential decisions, notices to consumers, and documented safeguards. Effective dates are staggered, so planning must start now.
- California: Privacy regulators are advancing automated decision-making rules and expanding enforcement under CPRA, with likely obligations around disclosures, opt-outs/appeals, and assessments for certain uses.
- Illinois: Existing laws like BIPA and the AI Video Interview Act already drive high litigation risk for biometric and candidate-screening use cases.
- Texas: Broad privacy duties and targeted rules on synthetic media and consumer protection increase exposure for AI-enabled marketing and content tools.
Sources: EU AI Act (EUR-Lex) | Colorado SB 24-205
What this means for in-house counsel
- Transparency moves from nice-to-have to required. Expect notices when AI substantially influences decisions, candidate/applicant disclosures, and content labeling for synthetic media in some contexts.
- Bias testing and human oversight become standard. Hiring, lending, housing, insurance, and healthcare tools will need documented testing, meaningful human review, and an appeals channel.
- Risk management is no longer optional. Laws call for formal AI risk programs, assessments before deployment, and ongoing monitoring with logs you can produce on demand.
- Vendor contracts must evolve. You will need warranties on training data provenance and use rights, notice of material model changes, security commitments, bias-testing evidence, evaluation rights, and incident reporting obligations.
- Recordkeeping is a core control. Maintain an AI system register, decision logs (where required), assessments, test results, and update/version history.
Operational impacts by function
- Product/Engineering: Classify use cases by risk, embed safeguards (inputs, outputs, human-in-the-loop), and document testing. Add kill switches and fallback procedures.
- HR/Talent: Limit automated screening, apply bias audits, provide candidate notices, and keep a clear appeals process with human review.
- Marketing/Comms: Label AI-generated content when required, manage synthetic media risks, and tighten claim substantiation for AI features.
- Customer Operations: Disclose when chatbots meaningfully influence outcomes and ensure easy escalation to a human.
- Procurement/Vendor Risk: Update due diligence questionnaires to cover training data, evaluation results, bias controls, security, and downstream subprocessor use.
- Security/Data: Map data flows, minimize sensitive inputs, and align with existing privacy and security programs to mitigate prompt injection and other model-specific risks.
The 90-day action plan
- 1) Build an AI inventory. Identify every tool, model, and automated decision in use or in flight. Flag consequential uses (hiring, lending, benefits, safety, eligibility).
- 2) Classify risk and assign owners. Tag systems as prohibited, high-risk, or limited-risk under applicable regimes. Name a business owner and a legal/risk partner for each.
- 3) Stand up lightweight governance. Charter an AI risk committee, define RACI, and set approval gates for new deployments and material model changes.
- 4) Create core artifacts. AI policy, use standards, impact/risk assessment template, testing protocol, human oversight playbook, incident response runbook, and model change log.
- 5) Update notices and rights. Candidate and consumer notices, explanation language for adverse decisions, and an appeals process where required.
- 6) Refresh contracts. Add AI-specific clauses to MSAs, DPAs, and SOWs: training data rights, IP and indemnities, bias testing/evaluation, transparency duties, audit rights, and decommission/exit terms.
- 7) Train your teams. Brief product, HR, marketing, and procurement on the new rules and your internal process. Keep it practical and role-specific.
Documentation regulators will ask for
- AI system register with purpose, data sources, risk class, and owners
- Pre-deployment impact/risk assessments and updates after material changes
- Testing and evaluation results (bias, performance, red-teaming where applicable)
- Human oversight procedures and decision appeal records
- Vendor due diligence and contract terms tied to AI duties
- Post-market monitoring logs and incident reports
Litigation and enforcement exposure
- Privacy and biometrics: Suits under laws like BIPA remain active and expensive, especially for face/voice features and workplace monitoring.
- Consumer protection: Claims for unfair or deceptive practices tied to undisclosed AI use, false claims about model capabilities, or "dark patterns."
- Employment: Disparate impact claims tied to automated screening and assessments without adequate testing or human review.
- Regulatory actions: EU market surveillance, state AGs, and privacy regulators will expect documented compliance, not verbal assurances.
Timeline planning
Expect phased deadlines across jurisdictions over the next 12-36 months, with earlier effects for bans and transparency duties and later milestones for high-risk systems. Work backward from the longest lead items: bias testing, impact assessments, and contract changes take time.
Quick checklist
- Inventory AI uses and classify risk
- Adopt an assessment and testing process
- Enable human review and appeals where needed
- Refresh notices and disclosures
- Update vendor diligence and AI contract terms
- Stand up governance and recordkeeping
- Train teams and monitor regulatory updates