Using Generative AI: Key Legal Issues (November 2025)
Generative AI (GenAI) is now embedded in research, document drafting, customer support, HR workflows, and product features. With that adoption comes legal exposure across privacy, data security, IP, and commercial contracts.
This article distills the practical issues legal teams are confronting today and the controls that actually reduce risk.
Key takeaways
- Entering trade secrets into public or vendor-hosted GenAI can waive protection and create disclosure risk.
- GenAI vendors introduce security, IP, and regulatory compliance exposure that requires structured diligence.
- Confusing or misleading AI outputs can trigger liability under unfair or deceptive acts and practices (UDAP) laws, especially in ads, customer communications, and high-stakes advice.
- Clear policies, staff training, and sandboxed deployments reduce error, bias, and leakage.
- Contract for indemnities covering output errors, bias, IP claims, and regulatory penalties.
Privacy and Data Security
Core obligations
GenAI thrives on large datasets that often include personal information. Treat both training data and prompt/output data as regulated. Expect duties around consent, lawful basis, minimization, security, transparency, and bias controls. Automations that profile or influence consequential decisions draw heightened scrutiny.
US state AI laws
Colorado's CAIA (effective 2026) adopts a risk-based model aimed at preventing algorithmic discrimination. It imposes transparency and risk management duties on developers and deployers of high-risk systems used for consequential decisions in areas like employment, education, housing, health care, and legal services.
Utah's 2025 amendments (S.B. 226) require clear disclosures when consumers interact with GenAI, with stricter rules for regulated occupations and high-risk interactions (for example, health, financial, or legal advice). A safe harbor exists if disclosures are clear, conspicuous, at the outset, and persistent during the interaction.
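For teams wiring these disclosures into a chat product, the sketch below shows one way to surface a notice at the outset and keep it persistent throughout the interaction. It is illustrative only; the class, method, and message text are hypothetical and do not reflect statutory language or any vendor API.

```python
# Minimal sketch of a chat wrapper that delivers an AI disclosure at the outset
# and keeps it persistent. Illustrative only; names and text are hypothetical.
DISCLOSURE = "Notice: you are interacting with generative AI, not a human."


class DisclosedChatSession:
    def __init__(self) -> None:
        self.transcript: list[str] = []
        # Deliver the disclosure at the outset, before any user input.
        self.transcript.append(DISCLOSURE)

    def respond(self, user_message: str, model_reply: str) -> str:
        self.transcript.append(f"User: {user_message}")
        # Repeat the disclosure with every reply so it remains persistent.
        labeled_reply = f"{model_reply}\n{DISCLOSURE}"
        self.transcript.append(f"AI: {labeled_reply}")
        return labeled_reply


if __name__ == "__main__":
    session = DisclosedChatSession()
    print(session.respond("Can you review my lease?", "Here is a general summary..."))
```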
US state consumer privacy laws
More than 20 states now grant access, deletion, and correction rights. Many address profiling and automated decisions with opt-out or opt-in frameworks. The CCPA requires notice at collection, which can be tricky for dynamic prompts and evolving model uses.
Operationalize rights requests early: design prompts to avoid unnecessary personal data, map data flows, and align privacy notices and terms with actual AI uses, including training or fine-tuning practices.
Transparency and sector rules
Expect enforcement under state UDAP laws and Section 5 of the FTC Act. Sectoral laws (for example, HIPAA, FERPA) still apply if data or use cases fall within those regimes. Local laws like NYC's AEDT Rule (Local Law 144) govern automated employment tools used for hiring or promotion, including audit and notice requirements.
International laws
The GDPR applies to personal data used to train, fine-tune, or operate GenAI, with extraterritorial reach. Key principles include fairness, transparency, purpose limitation, minimization, accuracy, storage limitation, and security. Recital 71 urges safeguards for profiling, error correction, and preventing discriminatory effects.
The EU AI Act (2024/1689) is risk-based and extraterritorial. It imposes duties on providers and deployers of AI systems, with specific obligations for general-purpose AI (GPAI) and stricter rules for GPAI with systemic risk. If your outputs are used in the EU, or your systems are placed on the EU market, assume coverage and plan compliance timelines now.
AI hallucination
Hallucinations are confident but wrong answers with no grounding in training data or source material. In high-stakes contexts (health, employment, education, finance, legal), they create real harm and liability. Use human-in-the-loop reviews, citations or source retrieval where feasible, and deploy models only in defined scopes with evaluation gates and logging.
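As a concrete illustration of these controls, here is a minimal sketch of an output review gate with logging, assuming a Python service that wraps a GenAI call. The function name, approved-use-case list, and log fields are hypothetical, not part of any vendor product.

```python
# Minimal sketch of a human-in-the-loop output gate with logging.
# All names (review_output, APPROVED_USE_CASES) are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-gate")

APPROVED_USE_CASES = {"policy_summary", "internal_faq"}  # defined deployment scope


def review_output(use_case: str, output: str, sources: list[str], reviewer: str | None) -> bool:
    """Return True only if the output may be released to end users."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "has_sources": bool(sources),
        "reviewer": reviewer,
    }
    # 1. Scope check: only release outputs from approved use cases.
    if use_case not in APPROVED_USE_CASES:
        record["decision"] = "blocked_out_of_scope"
        log.info(json.dumps(record))
        return False
    # 2. Grounding check: require at least one cited source where feasible.
    if not sources:
        record["decision"] = "blocked_no_sources"
        log.info(json.dumps(record))
        return False
    # 3. Human-in-the-loop: a named reviewer must sign off before release.
    if reviewer is None:
        record["decision"] = "pending_human_review"
        log.info(json.dumps(record))
        return False
    record["decision"] = "released"
    log.info(json.dumps(record))
    return True
```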
IP
Copyrightability of AI output
US law protects human authorship. Works created solely by GenAI are unprotectable and unregistrable. Courts and the Copyright Office have reaffirmed this, including in the Thaler decision and in follow-on guidance and reports.
Human contributions can be protectable when they show original authorship. Examples include the selection and arrangement of AI-generated elements, or meaningful edits that cross the creativity threshold. The Copyright Office's 2023 guidance and its 2025 report on AI content outline how to disclose and claim human-authored portions.
Infringement risk for users
Users may not know whether outputs are substantially similar to copyrighted training materials. Courts have recognized liability where a party copies from a copy, which could be argued if an output tracks a protected work used in training without permission. Commercial use heightens exposure.
Practical steps: run similarity checks, document prompts and edits, avoid publishing raw outputs in sensitive categories, and secure licenses or use model settings that exclude training on third-party works where available.
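One way to operationalize the documentation step is to keep a structured provenance record of each prompt, raw output, and human edit, appended to an audit log. The sketch below is illustrative; the field names are hypothetical rather than any legal or industry standard.

```python
# Minimal sketch of a provenance record for prompts, outputs, and human edits.
# Schema and field names are illustrative, not a standard.
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(prompt: str, raw_output: str, edited_output: str, editor: str) -> dict:
    """Capture what was asked, what the model produced, and what a human changed."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "raw_output_sha256": hashlib.sha256(raw_output.encode()).hexdigest(),
        "edited_output_sha256": hashlib.sha256(edited_output.encode()).hexdigest(),
        "human_edits_made": raw_output.strip() != edited_output.strip(),
        "editor": editor,
    }


if __name__ == "__main__":
    record = provenance_record(
        prompt="Summarize the attached license terms.",
        raw_output="Model draft...",
        edited_output="Reviewed and revised draft...",
        editor="jdoe",
    )
    print(json.dumps(record, indent=2))  # append to an audit log in practice
```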
Infringement risk for developers and vendors
Developers face lawsuits alleging reproduction of copyrighted works during training and claims that outputs are unauthorized derivative works. Some DMCA claims have been dismissed at pleadings, but core infringement claims are proceeding. The mechanics of copying during training are central, and discovery is ongoing across cases.
Expect pressure for dataset provenance, opt-out mechanisms, and content filtering. Contractual representations about lawful data sourcing and honoring takedown or opt-out signals are becoming standard asks from enterprise buyers.
Commercial Transactions
Vendor diligence
- Provenance: How were datasets collected? Were any corpora licensed? Does the vendor honor robots.txt and opt-out signals?
- Privacy program: Lawful basis, notice at collection, minimization, retention, and cross-border transfer mechanisms.
- Security: Model and application security, data segregation, encryption, access controls, red-teaming, and incident response.
- Model behavior: Bias testing, hallucination rates, evaluation methodology, and intended-use limitations.
- Governance: Versioning, change logs, deprecation policy, and auditability of decisions.
- Compliance: Mapping to CAIA, Utah disclosures, GDPR, and EU AI Act duties.
Contract controls that matter
- Scope and use: No training on customer data without express consent. Clear boundaries for inputs, outputs, and derived insights.
- Privacy and security: Align to state privacy laws, GDPR where applicable, and recognized security standards. Include data maps, retention, and deletion commitments.
- Bias and safety: Risk assessments, testing obligations, and remedial duties if harmful bias or safety issues are found.
- Transparency: Disclosures for AI interactions where law requires; logs and explanations for consequential decisions where feasible.
- IP warranties: Non-infringement, lawful data sourcing, and no hidden third-party rights. Output license rights that match your business model.
- Indemnities: Cover IP claims, output errors and hallucinations, bias or discrimination claims, UDAP violations, and regulatory penalties.
- Liability: Appropriate caps with carve-outs (for example, IP infringement, data breach, willful misconduct). Require cyber/tech E&O insurance.
- SLAs and support: Response times, model availability, degradation handling, rollback options, and safe updates.
- Audit and oversight: Audit rights, third-party assessments, and evidence of controls. Cooperation for regulatory inquiries.
- Incident handling: Timely notice, containment steps, forensic support, and customer communications cooperation.
- International: DPA and SCCs/transfer mechanisms, localization where needed, and alignment to EU AI Act obligations if outputs are used in the EU.
- Termination: Data export, model artifacts where agreed, deletion certifications, and transition assistance.
Marketing and UDAP
Claims about GenAI accuracy, "bias-free" performance, or outcomes can mislead. Disclosures help, but they do not cure deceptive claims. Substantiate performance statements and ensure disclaimers match real limitations and testing data.
Internal governance that prevents problems
- Policy: Approved use cases, prohibited inputs (PII, PHI, payment data, trade secrets), review gates for deployment, and citation requirements. A simple input-screening sketch follows this list.
- Access: Role-based controls, sandboxing, and environment isolation for testing versus production.
- Training: Prompt hygiene, privacy basics, IP do's and don'ts, and how to verify outputs before use.
- Risk reviews: Data protection impact assessments (DPIAs) and AI impact assessments for high-impact uses, with sign-offs from Legal, Security, and Risk.
- Monitoring: Logging, feedback loops, red-teaming, and periodic re-testing after model updates.
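As referenced in the policy item above, here is a hedged sketch of a prompt pre-screen for prohibited inputs. The patterns and keywords are illustrative only and are no substitute for dedicated DLP tooling or enterprise controls.

```python
# Minimal sketch of a prompt pre-screen for prohibited inputs.
# Patterns and keywords are illustrative; production deployments typically rely
# on dedicated DLP tooling rather than a short regex list like this.
import re

PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
PROHIBITED_KEYWORDS = {"confidential", "trade secret", "patient"}


def screen_prompt(prompt: str) -> list[str]:
    """Return a list of reasons to block the prompt; empty means allowed."""
    reasons = []
    for name, pattern in PROHIBITED_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"matched pattern: {name}")
    lowered = prompt.lower()
    for keyword in PROHIBITED_KEYWORDS:
        if keyword in lowered:
            reasons.append(f"matched keyword: {keyword}")
    return reasons


if __name__ == "__main__":
    print(screen_prompt("Summarize contract for SSN 123-45-6789"))
    # ['matched pattern: ssn']
```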
Trade secrets
Treat public GenAI inputs as a disclosure. Use enterprise instances with contractual confidentiality, or keep secrets out of prompts entirely. Reinforce this in policy, onboarding, and periodic training.
Action list for legal teams
- Inventory GenAI use cases, data categories, and jurisdictions impacted.
- Map obligations to CAIA, Utah S.B. 226, state privacy laws, GDPR, and the EU AI Act.
- Adopt standard diligence and contracting checklists for AI vendors and features.
- Implement review gates for high-risk outputs and add clear consumer disclosures where required.
- Track litigation and agency guidance; adjust templates and playbooks as case law and rules mature.
If your team is building AI literacy programs for business stakeholders, a curated skills-by-job catalog may help. See Complete AI Training: Courses by Job.