AI, Privacy, and Consumer Protection in 2025: What IT, Legal, and Product Teams Need to Know
AI moved from pilot to production in 2025, and scrutiny followed. Regulators targeted unproven marketing claims, unclear data practices, and risks to children. Private plaintiffs tried fresh angles, from chatbot interception to training-data claims. Courts sent mixed signals, but a few patterns are clear enough to act on now.
I. Consumer-Protection Actions
State actions. State attorneys general turned up the heat on chatbots and data use. A bipartisan group warned leading AI developers that they will be held accountable for harms tied to consumer data, with extra attention on kids. Texas also opened probes into claims that chatbots can act as therapists. The message is simple: if your product talks to consumers, your disclosures, parental controls, and claims need to match reality.
Federal actions. The FTC continued to police AI marketing and privacy practices under Section 5. It pursued cases against companies overstating AI capabilities, sought injunctions and monetary relief, and in at least one case obtained a permanent ban. The agency also returned millions of dollars to consumers in cases tied to deceptive data practices, and used its investigatory powers to examine AI companion chatbots, asking about data collection, training, retention, and safeguards for minors under COPPA.
Private actions. Plaintiffs pushed novel theories, including claims that companies exploited "cognitive labor" from user interactions without compensation. One such case was dismissed for lack of a workable legal theory, but expect more creative filings that try to reframe AI data practices as unfair or deceptive.
II. Privacy Laws: Applying Old Statutes to New AI Uses
Chatbots and electronic-communications statutes. Courts weighed whether AI customer-service bots amount to unlawful interception. In one case involving a restaurant-ordering assistant, a court allowed a CIPA claim to proceed, focusing on whether the provider acted as a "third party" using call data for its own purposes (like system improvement) rather than solely to serve the consumer. Where data is used for both service delivery and commercial improvement, courts seem open to wiretapping theories at early stages. Other courts, however, have dismissed claims as too speculative when plaintiffs couldn't tie training use to an actual interception or disclosure event.
Training data and invasion-of-privacy claims. Suits also targeted data brokers and model improvement practices. One California case allowed claims to move forward where plaintiffs alleged a broker used AI tools to collect, combine, and sell personal information from online and offline sources without clear consent. Disclosure, consent, and purpose limitations are doing real work in these disputes.
III. Related Developments: State Laws and Court Rules
States kept busy on AI. California, Colorado, and Texas advanced new AI statutes, and more than half of states passed laws addressing "deepfakes" of a person's likeness or voice. Legislatures also took aim at customer-service bots and potentially discriminatory outputs. Many state AGs continue to resist broad federal preemption, preferring to keep state authority intact.
Court systems weighed in too. The Arkansas Supreme Court now requires legal professionals to verify that AI tools don't retain or reuse confidential data, with misconduct risk for failures. New York and Pennsylvania issued similar guidance to limit uses that could compromise client confidentiality or judicial integrity.
What this means for teams: Practical to-dos
- Legal
- Substantiate every AI claim. If you can't prove it with data, don't say it.
- Separate service delivery from model improvement. Get clear, affirmative consent for any secondary use.
- Refresh privacy notices: training, retention, sharing, and opt-outs must be explicit and consistent with actual practices.
- For chatbots, assess wiretap risk. Document party status, vendor roles, and whether data is used beyond serving the user.
- If minors are in scope, build COPPA compliance into product flows: age gating, parental consent, and data minimization.
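As a minimal sketch of how the COPPA items above might translate into a product flow, the snippet below gates signup by stated age, blocks collection for under-13 users until parental consent exists, and defaults minors to data minimization. The field names, consent token, and profile labels are illustrative assumptions, not a compliance implementation.

```python
from dataclasses import dataclass
from typing import Optional

COPPA_AGE_THRESHOLD = 13  # COPPA applies to children under 13

@dataclass
class SignupRequest:
    stated_age: int
    # Hypothetical token proving verifiable parental consent was collected upstream.
    parental_consent_token: Optional[str] = None

def gate_signup(req: SignupRequest) -> dict:
    """Route a signup through an age gate before any data collection."""
    if req.stated_age >= COPPA_AGE_THRESHOLD:
        return {"allowed": True, "profile": "standard"}
    if req.parental_consent_token is None:
        # Block collection entirely until parental consent is obtained.
        return {"allowed": False, "reason": "parental_consent_required"}
    # Under-13 user with consent: minimize data and exclude it from training by default.
    return {"allowed": True, "profile": "minor_minimized", "train_on_data": False}
```

The point of the sketch is that the age check happens before any data flows, and that the under-13 path carries its own storage and training defaults rather than inheriting the standard profile.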
- IT/Engineering
- Data segregation by purpose: runtime vs. analytics vs. model improvement. Enforce via flags and access controls.
- Implement "do-not-train" pathways, prompt and output logging with retention caps, and deletion pipelines tied to user requests.
- Minimize collection on chat and voice channels. Strip sensitive fields at ingestion; mask PII before any enrichment.
- Track data lineage and sources for training. Keep provenance records to answer discovery and audit questions.
- Add child-safety safeguards: keyword filters, refusal behavior, and dedicated storage policies for under-13 data.
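A compact way to picture the segregation, do-not-train, and minimization items above: tag each record with its permitted purposes, mask obvious PII at ingestion, and have the training pipeline check eligibility before touching anything. The record fields, purpose labels, and the email regex below are simplifying assumptions for illustration, not a production pipeline.

```python
import re
from dataclasses import dataclass, field

# Simplistic email pattern; real pipelines would use broader PII detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

@dataclass
class ChatRecord:
    text: str
    # Purposes this record may be used for: runtime, analytics, model_improvement.
    purposes: set = field(default_factory=lambda: {"runtime"})
    do_not_train: bool = False

def ingest(raw_text: str, user_opted_out: bool) -> ChatRecord:
    """Mask PII at ingestion and honor the user's do-not-train choice."""
    masked = EMAIL_RE.sub("[EMAIL]", raw_text)
    rec = ChatRecord(text=masked, do_not_train=user_opted_out)
    if not user_opted_out:
        rec.purposes.add("model_improvement")
    return rec

def training_eligible(rec: ChatRecord) -> bool:
    """Gate the training pipeline checks before using a record."""
    return "model_improvement" in rec.purposes and not rec.do_not_train
```

Enforcing purpose at the record level means access controls and deletion pipelines can key off the same flags, instead of reconstructing intent from where the data happens to be stored.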
- Product & Marketing
- Claims checklist before launch: capability, accuracy, limits, and failure modes. Include plain-language disclosures in-product.
- Make dual-use visible: if interactions improve models, state it and offer a real choice.
- For companion or wellness features, avoid therapeutic promises; include clear boundaries and human-support handoffs.
- Design for consent, not dark patterns. Default to least intrusive collection; make opt-out simple and persistent.
- Instrument feedback loops to catch hallucinations, bias, and harmful outputs early, and tie fixes to release criteria.
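The last item, tying flagged outputs to release criteria, can be sketched as a simple gate over sampled outputs. The keyword checks below are illustrative stand-ins for real classifiers and human review, and the 5% threshold is an assumption, not a recommendation.

```python
def flag_output(text: str) -> list:
    """Tag outputs that need review; keywords here stand in for real classifiers."""
    flags = []
    lowered = text.lower()
    if "guaranteed" in lowered:
        flags.append("unsubstantiated_claim")
    if "i can diagnose" in lowered:
        flags.append("therapeutic_language")
    return flags

def release_gate(sampled_outputs: list, max_flag_rate: float = 0.05) -> bool:
    """Fail the release if too large a share of sampled outputs is flagged."""
    flagged = sum(1 for out in sampled_outputs if flag_output(out))
    return flagged / len(sampled_outputs) <= max_flag_rate
```

The design choice worth copying is the wiring, not the heuristics: flags feed a rate that the release process checks automatically, so a fix is verified by the same instrumentation that caught the problem.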
What to watch in 2026
- More chatbot interception suits testing the "third party" angle and dual-use data.
- Renewed COPPA scrutiny for AI companions and youth-facing features.
- FTC actions on unproven performance claims and undisclosed data uses.
- Class actions against data brokers and apps that blend offline and online data for training or scoring.
- State deepfake laws driving takedown and notice requirements for platforms and tooling.
Helpful resources
- FTC inquiry into AI companion chatbots and safeguards for minors
- NCSL overview of state activity on AI and deepfakes
Skill up your team
If you're building or reviewing AI products, a shared baseline across Legal, Product, and Engineering shortens cycles and reduces risk. For structured upskilling by role, see AI courses by job.