AI-Washing on Trial: SEC Probes and Class Actions Draw the Line Between AI and Artificial Value

AI-washing suits and SEC probes are surging as courts test overstated AI claims. Keep disclosures precise, provable, and consistent, or expect scrutiny and price-impact fights.

Published on: Sep 17, 2025

AI-Washing Securities Litigation: Where Claims About AI End and Artificial Value Begins

Class actions and SEC probes tied to "AI washing" are accelerating. The theme is simple: companies overstating AI capabilities to pump stock prices or raise capital. As of September 4, 2025, what does and does not count as AI washing is being tested in multiple federal courts.

For legal teams, the risk now sits at the intersection of disclosure controls, marketing oversight, and technical verification. This isn't about banning hype. It's about keeping claims precise, provable, and consistent across filings and the Street.

What plaintiffs are alleging

  • Material misstatements or omissions about AI products, data, partnerships, or revenue impact.
  • False present-tense claims (e.g., "deployed at scale," "predicts with 95% accuracy") that later prove untrue.
  • Scienter inferred from internal documents, expert memos, or product dashboards showing performance gaps.
  • Loss causation tied to a corrective disclosure, guidance cut, or product delay revealing the truth.

How courts are likely to draw the line

  • Actionable specifics vs. puffery: Concrete claims (benchmarks, customer counts, live deployment) are risky if unverifiable or misleading. Vague superlatives are less likely to be actionable.
  • Present fact vs. forward-looking: the PSLRA safe harbor won't protect false statements about current capabilities.
  • Omnibus "AI-powered" claims: If the "AI" is a prototype, a rule-based system, or vendor-provided with minimal in-house value-add, broad claims invite scrutiny.
  • Attribution and reliance: Stock moves tied to AI headlines or corrections strengthen price-impact arguments.

Regulatory posture you should expect

  • SEC enforcement on marketing and disclosure around AI tools, data sources, and model performance.
  • Focus on whether claims match internal testing, customer adoption, and risk disclosures.
  • Heightened attention to boards' oversight of AI statements across earnings calls, investor decks, and social posts.

Reference actions: in 2024, the SEC charged two investment advisers over exaggerated claims about their use of AI; see the release for details: SEC charges for AI-washing (2024). The FTC has likewise warned against exaggerated AI marketing claims: FTC guidance on AI claims.

Disclosure touchpoints under Reg S-K

  • Item 101 (Business): Describe AI products realistically, including whether they are in pilot, limited release, or at-scale deployment.
  • Item 105 (Risk Factors): Avoid boilerplate. Address data quality, third-party model dependence, scaling limits, accuracy variance, and regulatory risk.
  • Item 303 (MD&A): If AI materially affects revenue, margins, capex, or R&D, explain the drivers and constraints. If it doesn't yet, don't imply otherwise.

Red flags that look like AI washing

  • "AI-driven" branding with no clear model description, training data, or deployment context.
  • Stated accuracy or ROI with no methodology, sample size, or baseline.
  • Claims of "industry-leading" models with no peer benchmarks or third-party validation.
  • Inflated "customers using AI" counts that include pilots, trials, or non-paying users without saying so.
  • Implied proprietary tech that is mostly a third-party API with minimal differentiation.

Practical defenses you can preserve now

  • Puffery defense: Separate opinion or aspirational marketing from verifiable present facts.
  • Cautionary language: Use specific, not generic, risk disclosures that match known limits and data constraints.
  • No price impact: Preserve trading and disclosure records to run an event study isolating the effect of AI claims on price (see the sketch after this list).
  • No scienter: Maintain audit trails showing diligence, cross-functional review, and reasonable belief at the time.
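
To make the no-price-impact defense concrete, here is a minimal market-model event study sketched in Python. It is an illustration only: the pandas/numpy usage is standard, but the function name, window sizes, and the assumption of clean daily return series are hypothetical choices, not a prescribed litigation methodology.

```python
# Minimal market-model event study, assuming daily return Series indexed by
# date. Function name, window sizes, and data layout are illustrative
# assumptions, not a prescribed litigation methodology.
import numpy as np
import pandas as pd

def abnormal_returns(stock: pd.Series, market: pd.Series, event_date: str,
                     est_window: int = 120, event_days: int = 3) -> pd.Series:
    """Abnormal returns of `stock` around `event_date`, net of the market."""
    event = pd.Timestamp(event_date)
    # Estimation window: trading days strictly before the disclosure.
    est_stock = stock[stock.index < event].tail(est_window)
    est_market = market.reindex(est_stock.index)
    # OLS market model: stock_return = alpha + beta * market_return.
    beta, alpha = np.polyfit(est_market.to_numpy(), est_stock.to_numpy(), 1)
    # Event window: a few days on either side of the disclosure.
    window = stock.loc[event - pd.Timedelta(days=event_days):
                       event + pd.Timedelta(days=event_days)]
    # Abnormal return = actual minus the return the market model expected.
    return window - (alpha + beta * market.reindex(window.index))
```

An expert would then test whether the abnormal return on the corrective-disclosure date is statistically significant against the estimation-window residuals; the sketch shows only the core decomposition that a well-preserved record makes possible.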

Controls counsel should implement

  • Single source of truth: Maintain an internal AI claims registry with versioned statements, evidence, and owners (an illustrative entry schema follows this list).
  • Substantiation files: For every quantitative claim (accuracy, speed, cost), keep test protocols, datasets, and replication steps.
  • Cross-functional review: Legal, data science, product, and IR must sign off on AI claims in filings, earnings scripts, and marketing.
  • Model lifecycle tracking: Document stage (concept, pilot, limited release, production), guardrails, and known failure modes.
  • Third-party dependencies: Disclose reliance on external models or datasets where material; verify vendor claims independently.
  • Change management: If a model or dataset changes, trigger re-review of prior claims and risk factors.
  • Social media governance: Bring executive posts and paid content under disclosure controls.
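
To make the registry, lifecycle, and change-management controls above concrete, here is one possible shape for a claims-registry entry, sketched as a Python dataclass. Every field name and stage label is an assumption for illustration; the substance is that each public statement is tied to evidence, an owner, and the model version it described.

```python
# Illustrative schema only; field names are assumptions, not a standard.
# Requires Python 3.10+ for the `str | None` annotation.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIClaim:
    claim_id: str                     # stable ID cited in filings and decks
    statement: str                    # exact public wording, kept verbatim
    claim_type: str                   # "present fact" | "forward-looking" | "opinion"
    channels: list[str]               # e.g., ["10-K", "earnings script", "website"]
    model_version: str                # model/dataset version the claim described
    deployment_stage: str             # "concept" | "pilot" | "limited release" | "production"
    evidence: list[str]               # test protocols, datasets, replication steps
    owner: str                        # accountable reviewer (legal + data science)
    first_published: date
    last_reviewed: date
    superseded_by: str | None = None  # set when a model change forces re-review
```

Whether this lives in a spreadsheet, a ticketing system, or code, the discipline is the same: no public AI claim without an ID, evidence, and an owner, and any model or dataset change re-opens the entry.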

Underwriting, M&A, and financing diligence

  • Request model cards, validation reports, and customer adoption metrics segmented by pilot vs. production.
  • Reconcile investor presentations with product telemetry and revenue data.
  • Include reps/warranties on AI capabilities, data rights, evaluation results, and pending regulatory inquiries.

Litigation checklist for defense counsel

  • Map each statement to the evidence available at the time it was made; label each as opinion, forward-looking, or present fact.
  • Collect internal testing, third-party validations, and customer testimonials tied to the relevant period.
  • Run event studies isolating AI-related disclosures from market or sector noise.
  • Assess puffery, safe harbor, and bespeaks caution arguments early; consider motion to dismiss strategy focused on specificity and materiality.

How to write safer AI disclosures

  • Replace "AI-powered" with plain descriptions: the model type, task, scope limits, and deployment status. For example, "a supervised fraud-scoring model in limited release with pilot customers" says more than "AI-powered fraud prevention."
  • Use ranges and caveats for performance claims; state datasets and conditions.
  • Separate aspiration from fact: goals belong in forward-looking sections with meaningful cautionary language.
  • Quantify materiality thresholds for AI's revenue, margin, or cost impact, or say it's not material yet.

Action items for GCs and compliance leads

  • Stand up an AI disclosure committee that meets before every filing and earnings call.
  • Inventory every public AI claim; withdraw or revise any without support.
  • Institute preclearance for AI statements by executives, sales, and marketing.
  • Train spokespeople on how to describe models, limits, and uncertainty.
  • Monitor analyst notes and media to correct misimpressions tied to your statements.
  • Document the process; it's your defense on scienter.

Bottom line

Courts are testing how far companies can go in describing AI. The safe zone is specific, supported, and consistent. If a statement would surprise a reasonable investor when the full context is revealed, don't say it, or disclose the context.

If your teams need baseline AI fluency to make stronger disclosures and avoid overreach, consider structured training resources that explain terms, limits, and use cases in plain English: Latest AI courses.