FTC Cracks Down on Inflated AI Claims: 5 Lessons for Credible Marketing
FTC puts AI sellers on notice after the Workado case: big claims need big proof. Test broadly, keep evidence, and state limits, or face enforcement and lost trust.

The wake-up call
The FTC just sent a clear signal to AI vendors: if you make big claims, you need big evidence. A consent order finalized on August 28 against Palo Alto-based Workado shows that long-standing advertising rules apply to AI just as they apply to everything else.
Workado marketed an AI detector it said was 98% accurate at flagging AI-generated text. The promise was attractive to educators, publishers, and enterprises. The problem: the claim didn't hold up outside narrow conditions.
What the FTC found
- Training-data mismatch: The model was trained mainly on academic writing, despite being promoted for broad online content like blogs and marketing copy.
- Real-world performance collapse: Outside academic contexts, accuracy fell to roughly 53%, "no better than a coin toss."
- Material overstatement: Marketing overstated capabilities and misled customers about reliability and performance.
What the order requires
- Stop unsupported accuracy claims: No "accuracy" or effectiveness claims without competent and reliable evidence at the time of the statement.
- Retain test data and evidence: Keep documentation of the testing and analysis that back performance claims.
- Notify customers: Send an FTC-drafted notice explaining the issue and the settlement.
- Report to the FTC: Provide annual compliance reports for four years.
Why this matters to government and marketing teams
Government buyers and oversight bodies will press for proof, not promises. Marketers face higher scrutiny on quantified claims, especially "accuracy," "detection," and "bias" metrics.
Overclaiming risks regulatory action, lost trust, and procurement setbacks. The fix is simple: align your story with your data, and keep an evidence file on hand.
5 practical lessons for AI companies
1. Test broadly, not narrowly. If the use cases span education, media, and enterprise, your test sets should too. Performance that looks strong on essays may crater on social posts and sales copy (see the sketch after this list).
2. Don't let marketing outrun data science. Set up a cross-functional review so technical leads vet every claim before it ships. If you can't prove it, don't print it.
3. Build an evidence file. Keep training sets, validation methods, error rates, and known limitations. Store versioned PDFs or pages that tie each public claim to its supporting proof.
4. Acknowledge limits. Clear, honest caveats increase credibility. Example: "Best on structured documents (contracts, policies); lower accuracy on informal text."
5. Make compliance a habit. Add routine audits, claim version control, and a rule that no metric goes live without validation and date-stamped support.
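To make lesson 1 concrete, here is a minimal sketch of per-domain evaluation in Python. The `detect` function, the sample tuples, and the domain names are hypothetical stand-ins for your own model and labeled test sets; the point is to report accuracy for every domain you market to, not one blended headline number.

```python
# Minimal per-domain evaluation sketch. `detect` and `samples` are
# hypothetical stand-ins; swap in your real model and labeled data.
from collections import defaultdict

def detect(text):
    """Placeholder detector: returns True if it thinks the text is AI-generated."""
    return len(text) % 2 == 0  # stand-in logic only, not a real classifier

# Each labeled sample: (domain, text, is_ai_generated)
samples = [
    ("academic", "In this paper we examine three corpora...", True),
    ("blog", "Five quick tips for better cold brew at home", False),
    ("marketing", "Unlock growth with our all-in-one platform!", True),
    # ...extend with real, representative samples for every marketed use case
]

correct = defaultdict(int)
totals = defaultdict(int)
for domain, text, is_ai in samples:
    totals[domain] += 1
    correct[domain] += detect(text) == is_ai  # bool counts as 0 or 1

for domain in sorted(totals):
    accuracy = correct[domain] / totals[domain]
    print(f"{domain}: {accuracy:.1%} accuracy on {totals[domain]} samples")
```

A single blended score hides exactly the gap the FTC flagged at Workado: strong results on academic text, coin-toss results everywhere else. Report the weakest domain alongside the best one.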
Action checklist for your next AI campaign
- Define approved, test-backed claims with confidence intervals and date stamps (a worked sketch follows this checklist).
- Mirror your customer mix in benchmarks; include edge cases and messy real data.
- Document known failure modes and publish plain-language limitations.
- Set up a claims registry (what was said, where it appears, supporting evidence, owner).
- Implement change control: updates to models trigger updates to claims.
- Prewrite a customer notice template in case corrections are needed.
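As a worked example of the first and fourth checklist items, the sketch below computes a 95% Wilson score interval for a measured accuracy and packages it into a single claims-registry entry. The registry fields, the benchmark numbers, and the evidence path are illustrative assumptions, not a standard schema.

```python
# Hedged sketch: confidence interval on a measured accuracy, plus one
# claims-registry record. All field names and values are illustrative.
import json
import math
from datetime import date

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion such as accuracy."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

correct, total = 412, 500  # hypothetical benchmark results
low, high = wilson_interval(correct, total)

entry = {
    "claim": f"{correct / total:.0%} accuracy on mixed web content",
    "ci_95": [round(low, 3), round(high, 3)],
    "benchmark": "mixed-domain test set (blogs, essays, marketing copy)",
    "date": date.today().isoformat(),
    "owner": "data-science team",
    "evidence": "claims-evidence/detector/benchmark-report-v3.pdf",  # illustrative path
}
print(json.dumps(entry, indent=2))
```

Append a record like this whenever a public claim ships or a model changes; the date stamp and evidence pointer mirror the documentation the Workado order requires sellers to keep.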
Bottom line: Bold AI claims without proof invite enforcement. Build your claims on data, keep receipts, and communicate limits with the same confidence you use for wins.