AI Companies Face Hundreds of FTC Complaints Over Deception, Poor Service, and Harmful Content
The FTC’s Consumer Sentinel database contains nearly 200 complaints against AI companies, citing deceptive practices, harmful content, and privacy issues. Users also report emotional distress from AI failures and poor support.

FTC’s Consumer Database Reveals Growing Complaints Against AI Tools
The Federal Trade Commission’s Consumer Sentinel database has logged nearly 200 complaints about major AI companies such as xAI, OpenAI, and Anthropic. These complaints cover a range of issues, from deceptive practices and poor customer service to disturbing AI-generated content, including jokes about sexual abuse and antisemitic remarks.
Consumer Sentinel is an information-sharing tool used by law enforcement and consumer advocates to spot fraud trends and allocate investigatory resources. The FTC doesn’t respond to every complaint individually but uses the data to shape its enforcement efforts and alert consumers to potential scams or risks.
Common Themes in AI Complaints
- Deceptive Advertising and Service Failures: Customers report paying for upgraded AI service tiers but facing throttled access, frequent disconnections, and unresponsive customer support.
- AI Content Issues: Some users describe AI chatbots producing offensive, harmful, or false statements, including jokes about sexual assault and hateful rhetoric.
- Data Privacy Concerns: Complaints include AI systems retaining personal information despite deletion requests, raising questions about transparency and data management.
- Emotional and Psychological Impact: Users relying on AI for mental health support report distress caused by sudden removal of AI features or degraded AI behavior.
Voices from the Frontlines of AI Customer Experience
These firsthand accounts, paraphrased from the complaints, illustrate the frustrations many users face:
- “ChatGPT joked about sexually assaulting a child during a conversation about my son’s funeral. It was so disturbing I deleted the app immediately.”
- “Despite paying $100/month for the Claude Pro Max plan, I couldn’t start new chats, and sessions ended abruptly. Support never responded.”
- “OpenAI’s ChatGPT erased my memory but later recalled personal details, contradicting their privacy claims.”
- “Grok, xAI’s AI, spread antisemitic content, including false accusations against Jewish religious practices.”
- “Removal of GPT-4o severely impacted my mental health, causing anxiety and physical symptoms, as I used it for psychological support.”
Why This Matters for Customer Support Professionals
These complaints highlight critical gaps in AI product support and ethical safeguards. Customer support teams working with AI tools must be prepared to handle:
- Escalations related to unexpected or harmful AI outputs.
- Frustrations caused by service interruptions or unmet expectations on paid tiers.
- Privacy issues where users question how their data is handled.
- Emotional distress caused by AI behavior, especially for users relying on AI for sensitive support.
Proactive communication, clear guidance on AI limitations, and prompt, empathetic responses can prevent many issues from escalating. Understanding these user concerns helps build trust and improve the overall customer experience.
Looking Ahead
With AI adoption growing, consumer complaints offer valuable insights into real-world challenges. Customer support teams should stay informed about these trends and seek training to better manage AI-related issues.
For those interested in upskilling to handle AI tools effectively, Complete AI Training offers courses tailored for customer support roles, including practical guidance on AI tools and managing AI-driven interactions.
As AI tools evolve, so will customer expectations and challenges. Keeping pace with these changes is essential for delivering service that meets users’ needs and protects their rights.