Superhuman disables Grammarly Expert Review after cloning backlash, promises opt-in for experts

Superhuman halted its 'Expert Review' feature after backlash over mimicking writers and using their names without consent. The company apologized and plans to relaunch it as opt-in, with more control for experts.

Categorized in: AI News, Writers
Published on: Mar 12, 2026

Superhuman pauses "Expert Review" after backlash over AI imitating writers without consent

Superhuman has disabled its "Expert Review" feature - an AI assistant that surfaced edits "inspired by" the published work of named writers - after pushback from experts who said the tool misrepresented their voice and used their reputation without permission.

"We clearly missed the mark," said Ailian Gan, Superhuman's director of product management, adding the team will "reimagine" the feature so experts can choose how they are represented - or opt out entirely.

Initially, Superhuman offered an email inbox to opt out. After more feedback, the company shut the feature off and committed to building an opt-in model.

CEO Shishir Mehrotra apologized and said the goal is a future where "experts choose to participate, shape how their knowledge is represented, and control their business model." The company says the agent used third-party LLMs drawing on publicly available information to generate suggestions "inspired by" influential voices.

What this means for writers

  • Your voice is your asset. Tools that imitate "style" trade on your identity and audience trust, even if the text is "new."
  • Consent beats opt-out. An inbox to object is not control. Opt-in with clear terms is the bar.
  • Attribution isn't enough. "Inspired by [Your Name]" still implies endorsement and can blur your reputation if outputs miss the mark.

Protect your voice right now

  • Publish a use policy on your site. State whether your work may be used for AI training, voice modeling, or "inspiration." Include licensing terms and contact info.
  • Update your contracts. Add a "no AI training/voice modeling" clause by default. If a client requests those rights, price them separately and set limits (scope, duration, revocation).
  • Monitor your name. Set alerts for "[Your Name] writing style," "[Your Name] inspired," and product mentions. Screenshot misuses and log the dates (see the sketch after this list for one way to automate the logging).
  • Ask platforms for specifics. If a tool uses your name or likeness, request the model source, data origin, how suggestions are labeled, and how you can remove access.
  • Use provenance tools. Add Content Credentials (C2PA) where possible so your bylined work carries a verifiable history. Many publishers and tools are starting to support it. See industry guidance from groups like the Partnership on AI.
  • Write a short "style disclaimer." A page that says "Do not imitate or market outputs as 'inspired by' my name without consent" gives you a public reference when you file takedowns.
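If you're comfortable with a little scripting, the monitoring step above can be partly automated. The sketch below is a minimal example, assuming you have created a Google Alert for phrases like your name plus "writing style" and set it to deliver to an RSS feed; the feed URL and keyword list are placeholders you would replace with your own. It polls the feed with the feedparser library and appends any matching mentions, with a timestamp, to a CSV log you can cite later.

```python
# Minimal sketch: log new Google Alerts hits for your name to a dated CSV file.
# Assumes an alert (e.g. "Jane Doe writing style") delivered as an RSS feed;
# ALERT_FEED_URL and KEYWORDS are placeholders, not real endpoints.
import csv
from datetime import datetime, timezone
from pathlib import Path

import feedparser  # pip install feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_FEED_ID"  # placeholder
KEYWORDS = ["writing style", "inspired"]  # phrases you want flagged
LOG_FILE = Path("name_mentions.csv")

def check_alerts() -> None:
    feed = feedparser.parse(ALERT_FEED_URL)
    new_rows = []
    for entry in feed.entries:
        title = entry.get("title", "")
        link = entry.get("link", "")
        # Keep only entries that mention one of the phrases you care about.
        if any(kw.lower() in title.lower() for kw in KEYWORDS):
            new_rows.append([datetime.now(timezone.utc).isoformat(), title, link])

    # Append to the log so you keep a dated record of every mention.
    write_header = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["logged_at_utc", "title", "link"])
        writer.writerows(new_rows)

if __name__ == "__main__":
    check_alerts()
```

Run it on a schedule (cron, Task Scheduler) and you'll have a timestamped record to attach to takedown requests; it doesn't replace screenshots, but it makes the paper trail easier to keep.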

If you consider opting in later

  • Consent: Explicit, written, and revocable.
  • Control: Final say on how your name appears, where it appears, and the right to pull it anytime.
  • Compensation: Clear revenue share or licensing fee, not vague "exposure."
  • Context: Transparent labeling so users know what is modeled, what isn't, and what the limits are.
  • Care: Guardrails to prevent harmful or misleading imitations using your name.

The bigger signal

This pause is a sign that default imitation without consent won't fly. Platforms will have to move to expert opt-in, transparent labeling, and real compensation - or they'll lose the trust of the very people their products depend on.

Writers don't need to reject AI. We need agreements that protect voice, align incentives, and keep readers clear on who actually wrote the words.
