Grammarly pulls Expert Review after authors blast AI for using their names

Grammarly disabled its "Expert Review" after writers balked at having their names and voices mimicked without consent. Expect a pivot to citation-based guidance, not famous personas.

Categorized in: AI News, Writers
Published on: Mar 12, 2026

Grammarly disables "Expert Review" after backlash from writers

Grammarly's generative feedback feature, Expert Review, is going dark. The company says it's disabling the tool while it reassesses how it works and how it uses real writers' names and voices.

Why the U-turn? Expert Review generated feedback that appeared to come from specific writers, academics, and public figures, living and dead, without their permission. It leaned on "publicly available information from third-party LLMs," which likely means models trained on scraped web data.

What happened

Expert Review launched in August with a simple pitch: pick an "expert," get feedback in their style, and level up your draft. The system suggested names based on topic: everyone from scientists to bestselling authors to tech bloggers.

A disclaimer tried to soften the blow: the experts shown didn't endorse the service. But once writers noticed their names and personas were being invoked, the reaction was swift. A class action is now pending against the company.

The initial fix was an opt-out. You can guess how useful that was for the deceased and for living writers who never saw the announcement.

Today, the company said it's disabling Expert Review while it rethinks the feature. As the CEO put it, the agent was meant to "help users discover influential perspectives and scholarship… while also providing meaningful ways for experts to build deeper relationships with their fans." Intent aside, the execution crossed a clear line for many writers.

Why writers care

Your name isn't just a label; it carries reputation, reader trust, and commercial value. When a tool imitates your voice and attributes feedback to "you," it risks false endorsement and brand dilution, even if a disclaimer lives in the fine print.

There's also the consent gap. Many writers don't know their work was used to teach the very systems now mimicking them. An opt-out after the fact doesn't fix that.

What this signals for AI tools

  • Voice and likeness are hot zones. "Inspired by" is one thing; "presented as" is another.
  • Disclaimers won't shield a product from writer pushback-or potential legal exposure under false endorsement or right of publicity theories.
  • Expect a shift from name-based "expert personas" to topic- or style-based guidance without real identities attached.

Actionable steps for your byline and voice

  • Set alerts: Create Google Alerts for your name, pen name, and book titles + "AI," "feedback," "persona," "expert." Catch misuse early.
  • Publish a clear policy: On your site, state your position on AI training, voice cloning, and name/likeness use. Make consent terms explicit.
  • Tighten contracts: Add clauses that ban AI training on your drafts and prohibit using your name or likeness in AI outputs without written consent.
  • Centralize licensing: If you allow any AI use, define scope, duration, compensation, and audit rights. Vagueness favors the platform, not you.
  • Protect the brand: Consider trademarking a pen name used commercially. It adds leverage if misuse crops up in marketing or product features.
  • Document everything: Screenshots, timestamps, and output samples matter if you need a takedown or legal remedy.
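The monitoring step above can be partly automated. Google Alerts can deliver results as an Atom feed, which a short script can scan for entries that pair your byline with AI-related terms. This is a minimal sketch under stated assumptions: the name, keyword list, and sample feed below are placeholders, and the naive substring matching will need tuning for real use.

```python
import xml.etree.ElementTree as ET

# Placeholder byline and watch terms -- substitute your own.
NAME = "jane doe"
AI_TERMS = ("ai", "persona", "expert review", "voice clone")

def flag_entries(feed_xml: str) -> list[str]:
    """Return titles of Atom feed entries mentioning NAME plus an AI term."""
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed_xml)
    hits = []
    for entry in root.findall("atom:entry", ns):
        title = entry.findtext("atom:title", default="", namespaces=ns)
        text = title.lower()
        # Naive substring match; short terms like "ai" can false-positive.
        if NAME in text and any(term in text for term in AI_TERMS):
            hits.append(title)
    return hits

# Sample payload standing in for a real Google Alerts Atom feed.
SAMPLE = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Jane Doe signs new book deal</title></entry>
  <entry><title>AI tool offers feedback in Jane Doe's persona</title></entry>
</feed>"""

print(flag_entries(SAMPLE))  # flags only the AI-related mention
```

Pointing `flag_entries` at a fetched feed on a daily cron job gives you the early-warning screenshots and timestamps the last step asks for.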

If your name was used

  • Ask for confirmation: Request records of where and how your name/works were used, and demand removal from all models and marketing.
  • Send a formal notice: A concise letter citing potential false endorsement and right-of-publicity concerns often gets faster action than a tweet thread. The FTC's endorsement guides outline why implied endorsements are sensitive.
  • Evaluate claims: Depending on your jurisdiction, right of publicity and unfair competition laws may apply. Speak with counsel for specifics.

What to expect next

Disabling the feature is a pause, not an endpoint. The likely path forward is AI feedback that cites sources, not celebrities: think footnotes, not facsimiles. That's better for readers and safer for writers.

The takeaway: consent and attribution aren't "nice-to-haves." They're table stakes. If a tool wants your voice, it should ask first, and pay fairly.

Want practical guidance on working with AI without giving up your voice?

Explore AI for Writers for courses and tactics that help you use AI on your terms: ethically, profitably, and with your byline intact.
