Grammarly's "Expert Review" Sparks Lawsuit Over Writers' Likeness Rights
Grammarly is being sued over its new AI feature, "Expert Review," which served up real-time writing tips styled as guidance from well-known authors, journalists, and public figures, allegedly without their consent, according to the complaint.
The suit was filed as a federal class action in the Southern District of New York and centers on the alleged misuse of the names and identities of hundreds of professionals to sell the tool. Writer and editor Julia Angwin, founder of The Markup, is a named plaintiff. "I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise," she said.
Celebrity names like Stephen King and Neil deGrasse Tyson reportedly appeared as "experts" inside the product. The company has since said on LinkedIn that it plans to phase out Expert Review, noting that scrutiny helps improve its products.
Why this matters to working writers
- Your name and reputation are your asset. If a tool implies you endorse its advice or have lent your voice to it, that can mislead readers and clients, and it dilutes your brand.
- AI "expert personas" can replace paid editorial work with a synthetic version of your craft. That's a double hit: loss of income and loss of identity control.
- This dispute is part of a broader trend: deepfakes, voice clones, and chatbots that trade on real people's likenesses. Consent and compensation are becoming the line in the sand.
The legal angle (plain English)
- Right of publicity: Many states protect against commercial use of your name, likeness, or persona without permission.
- False endorsement: If a product makes it seem like you endorse it, that may raise issues under advertising and consumer protection laws. See the FTC's Endorsement Guides for context.
- Copyright and training data are separate issues. This case spotlights identity misuse, not just text scraping.
Action plan for writers: Protect your name, voice, and audience
1) Lock down your identity
- Set up alerts for your name + "AI," "expert mode," "persona," and "writing assistant." Check product galleries, demo videos, and docs for unauthorized use.
- Standardize a public NIL (name, image, likeness) statement on your site: "No AI tool may use my name, voice, image, or likeness without written consent."
- If your byline is a brand, consider trademark guidance with a qualified attorney to strengthen enforcement options.
2) Update your contracts and pitches
- Add clauses that prohibit training, simulation, or use of your name/likeness in products, demos, or marketing without prior written consent and paid licensing.
- Require attribution accuracy: no "in the style of [Your Name]" or "expert advice by [Your Name]" language unless explicitly licensed.
- Include audit and takedown terms for misuse, with clear timelines and penalties.
3) If you find misuse
- Preserve evidence: screenshots, URLs, timestamps, and any marketing materials.
- Send a concise demand: identify the misuse, request immediate removal, and ask for details on data sources, distribution, and revenue tied to your name.
- File platform complaints where the feature or marketing appears. Many have policies against deceptive endorsements.
- Consult an attorney about right-of-publicity and false endorsement claims, especially if there's commercial gain tied to your identity.
- Consider joining or initiating collective actions if the scope is broad and evidence is strong.
4) Guard your editorial business
- Differentiate your services: strategy, taste, and editorial judgment that a synthetic persona can't credibly replicate. Make that value explicit in your proposals.
- Offer licensed, clearly scoped "AI-safe" packages where you control attribution and boundaries. Spell out what data, if any, can be used and how.
- Educate clients: unauthorized "expert personas" carry legal risk. Help them set sensible AI policies with consent, attribution, and audits.
What this signals about AI and creative work
AI isn't just remixing text; it's remixing identity. The line between inspiration and impersonation gets crossed the moment a product sells your name as a feature without your say.
Consent, compensation, and clear attribution aren't "nice to have." They're table stakes if AI companies want trust from the creative class. Phasing out a feature after backlash is reactive; building with permissions up front is the path that respects careers and reduces legal exposure.
Stay prepared, not paranoid
- Get practical with your toolkit and workflows: AI for Writers
- Track policy, rights, and compliance issues that affect your byline: AI for Legal
This case will evolve, but the takeaway is already clear: your name isn't a dataset. Treat it like IP, defend it in your paperwork, and make companies earn the right to use it-with your consent and on your terms.