Grammarly Added My Name to Its AI Editor Without Permission
Grammarly launched a feature called "expert review" that attributes AI-generated writing advice to real journalists, authors, and researchers without asking their permission or paying them. The company says it will now let those people opt out.
The feature, which launched in August 2025, uses names like Stephen King, Neil deGrasse Tyson, and Carl Sagan to make its editing suggestions sound authoritative. In reality, the advice comes from AI trained on these people's published work. A disclaimer buried deep in Grammarly's support documentation says the expert references are "for informational purposes only" and don't indicate endorsement.
The problem: most users will never see that disclaimer. The feature displays expert names in blue text next to suggestions, designed to look like direct attribution.
Who's Being Used
The list of people Grammarly named without consent includes tech journalists, researchers who study AI ethics, and a Pulitzer Prize-winning investigative reporter. Among them: Timnit Gebru, an AI ethicist who has been vocal about the harms of AI development; Julia Angwin, who investigates how tech systems erode privacy; and John Carreyrou, who exposed the Theranos fraud.
When asked about her AI stand-in, journalist Kara Swisher responded: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck."
The descriptions for some experts contain outdated job titles that could have been corrected if Grammarly had simply asked permission.
The Irony
Grammarly attributed advice to people whose life's work contradicts what the company is doing. Shoshana Zuboff popularized the term "surveillance capitalism" and has written extensively about extractive tech practices. Claire Wardle researches how to make communities resistant to propaganda. Neither would likely endorse having their identity used to market a subscription product.
The fake John Carreyrou suggested adding "sensory imagery - a chant echoing off glass towers or chalk dust in the air" to a draft. Generic advice, laundered through a respected journalist's name to make it sound valuable.
Why Grammarly Did This
Writing assistants are now commodity features. Anyone with access to ChatGPT, Claude, or Gemini can get editing advice. Grammarly, which launched in 2009 as a standalone tool, faces an existential problem: its core product is obsolete.
The company has tried to adapt by acquiring other tools. It bought Coda in 2024 and Superhuman last year, then rebranded itself as "Superhuman," an "AI-native productivity platform." Its valuation, $13 billion at its 2021 peak, has fallen sharply since.
Attaching real names to AI suggestions is a deliberate choice: it makes generic advice sound authoritative, and it monetizes people's identities without their involvement or compensation.
The Bigger Problem
Grammarly's approach is more brazen than what other AI companies do, but not fundamentally different. If you paste a draft into ChatGPT and ask it to "edit this the way Casey Newton would," the chatbot will oblige without asking permission or offering payment.
The distinction matters: Grammarly took a latent capability that exists in every large language model and turned it into a paid product feature. It curated a list of real people, let its models generate plausible-sounding advice on their behalf, and put it behind a $144 annual subscription.
But the underlying issue applies to all these systems. Most published work by journalists, authors, and researchers is already inside these models, shaping outputs in ways they never agreed to and will never fully understand.
Grammarly just had the bad manners to put names on it.
The Opt-Out
Grammarly said it will allow experts to opt out by emailing expertoptout@superhuman.com. The company also said it wants to "improve Expert Review to deliver" better outcomes for both users and experts.
That's not a solution. It shifts responsibility to the people whose identities were already used without consent. They now have to discover they're in the feature, find the opt-out email, and request removal: a reactive remedy rather than a proactive one.
For writers concerned about how AI systems use their work, this is a useful case study in what happens when companies prioritize growth over permission.