A Writer Used AI to Shape Her New York Times Essay. Now the Literary World Is Arguing About What That Means.
Kate Gilgan's personal essay about losing custody of her son during her alcoholism appeared in the New York Times' "Modern Love" column in October. Last month, another writer accused her of using AI to write it, offering no evidence beyond the essay's style and the results of unreliable AI detection tools. Gilgan didn't initially know about the controversy. She doesn't use Twitter.
When journalists asked her about the allegations, Gilgan said clearly: "AI wasn't used to generate that content."
That answer turned out to be incomplete. She had used ChatGPT, Claude, Copilot, and Perplexity throughout the writing process. She denied copying and pasting AI-generated text directly into her essay, but acknowledged the tools shaped her work in other ways.
The situation is messier than either accusation or denial suggests. Readers were right to notice something unusual in the writing: AI did play a prominent role. Yet the accusations themselves were initially baseless. Within two weeks of the Gilgan controversy, a major publisher pulled a horror novel over suspected AI use, and the New York Times cut ties with a book critic after discovering his AI-assisted review plagiarized substantial portions of text.
How Gilgan Actually Used AI
Gilgan started working on the custody story 15 years ago as a memoir. The draft failed. "It was so full of self-pity and histrionic emotional grandeur; it was just awful," she said. She abandoned it.
Years later, she decided to approach the same material as a novel. That gave her "more freedom." She finished the first draft about a year ago, then extracted the essay from that work. Her strategy was deliberate: publish the essay in the Times to help market the novel.
To craft an essay she thought would appeal to the "Modern Love" editors, Gilgan turned to chatbots. She had been experimenting with them for about two years. "Rather than sitting on Google reading through tons of other people's articles about how to get published in 'Modern Love,'" she said, "I asked AI, 'Okay, boil this down for me.'"
She used whatever was available: ChatGPT on one laptop, Copilot on another. She didn't have a preference. The tools served as what she called a "first reader."
Gilgan ran her writing through the chatbots repeatedly, asking questions about structure and tone. "One of the bits of feedback I got from AI was, 'Okay, you're going to have to really focus on a tight story arc,'" she said. She then rewrote sections based on that feedback.
She asked the tools whether her language sounded "too histrionic" or whether she was unfairly blaming her ex-husband. "I used it to help me stay rational and unemotional about a really emotional topic," she said.
The distinction she makes is critical to her defense: AI didn't generate new ideas or sentences that she pasted into the essay. It functioned as editorial feedback, much like talking to her Alcoholics Anonymous sponsor or a human editor.
The Disclosure Question
Gilgan's comparison to human editors is where the argument sharpens. An editor might rephrase a sentence or suggest different wording. But a chatbot can rewrite entire passages, restructure arguments, and alter tone in ways traditional editorial tools cannot.
When asked whether she believed AI significantly altered her voice, Gilgan laughed. "I'm just a technically proficient writer," she said. Her style, she said, has matured since 2017, but AI didn't fundamentally transform it.
On the disclosure issue, she was direct: "How much was AI used? Did it generate content? My direct answer to that question is: no more so than an editor would generate content for me."
The New York Times declined to comment on Gilgan specifically but said in March that journalism at the paper "is inherently a human endeavor" and "that will not change."
Where Writers Draw the Line
Gilgan initially described AI as simply another "tool" in her workflow, comparing it to using a computer instead of a typewriter. When pressed on the point that chatbots function differently from traditional tools, since they can generate and transform text in ways no typewriter or word processor could, she acknowledged the risk.
"Is there a risk with AI? Absolutely," she said. "If I want to be lazy about my writing, yeah-AI could do it all for me."
But she insisted she won't reach that point. "For the sake of my own sense of integrity, I hope I don't ever get that lazy that I just hand it over to AI."
The Gilgan case has exposed a central tension for writers working with AI: the technology blurs traditional boundaries between research, editing, and creation. No clear industry standard exists for disclosure. No consensus has formed on what constitutes acceptable use.
What is clear: literary institutions are still figuring out their policies as the practice becomes more common.