The Case for Disclosing AI Use in Writing
Washington Post columnist Megan McArdle faced backlash for disclosing that she uses AI to transcribe interviews, analyze arguments, and fact-check her work. A Rutgers philosophy professor suggested she should be fired. The criticism raises a practical question for working writers: what's actually wrong with using these tools?
The arguments against McArdle's approach apply equally to technologies writers already use without controversy. Spell-check and Google searches do the same work: finding grammatical errors and locating information. No one objects to those tools. Why should a platform that combines them be treated differently?
The Hallucination Problem Has a Simple Fix
Critics point to AI's tendency to generate false information. But this is an argument against relying solely on AI, not against using it at all. When a researcher cites a fake news website found through Google, we don't condemn search engines; we fault the researcher for not verifying the source.
Writers familiar with large language models know how to catch hallucinations: ask the AI for its source. It will typically admit when it has fabricated something. The responsibility to verify remains with the writer, regardless of the tool.
Where the Real Objection Lies
Few people defend using an LLM to write an entire column, then publishing it under your name without disclosure. That's the actual ethical line. If your publication has rules against the practice, follow them. But the issue isn't the tool; it's transparency.
Consider a thought experiment: a young policy writer produces brilliant, factually accurate columns with no logical errors. Then he reveals that every piece started as a ChatGPT draft shaped by his prompts. Should his work suddenly become worthless? Should he be excluded from public discourse?
The answer hinges on how we judge writing. Some writers produce a column in an hour. Others need days. Many couldn't write a publishable piece no matter how much time they had. Some have valuable ideas but lack strong writing skills.
We don't object when someone uses reading glasses to read research papers. We judge the final product: whether the information is accurate, interesting, and logically sound. Writing ability shouldn't be a gatekeeping mechanism for ideas worth hearing.
AI as an Equalizer
Established writers already use ghostwriters and research assistants. Wealthier authors have structural advantages. AI can level that playing field for those without resources or natural writing talent.
The deeper concerns about AI are legitimate. We can't outsource thinking to these tools any more than we should blindly accept the first Google search result. But efficiency gains have historically been positive, as long as standards remain high.
The standard that matters: Did the writer verify facts? Are the arguments coherent? Is the information useful? Judge the work by those criteria, not by whether an AI touched it.
A more rational public discourse depends on including voices with valuable ideas, regardless of their writing speed or natural ability. Disclosure matters. Standards matter. The tool itself does not.