When AI Detection Tools Flag Human Writing
A communications professional submitted an op-ed to a major Canadian publication. The editor approved the topic. Then the piece was flagged by an AI detection tool and rejected, not because of the argument, but because the software suspected it was machine-generated.
The writer knew the accusation was wrong. She had written the piece collaboratively with her client during a video call, building it in real time: brainstorming, revising, cutting what didn't work. It was messy and human.
She's not alone. Others in communications report similar rejections. The pattern is consistent: editors use AI detection software to screen submissions, and well-written pieces sometimes fail those checks.
Why editors are screening for AI
The pressure is real. As generative AI tools become standard, publications face genuine demands to verify authenticity. Readers expect human-written content, and editors need to protect their credibility.
But the tools themselves are imperfect. They flag certain writing patterns (clean structure, specific punctuation, particular phrasing) as signals of AI generation. The problem: good human writing often displays those same patterns.
The collaborative writing problem
In agency and in-house communications work, writing is rarely solitary. Ideas come from brainstorms. Drafts get passed around. Multiple people reshape the piece. Each revision tightens the prose.
That process produces polished, structured writing. Which is exactly what AI detection tools are trained to flag.
Writers now face an uncomfortable choice: keep writing well and risk detection, or adjust their style to sound less finished. Some are considering dropping em dashes or changing their sentence structure to appear more "human."
The detection paradox
One of the main ways publishers check for AI-written content is to run it through AI detection software. That irony isn't lost on anyone working in this space.
There's no clear rule book. Editors make judgment calls without certainty. Writers second-guess whether their work is too polished. Both sides are operating in ambiguity.
What authenticity actually means now
The fundamentals of good communication haven't changed. Writers still need to be credible, offer a unique perspective, and give editors something they can publish with confidence.
But the definition of authenticity has become muddled. If collaborative human writing can be mistaken for machine output, how do you prove your work is genuine?
The answer isn't obvious. Generative AI tools are new. Detection tools are new. Most people are still figuring out how to use them responsibly, on both the generation side and the detection side.
For now, the industry is operating in a grey area. Understanding how these tools work and where they fail matters more than ever for PR and communications professionals. Consider exploring AI for PR & Communications to build clearer judgment about where these tools fit in your work.
The balance will come eventually. Until then, expect friction between the need for authenticity and the imperfect tools designed to verify it.