CTV op-ed sparks AI authorship allegation - and a bigger ethics test for writers
A CTV News opinion piece critical of Conservative leader Pierre Poilievre set off a flare-up on X after a TV host alleged the op-ed showed signs of AI authorship. The journalist who wrote it, Sharan Kaur, denied using AI and pushed back on the accusations.
Critics flagged the op-ed's rhythm and structure: recurring transitions, heavy use of em dashes, mirrored sentence lengths, and neat rhetorical contrasts. An AI detector app reportedly showed "100% confidence" the piece was AI-assisted, though such tools are known to misfire.
The op-ed opened with: "Canadian politics has never been for the faint of heart... But what's happening inside the Conservative Party under Pierre Poilievre isn't normal turbulence. It's institutional decay disguised as discipline." A later line - "this isn't how you build a credible opposition. It's how you run a cult" - also drew scrutiny.
In her response on X, Kaur rejected the allegation, calling her critics "racist," and pointed to a different AI platform to question the detection app's reliability. For now, the claims remain unproven.
Why writers should pay attention
Whether or not AI was involved, this story taps into a real pressure point for writers: trust. Readers now scan for AI tells. Editors face new due-diligence demands. And detection tools can label human prose as synthetic - especially when the writing leans on tidy symmetry and formulaic rhetoric.
If you write or edit for a living, your process and your receipts matter more than ever.
What critics cited as "AI tells"
- Recurring transitional phrasing across paragraphs
- Overuse of em dashes
- Rhetorical devices like antithesis and parallel constructions
- Symmetrical sentence lengths and tidy paragraph cadence
- Incomplete syllogisms and formulaic contrasts
None of these prove AI. They're common in human opinion writing. But stack enough of them together, and suspicion grows - especially online.
The limits of AI detectors
Detection tools are fallible. They can misread a writer's natural style, penalize non-native English phrasing, and lose accuracy when text is even lightly edited. Treat them as a signal, not a verdict.
For deeper context, see Stanford HAI's research on detector bias and the SPJ Code of Ethics on transparency and accountability.
Context: funding, trust, and public scrutiny
Bell Media, CTV's parent company, reportedly received millions in public subsidies via the Canadian Journalism Collective. When taxpayer money is involved, readers expect clear sourcing, original reporting, and straight answers on tool usage. That's the real issue surfaced by this spat.
Practical guardrails for writers
- Be clear with your editor about tool use. A one-line disclosure policy saves headaches later.
- Vary cadence. Mix sentence lengths. Break symmetry on purpose. Add lived detail that a model wouldn't know.
- Cite specifics: dates, documents, named sources, links. Rhetoric without receipts invites doubt.
- Keep drafts and notes. If questioned, you can show the work and the evolution of the piece.
- Run a human edit pass focused on voice, texture, and reporting adds. Think: scenes, quotes, numbers.
- If your newsroom uses detectors, pair them with human review. Tools can flag; humans decide.
Practical guardrails for editors
- Adopt a simple policy: where AI is allowed, where it isn't, and what must be disclosed.
- Audit high-stakes op-eds for sourcing and specificity, not just style tells.
- When challenged publicly, respond with process: how drafts are handled, what's checked, and by whom.
If you use AI, use it well
- Keep it to brainstorming, outlines, and early passes. Own the final prose.
- Rewrite for voice. Add reporting. Insert concrete details from your notes.
- Fact-check every claim. Verify quotes. Cross-check numbers.
- Note tool use when appropriate. Transparency builds trust and ends speculation fast.
A craft note on rhetoric
AI often writes in neat contrasts and mirrored rhythms. You can avoid that look by breaking patterns, inserting specific scenes, and letting the occasional imperfect sentence stand. Real voice has edges.
Bottom line
The current allegation is unresolved. The bigger takeaway for working writers: make your process defensible, your sourcing specific, and your voice unmistakably human. That's how you stay out of the crossfire and keep readers with you.
Further learning
If you're integrating AI into your writing workflow and want structured guidance that won't flatten your voice, explore resources on AI tools for copywriting.