AI Detectors vs. Your Draft: What Pangram Gets Right, and What It Gets Wrong
Writers keep getting asked to "prove" a draft is human. That's why tools like Pangram exist. It markets itself as the AI detector that beats human experts, supports multiple languages, and plugs into Google Classroom and Chrome.
Behind the scenes, Pangram's model is trained on millions of human and AI examples. It compares your text with a "synthetic mirror" (an AI-generated twin of a human document), then makes a call only when it's confident. That's the pitch.
How Pangram works (quick take)
You get five free credits per day (roughly one credit per 500 words), with paid plans starting at $12.50/month. Paste or upload text, and Pangram scores it, highlights suspected AI passages, and explains why it thinks a section looks synthetic.
I ran three tests
1) Human-written article, with AI used only to sort quotes
Result: 13% AI.
What was flagged? Some of my most personal lines, plus the chunk with direct quotes. The core draft was written by me; I just used AI to organize quotes from a transcript. My take: the tool likely picked up on structure and transitions around quotes, not "plagiarism."
2) A fully AI draft, written "in my voice"
Result: 99.9% AI.
Accurate call, but the explanation raised an eyebrow. Words like "creativity," "story," and "narrative" were cited as heavy AI tells. Those are everyday writing words. This hints at a bigger issue: frequency patterns can feel persuasive while missing context.
3) A published article I wrote, then "refined" by AI
Result: 99.3% AI.
The tool flagged common phrases and lines that originated in my human draft. It treated the whole thing as AI-written because the text passed through an AI editor, even though large parts were mine. Hybrids confuse detectors.
What this means for writers
AI detectors are probability engines. They're decent at catching fully synthetic drafts. They struggle with blended work, quotes, and standardized phrasing. The risk isn't just false negatives; it's confident false positives without real context.
Use the score as a signal, not a verdict. If a client, editor, or educator is making high-stakes calls off a single tool and a percent badge, that's a problem.
When Pangram is useful
- Bulk screening: triage a stack of submissions fast.
- Second opinion: a nudge to reread a suspicious section.
- Education workflows: flag likely AI for human review, not automatic penalties.
When to be skeptical
- Quoted material and transcripts: structure around quotes can look "AI."
- AI-refined human work: light edits may get labeled as fully synthetic.
- Generic vocabulary: common writing words are not proof of anything.
A simple review workflow (for editors and clients)
- Ask for process: outline, notes, sources, and version history (Docs history helps).
- Run one detector for triage, then read the flagged sections yourself.
- Compare with known samples from the writer: voice, cadence, specificity.
- Request revisions to flagged areas and a quick rationale for choices.
- Set a clear AI-use policy: what's allowed (research, outline, grammar), what requires disclosure, what is off-limits.
If you write with AI, avoid false positives
- Inject specificity: dates, numbers, lived experience, and sources.
- Vary rhythm: mix short and long sentences, use contractions, cut filler.
- Replace generic abstractions with concrete nouns and verbs.
- Keep receipts: drafts, notes, and version history to prove authorship.
- Handle quotes carefully: preserve context and add your own analysis.
- After any AI "refine," do a human pass that adds fresh detail and nuance.
Pricing and limits
The free tier gives five credits per day (about 500 words per credit), and paid plans start at $12.50/month. Pricing can change, so check the current details before you commit.
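If you want to budget the free tier before committing, the credit math above is simple enough to sketch. This is a back-of-envelope estimate based only on the published numbers (5 free credits/day, roughly 500 words per credit); the function names here are illustrative, not part of any Pangram API.

```python
import math

# Published limits as of this review; these may change.
WORDS_PER_CREDIT = 500
FREE_CREDITS_PER_DAY = 5

def credits_needed(word_count: int) -> int:
    """Credits consumed by one document, rounding up to a whole credit."""
    return math.ceil(word_count / WORDS_PER_CREDIT)

def fits_free_tier(word_counts_today: list[int]) -> bool:
    """True if today's batch of documents stays within the free allowance."""
    return sum(credits_needed(w) for w in word_counts_today) <= FREE_CREDITS_PER_DAY

# A 1,200-word article costs 3 credits, so one fits in a day but two don't.
print(credits_needed(1200))          # 3
print(fits_free_tier([1200]))        # True
print(fits_free_tier([1200, 1200]))  # False
```

In other words, the free tier covers roughly 2,500 words per day; past that, you're into the paid plans.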
Should you try Pangram?
Yes, for screening and curiosity. No, for final judgments. The tool can spotlight likely AI, but it still leans on pattern recognition, which can miss intent and context. For high-stakes calls, combine detector output with editorial review, process evidence, and a clear policy.
Bottom line
Treat AI detection as a flashlight, not a hammer. If you write, focus on craft and proof of work. If you edit, use detectors to prioritize your attention, then let human judgment finish the job.
Want to build AI into your writing workflow without losing your voice? Explore curated, practical tools and training for writers at Complete AI Training.