Five minutes of training boosts detection of AI-generated faces, study shows

Five minutes of training helped people spot StyleGAN3 faces, boosting accuracy from ~31-41% to 51-64%. Pair it with detectors to strengthen KYC and fraud checks.

Published on: Dec 26, 2025

Five minutes of training makes AI-generated faces easier to spot

A new study in Royal Society Open Science shows a simple intervention works: five minutes of targeted training boosted people's accuracy at detecting AI-generated faces created with StyleGAN3.

Across 664 participants, even those with exceptional face-recognition abilities struggled to spot the synthetic faces without training. After a short briefing, both high-ability "super-recognizers" and typical observers improved, meaning quick, practical training can move the needle for real teams.

The gist

  • Untrained detection of AI faces: super-recognizers 41%, typical observers 31% accuracy.
  • After a brief training: super-recognizers 64%, typical observers 51% accuracy.
  • Stimuli came from StyleGAN3, a strong generator at the time, making the task meaningfully hard.

What the training actually teaches your eye

  • Teeth and mouths: misalignment, odd spacing, unnatural gum lines.
  • Ears and earrings: shape mismatches, asymmetry, earrings that don't mirror correctly.
  • Hair and hairlines: inconsistent strands, "melted" edges, abrupt transitions.
  • Global coherence: tiny discontinuities where elements meet (glasses, jawlines, collars).

Participants were shown example images with these rendering artifacts highlighted. That quick calibration improved signal detection for both groups.

Why this matters for research and security teams

AI faces are being used to seed fake profiles, skirt KYC checks, and support document fraud. Human-only screening won't eliminate risk, but a short, repeatable training block can lift baseline performance, which is useful when combined with automated detectors and secondary identity checks.

Key takeaways for implementation

  • Run a 5-10 minute micro-training before reviews: show artifact exemplars with brief feedback.
  • Pair trained reviewers with an automated detector; escalate only when both flag an image as synthetic or when detector confidence is low (see the sketch after this list).
  • Track accuracy and false positives over time; add monthly refreshers to maintain the effect.
  • Update exemplars as generators change. What worked for StyleGAN3 may shift with newer models.
  • Use super-recognizers strategically (triage, second pass) to maximize their added value.
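
As one way to wire up the detector-pairing and metric-tracking bullets above, here is a minimal sketch in Python. The function names, the 0-to-1 detector score scale, and the thresholds are illustrative assumptions; neither the study nor this article specifies an implementation.

```python
"""Illustrative escalation rule for pairing a trained reviewer with an
automated detector. All names and thresholds are hypothetical; the study
only measured human accuracy before and after training."""

from dataclasses import dataclass


@dataclass
class ReviewMetrics:
    """Running tallies so a team can track accuracy and false positives."""
    reviewed: int = 0
    escalated: int = 0
    confirmed_fakes: int = 0
    false_positives: int = 0


def should_escalate(reviewer_flagged: bool,
                    detector_score: float,
                    low_conf_band: tuple = (0.35, 0.65)) -> bool:
    """Escalate to secondary identity checks when the reviewer and detector
    both flag the image as synthetic, or when the detector score falls in
    its low-confidence band (thresholds here are made up for illustration)."""
    detector_flagged = detector_score >= low_conf_band[1]
    detector_uncertain = low_conf_band[0] <= detector_score < low_conf_band[1]
    return (reviewer_flagged and detector_flagged) or detector_uncertain


def record_outcome(metrics: ReviewMetrics, escalated: bool,
                   ground_truth_fake: bool) -> None:
    """Update tallies once a secondary check resolves the case."""
    metrics.reviewed += 1
    if escalated:
        metrics.escalated += 1
        if ground_truth_fake:
            metrics.confirmed_fakes += 1
        else:
            metrics.false_positives += 1


# Example: reviewer flags the face, detector scores it 0.72 -> escalate.
metrics = ReviewMetrics()
escalate = should_escalate(reviewer_flagged=True, detector_score=0.72)
record_outcome(metrics, escalate, ground_truth_fake=True)
print(escalate, metrics)
```

The low-confidence band is deliberately wide in this sketch; a real deployment would tune it against the detector's own calibration data and the false-positive budget tracked in ReviewMetrics.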

Limitations to keep in mind

  • Even after training, accuracy was 51-64%. This is helpful, but not a standalone gate.
  • Generalization to other models and image types needs verification; effects over time remain to be tested.

The researchers note that the security implications are real and growing. The practical upside is that low-friction training can be deployed quickly and measured, giving teams a defensible, data-backed improvement without heavy tooling changes.

Source and further reading

The study appears in Royal Society Open Science.

If you're building team capability

For structured AI literacy and detection upskilling across roles, see curated options here: AI courses by job.

