AI study clarifies why super-recognisers are so good at identifying faces
Super-recognisers help find suspects and connect identities most people miss. A new study explains their edge: it's less about looking everywhere and more about sampling the right visual information at the right time.
Using deep neural networks (DNNs) and eye-tracking, researchers showed that gaze strategy alone can boost face-matching performance. In short, what you choose to look at matters as much as what your brain does with it.
How the study worked
The team analysed eye-tracking data from 37 super-recognisers and 68 typical recognisers. Participants viewed full faces as well as faces in which only the region around the current fixation was visible.
Researchers reconstructed the "retinal" image at each fixation (what the eyes actually took in) and fed it into DNNs trained for face recognition. The AI then compared this partial input against either the same face or a different one and produced a similarity score.
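That pipeline can be sketched in a few lines. Everything below is illustrative: the Gaussian aperture stands in for the study's retinal model, and the fixed random projection stands in for its face-recognition DNN, neither of which is specified here.

```python
import numpy as np

def fixation_mask(image, cx, cy, sigma=10.0):
    """Gaussian aperture centred on a fixation point: visibility falls
    off with distance, a crude stand-in for a 'retinal' sample."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    window = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return image * window

def embed(image, dim=128, seed=0):
    """Placeholder 'DNN' embedding: a fixed random projection of the
    flattened image (NOT the study's face-recognition network)."""
    rng = np.random.default_rng(seed)  # fixed seed -> same projection every call
    flat = image.reshape(-1).astype(float)
    proj = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return proj @ flat

def similarity(img_a, img_b):
    """Cosine similarity between embeddings, used as the match score."""
    ea, eb = embed(img_a), embed(img_b)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))

# A fixation near the eye region yields a partial input that can be
# scored against a full gallery face:
face = np.random.default_rng(1).random((64, 64))   # toy grayscale "face"
sample = fixation_mask(face, cx=32, cy=24)
score = similarity(sample, face)                   # higher = better match
```

Swapping `embed` for a real face-recognition network and `fixation_mask` for a proper foveation model recovers the study's logic: score same-identity versus different-identity pairs from partial, fixation-driven inputs.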
Key findings
- Performance rose as more of the face became visible, as expected across all groups.
- At every visibility level, AI driven by super-recognisers' retinal samples scored higher than AI driven by typical recognisers' samples.
- Crucially, even when the total amount of sampled facial area was matched, super-recogniser sampling still led to better AI performance.
- Their advantage wasn't about quantity. It was about selecting regions that carry more identity information per unit of sampled area.
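The matched-visibility logic can be mimicked with a toy "importance map" over the face: two fixation sets covering exactly the same area can capture very different amounts of identity signal. The map and fixation coordinates below are invented for illustration.

```python
import numpy as np

def sampled_information(importance, fixations, radius=5):
    """Sum the importance inside discs around each fixation (overlap
    counted once), and return the total covered area in pixels."""
    h, w = importance.shape
    ys, xs = np.mgrid[0:h, 0:w]
    covered = np.zeros((h, w), dtype=bool)
    for cy, cx in fixations:
        covered |= (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return float(importance[covered].sum()), int(covered.sum())

# Hypothetical map: identity cues concentrated around the "eyes".
ys, xs = np.mgrid[0:64, 0:64]
importance = np.exp(-((xs - 20) ** 2 + (ys - 20) ** 2) / (2 * 8.0 ** 2))

# One fixation each, hence identical covered area -- but the "expert"
# fixation lands on the informative region and the other does not.
expert_info, expert_area = sampled_information(importance, [(20, 20)])
typical_info, typical_area = sampled_information(importance, [(50, 50)])
```

Equal area, unequal yield: this is the quantity-versus-quality distinction the study makes.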
What this means for science and engineering
The results point to active information sampling as a major source of individual differences in face recognition. It's not purely downstream neural processing; front-end visual selection plays a big role.
For researchers and engineers, this suggests two practical levers: model architecture and data acquisition. Better sampling strategies (human or simulated) can lift performance without changing the core recogniser.
Practical takeaways for your next study or system
- Instrument your experiments with eye-tracking and analyse information value at each fixation. Compare participants' sampling maps to super-recogniser-like patterns.
- In silico, replicate retinal sampling by masking inputs according to fixation windows. Score recognition under matched visibility to isolate sampling quality from quantity.
- Stress-test algorithms under partial visibility, occlusions, and off-axis views. Evaluate whether attention mechanisms discover high-value regions similar to human experts.
- Be cautious about hard-coding "universal" regions. The most useful cues can shift with identity, pose, lighting, and occlusion, just as expert humans adapt their gaze.
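The stress-testing suggestion above reduces to a small harness: occlude increasing fractions of the probe image and track how the match score degrades. The score here is plain normalised correlation, a placeholder for whatever recogniser you actually evaluate.

```python
import numpy as np

def occlude(image, frac, rng):
    """Zero out a random square patch covering roughly `frac` of the image."""
    h, w = image.shape
    side = max(1, int(np.sqrt(frac * h * w)))
    y = int(rng.integers(0, h - side + 1))
    x = int(rng.integers(0, w - side + 1))
    out = image.copy()
    out[y:y + side, x:x + side] = 0.0
    return out

def ncc(a, b):
    """Normalised correlation as a stand-in match score."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def stress_test(match_score, probe, gallery, fracs=(0.1, 0.3, 0.5),
                trials=20, seed=0):
    """Mean match score under increasing occlusion of the probe."""
    rng = np.random.default_rng(seed)
    return {f: float(np.mean([match_score(occlude(probe, f, rng), gallery)
                              for _ in range(trials)]))
            for f in fracs}

face = np.random.default_rng(2).random((64, 64))
curve = stress_test(ncc, face, face)  # scores fall as occlusion grows
```

The same loop works for off-axis views or fixation-window masks: substitute the perturbation in `occlude` and the scorer in `match_score`.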
Limits and open questions
These results come from still images and controlled lab conditions. We need to see whether the same sampling advantage holds with video, crowd footage, head motion, and real-world clutter.
Can typical viewers be trained into better sampling strategies? It's unclear. Some evidence points to a genetic and heritable basis for super-recognition, which could cap trainability.
Why it matters beyond face recognition
Sampling-first thinking generalises. In any perception task, performance hinges on which information you collect before processing begins. Better inputs shrink the problem for your model or your brain.
If you build human-AI systems, consider coupling recognition models with gaze policies or attention priors that privilege high-yield regions. This can act like free signal-to-noise gain.
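One minimal gaze policy of this kind is greedy selection with inhibition of return over a priority map. The map here is a hypothetical attention prior with two hotspots, not anything taken from the paper.

```python
import numpy as np

def greedy_gaze_policy(priority, n_fixations=3, radius=6):
    """Repeatedly pick the current maximum of a priority map, then
    suppress a disc around it (inhibition of return) so the next
    fixation lands on a different high-yield region."""
    p = priority.astype(float).copy()
    h, w = p.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fixations = []
    for _ in range(n_fixations):
        cy, cx = np.unravel_index(np.argmax(p), p.shape)
        fixations.append((int(cy), int(cx)))
        p[(xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2] = -np.inf
    return fixations

# Two hypothetical hotspots (e.g. left and right eye regions);
# the policy visits the stronger one first, then the other.
ys, xs = np.mgrid[0:64, 0:64]
priority = (np.exp(-((xs - 20) ** 2 + (ys - 25) ** 2) / 50.0)
            + 0.8 * np.exp(-((xs - 44) ** 2 + (ys - 25) ** 2) / 50.0))
fixes = greedy_gaze_policy(priority, n_fixations=2)
```

Feeding such fixations into a masking front end (as in the study's retinal reconstruction) is one way to couple a gaze policy to a recognition model.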
Further reading
For context and related research, see the journal Proceedings of the Royal Society B.
Try the test
The researchers have released a free assessment to help identify super-recognisers (the UNSW Face Test). Useful as a screening tool, though it's not a guarantee of real-world performance.