Radiologists' eye movements help Welsh team build smarter medical AI tools
Researchers at Cardiff University and the University Hospital of Wales used radiologists' eye movements to guide AI analysis of chest X-rays. When paired with existing models, this attention signal improved diagnostic performance by up to 1.5% and pushed machine behaviour closer to expert judgement.
The study, published in IEEE Transactions on Neural Networks and Learning Systems, aims to support clinical decision making, build clinician trust, and speed up adoption where it helps most.
Why this matters for your service
Wales faces a 32% shortfall in consultant radiologists; the UK is at 29%, per the 2024 Royal College of Radiologists census. Demand for imaging keeps rising, and delays stack up.
Small, reliable gains add up across large volumes. If AI can focus where radiologists look, it can become a more useful assistant rather than a black box.
What the team built
The researchers captured more than 100,000 eye movements from 13 radiologists reviewing under 200 chest X-rays. They used this to create the largest visual saliency dataset for chest radiographs to date and trained a model called CXRSalNet to predict the most clinically relevant regions.
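The paper's data pipeline isn't reproduced here, but a common way gaze recordings become training targets for a saliency model is to rasterise fixation points into smoothed heatmaps. The sketch below illustrates that step; the image size, fixation coordinates, and blur width are illustrative assumptions, not study values.

```python
# Minimal sketch: turning fixation points into a saliency heatmap.
# This mirrors common practice in eye-tracking datasets; the Cardiff
# pipeline and CXRSalNet internals are not public here, so the
# resolution, coordinates, and blur width below are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 512, 512                                    # assumed image resolution
fixations = [(120, 340), (130, 352), (400, 210)]   # hypothetical (row, col) fixations

heatmap = np.zeros((H, W), dtype=np.float32)
for r, c in fixations:
    heatmap[r, c] += 1.0                           # accumulate fixation counts

# Smooth with a Gaussian roughly matching the foveal span; sigma is a
# tunable assumption, not a value reported by the study.
heatmap = gaussian_filter(heatmap, sigma=25)
heatmap /= heatmap.max()                           # normalise to [0, 1] as a training target
```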
As Professor Hantao Liu notes, current systems struggle to show how they reach a decision. Eye-tracking lets AI learn where experts focus and why those regions matter.
How this helps in practice
Dr Richard White explains the gap: computers excel at spotting shapes and textures like lung nodules, but "knowing where to look" is core to radiology training. Standard review areas exist for a reason.
Combining human attention patterns with image features helps AI read chest radiographs more like a trained radiologist. That builds trust and can trim misses tied to poor localization.
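The article doesn't say how the saliency signal is wired into existing models, so the following is a hedged sketch of one simple, widely used pattern: let the predicted saliency map gate a classifier's feature map, so regions experts attend to weigh more in the decision. The class name, residual gating, and single-channel saliency input are illustrative choices, not the study's published architecture.

```python
# Illustrative attention-weighted fusion, NOT the study's method:
# a saliency map multiplicatively gates CNN features before pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyGatedClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_channels: int, n_classes: int):
        super().__init__()
        self.backbone = backbone          # any CNN returning (B, C, h, w) features
        self.head = nn.Linear(feat_channels, n_classes)

    def forward(self, image: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # saliency: (B, 1, H, W), values in [0, 1]
        feats = self.backbone(image)      # (B, C, h, w)
        # Resize the saliency map to the feature grid, then gate features
        # so expert-attended regions contribute more to the prediction.
        sal = F.interpolate(saliency, size=feats.shape[-2:],
                            mode="bilinear", align_corners=False)
        gated = feats * (1.0 + sal)       # residual gating keeps the baseline signal
        pooled = gated.mean(dim=(2, 3))   # global average pool -> (B, C)
        return self.head(pooled)
```

Residual gating (`1.0 + sal`) is a deliberate choice in this sketch: it boosts attended regions without zeroing out the rest of the image, so incidental findings outside the gaze map aren't discarded.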
Early results
- Up to 1.5% improvement in diagnostic performance when the saliency model is combined with other AI systems.
- Better alignment with expert focus areas on chest X-rays, improving interpretability.
Clinical impact to consider
- Decision support: highlight likely regions of interest before reporting; reduce search time on complex films.
- Quality and safety: more consistent coverage of standard review areas; fewer overlooked findings.
- Training: feedback loops for juniors, comparing their gaze patterns to expert saliency maps (see the metric sketch after this list).
- Workflow: triage and prioritization informed by attention-weighted features.
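One concrete way the training feedback above could be scored is with standard saliency-evaluation metrics such as the Pearson correlation coefficient (CC) between two heatmaps. CC is a common benchmark metric in the saliency literature; using it for trainee feedback is our extrapolation, not something the study reports.

```python
# Hypothetical trainee-feedback metric: Pearson correlation between a
# junior's gaze heatmap and an expert saliency map (same shape).
import numpy as np

def saliency_cc(junior: np.ndarray, expert: np.ndarray) -> float:
    """Pearson correlation coefficient between two same-shape heatmaps."""
    j = (junior - junior.mean()) / (junior.std() + 1e-8)
    e = (expert - expert.mean()) / (expert.std() + 1e-8)
    return float((j * e).mean())

# A score near 1.0 means the trainee searched where the expert model
# expects attention; low scores flag missed standard review areas.
```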
What to ask vendors and research partners
- Does the model provide saliency maps tied to radiologist attention, not just post-hoc heatmaps?
- How much gain does attention guidance add over your baseline AUC/sensitivity/specificity?
- Can the saliency output integrate into our PACS/RIS with minimal clicks?
- Is there evidence across demographics, devices, and acquisition settings?
- What guardrails exist for poor-quality or atypical studies?
What's next
The team is extending the approach to CT and MRI and exploring cancer detection, where subtle early cues matter and are easy to miss. They also see value in education, simulation, and real-time decision support to help radiologists deliver faster, more accurate reports.
For context on workforce pressures, see the Royal College of Radiologists 2024 census.
Bottom line
Attention-aware AI won't replace clinical judgement. It gives you a second set of trained eyes, grounded in how experts actually read films, so you can move faster with fewer blind spots.
If your team is building AI literacy for imaging and clinical decision support, explore practical courses by job role at Complete AI Training.