AI-Created Digital Twins Reveal How Brains with Math Disabilities Learn—and Offer New Paths for Help
Stanford researchers used AI and fMRI to create digital twins that mimic brain activity in children who struggle with math. They found that hyper-excitability disrupts learning, but that sufficient practice can close the gap.

Digital Twins Illuminate Brain Activity in Students with Math Struggles
Researchers at Stanford University have employed artificial intelligence to analyze brain scans of children solving math problems, providing new insights into the neurological basis of math learning disabilities. By combining AI with functional magnetic resonance imaging (fMRI), they developed “digital twins”—personalized deep neural network models—that replicate how individual students approach math tasks.
These models not only reproduce the accuracy of students’ answers but also simulate their brain activity patterns, revealing where cognitive processes diverge in children facing math challenges. The approach offers a novel window into conditions that affect up to 20% of American students.
Creating Personalized Neural Models
The study involved 45 children aged 7 to 9, including 21 with diagnosed math learning disabilities. While the children solved basic addition and subtraction problems, fMRI recorded their brain activity. The researchers then trained AI models to act as digital twins, mirroring both the behavioral responses and neural patterns of these students.
Key to tuning these models was adjusting a neurological parameter called neural excitability—effectively, how strongly neurons fire. This parameter is difficult to measure directly in humans without invasive methods, which makes the AI approach particularly valuable.
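To make the idea of tuning excitability concrete, here is a minimal sketch of the logic, not the study's actual method: a toy model with a single `gain` parameter standing in for excitability is fitted by searching for the gain whose simulated activity best matches "observed" activity. All names, the network form, and the grid-search procedure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def responses(stimuli, W, gain):
    """Toy one-layer model; `gain` scales pre-activations,
    standing in for neural excitability (illustrative only)."""
    return np.tanh(gain * (stimuli @ W))

stimuli = rng.normal(size=(40, 10))   # toy encodings of math problems
W = rng.normal(size=(10, 20))

true_gain = 2.5                        # value we pretend the fMRI data reflects
observed = responses(stimuli, W, true_gain)

# Grid search: pick the gain whose simulated activity best matches
# the observed activity (mean squared error).
grid = np.linspace(0.5, 4.0, 36)
errors = [np.mean((responses(stimuli, W, g) - observed) ** 2) for g in grid]
fitted = grid[int(np.argmin(errors))]
```

Because the parameter is never measured directly, it is inferred from how well each candidate setting reproduces the data, which is the general strategy the digital-twin approach enables.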
Unexpected Findings on Neural Activity
Contrary to previous assumptions that underactivity in certain brain regions might cause learning difficulties, the study found that hyper-excitability—or excessive neural firing—is a core factor in math learning disabilities. The children who struggled showed heightened activity in brain areas critical for numerical processing.
This overactivity appears to cause overlapping neural representations of distinct math problems, leading to confusion and slower learning. In other words, the brain’s signals become muddled, making it harder for the student to identify the correct answer amid excessive neural “noise.”
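One way to see why excess firing muddles the brain's codes is with a toy simulation (an illustrative assumption, not the study's model): ReLU units receive a baseline drive standing in for excitability, and as that drive grows, every unit fires for every problem, so the hidden codes for distinct problems become more similar.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_code(problems, W, excitability):
    """ReLU units with a baseline drive; higher `excitability`
    pushes every unit toward firing (illustrative assumption)."""
    return np.maximum(problems @ W + excitability, 0.0)

problems = np.eye(8)                   # toy codes for 8 arithmetic facts
W = rng.normal(size=(8, 32))

def mean_overlap(excitability):
    """Average pairwise cosine similarity between the codes
    for distinct problems -- higher means more overlap."""
    H = hidden_code(problems, W, excitability)
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    sim = H @ H.T
    return sim[~np.eye(8, dtype=bool)].mean()

low = mean_overlap(0.0)    # modest overlap: problems stay distinct
high = mean_overlap(3.0)   # strong overlap: codes converge
```

In this sketch the high-excitability codes are dominated by the shared baseline activity, so the distinguishing signal for each problem is a small fraction of the total: the "noise" described above.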
Implications for Education and Intervention
Simulations with the digital twins indicated that students with math disabilities need nearly twice as much training to reach accuracy comparable to their peers. Encouragingly, with sufficient practice the models eventually reached equivalent performance, suggesting that targeted remediation can be effective if tailored appropriately.
Such models could help educators design personalized learning plans, identifying instructional methods best suited to each student's neurological profile. Additionally, testing interventions in silico could speed up the development and refinement of strategies before classroom implementation.
Next Steps in Research
The team is working on extending these models to simulate more complex aspects of mathematical reasoning, aiming to deepen understanding of how different brain mechanisms contribute to learning. While the findings are promising, further refinement of the models is necessary to increase their predictive power and practical utility.
Ultimately, this research lays the groundwork for more effective educational programs that address the specific brain-level challenges students face, offering hope for improved support for children struggling with math.
- Vinod Menon is Professor of Psychiatry and Behavioral Sciences at Stanford and directs the Stanford Cognitive and Systems Neuroscience Laboratory.
- The study was funded in part by the Stanford Institute for Human-Centered AI.
- Co-authors include Stanford postdoctoral scholar Anthony Strock and social science research scholar Percy Mistry.