Discovering Cognitive Strategies with Tiny Recurrent Neural Networks
Understanding how animals and humans learn from experience to make adaptive decisions remains a central question in neuroscience and psychology. Traditional normative models such as Bayesian inference and reinforcement learning offer useful frameworks but often fall short in capturing the nuances of real biological behavior. To close that gap, researchers repeatedly adjust these models by hand, a process that can introduce bias and obscure interpretation.
A recent study introduces a fresh approach using tiny artificial neural networks (ANNs) to shed light on the actual decision-making processes of individuals—whether their choices are optimal or not. These small-scale neural networks strike a balance between simplicity and power, enabling detailed insights into the behavior patterns that standard models tend to overlook.
From Assumptions to Actual Behavior
Rather than presuming how the brain should learn to optimize decisions, researchers trained compact recurrent neural networks to learn from behavioral data directly. This method functions like an investigative tool, revealing how decisions are made in practice across humans and animals.
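The core idea can be sketched in code. The following is a minimal illustration, not the authors' implementation: a recurrent network with only a couple of hidden units takes each trial's previous choice and reward as input and outputs probabilities for the next choice. The dimensions, weights, and input encoding here are assumptions chosen for clarity; in the study such networks are fitted to real behavioral data rather than randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: the study's networks are tiny (a handful of units);
# d = 2 hidden units is an illustrative assumption.
d = 2          # hidden units
n_actions = 2  # e.g., a two-armed bandit task

# Random weights stand in for a network trained on behavioral data.
W_h = rng.normal(0, 0.5, (d, d))              # recurrent weights
W_x = rng.normal(0, 0.5, (d, n_actions + 1))  # input: one-hot choice + reward
W_o = rng.normal(0, 0.5, (n_actions, d))      # readout to choice logits

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step(h, choice, reward):
    """One trial: update the hidden state from (previous choice, reward)
    and return the predicted next-choice probabilities."""
    x = np.zeros(n_actions + 1)
    x[choice] = 1.0
    x[n_actions] = reward
    h = np.tanh(W_h @ h + W_x @ x)
    return h, softmax(W_o @ h)

# Roll the network over a short synthetic choice/reward history.
h = np.zeros(d)
trials = [(0, 1.0), (0, 0.0), (1, 1.0), (1, 1.0)]
for choice, reward in trials:
    h, p = step(h, choice, reward)
print(p)  # probabilities over the two arms for the next choice
```

Because the hidden state has so few dimensions, its trajectory across trials can be inspected directly, which is what makes these models interpretable in a way large networks are not.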
Marcelo Mattar, an assistant professor in psychology, explains that these tiny networks are small enough to be interpretable but still capable of capturing complex behaviors. This approach uncovers decision-making strategies that have escaped scientific notice for decades.
Advantages Over Classical and Large AI Models
Small neural networks outperform classical cognitive models in predicting animal choices because they accommodate suboptimal behaviors instead of assuming perfect rationality. In controlled laboratory tasks, their predictive accuracy matches that of much larger neural networks used in commercial AI applications.
Ji-An Li, a doctoral student involved in the research, highlights that the compact size of these networks allows the use of mathematical tools to interpret the mechanisms behind individual choices—a task considerably harder with large-scale AI models.
Marcus Benna, an assistant professor of neurobiology, adds that while large AI networks excel at prediction, they often lack transparency. Training simpler AI models on animal decision data and analyzing them with physics-inspired methods provides clearer, more understandable explanations of the strategies involved.
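One way such a physics-inspired analysis can work, sketched here under assumed rather than fitted parameters: for a one-unit network, the hidden-state update after each trial type defines a one-dimensional dynamical system, and its zero crossings are fixed points the state is attracted to or repelled from. The weights and condition labels below are purely illustrative.

```python
import numpy as np

# Hypothetical 1-unit network; these weights are illustrative, not fitted.
w_rec = 0.9
w_in = {"A,rewarded": 0.8, "A,unrewarded": -0.3,
        "B,rewarded": -0.8, "B,unrewarded": 0.3}

def delta_h(h, cond):
    """Change in the hidden state after one trial of the given type."""
    return np.tanh(w_rec * h + w_in[cond]) - h

# Evaluate the update over a grid of states: the sign of delta_h shows
# which way the state is pushed, and sign changes mark fixed points.
grid = np.linspace(-1, 1, 201)
for cond in w_in:
    dh = delta_h(grid, cond)
    fp = grid[:-1][np.diff(np.sign(dh)) != 0]  # crude fixed-point locator
    print(cond, "fixed points near:", np.round(fp, 2))
```

Reading off where the state settles under repeated rewarded or unrewarded trials turns an opaque fitted model into an explicit description of the strategy it has learned.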
Implications for Science and Beyond
The study’s model successfully captured decision-making patterns in humans, non-human primates, and laboratory rats. Importantly, it predicted suboptimal choices, reflecting real-world decision behavior better than traditional models focused on optimality.
Moreover, the model revealed individual differences in decision strategies, emphasizing that each subject may use distinct approaches. This insight parallels how recognizing individual physical differences has transformed medicine and suggests new directions for mental health and cognitive research.
Funding and Access to the Study
- National Science Foundation (grants CNS-1730158, ACI-1540112, ACI-1541349, OAC-1826967, OAC-2112167, CNS-2100237, CNS-2120019)
- Kavli Institute for Brain and Mind
- University of California Office of the President
- UC San Diego’s California Institute for Telecommunications and Information Technology/Qualcomm Institute
The full paper, “Discovering cognitive strategies with tiny recurrent neural networks”, is published in the journal Nature and is open access.