Fast Rules, Slow Practice: Shared Learning Dynamics in Humans and AI
Brown researchers show AI gains fast in-context skills after extensive practice, echoing human memory systems. Findings guide training for assistants that adapt without forgetting.

Researchers uncover similarities between human and AI learning
New work from Brown University shows that flexible, in-context learning in AI emerges from the same training process that strengthens slower, incremental learning. The pattern mirrors how human working memory and long-term memory interact. The study was published in the Proceedings of the National Academy of Sciences.
Core idea: after enough incremental experience, an AI system starts to generalize quickly in new situations, much like people do after many exposures to related tasks.
Why this matters for scientists and AI builders
- Links two learning modes in both humans and AI: flexible, rapid "in-context" learning and slower, cumulative "incremental" learning.
- Explains variability in human learning behavior across tasks without assuming separate mechanisms for every context.
- Guides training strategies for AI assistants that must adapt on the fly while retaining what matters.
How the team tested the idea
The team trained an AI with meta-learning, which teaches systems how to learn across many tasks. After exposure to thousands of related problems, the model began to show in-context learning that wasn't present early on.
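The paper's exact architecture and task set aren't reproduced here, but the episodic structure of meta-learning can be sketched in a few lines. The toy task family (random linear rules), the small network, and all hyperparameters below are illustrative assumptions, not the study's setup; the point is only that repeated incremental updates across many related tasks can yield a model that later solves a brand-new task from its context alone.

```python
# Minimal meta-learning sketch (assumed toy setup, not the paper's model).
# Each episode is a fresh task y = w * x; the model sees support pairs plus
# a query input and must infer the task's rule "in context".
import torch
import torch.nn as nn

N_SUPPORT = 5          # context examples shown per episode
EPISODES = 20_000      # many related tasks, the "incremental practice"

model = nn.Sequential(
    nn.Linear(2 * N_SUPPORT + 1, 64),  # support (x, y) pairs + query x
    nn.ReLU(),
    nn.Linear(64, 1),                  # prediction for the query
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_episode():
    """Draw a new task (random slope w), its support set, and a query."""
    w = torch.randn(1)
    xs = torch.randn(N_SUPPORT)
    ys = w * xs
    xq = torch.randn(1)
    return torch.cat([xs, ys, xq]), w * xq

for step in range(EPISODES):
    context, yq = sample_episode()
    pred = model(context)
    loss = (pred - yq).pow(2).mean()   # slow, error-driven weight update
    opt.zero_grad()
    loss.backward()
    opt.step()

# After enough episodes, the frozen weights let the model adapt to an
# unseen w purely from the context, with no further gradient updates.
```

In this sketch, in-context flexibility is not built in; it emerges (if at all) only after the slow training loop has run over many tasks, which is the qualitative pattern the study reports.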
In one test adapted from human studies, the AI learned lists of colors and animals. After being challenged with 12,000 variations, it could correctly handle new combinations it hadn't seen together before (e.g., a green giraffe). Flexible behavior appeared after sustained incremental practice.
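A recombination test of this kind can be framed as a held-out split over combinations of familiar parts. The sketch below assumes a trained model exposed through a hypothetical `predict(color, animal)` callable and uses illustrative stimuli and an illustrative split, not the study's actual materials.

```python
# Sketch of a recombination-style evaluation (assumed interface and data).
from itertools import product

colors = ["red", "green", "blue", "yellow"]
animals = ["giraffe", "zebra", "lion", "panda"]

all_pairs = list(product(colors, animals))
# Combinations withheld from training: each word was seen, but never together.
held_out = {("green", "giraffe"), ("blue", "panda")}
train_pairs = [p for p in all_pairs if p not in held_out]

def evaluate_recombination(predict):
    """Accuracy on pairings the model never saw together during training."""
    correct = sum(predict(c, a) == f"{c} {a}" for c, a in held_out)
    return correct / len(held_out)
```

High accuracy on the held-out pairings is the marker of flexible, compositional behavior; accuracy only on `train_pairs` would indicate memorization of seen combinations.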
Key findings
- Interplay of memory systems: The interaction between quick, context-based behavior and slower accumulation of knowledge in AI resembles human working memory and long-term memory.
- Experience before flexibility: Fast generalization tends to emerge only after enough incremental learning has occurred.
- Retention-flexibility trade-off: Harder tasks that trigger errors lead to stronger retention, while error-free, in-context performance boosts flexibility but engages long-term storage less.
As one researcher put it, these results help explain why a person can look like a rule-based learner in some settings and an incremental learner in others. Another noted that by the hundredth board game, you can pick up new rules quickly, even for games you've never seen before.
Implications for cognitive science
The findings connect long-studied human phenomena (error-driven updates to long-term memory and the agility of working memory) within a single computational training regime. The pattern held across multiple tasks, bringing together aspects of human learning that had not been grouped this way.
Design notes for building more intuitive AI assistants
- Use curricula that first strengthen incremental learning, then probe for in-context generalization.
- Inject calibrated difficulty to promote retention; track error patterns that trigger durable updates (a minimal scheduling sketch follows this list).
- Evaluate recombination ability (e.g., novel pairings of known concepts) as a marker of flexible learning.
- Balance flexibility with stability to avoid forgetting critical skills in sensitive domains such as mental health.
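One way to operationalize calibrated difficulty is to keep the recent error rate near a target band, raising difficulty when errors become too rare and lowering it when they become too frequent. The scheduler below is a hedged sketch: the class, the target error rate, and the `difficulty` knob are all hypothetical choices, not prescriptions from the study.

```python
# Minimal sketch of error-tracked difficulty scheduling (assumed setup).
from collections import deque

TARGET_ERROR = 0.2   # assumed error band intended to drive durable retention
WINDOW = 200         # recent trials used to estimate the error rate

class DifficultyScheduler:
    def __init__(self, difficulty=0.5, step=0.05):
        self.difficulty = difficulty          # 0.0 = easiest, 1.0 = hardest
        self.step = step
        self.errors = deque(maxlen=WINDOW)

    def record(self, was_error: bool):
        """Log the outcome of one trial."""
        self.errors.append(was_error)

    def update(self):
        """Nudge difficulty so the recent error rate tracks TARGET_ERROR."""
        if len(self.errors) < WINDOW:
            return self.difficulty
        rate = sum(self.errors) / len(self.errors)
        if rate < TARGET_ERROR:               # too easy: errors too rare
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif rate > TARGET_ERROR:             # too hard: errors too frequent
            self.difficulty = max(0.0, self.difficulty - self.step)
        return self.difficulty
```

In a curriculum, a scheduler like this would run during the incremental phase, with periodic in-context probes (such as the recombination test above) used to check whether flexibility has emerged without eroding retained skills.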
Who did the work
The study was led by researchers at Brown University with appointments spanning computer science and brain science, including laboratories directed by Michael Frank and Ellie Pavlick. It was supported by the Office of Naval Research and the National Institute of General Medical Sciences Centers of Biomedical Research Excellence.
Publication: Proceedings of the National Academy of Sciences (PNAS). Related institute: Brown's Carney Institute for Brain Science (carney.brown.edu).
For practitioners
If you're building AI assistants or evaluation pipelines, consider structured meta-learning, error-aware training, and recombination tests as part of your workflow.
Source: Brown University