UC Berkeley psychologist Alison Gopnik argues children outlearn AI through exploration and care

Children outperform AI at open-ended discovery: a 4-year-old figures out a novel toy faster than a college student. AI excels at pattern prediction but can't explore without a goal, says UC Berkeley psychologist Alison Gopnik.

Published on: Apr 18, 2026

Why a 4-Year-Old Outsmarts AI at Learning

Large language models excel at predicting patterns in existing data. They fail at what children do naturally: exploring the world without a predetermined goal.

That gap reveals a fundamental truth about intelligence itself. It's not a single capacity. It's a collection of distinct cognitive modes distributed across different life stages, according to UC Berkeley developmental psychologist Alison Gopnik.

Gopnik has spent decades watching children solve problems that stump adults. When given a novel toy with no obvious function, young children outperform college students at figuring out how it works. Adults test the most likely possibilities and get stuck. Children play, experiment, and discover.

The Stone Soup Problem

Current AI systems don't work like independent agents with their own intelligence. They work like stone soup.

The old story goes: travelers arrive in a village claiming they can make soup from stones. They need water, then suggest an onion would help, then carrots, then meat. Each villager contributes something until a full meal emerges. No single ingredient created the soup. The combination did.

Large language models operate the same way. Tech companies combine data from billions of texts, images, and books. They add reinforcement learning from human feedback. They incorporate prompt engineering: humans figuring out exactly how to ask questions to get useful answers. The result appears intelligent, but the intelligence comes from combining human knowledge and labor, not from the system itself.

This matters because it reframes what we're actually building. These systems provide real value. They're useful tools. But they're not the independent agents that folklore and popular culture suggest.

What Children Reveal About Learning

Children are optimized for exploration. They test hypotheses through play. They seek what researchers call "empowerment": the ability to control outcomes through their actions.

A 1-year-old given a xylophone doesn't just bang it randomly. He tries the mallet, then the stick end, then his hand. He tests different bars. Through this play, he learns the causal relationship between his actions and the sounds produced-something that didn't exist in human evolutionary history and must be learned fresh.
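In AI research, empowerment is typically formalized as the information an agent's actions carry about the states they produce. A toy sketch of that idea, assuming a deterministic world where each action maps to one outcome (the `xylophone` and `beeper` examples here are illustrative, not from Gopnik's work):

```python
from math import log2

def empowerment(transitions):
    """Toy empowerment: log2 of the number of distinct outcomes reachable
    with one action. (Full formulations use the channel capacity from
    actions to states; this deterministic count is the simplest case.)

    transitions: dict mapping each action to the state it produces.
    """
    return log2(len(set(transitions.values())))

# A xylophone where each bar makes its own note is highly "empowering"...
xylophone = {"bar1": "C", "bar2": "D", "bar3": "E", "bar4": "F"}
# ...while a toy that makes the same beep whatever you press is not.
beeper = {"button1": "beep", "button2": "beep", "button3": "beep"}

print(empowerment(xylophone))  # 2.0 (four distinguishable outcomes)
print(empowerment(beeper))     # 0.0 (actions make no difference)
```

An agent that seeks high-empowerment states is drawn, like the 1-year-old, toward objects whose behavior it can control and distinguish.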

Adults are optimized for exploitation. They have a goal and execute it efficiently. This creates a trade-off. Time spent exploring is time not spent producing.

Evolution solved this through life history. Childhood provides a protected period for exploration. Adults handle exploitation-finding food, securing resources, reproducing. Grandparents provide care while transmitting cultural knowledge to the next generation.
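The exploration-exploitation trade-off Gopnik describes is the same one studied in reinforcement learning. A minimal sketch using a classic epsilon-greedy bandit (the arm payoffs and parameters here are made up for illustration):

```python
import random

def epsilon_greedy_bandit(true_means, epsilon, steps, seed=0):
    """Balance exploration (pull a random arm) against exploitation
    (pull the best-known arm).

    A high epsilon is "childlike" (lots of trying things out); epsilon
    near zero is "adultlike" (stick with what already seems to work).
    true_means: hidden average payoff of each arm.
    Returns the average reward per step.
    """
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n        # pulls per arm
    estimates = [0.0] * n   # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                            # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])   # exploit
        reward = true_means[arm] + rng.gauss(0, 0.1)          # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

# Pure exploitation tends to lock onto the first decent arm it finds;
# spending some time exploring usually discovers the best one.
print(epsilon_greedy_bandit([0.2, 0.5, 0.9], epsilon=0.0, steps=2000))
print(epsilon_greedy_bandit([0.2, 0.5, 0.9], epsilon=0.1, steps=2000))
```

Time spent on the random arm is time not spent on the best-known arm, which is exactly the cost of exploration the paragraph above describes.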

The Missing Ingredient in AI

Gopnik identifies three distinct types of intelligence: exploration (learning about the world), exploitation (acting effectively toward goals), and care (helping other agents achieve their goals).

Current AI systems handle exploitation well. They can predict the next word in a sequence based on patterns in training data. They cannot explore in the way children do-testing novel hypotheses without external reward signals.
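Next-word prediction of this kind can be illustrated with a tiny bigram model, a toy stand-in for what large language models do at vastly greater scale (the corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies: the simplest 'predict the next word'."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training data."""
    if word not in counts:
        return None  # never seen: no pattern to exploit, nothing to explore
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the child explores the toy and the child explores the world")
print(predict_next(model, "the"))    # 'child' (most common continuation)
print(predict_next(model, "robot"))  # None
```

The model can only replay patterns already in its data; faced with a word it has never seen, it has no mechanism for generating and testing a new hypothesis, which is the gap Gopnik points to.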

More importantly, they lack the "intelligence of care." Human caregivers protect children while they explore. A mother's presence signals safety. A child will venture toward an unpleasant stimulus if the mother is nearby, but not without her. This allows exploration to happen without catastrophic risk.

Building AI systems with human-level intelligence would require designing systems that develop over time, change through experience, and receive ongoing care from other intelligent agents. That's fundamentally different from training a model once and deploying it.

What This Means for AI Development

The field has invested heavily in the assumption that a single measure of "general intelligence" exists. Gopnik's research suggests otherwise.

Intelligence is domain-specific and life-stage-specific. A 35-year-old is good at exploitation. A 4-year-old is good at exploration. A 70-year-old is good at transmission and care. None of these represents peak intelligence; they represent different intelligences suited to different problems.

For AI to match human capabilities, researchers would need to move beyond predicting patterns in static data. They'd need systems that actively experiment, learn from interaction, and develop over time. That's a different engineering problem entirely.

The good news: we already know how to build systems like that. We've been doing it for millennia. We call it raising children.

