AI and NiceWebRL Propel Naturalistic Computational Cognitive Science from Theory to Real-World Experiments
Kempner Institute researchers use AI and cognitive theory to study thinking in realistic tasks. They debut a theoretical framework, a software tool called NiceWebRL, and a study of how people generalize to new goals.

Leveraging modern AI to catalyze a new era in naturalistic computational cognitive science
Scientists at the Kempner Institute are using artificial intelligence to expand how we study human cognition in realistic settings. Kempner Research Fellow Wilka Carvalho and collaborators propose a framework that blends modern AI with cognitive theory, giving researchers practical tools to test how people think and decide in complex, everyday contexts.
The team's work arrives as a coherent package: a theory for "naturalistic computational cognitive science," a software tool to run richer experiments online, and an empirical test of how humans generalize to new goals in large, object-rich environments.
What is "naturalistic computational cognitive science"?
It is an approach that preserves theoretical rigor while increasing ecological validity. The idea is to study cognition in settings that better reflect the variability, ambiguity, and multi-goal structure of daily life, without losing the clarity that controlled experiments demand.
Modern AI models and interactive environments make this feasible. They let researchers build tasks that mirror real choices, compare human behavior with model predictions, and iterate quickly on hypotheses.
Three contributions at a glance
- Framework: A theoretical case for naturalistic computational cognitive science, uniting classic cognitive theory with modern AI tools and scalable experiments.
- Software (NiceWebRL): A tool that helps cognitive scientists design and analyze experiments in increasingly realistic online environments.
- Empirical test: A study using NiceWebRL to evaluate a new theory of how humans generalize to new goals in large worlds with many possible goals and objects.
Why this matters for researchers
- Ecological validity with control: Study decision-making in rich contexts while preserving clear hypotheses and measurable outcomes.
- Theory meets engineering: Compare human behavior with AI agents to stress-test theories of planning, generalization, and credit assignment.
- Scalable experimentation: Run online studies that capture diverse behavior and support robust inference.
- Generalization as a first-class target: Move beyond toy tasks to evaluate how people adapt to new goals and objects.
How NiceWebRL can fit your workflow
- Build interactive, browser-based tasks with multiple goals and objects.
- Instrument experiments to capture action sequences, state transitions, and outcome metrics.
- Compare human participants with AI baselines to test specific theoretical predictions.
- Iterate quickly on task parameters, feedback, and reward structure to probe generalization.
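As a sketch of the instrumentation step above, the following shows one way a trial record for a browser-based, multi-goal task could be structured. This is a minimal illustration in plain Python; the class, field names, and room labels are hypothetical and are not NiceWebRL's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrialRecord:
    """One participant trial: action sequence, state transitions, and outcome."""
    participant_id: str
    goal: str
    actions: List[str] = field(default_factory=list)
    transitions: List[Tuple[str, str]] = field(default_factory=list)
    reached_goal: bool = False

    def log_step(self, state: str, action: str, next_state: str) -> None:
        # Record each keypress/choice alongside the resulting state change.
        self.actions.append(action)
        self.transitions.append((state, next_state))

    def finish(self, final_state: str) -> None:
        # Outcome metric: did the participant end at the assigned goal?
        self.reached_goal = (final_state == self.goal)

# Usage: record a short trial in a toy environment of named rooms.
trial = TrialRecord(participant_id="p01", goal="kitchen")
trial.log_step("hall", "left", "library")
trial.log_step("library", "down", "kitchen")
trial.finish("kitchen")
print(trial.reached_goal, len(trial.actions))  # True 2
```

The same record format works for AI baselines, which makes human-model comparisons a matter of analyzing two pools of identical trial logs.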
Practical applications you can run next
- Test whether participants transfer strategies across related tasks with different goal configurations.
- Quantify planning depth by perturbing rewards, objects, or constraints mid-task.
- Benchmark model-human gaps on exploration, habit formation, and goal switching.
- Pre-register predictions, run at scale online, and analyze behavior with reproducible pipelines.
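To make the "benchmark model-human gaps" idea concrete, here is a small, self-contained sketch of one reproducible analysis: a bootstrap confidence interval on the difference in success rates between human participants and an AI baseline. The function name and the toy data are illustrative, not taken from the study.

```python
import random

def success_rate_gap(human, model, n_boot=2000, seed=0):
    """Bootstrap a 95% CI for the human-minus-model gap in success rate."""
    rng = random.Random(seed)  # fixed seed for a reproducible pipeline
    gaps = []
    for _ in range(n_boot):
        h = [rng.choice(human) for _ in human]  # resample with replacement
        m = [rng.choice(model) for _ in model]
        gaps.append(sum(h) / len(h) - sum(m) / len(m))
    gaps.sort()
    point = sum(human) / len(human) - sum(model) / len(model)
    return point, (gaps[int(0.025 * n_boot)], gaps[int(0.975 * n_boot)])

# Usage: 1 = reached the novel goal, 0 = did not (toy data).
humans = [1, 1, 0, 1, 1, 0, 1, 1]
agents = [1, 0, 0, 1, 0, 0, 1, 0]
gap, (lo, hi) = success_rate_gap(humans, agents)
print(round(gap, 3))  # 0.375
```

With real data, the same pattern extends to any pre-registered behavioral metric (planning depth, switch cost, exploration rate) by swapping in the relevant per-trial statistic.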
Learn more and get started
Explore the work and research community at the Kempner Institute.