"If you understand the risks, AI is an incredible tool"
Thomas Moerland studied medicine and mathematics in Leiden and has a lifelong fascination with intelligence. He brings that mix to life in his popular science book Van IQ naar AI (From IQ to AI). His core idea is simple: treat intelligence as a mechanism you can probe, test, and improve.
The brain still has a massive head start
Early AI tried to copy the brain. That stalled because human cognition runs at immense speed and scale, and the data demands were beyond reach.
So researchers leaned on math inspired by biology instead. Deep, layered networks emerged, and large-scale AI went from theory to practice. "This approach works," Moerland says, "but it pushed brains and math further apart. We underestimate how deeply connected they are."
AI can teach us about our own brains
We tend to label human intelligence as psychological and artificial intelligence as mathematical. That split feeds a gut reaction: it's just a computer, so it isn't real intelligence. Moerland argues for merging the two views.
We still cannot read every parameter in a modern model, but we do grasp the mechanisms that make learning possible. That gives us testbeds to ask sharper questions about adaptation in the brain. "How exactly does our brain adapt?" he asks. The right AI experiments can point to where to look.
And the brain keeps AI honest
Comparisons that dismiss AI as too slow, or crown it as superior, miss the starting conditions. Humans have millions of years of evolution as a head start. Machines, by contrast, can compute faster and process more data, but they lack that long pre-training.
The practical takeaway: AI progress needs human guidance. Our priors, values, and research questions shape what is worth building.
Mathematics and the human touch
Moerland's message is a call for rigor and responsibility. Pair mathematical clarity with human oversight. Communicate how the methods work and what can go wrong.
"If you understand that-and the risks-AI is an incredible tool." For scientists, that means clear model specifications, careful evaluation, and a bias toward mechanisms you can test.
Practical principles for research teams
- Use mathematical models to formalize hypotheses; use biological insights to choose what to test.
- Prototype adaptation with reinforcement learning to probe how goals, rewards, and constraints shape behavior (see the first sketch after this list).
- Focus on reliability: stress-test distribution shifts, measure calibration, and document failure modes (a calibration sketch follows the list as well).
- Treat compute as a constraint, not a goal. Partner with industry for scale; keep universities focused on core ideas.
- Make risks explicit: data bias, privacy leakage, mis-specification of objectives, and feedback loops in deployment.
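
To make the reinforcement-learning principle concrete, here is a minimal sketch of learning through reward and punishment, the paradigm Moerland works on. It runs tabular Q-learning on a hypothetical five-state corridor where only stepping off the right end pays a reward; the environment and hyperparameters are illustrative choices, not taken from the book or the article.

```python
import random

# Minimal sketch: tabular Q-learning on a hypothetical 5-state corridor.
# The agent starts at the left end; only stepping right from the last
# state pays a reward. Everything here is an illustrative toy setup.

N_STATES = 5                            # states 0..4
ACTIONS = (-1, +1)                      # step left / step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Environment: reward +1 only for stepping right from the last state."""
    if state == N_STATES - 1 and action == +1:
        return state, 1.0, True          # terminal: goal reached
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit current estimates, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward + (0.0 if done else GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# The learned greedy policy steps right toward the reward from every state.
print([greedy(s) for s in range(N_STATES)])
```

Changing the reward function, the discount, or the exploration rate changes the behavior that emerges, which is exactly the goals-rewards-constraints probing the principle describes.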
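For the reliability principle, here is a small sketch of one standard calibration measure, expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence with its actual accuracy. The data below is synthetic, standing in for a hypothetical overconfident model.

```python
import numpy as np

# Sketch of expected calibration error (ECE): the weighted average gap
# between stated confidence and observed accuracy, per confidence bin.

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap   # weight each bin by its share of samples
    return ece

# Synthetic example: a model whose true accuracy runs ten points below
# its stated confidence, so ECE should come out near 0.1.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=10_000)
correct = rng.random(10_000) < (conf - 0.1)
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")
```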
Research in Leiden
Moerland earned a PhD in computer science and works at the Leiden Institute of Advanced Computer Science (LIACS). His focus is reinforcement learning: learning through reward and punishment. At LIACS, the emphasis is on fundamental questions and new algorithms, while industry handles massive data centers and extreme compute budgets.
That split suits science. "As a knowledge institution, we're much better placed to ask the bigger and broader questions, even if they don't have immediate economic value."
Bottom line: Treat intelligence as a mechanism. Let brains inform algorithms and let algorithms probe brains. With clear goals and risk awareness, AI becomes a reliable instrument for science.