AI predicts without understanding, and that may be a problem science cannot ignore

AI can predict protein structures and neural firing patterns accurately, but offers no underlying principles humans can reason from. A model that memorizes inputs without grasping structure can't generalize the way a real theory does.

Published on: Apr 27, 2026

AI Can Predict. But Can It Explain?

For most of science's history, prediction and understanding moved together. Alan Hodgkin and Andrew Huxley captured the action potential with four variables. Their equations predicted the waveform and explained it: sodium flowing in, potassium flowing out, the interplay between them generating a spike. You couldn't separate the two.
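That compactness is easy to appreciate: the whole model fits in a screenful of code. Below is a minimal Python sketch using the standard textbook squid-axon parameter values; the injected current, Euler integration, and run length are our choices, not anything from the original paper.

```python
import numpy as np

# Hodgkin-Huxley model: four variables (V, m, h, n) describe the spike.
# Standard squid-axon parameters, modern convention (rest near -65 mV).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4             # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_ext = 0.01, 50.0, 10.0                 # ms, ms, uA/cm^2 (our choices)
V, m, h, n = -65.0, 0.05, 0.6, 0.32             # start near resting state
trace = []
for _ in range(int(T / dt)):
    # Ionic currents: sodium flowing in, potassium flowing out, plus a leak.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4 * (V - E_K)
    I_L  = g_L  * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m  # membrane equation
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace.append(V)

print(f"peak membrane potential: {max(trace):.1f} mV")  # spikes overshoot ~+40 mV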

Artificial intelligence is pulling prediction and understanding apart.

AlphaFold predicts protein structures with extraordinary accuracy. But it offers no model a human can reason from, no moment of clarity. It replaces understanding rather than enabling it.

Neuroscience is following the same trajectory. Transformer models now predict neural firing patterns from brain recordings. Foundation models are ingesting large calcium imaging datasets. Scientific AI companies, from startups to projects inside Alphabet, Anthropic, and OpenAI, are racing to automate discovery from big data.

These tools may deliver answers without insight. They bypass the step where complex phenomena get distilled into simpler principles. We couldn't understand the brain before. Now we can't understand the model of the brain either.

Why compression matters

A theory is a short description that accounts for a much larger body of observations. The Hodgkin-Huxley model reduced a spike to four variables. The ring attractor model reduced head-direction tuning to a single equation. Because these theories are compressed, they fit in a human head. You can mentally simulate them. That capacity to mentally simulate is what produces understanding.
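The ring attractor is just as small. The sketch below is illustrative rather than taken from any specific paper; the connectivity, nonlinearity, and every parameter are our assumptions, chosen only to show the qualitative behavior: a bump of activity forms at a cued direction and then holds it, the way head-direction cells hold a heading.

```python
import numpy as np

# Illustrative ring attractor: N rate neurons with preferred directions on a
# circle, coupled by tuned excitation plus uniform inhibition. A bump of
# activity forms at a cued direction and persists after the cue is removed.
# All parameters are our choices, picked for the qualitative behavior.
N, tau, dt = 128, 10.0, 0.5                       # neurons; time constant, step (ms)
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -2.0, 8.0                                # inhibition, excitation strengths
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

r = np.zeros(N)
cue = np.maximum(np.cos(theta - np.pi), 0.0)      # transient input at 180 degrees
for step in range(2000):
    inp = cue if step < 200 else 0.0              # cue switched off after 100 ms
    drive = np.tanh(np.maximum(W @ r + inp, 0.0)) # rectified, saturating rate function
    r += (dt / tau) * (-r + drive)

print(f"bump persists near {np.degrees(theta[r.argmax()]):.0f} degrees")
```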

AI models have no such constraint. They can fit vastly more into their "heads," which means their internal models can be far less compressed and far less legible. AI doesn't need to understand.

This raises an uncomfortable question: If AI delivers accurate predictions, and those predictions lead to drugs that work or stimulation patterns that suppress seizures, does understanding still matter?

The funding numbers suggest it doesn't. The U.S. National Endowment for the Arts receives roughly $200 million annually. The National Institutes of Health and National Science Foundation combined receive over $50 billion. Society doesn't fund science at 250 times the level of arts because it finds understanding beautiful. If predictions arrive without understanding, the public will likely accept that.

A cautionary case

Recent research suggests we shouldn't surrender so quickly.

Researchers simulated tens of millions of planetary orbits from Newton's laws and trained a transformer on the sequences. The model predicted future positions with high accuracy. But when fine-tuned to infer the underlying gravitational force vectors, it produced nonsense. The implied laws of gravitation changed depending on which data subset researchers examined. The transformer had assembled a patchwork of heuristics accurate for every solar system in the training set, but it hadn't discovered the universal principle.
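To make the setup concrete, here is a minimal sketch of that kind of experiment. It is not the researchers' code; the integrator, units, initial conditions, and sequence format are all our assumptions. It simulates trajectories directly from Newton's law and frames them as next-step sequence prediction, the task the transformer was trained on.

```python
import numpy as np

# Sketch of the orbit-prediction setup: generate trajectories from Newton's
# law of gravitation, then slice them into next-step prediction examples.
GM = 1.0                                   # gravitational parameter, sun at origin

def simulate_orbit(pos, vel, dt=0.01, steps=2000):
    """Velocity-Verlet integration of a test body around a central mass."""
    def accel(p):
        return -GM * p / np.linalg.norm(p) ** 3
    traj = np.empty((steps, 2))
    a = accel(pos)
    for t in range(steps):
        pos = pos + vel * dt + 0.5 * a * dt**2
        a_new = accel(pos)
        vel = vel + 0.5 * (a + a_new) * dt
        a = a_new
        traj[t] = pos
    return traj

rng = np.random.default_rng(0)
orbits = [
    simulate_orbit(np.array([1.0 + 0.5 * rng.random(), 0.0]),
                   np.array([0.0, 0.8 + 0.3 * rng.random()]))
    for _ in range(100)                    # the study used tens of millions
]

# Next-token-style dataset: given a window of past positions, predict the next.
context = 32
X = np.stack([o[i:i + context] for o in orbits for i in range(0, len(o) - context, context)])
y = np.stack([o[i + context]  for o in orbits for i in range(0, len(o) - context, context)])
print(X.shape, y.shape)                    # a transformer would be trained on these pairs
```

Note the asymmetry: the simulator takes a few lines because it knows the law, while a model trained on its outputs can match the trajectories without ever recovering it.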

Without that principle, the model could predict points of light in the sky but never send a rocket to the moon.

The Ptolemaic astronomers faced the same problem. Their geocentric model predicted planetary positions with impressive precision for a thousand years by stacking epicycles. When Newton replaced it, predictive accuracy barely improved. What changed was compression: a single law explained every orbit, falling apples, and ocean tides.

A transformer trained on motor cortex recordings might predict held-out firing rates beautifully but fail to tell us what the circuit actually computes.

Understanding generalizes

David Hubel and Torsten Wiesel's discovery of oriented receptive fields in V1 didn't just describe neurons. It gave us feature detection hierarchies, a framework that generalized across sensory cortices and inspired the convolutional neural networks powering computer vision today.
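The link is concrete. A Gabor filter is the standard mathematical model of a V1 simple cell's oriented receptive field, and sliding it across an image is a convolution, the operation CNNs stack into hierarchies. The sketch below (sizes and parameters are illustrative) shows Hubel and Wiesel's orientation selectivity falling out of a fixed linear kernel.

```python
import numpy as np

# A Gabor filter models a V1 simple cell's oriented receptive field.
# A vertical edge excites the vertically tuned filter far more than the
# horizontal one. Filter size and parameters are illustrative.
def gabor(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to preferred orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                                  # vertical luminance edge
for name, theta in [("vertical", 0.0), ("horizontal", np.pi / 2)]:
    k = gabor(theta=theta)
    # valid-mode 2D convolution via explicit windows (no SciPy dependency)
    resp = max(
        abs(np.sum(img[i:i + 15, j:j + 15] * k))
        for i in range(32 - 15 + 1) for j in range(32 - 15 + 1)
    )
    print(f"{name:10s} filter, peak |response|: {resp:.2f}")
```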

Drift-diffusion models of decision-making started in psychophysics and ended up explaining single-neuron ramping activity in the lateral intraparietal area. That creative leap from one domain to another is what compression buys you. A model that has memorized input-output relationships without learning underlying structure will never make that leap.
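The drift-diffusion model itself is only a few lines: noisy evidence accumulates until it crosses a bound, and that one compressed description yields choices, reaction times, and ramping activity alike. The parameters below are illustrative.

```python
import numpy as np

# Drift-diffusion model: noisy evidence accumulates until it hits a bound.
# The bound crossed gives the choice; the crossing time gives the reaction
# time; the accumulating variable itself ramps like LIP firing rates.
rng = np.random.default_rng(1)
drift, noise, bound, dt = 0.5, 1.0, 1.0, 0.001   # illustrative parameters

def trial():
    x, t = 0.0, 0.0
    while abs(x) < bound:                          # accumulate to either bound
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x > 0), t                              # (choice, reaction time)

results = [trial() for _ in range(2000)]
accuracy = np.mean([c for c, _ in results])
mean_rt = np.mean([t for _, t in results])
print(f"accuracy {accuracy:.2f}, mean RT {mean_rt * 1000:.0f} ms")
# Analytic check: P(correct) = 1 / (1 + exp(-2*drift*bound/noise**2)) ~ 0.73
```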

The compression step, collapsing sprawling datasets into something portable and teachable, remains a human activity. AI models can predict. They have not yet learned to explain. The research on planetary orbits suggests that the drive toward understanding isn't vanity. Even in the age of large AI models, it may remain the most important job.


