DeepMind CEO argues AI's highest purpose is scientific discovery, not consumer products
Demis Hassabis, the CEO of Google DeepMind, has outlined two distinct paths for artificial intelligence in a recent interview: one focused on scientific problems and human health, the other pulled toward the competitive race for more powerful systems. He argues the first path deserves priority.
The distinction matters for researchers and scientists who work with AI tools. Hassabis frames AI not as a replacement for human judgment, but as a system that can reveal patterns invisible to researchers and accelerate work that would otherwise take decades.
AlphaFold as proof of concept
Hassabis pointed to protein structure prediction as the clearest example. For decades, biologists spent years and significant funding measuring protein structures in the lab. Understanding these structures is fundamental to drug design and disease research.
AlphaFold solved the problem at a scale and speed that only AI could achieve. More tellingly, DeepMind released the results publicly rather than commercializing them. This choice reflected a core belief: if AI can accelerate science, locking it behind paywalls defeats the purpose.
The decision elevated AlphaFold beyond a technical demonstration. It established AI as a discovery tool, not just an information tool.
The tension between science and competition
DeepMind's original vision resembled CERN: a long-term institution where researchers studied intelligence itself methodically and systematically. Hassabis said the company wanted to ask foundational questions: What is intelligence? How do systems learn, reason, and generalize?
That vision collided with reality after ChatGPT's release. The AI industry shifted toward rapid product cycles, infrastructure competition, and capital races. Hassabis acknowledged the costs: AI is increasingly seen as a short-term commercial sprint rather than a project that will reshape science and civilization.
DeepMind now operates in a tug-of-war between scientific idealism and competitive logic. Neither has won completely.
What Hassabis means by "creativity" in AI
Hassabis distinguished between systems that encode existing human knowledge and systems that learn and explore independently. Early AI systems could only perform narrow tasks. What DeepMind builds is different.
Move 37 in AlphaGo's match against Lee Sedol became significant not because it won the game, but because it showed AI exploring paths humans had never considered. That ability, finding solutions from first principles through self-play and optimization, is what Hassabis calls true creativity.
AlphaZero took this further by learning chess from scratch without human records. The value wasn't winning games. It was demonstrating that systems could find solutions humans might not pre-design.
Applied to science, this matters enormously. Materials science, drug design, and chip design all involve vast search spaces where human intuition fails. AI systems that can explore and propose new paths could rewrite entire fields.
AGI as an autonomous agent, not a smarter chatbot
When Hassabis discusses AGI, he moves past the "smarter ChatGPT" framing common in industry discussions. He focuses on whether AI can become an action-oriented system that plans, executes, and interacts with the real world continuously.
This distinction shapes how he views risk. He separates dual-use risks, where the same technology enables both good and harm, from risks inherent to autonomous systems. As AI becomes more agentic, security problems change. The question shifts from "will it say something wrong?" to "will it take unexpected actions in long-chain tasks?" and "will it deviate from human intentions?"
This is why Hassabis repeatedly emphasizes that guardrails, evaluation systems, and international cooperation must advance alongside capabilities.
Reframing what AI is worth doing
Consumer-grade AI (chatting, image generation, writing summaries) shapes public perception. These applications have real value and commercial importance. But they risk reducing AI to a "more powerful digital assistant."
Hassabis argues the actual potential is broader. AI could make possible scientific advances that would otherwise take ten or twenty years. It could help researchers find patterns, verify hypotheses, and move closer to answers faster in fields like protein science, drug discovery, materials, energy, and computing.
This reflects the fundamental difference between DeepMind and pure product companies. Product companies compete for user attention. DeepMind tries to prove AI is a new scientific method, not just new software.
For researchers evaluating where to apply AI, this framing suggests the highest-value work may not be automating existing tasks. It may be accelerating research that defines the next decade of scientific progress.