The Silicon Illusion: Why AI Cannot Substitute for Scientific Understanding
The claim that artificial intelligence is transforming science is widespread. From Nobel Prizes to major biotech deals, AI is often portrayed as the key to new scientific breakthroughs. Yet, this narrative overlooks a deeper problem: AI, as currently applied, often hides more than it clarifies and can worsen the very issues it promises to solve.
In their critique, Emily M. Bender and Alex Hanna warn that AI's role in science risks causing real harm. Even positive reviews acknowledge AI's "promising capabilities" but also express caution. This ambivalence casts scientists in the role of sorcerer's apprentices—pushing forward with powerful tools they barely control, echoing historical anxieties about unchecked technology.
History reminds us that technology does not always work as intended. For example, a Vietnam War-era program called Igloo White, which used electronic sensors and computers to guide bombing campaigns, turned out to be ineffective and easily fooled. It became a digital facade rather than a functional system. Today’s AI discussions similarly mix spectacle with uncertainty, making it hard to separate genuine progress from hype.
Scholars Lisa Messeri and M.J. Crockett have warned that the widespread use of AI in science may lead to producing more data but less genuine insight. This threatens the reliability of scientific knowledge and layers intellectual risks on top of existing moral concerns.
Science in Crisis
AI’s rise in science coincides with a broader intellectual crisis in research. Pharmaceutical development, for example, has struggled for decades with stagnation and failed promises. The Human Genome Project was expected to unlock many new medicines, but the results fell short. Industry leaders have admitted that reorganizations and process improvements haven’t increased the output of new medicines.
In the early 2000s, computer-generated data was often dismissed as less valid than lab-based experiments. The late biologist Carl Woese criticized the push to treat biology as an engineering discipline, arguing that it shifts science away from the pursuit of true knowledge toward merely changing the living world. He traced the problem back to the mechanistic view of science dominant since the 19th century and called for a return to a more holistic approach.
Economist Philip Mirowski also pointed to the influence of neoliberalism in hollowing out scientific rigor and creativity. According to him, the obsession with technological solutions has not delivered results that benefit society and has even affected basic science in universities.
Proteins Are Not Silicon Chips
It is within this troubled context that AI's latest surge in science has appeared. A 2023 OECD report suggests AI might help because science is becoming more difficult. The AI system AlphaFold 2, honored with the 2024 Nobel Prize in Chemistry, has predicted the structures of millions of proteins. Yet experimentally verifying more than a small fraction of these predictions is impractical, because determining a protein's structure in the lab remains complex and time-consuming.
Philosopher Daria Zakharova has argued that AlphaFold's predictions count as scientific knowledge because scientists trust them. But since few of those predictions have been experimentally confirmed, their reliability remains uncertain. Importantly, AlphaFold models proteins through silicon-based computation, which raises a fundamental question about whether such a different material substrate can accurately represent biological molecules.
Some studies already highlight discrepancies: the predicted structures for serpins, a vital class of proteins, did not match experimental models. Such findings call for more research into AI's reliability as a predictive tool, and for a pause to ask why limited scientific resources are being invested in exploring connections between fundamentally different materials without strong evidence.
Proteins and cells are dynamic and complex, and they resist simple models. The familiar high-school analogy of proteins as static locks and keys oversimplifies reality. Historical protein models were built on physical data; AI-generated structures, by contrast, are computational constructs. The real challenge is to identify and question the assumptions that mislead research.
Crucial laboratory techniques such as Western blotting, protein purification, and molecular biology remain essential for probing proteins. These methods rely on mechanical and chemical technologies rather than AI hype.
Mansions of Straw
William G. Kaelin Jr., a 2019 Nobel Prize winner in Physiology or Medicine, cautioned that biomedical research must build “houses of brick” rather than “mansions of straw.” He highlighted a troubling trend where scientific papers aim for broad claims rather than validated conclusions. This could easily apply to AI-driven studies.
The UK provides a case in point. The UK Biobank, which holds genetic and health data from roughly half a million volunteers, recently partnered with pharmaceutical firms and Alphabet's Calico for AI studies. While publicized as a breakthrough for understanding the human body, the dataset is not representative of the population: it skews toward healthier and less deprived groups. This raises serious doubts about the reliability of conclusions drawn from such data when combined with opaque AI methods.
Kaelin’s advice is clear: scientific claims should be judged on whether they are likely to be correct, not just on their potential importance.
Efforts like Isabelle Stengers's “slow science” propose slowing research down to scrutinize evidence more carefully and to encourage a public-service ethos among researchers. So far, such reforms have been cautious and limited. Investors favor AI because it scales with capital investment and offers the illusion of progress without demanding systemic change.
The spectacle of AI suggests quick, cheap breakthroughs, but nature’s secrets do not yield easily. Knowledge without genuine insight is no knowledge at all.