AI at a Second Inflection Point: Rafael Gómez-Bombarelli's Mission to Accelerate Science

MIT's Rafael Gómez-Bombarelli says science has hit a second AI inflection point: language meets structure and synthesis. His lab blends physics and ML to find materials faster.

Categorized in: AI News, Science and Research
Published on: Feb 13, 2026

AI's second inflection point in science: Rafael Gómez-Bombarelli's playbook

For more than a decade, MIT Associate Professor Rafael Gómez-Bombarelli has applied AI to create new materials. Batteries, catalysts, plastics, OLEDs - his group has used physics-based simulations plus machine learning to find candidates worth building in the lab.

He believes we've hit a second inflection point. The first came around 2015 with representation learning, generative models, and high-throughput data. The next is here: mixing language with structure and synthesis data to form general scientific intelligence that can reason across modalities.

From experiments to simulations

Gómez-Bombarelli grew up in Spain, studied chemistry at the University of Salamanca, and won a national Chemistry Olympiad in 2001. His PhD began at the bench studying DNA-damaging chemicals, then shifted midstream to simulations - the moment he realized software could expand what a scientist can test.

After a postdoc in Scotland on quantum effects in biology, he joined Alán Aspuru-Guzik's group at Harvard in 2014. He was among the first to apply neural networks to molecules (2015) and generative AI to chemistry (2016), while automating simulations to run at high throughput.

That work led to a materials computation startup that later pivoted to OLEDs. Building products was hard and clarifying. When a role opened at MIT in 2018, he applied - and found a wider canvas for the same mission.

Inside the MIT lab

Nine years in, his group is fully computational. They study how composition, structure, and reactivity drive performance, and they build tools that merge deep learning with physics-based modeling. The aim is simple: generate candidates, simulate fast, simulate better, and pass the best ideas to experimental partners.

High-throughput simulations feed models. Models suggest new structures and synthesis paths. Each cycle adds data and improves the next. The group partners closely with industry and programs like MIT's Industrial Liaison Program to keep targets practical and deployment-focused.
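The simulate-learn-propose cycle described above can be sketched as a simple active-learning loop. This is a toy illustration under stated assumptions, not the group's actual code: `expensive_simulation` stands in for a high-fidelity physics calculation, and the nearest-neighbor surrogate stands in for a trained deep-learning model.

```python
import random

random.seed(0)

def expensive_simulation(x: float) -> float:
    """Stand-in for a high-fidelity physics simulation (e.g. a DFT run)."""
    return -(x - 0.7) ** 2  # property to maximize; true optimum at x = 0.7

def train_surrogate(data: list[tuple[float, float]]):
    """Fit a trivial nearest-neighbor surrogate on (candidate, property) pairs."""
    def predict(x: float) -> float:
        nearest = min(data, key=lambda d: abs(d[0] - x))
        return nearest[1]
    return predict

# Seed the loop with a few simulated candidates.
dataset = [(x, expensive_simulation(x)) for x in (0.1, 0.5, 0.9)]

for cycle in range(3):
    surrogate = train_surrogate(dataset)
    # Generate many cheap candidates and score them with the surrogate...
    candidates = [random.random() for _ in range(200)]
    best = max(candidates, key=surrogate)
    # ...then spend expensive simulation time only on the most promising one.
    dataset.append((best, expensive_simulation(best)))

best_x, best_y = max(dataset, key=lambda d: d[1])
print(f"best candidate so far: x = {best_x:.2f}")
```

Each pass through the loop adds a new labeled point, so the surrogate's next suggestions are better informed - the data flywheel in miniature.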

Why this moment matters

What was speculative a decade ago has become normal. Major labs at companies like Meta, Microsoft, and DeepMind now run large-scale physics-informed models. Agencies are moving too; see the U.S. Department of Energy's initiatives around AI for science workstreams (energy.gov/ai).

The key insight: humans think and write in language, and language models can now connect literature, structure, and synthesis. We've seen scaling work for simulations and for language. The next test is scaling the full scientific loop.

Companies and real-world push

Beyond academia, Gómez-Bombarelli has co-founded multiple startups and advised teams in drug discovery, robotics, and materials. His latest, Lila Sciences, is building a scientific superintelligence platform for the life sciences, chemicals, and materials industries - with the goal of making research workflows far more productive.

How to prepare your lab or R&D team

  • Connect modalities: link text (papers, ELNs), structure (molecules, crystals), and process data (synthesis, parameters) in one pipeline.
  • Use physics-informed ML: start with fast, coarse filters; reserve high-fidelity simulations for finalists.
  • Automate the boring parts: remove manual steps in simulation, data curation, and job submission.
  • Pair with experimentalists early: let computation triage ideas; let experiments ground truth and feed the loop.
  • Apply LLMs where they shine: literature mapping, hypothesis drafting, and synthesis planning - always with human review.
  • Build a data flywheel: version models, data, and workflows; log every run so results are reproducible and compound over time.
  • Aim at deployment: prioritize problems with clear specs and measurable constraints from industry partners.
  • Invest in culture: reward sharing, not hoarding; publish tools that others can use; make collaboration the default.
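The first bullet - connecting text, structure, and process data in one pipeline - can be as simple as a shared record type that every downstream step consumes. A minimal sketch follows; the field names and values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialsRecord:
    material_id: str
    smiles: str        # structure: molecular graph encoded as a SMILES string
    source_doi: str    # text: the paper or ELN entry it came from
    synthesis: dict = field(default_factory=dict)   # process: route and parameters
    properties: dict = field(default_factory=dict)  # measured or simulated outcomes

# Filtering, simulation, and model training all consume the same record,
# so provenance is never lost between modalities.
record = MaterialsRecord(
    material_id="cand-0001",
    smiles="c1ccccc1",  # benzene, as a placeholder structure
    source_doi="10.0000/example",
    synthesis={"temperature_C": 180, "time_h": 12},
)
record.properties["band_gap_eV"] = 5.1  # e.g. filled in after simulation
print(record.material_id, record.properties)
```

Keeping the literature reference, the structure, and the synthesis parameters on one object is what lets later steps - coarse filters, high-fidelity runs, LLM-assisted planning - trace any result back to its source.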

Team and culture

Gómez-Bombarelli runs a group of about 25 graduate students and postdocs. The ethos is positive-sum: diverse people, different goals, one system that lets everyone do their best work. He once avoided the professor path; now he's the one nudging others to apply - even after the deadline.

Bottom line

General scientific intelligence won't arrive as a single model. It will show up as tight loops that connect language, structures, and synthesis to deliver results faster. Build those loops now and your lab will feel the inflection point, not watch it from the sidelines.

If you're upskilling your team on LLMs, automation, and analytics for research workflows, explore these curated paths by role: Complete AI Training - Courses by Job.
