Eli Lilly teams with NVIDIA to build a pharma-scale AI supercomputer
Eli Lilly is partnering with NVIDIA to stand up what it calls an "AI factory" - a supercomputer and software stack intended to run millions of experiments, test potential medicines, accelerate clinical development, and refine biomarkers from medical imaging. The company says this shift is about compressing timelines from years to months and getting new therapies to patients sooner.
"Lilly is shifting from using AI as a tool to embracing it as a scientific collaborator," said Thomas Fuchs, Senior VP and Chief AI Officer at Lilly. "By embedding intelligence into every layer of our workflows, we're opening the door to a new kind of enterprise: one that learns, adapts and improves with every data point."
"The AI industrial revolution will have its most profound impact on medicine, transforming how we understand biology," said Kimberly Powell, Vice President of Health Care at NVIDIA. "Modern AI factories are becoming the new instrument of science - enabling the shift from trial-and-error discovery to a more intentional design of medicines."
One expected benefit: fewer animal studies. "We are getting to the point where we don't actually need to do that [animal testing] anymore," said Patrick Smith, President of Drug Development at Certara.
Analysts estimate AI-first pipelines could cut development costs and timelines roughly in half. Today, bringing a new drug to market typically takes over a decade and around $2 billion. Financial terms of the partnership weren't disclosed. Lilly says hardware has started arriving at its Indianapolis data center, with the system targeted to go online by January.
What "AI factory" implies for IT and development teams
- Scaled compute: GPU clusters running many concurrent experiments across molecular design, simulation, and multimodal analysis (e.g., imaging, omics, clinical text).
- Data backbone: governed ingestion, curation, and versioning; reproducible datasets; strong lineage from raw to features to model outputs; audit-ready logs.
- Orchestrated ML pipelines: containerized training and inference, distributed scheduling, experiment tracking, registries, and CI/CD for models and data.
- Clinical-grade validation: bias testing, stability checks, statistical rigor, and documentation that stands up to GxP and regulatory review.
- Privacy and security: PHI handling, de-identification, role-based access, encryption, and isolation between research and production workloads.
- Observability and cost control: per-experiment metering, quota policies, auto-scaling, and job preemption to avoid idle GPUs and runaway spend.
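The lineage and metering requirements above can be sketched in a few lines. This is a minimal, hypothetical in-process ledger (the class and field names are illustrative, not Lilly's or NVIDIA's actual stack): every run records content hashes of its data and parameters, so any output can be traced back, and GPU time is metered per experiment for quota and cost control.

```python
import hashlib
import json
import time

class ExperimentLedger:
    """Minimal audit-ready experiment log: each run records the code
    revision, a dataset fingerprint, and parameters that produced it."""

    def __init__(self):
        self.runs = []

    @staticmethod
    def fingerprint(payload: bytes) -> str:
        # Content hash gives immutable lineage from raw data to outputs.
        return hashlib.sha256(payload).hexdigest()[:16]

    def record(self, code_rev: str, dataset: bytes, params: dict,
               gpu_seconds: float, result: dict) -> dict:
        run = {
            "code_rev": code_rev,
            "dataset_id": self.fingerprint(dataset),
            "params_id": self.fingerprint(
                json.dumps(params, sort_keys=True).encode()),
            "gpu_seconds": gpu_seconds,   # per-experiment metering
            "result": result,
            "logged_at": time.time(),
        }
        self.runs.append(run)
        return run

    def spend(self) -> float:
        # Aggregate GPU time backs quota policies and cost reporting.
        return sum(r["gpu_seconds"] for r in self.runs)

ledger = ExperimentLedger()
run = ledger.record("abc123", b"raw-imaging-batch-001",
                    {"lr": 1e-4, "epochs": 3}, gpu_seconds=42.0,
                    result={"auc": 0.91})
print(run["dataset_id"], ledger.spend())
```

In a production system the same idea would sit behind an experiment tracker and an artifact registry; the point is that lineage is cheapest to capture at write time, not reconstructed later for an audit.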
How it could work in practice
- Run millions of in-silico experiments to explore chemical space before touching a wet lab.
- Use advanced imaging pipelines to track disease progression and surface digital biomarkers for more precise trial endpoints.
- Automate closed-loop discovery: generate candidates, simulate and score, select top hits, and feed back discoveries to retrain models.
- Feed clinical data back into research models to shorten iteration cycles and focus effort on the most promising directions.
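The closed-loop discovery pattern above can be illustrated with a toy sketch. Everything here is a stand-in: candidates are random feature vectors rather than molecules, the scoring function is a linear surrogate rather than a simulator, and "retraining" just nudges the model toward the top hits; it shows the generate → score → select → retrain cycle, not any real chemistry.

```python
import random

random.seed(0)

def generate_candidates(n):
    # Stand-in for a generative chemistry model: each candidate is a
    # random 4-dimensional feature vector instead of a real molecule.
    return [[random.random() for _ in range(4)] for _ in range(n)]

def score(candidate, weights):
    # Surrogate scoring model: a linear stand-in for physics-based
    # simulation or a learned property predictor.
    return sum(w * x for w, x in zip(weights, candidate))

def retrain(weights, hits):
    # Toy feedback step: move weights toward the mean of the top hits,
    # mimicking how discoveries flow back into the model.
    means = [sum(h[i] for h in hits) / len(hits) for i in range(4)]
    return [0.9 * w + 0.1 * m for w, m in zip(weights, means)]

weights = [1.0, 0.5, -0.25, 0.75]
for cycle in range(3):
    candidates = generate_candidates(1000)
    ranked = sorted(candidates, key=lambda c: score(c, weights),
                    reverse=True)
    top_hits = ranked[:10]        # select top hits for (virtual) assays
    best = score(ranked[0], weights)
    weights = retrain(weights, top_hits)
    print(f"cycle {cycle}: best surrogate score {best:.3f}")
```

At pharma scale the same loop runs millions of candidates per cycle across GPU clusters, with wet-lab assays replacing the print statement as the ground-truth feedback.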
Why this matters
- Shorter cycles: Faster go/no-go decisions in discovery and earlier signal detection in trials.
- Quality bar: Better instrumentation and reproducibility improve confidence in results and reduce rework.
- Ethics and efficiency: More modeling and simulation can reduce animal use and early-stage lab overhead.
Practical steps if you're building similar systems
- Start with a narrow, high-value use case (e.g., an imaging biomarker) and define hard metrics upfront: accuracy, sensitivity/specificity, turnaround time, and compute cost per result.
- Implement dataset versioning and immutable artifacts; require every model output to be traceable back to code, data, and parameters.
- Separate environments for research and regulated workflows; automate promotion with checks for validation, bias, and documentation completeness.
- Form a cross-functional review group with clinicians, statisticians, ML engineers, SRE, and compliance to pressure-test findings before production.
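The traceability and promotion-gate ideas above can be made concrete with a short sketch. The field names and checks are hypothetical, not drawn from any specific compliance framework: each artifact gets a content-derived ID (so changing code, data, or parameters changes the ID), and promotion to the regulated environment is blocked until every required check passes.

```python
import hashlib
import json

# Hypothetical promotion requirements; a real GxP checklist is longer.
REQUIRED = {"code_rev", "dataset_id", "params", "validation_report",
            "bias_check", "docs_complete"}

def artifact_id(record: dict) -> str:
    # Immutable identifier: any change to code, data, or parameters
    # yields a different ID, keeping outputs traceable.
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def promotion_check(record: dict) -> list:
    # Gate between research and regulated workflows: report every
    # missing or failing requirement instead of stopping at the first.
    missing = sorted(REQUIRED - record.keys())
    failed = [k for k in ("validation_report", "bias_check",
                          "docs_complete")
              if record.get(k) in (None, False)]
    return missing + [f"{k}:failed" for k in failed]

candidate = {
    "code_rev": "9f2c1a7",
    "dataset_id": "sha256:ab12...",
    "params": {"lr": 1e-4},
    "validation_report": True,
    "bias_check": True,
    "docs_complete": False,   # incomplete docs block promotion
}
problems = promotion_check(candidate)
print(artifact_id(candidate), problems)
```

Wiring a check like this into CI/CD makes "documentation completeness" an enforced gate rather than a review-meeting honor system.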
Key quotes
"Lilly is shifting from using AI as a tool to embracing it as a scientific collaborator... This isn't just about speed, but rather interrogating biology at scale, deepening our understanding of disease and translating that knowledge into meaningful advances for people." - Thomas Fuchs, Eli Lilly
Resources
- NVIDIA Healthcare & Life Sciences
- FDA: AI/ML in health and biomedical research