IIT Delhi's AILA runs real lab experiments on its own
New Delhi: A research team at IIT Delhi has built an AI agent that can independently run real-world experiments. The system, called AILA (Artificially Intelligent Lab Assistant), operates an Atomic Force Microscope (AFM), makes live decisions, executes measurements, and interprets results without human intervention. The work is published in Nature Communications.
Developed with collaborators from Aalborg University (Denmark), the Leibniz Institute of Photonic Technology, and the University of Jena (Germany), AILA moves past chat-based assistance into hands-on lab execution, handling the full experimental loop, from planning to measurement to analysis, on the instrument itself.
What AILA actually does
- Drives an AFM at nanoscale resolution, tuning parameters on the fly and acting on live feedback.
- Designs runs, executes them on real hardware, collects data, and interprets outcomes end-to-end.
- Delivers practical time savings: "Earlier, optimising microscope parameters for high-resolution, noise-free images would take an entire day. Now the same task is completed in just seven to ten minutes," said PhD scholar Indrajeet Mandal.
"Previously, AI could help you write about science. Now it can actually do science, designing experiments, running them on real equipment, collecting data and interpreting results," said Professor N. M. Anoop Krishnan of IIT Delhi.
Professor Nitya Nand Gosvami added, "Operating an Atomic Force Microscope requires deep understanding of nanoscale physics and real-time feedback control. The fact that AILA can autonomously perform these tasks represents a paradigm shift in experimental science."
Why this matters for scientists and R&D teams
- Throughput and focus: Routine optimization and data collection move to the agent, freeing researchers to focus on hypotheses and interpretation.
- Reproducibility: Stable, scripted control reduces operator variance and enables consistent protocols across sites.
- Skill transfer: Hard-won instrument know-how can be encoded, shared, and improved over time.
- Access: According to the team, systems like AILA could help institutions with limited specialist staff run high-end experiments.
What the team learned about AI in real labs
Performance on theoretical or benchmark tasks doesn't guarantee success at the bench. Real instruments demand quick adaptation and safe control under uncertainty. As Mandal put it, "It's like knowing traffic rules versus driving in busy city traffic."
The researchers also flagged safety. AI agents can sometimes drift from instructions, so guardrails are essential to avoid accidents and protect expensive equipment.
Safety and governance you should plan for
- Hardware interlocks, emergency stops, soft limits, and current/force thresholds on instruments.
- Watchdogs for command sanity checks, rate limits, and automatic fallbacks to safe states.
- Human-in-the-loop for new protocols, with staged autonomy (simulation or dry-run, then supervised, then autonomous).
- Audit trails: full logging of decisions, parameters, data, and firmware/software versions.
- Clear escalation paths and incident response if the agent deviates from plan.
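The watchdog pattern in this checklist can be sketched in software. The class below is a minimal illustration, not anything from the published work: the force limits, rate limits, and `check_force_setpoint` API are hypothetical placeholders, and a real deployment would wire such checks into the instrument vendor's control SDK alongside hardware interlocks.

```python
import time

class CommandWatchdog:
    """Sanity-checks agent commands before they reach the instrument.

    Rejects out-of-range setpoints, rate-limits command bursts, and
    falls back to a safe state after repeated violations. All limits
    here are illustrative, not values from the AILA paper.
    """

    def __init__(self, max_force_nN=50.0, max_cmds_per_sec=5, max_violations=3):
        self.max_force_nN = max_force_nN
        self.min_interval = 1.0 / max_cmds_per_sec
        self.max_violations = max_violations
        self.violations = 0
        self.last_cmd_time = float("-inf")
        self.safe_state = False

    def check_force_setpoint(self, force_nN, now=None):
        """Return True if the command may proceed, False if it is blocked."""
        now = time.monotonic() if now is None else now
        if self.safe_state:
            return False  # instrument is parked; a human must reset it
        if now - self.last_cmd_time < self.min_interval:
            return False  # rate limit: drop bursty command streams
        if not (0.0 <= force_nN <= self.max_force_nN):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.enter_safe_state()
            return False  # out-of-range setpoint never reaches hardware
        self.last_cmd_time = now
        return True

    def enter_safe_state(self):
        # In a real system: retract the tip, zero the drive, alert operators,
        # and log the full decision trail for the audit record.
        self.safe_state = True
```

In this sketch, an agent that repeatedly requests an excessive setpoint is blocked each time, and after the third violation the watchdog parks the instrument until a human intervenes, which is the staged-autonomy behavior the checklist above calls for.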
How to explore this in your lab
- Pick one instrument with stable APIs and logging. Start by automating a narrow task (e.g., parameter optimization, routine scans).
- Codify your best practices: limits, checklists, quality metrics, and "stop" conditions.
- Establish a validation loop: compare agent runs with expert baselines, track error modes, and iterate.
- Budget time for data engineering: clean datasets, metadata standards, and versioned configurations.
- Roll out gradually to additional instruments only after reliability and safety targets are met.
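The validation-loop step can be made concrete with a small harness. Everything here is an assumed shape, not an interface from AILA: `agent_run` stands in for any callable that performs one automated run and returns a scalar quality metric (an image sharpness score, say), and the pass thresholds are placeholders a lab would set from its own expert baselines.

```python
from statistics import mean

def validate_agent(agent_run, expert_baseline, n_runs=10,
                   min_ratio=0.95, max_error_rate=0.2):
    """Compare agent run quality against an expert baseline.

    Executes `agent_run` n_runs times, tracks failures as error modes
    rather than hiding them, and returns (passed, report) so rollout
    decisions stay auditable. Thresholds are illustrative only.
    """
    scores, errors = [], 0
    for _ in range(n_runs):
        try:
            scores.append(agent_run())
        except Exception:
            errors += 1  # count failed runs toward the error rate
    error_rate = errors / n_runs
    ratio = (mean(scores) / expert_baseline) if scores else 0.0
    passed = ratio >= min_ratio and error_rate <= max_error_rate
    report = {
        "mean_quality": mean(scores) if scores else None,
        "baseline": expert_baseline,
        "quality_ratio": round(ratio, 3),
        "error_rate": error_rate,
        "passed": passed,
    }
    return passed, report
```

A lab would only advance to the next rollout stage when `passed` holds across repeated validation batches; the returned report doubles as an audit-trail entry for the governance measures above.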
Policy and ecosystem context
This work fits with India's broader AI-for-Science agenda, backed by new funding via the Anusandhan National Research Foundation (ANRF). The team sees potential for advances in energy storage, sustainable materials, and advanced manufacturing, and expects India's leadership in autonomous experimentation to attract collaboration and investment.
Key takeaways
- AILA shows that lab-grade autonomy on complex tools like AFMs is feasible today.
- The biggest wins are time, consistency, and the ability to encode expert practice.
- Safety engineering, governance, and staged deployment are non-negotiable.