Duke's ATOMIC AI Microscope Analyzes 2D Materials With Near-Human Accuracy, 10x Faster

ATOMIC, an AI-driven microscope at Duke, spots monolayers and defects with near-human accuracy and runs the scope itself. Labs see 10x faster screening and fewer bottlenecks.

Published on: Nov 02, 2025

AI Microscope Works Like a Seasoned Researcher, Only Faster

Inside Haozhe "Harry" Wang's lab at Duke University, a new teammate sits next to a standard optical microscope. It doesn't drink coffee, doesn't get tired, and doesn't need months of training to spot monolayers or defects. The team calls it ATOMIC, short for Autonomous Technology for Optical Microscopy and Intelligent Characterization.

ATOMIC analyzes, classifies, and prioritizes 2D material samples with near-human accuracy, then moves on to the next field of view without waiting for instructions. For materials scientists, that means fewer bottlenecks in sample selection, better consistency, and a faster path from raw flakes to usable devices.

Why this matters for 2D materials work

Working with graphene, MoS2, WS2, hBN, and similar crystals is unforgiving. Thickness uniformity, grain boundaries, and microscopic defects can sink an experiment before it starts. Traditionally, a trained researcher scans thousands of images and makes judgment calls across focus, lighting, and sample quality.

ATOMIC changes the loop. It identifies flakes, estimates layer numbers from color/contrast, assesses uniformity, logs results, and decides where to scan next. The system handles the grind so scientists can focus on device design, hypotheses, and follow-up experiments.
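The full pipeline is described in the team's paper; as a rough illustration of how a color/contrast-based layer estimate can work, here is a toy sketch in Python. The green-channel choice and every threshold value are assumptions for illustration, not ATOMIC's actual calibration.

```python
import numpy as np

# Toy illustration of a color/contrast-based layer estimate (not the published
# pipeline). Optical contrast of a flake against the bare substrate correlates
# with layer count; the channel choice and thresholds below are placeholders.
LAYER_THRESHOLDS = [(0.05, 0), (0.15, 1), (0.30, 2), (0.45, 3)]  # (max contrast, layers)

def estimate_layers(flake_pixels: np.ndarray, substrate_pixels: np.ndarray) -> int:
    """Estimate layer count from mean green-channel contrast (hypothetical calibration)."""
    flake = flake_pixels[..., 1].mean()          # green channel; HxWx3 image regions assumed
    substrate = substrate_pixels[..., 1].mean()
    contrast = abs(substrate - flake) / substrate
    for max_contrast, layers in LAYER_THRESHOLDS:
        if contrast <= max_contrast:
            return layers                        # 0 means "indistinguishable from substrate"
    return 4                                     # treat anything thicker as "few-layer or more"

# Example with synthetic pixel data: a slightly darker flake on a brighter substrate.
rng = np.random.default_rng(0)
substrate = rng.normal(200, 2, size=(32, 32, 3))
flake = substrate * 0.88                         # ~12% contrast, lands in the one-layer bucket
print(estimate_layers(flake, substrate))
```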

How ATOMIC works under the hood

  • Foundation models: The team uses large vision-language models that generalize to new samples with little or no retraining (zero-shot autonomous microscopy).
  • Closed-loop control: ChatGPT manages reasoning and instrument control (focus, illumination, stage moves), while Meta's Segment Anything Model (SAM) segments flakes and defects.
  • Text-to-instrument intent: You can prompt it with "Find monolayer graphene flakes," and it parses that into search, scan, and classification steps; a minimal version of this loop is sketched after this list.
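To make the closed-loop idea concrete, here is a minimal sketch of the observe-segment-classify-plan cycle. Every helper (capture_image, segment_flakes, classify_flake, ask_llm, move_stage) is a hypothetical stub standing in for the real instrument control, SAM segmentation, and language-model reasoning; none of this reflects the team's actual code or APIs.

```python
# Minimal sketch of the closed-loop control idea: a planner reasons over results
# so far, a segmenter finds flakes, and the decision drives the next stage move.
# All helpers below are stand-in stubs, not real APIs.
import random

def capture_image():                 # stub: pretend to grab the current field of view
    return "image"

def segment_flakes(image):           # stub for a SAM-style segmenter
    return ["mask_a", "mask_b"]

def classify_flake(image, mask):     # stub for layer-count and quality estimation
    return {"layers": random.choice([1, 2, 3]), "quality": round(random.random(), 2)}

def ask_llm(goal, observations, candidates):  # stub for the reasoning/planning step
    return candidates.pop() if candidates else "stop"

def move_stage(position):            # stub for stage motion (focus/illumination omitted)
    pass

def autonomous_scan(goal, candidates, max_steps=100):
    """Observe, segment, classify, then let the planner pick the next move."""
    results = []
    for step in range(max_steps):
        image = capture_image()
        for mask in segment_flakes(image):
            results.append({"step": step, "label": classify_flake(image, mask), "mask": mask})
        decision = ask_llm(goal, results, candidates)
        if decision == "stop":
            break
        move_stage(decision)
    return results

print(autonomous_scan("find monolayer graphene flakes", [(0, 0), (0, 50), (50, 0)]))
```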

Performance highlights

  • Up to 99.4% accuracy identifying layer counts and assessing quality in tests reported by the team.
  • More than 90% accuracy on materials such as graphene, WS2, and hBN, even without retraining on those new, unlabeled samples.
  • High sensitivity in low-light and slightly out-of-focus conditions; it can flag grain boundaries that are easy to miss by eye.
  • Throughput gains near 10x: days of manual screening compressed into hours.

"The system we built doesn't just follow directions; it understands them," Wang said. Jingyun "Jolene" Yang added, "It can identify grain boundaries at scales that are difficult for people to see."

What this enables in your lab

  • Fast triage: Sort thousands of fields of view, flag the best flakes, and prioritize device-worthy regions.
  • Consistent labeling: Reduce subjective calls on thickness and uniformity; improve reproducibility across operators.
  • Smarter scanning: Let the tool decide the next best area to inspect based on a live map of quality and coverage (a toy prioritization heuristic is sketched after this list).
  • Focus where it counts: Shift human time to hypothesis design, multi-modal validation, and device fabrication.
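One way to implement the "next best area" decision, purely as an illustration: score each unvisited tile by its predicted quality plus an exploration bonus for distance from already-scanned regions, then pick the top score. The weighting and the use of a distance transform are assumptions, not the approach described in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy next-tile heuristic for "smarter scanning" (illustrative only; the weights
# and scoring scheme are assumptions, not the published method).

def next_tile(quality, visited, explore_weight=0.3):
    """quality: predicted quality per tile (float array); visited: bool mask of scanned tiles."""
    exploration = distance_transform_edt(~visited)         # distance to nearest scanned tile
    score = quality + explore_weight * exploration / max(exploration.max(), 1e-9)
    score[visited] = -np.inf                                # never rescan a tile
    return np.unravel_index(np.argmax(score), score.shape)

quality_map = np.random.rand(8, 8)                          # e.g., classifier confidence per tile
visited_map = np.zeros((8, 8), dtype=bool)
visited_map[0, 0] = True                                    # one tile already scanned
print(next_tile(quality_map, visited_map))                  # row/column of the next field of view
```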

Limits you should plan around

  • Data diversity: Accuracy depends on the variety and quality of images seen by the foundation models. Unfamiliar textures or lighting changes can confuse classification.
  • Optical ceiling: Optical microscopy can't match atomic-scale tools like TEM. Expect strong triage and pre-screening, not full structural ground truth.
  • Domain drift: New substrates, illumination spectra, or camera pipelines may require prompt tweaks, calibration scans, or small adaptation routines.

Where this is heading

  • Multi-modal stacks: Fusing optical, Raman/PL, AFM, and electron microscopy for richer, cross-validated labels.
  • Adaptive resolution: Automatic handoff from wide-field scans to high-NA, high-resolution passes on regions of interest.
  • Autonomous workflows: Orchestrating microscopes, stages, and robotic handlers to run experiments end to end, from data collection through analysis to iterative follow-ups.

Practical playbook to get started

  • Instrument readiness: Stabilize illumination, calibrate color response, and standardize exposure/focus routines to cut variance.
  • Prompt library: Create reusable prompts for your substrates, targets (e.g., "monolayer graphene, >20 µm continuous, low contamination"), and quality thresholds.
  • Ground-truth set: Build a small, trusted validation set (vendor standards, AFM-verified flakes) to benchmark and tune the system in your environment.
  • Feedback loop: Keep human-in-the-loop checks for edge cases; log disagreements and retrace failures to improve prompts and settings.
  • Traceable outputs: Save masks, layer estimates, confidence metrics, and image metadata for audit and downstream analysis; a minimal record format is sketched below.
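As one way to keep outputs traceable, each analyzed flake can be appended to a log with its mask path, estimates, confidence, and acquisition metadata. The record fields below are assumptions for illustration, not the format ATOMIC actually writes.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# One possible record layout for traceable outputs; field names are assumptions,
# not the Duke team's schema.

@dataclass
class FlakeRecord:
    sample_id: str
    field_of_view: tuple        # stage coordinates (x, y) in micrometers
    mask_path: str              # where the segmentation mask image is stored
    layer_estimate: int
    confidence: float
    illumination: str           # e.g., exposure/white-balance preset used
    timestamp: str

def log_record(record: FlakeRecord, path: str = "atomic_log.jsonl") -> None:
    """Append one flake record as a JSON line for audit and downstream analysis."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_record(FlakeRecord(
    sample_id="G-2025-001",
    field_of_view=(1250.0, 830.0),
    mask_path="masks/G-2025-001_fov_042.png",
    layer_estimate=1,
    confidence=0.97,
    illumination="white_LED_exposure_20ms",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```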

For the published study and technical details, see the report in ACS Nano.


Bottom line

ATOMIC shows how vision-language models can take over the repetitive parts of 2D materials characterization while keeping scientists in charge of direction and interpretation. Faster screening and higher consistency free up time for the work that actually moves projects forward: designing better experiments and building better devices.

