ChatGPT Materials Explorer predicts material properties instantly: facts, not hallucinations

Johns Hopkins' ChatGPT Materials Explorer predicts material properties, scans papers, and cites domain databases. In an eight-task comparison, it outperformed both GPT-4 and ChemCrow.

Categorized in: AI News, Science and Research
Published on: Sep 20, 2025

AI Lab Tool Predicts Material Properties Instantly

A Johns Hopkins University engineer has built ChatGPT Materials Explorer (CME), a specialized AI system that predicts material properties, scans literature, and answers domain questions with data-backed reasoning. Findings appear in Integrating Materials and Manufacturing Innovation and suggest faster paths to advanced batteries, tougher alloys, and more.

"ChatGPT Materials Explorer is like having a specialized research assistant who is trained specifically to dig through huge databases, predict how a material or materials will behave without physical testing, sort through scientific papers to find studies relevant to your projects, and even analyze work and assist with scientific writing," says its inventor, Kamal Choudhary, professor of materials science and engineering at Johns Hopkins.

Why it matters

General chatbots can sound confident yet be wrong, the well-known hallucination problem. CME reduces that risk by grounding answers in materials science databases and physics-based models instead of generic web sources.

  • Direct data connections to NIST-JARVIS, Materials Project, and NIH-CACTUS keep information current and field-specific; a minimal lookup sketch follows this list.
  • Updates sync automatically as new papers and datasets are added, improving reliability over time.
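
The paper doesn't publish CME's internal wiring, but the NIH-CACTUS connection can be illustrated with the resolver's public REST interface, which converts chemical identifiers between representations. A minimal sketch, assuming only the requests package (the aspirin example mirrors the benchmark discussed below):

```python
# Minimal sketch of a grounded lookup against the public NIH-CACTUS
# structure resolver; CME's actual database wiring is not published,
# so this only illustrates the style of query such a connection enables.
import requests

CACTUS = "https://cactus.nci.nih.gov/chemical/structure"

def resolve(identifier: str, representation: str) -> str:
    """Convert a chemical identifier (name, SMILES, InChI, ...) into the
    requested representation, e.g. 'smiles' or 'formula'."""
    resp = requests.get(f"{CACTUS}/{identifier}/{representation}", timeout=30)
    resp.raise_for_status()
    return resp.text.strip()

print(resolve("aspirin", "smiles"))   # e.g. CC(=O)Oc1ccccc1C(=O)O
print(resolve("aspirin", "formula"))  # C9H8O4
```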

Built as a custom GPT, wired to real data

Using the ChatGPT builder, Choudhary defined CME's scope, set guardrails, and connected it to authoritative databases. That integration lets CME return concrete answers, such as correct molecular notations and crystal structures, where generic chatbots often default to vague or incorrect responses.
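
Custom GPTs typically reach external data through Actions: HTTP endpoints described by an OpenAPI schema that the model can call at answer time. The sketch below is hypothetical (the endpoint path, fields, and toy records are not from the paper), but it shows the shape of a backend a materials GPT could query:

```python
# Hypothetical Action backend for a custom GPT; the endpoint path, field
# names, and the in-memory "database" are illustrative, not CME's setup.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Toy materials lookup")

# Stand-in for a live connection to JARVIS / Materials Project records.
MATERIALS = {
    "Si":   {"formula": "Si",   "bandgap_eV": 1.1, "source": "example data"},
    "GaAs": {"formula": "GaAs", "bandgap_eV": 1.4, "source": "example data"},
}

@app.get("/materials/{formula}")
def get_material(formula: str) -> dict:
    """Return stored properties for a formula, or a 404 for unknown entries."""
    record = MATERIALS.get(formula)
    if record is None:
        raise HTTPException(status_code=404, detail=f"{formula} not found")
    return record
```

A convenience of this pattern is that FastAPI generates the OpenAPI schema automatically (served at /openapi.json when run with uvicorn), which is the document the GPT builder consumes to wire up the Action.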

How it performed

In head-to-head checks against GPT-4 and ChemCrow, on tasks ranging from simple molecular formulas (e.g., aspirin's) to interpreting phase diagrams, CME answered all eight prompts correctly; the other models produced five correct answers. The sample is small, but it highlights how much domain data and physics-based models contribute to accuracy.
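
The full task set isn't reproduced in this article, but the simplest category, a molecular formula, is easy to spot-check locally. A sketch using RDKit and the standard aspirin SMILES:

```python
# Local spot-check of the simplest benchmark task type: deriving a
# molecular formula. The SMILES string is the standard one for aspirin.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

ASPIRIN_SMILES = "CC(=O)Oc1ccccc1C(=O)O"

mol = Chem.MolFromSmiles(ASPIRIN_SMILES)
assert mol is not None, "RDKit could not parse the SMILES string"

print(rdMolDescriptors.CalcMolFormula(mol))  # -> C9H8O4, the expected answer
```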

What this means for your lab

  • Query materials databases in natural language and get citations you can verify.
  • Predict properties and screen candidates before committing to experiments (see the screening sketch after this list).
  • Summarize and triage literature aligned to your system, phase space, or target property.
  • Draft sections of methods, results, or figure captions with references to underlying data.
  • Plan next-step simulations; future versions aim to integrate advanced modeling workflows.
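
For the database-query and screening items above, the open jarvis-tools package gives a feel for what CME automates: it downloads a NIST-JARVIS DFT dataset as a list of records you can filter by property. A minimal sketch; the field names ("formula", "optb88vdw_bandgap", "jid") are assumptions based on recent dataset releases and should be verified against your local copy:

```python
# Minimal screening sketch over the NIST-JARVIS dft_3d dataset via the
# open jarvis-tools package; field names are assumptions based on recent
# releases and should be checked against the downloaded data.
from jarvis.db.figshare import data

def screen_by_bandgap(lo, hi, limit=10):
    """Return up to `limit` entries whose OptB88vdW band gap (eV) lies in [lo, hi]."""
    hits = []
    for entry in data("dft_3d"):  # downloaded and cached on first call
        gap = entry.get("optb88vdw_bandgap")
        if not isinstance(gap, (int, float)):  # missing values appear as "na"
            continue
        if lo <= gap <= hi:
            hits.append((entry["jid"], entry["formula"], gap))
            if len(hits) >= limit:
                break
    return hits

for jid, formula, gap in screen_by_bandgap(1.0, 1.5):
    print(jid, formula, f"{gap:.2f} eV")
```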

What's next

Development is underway to add advanced materials modeling, automated literature reviews, and expanded analysis capabilities. An open-source companion, AtomGPT, is also in progress; it will let selected users modify the code and improve coverage of the field.

For researchers experimenting with building their own domain-specific assistants, see this overview on creating and evaluating custom GPTs: Custom GPTs resources.