AI You Can Trust: From Deep Learning to Discovery with Prashnna Gyawali on Jan. 21

Join a focused session on reliable, trustworthy AI for science with Prashnna Gyawali: Jan. 21, 6 p.m., Clark Hall, Room 208, hosted by the Department of Chemistry and the local ACS section.

Categorized in: AI News, Science and Research
Published on: Jan 07, 2026

From Deep Learning to Scientific Discovery: Toward Reliable and Trustworthy AI

Researchers are invited to a focused session on dependable AI for science. The Department of Chemistry and the Northern West Virginia Local Section of the American Chemical Society will host this presentation at 6 p.m., Jan. 21, in Clark Hall, Room 208.

The presenter is Prashnna Gyawali, assistant professor in the Lane Department of Computer Science and Electrical Engineering.

Event details

  • Title: From Deep Learning to Scientific Discovery: Toward Reliable and Trustworthy AI
  • Host: Department of Chemistry and the Northern West Virginia Local Section of the American Chemical Society
  • Presenter: Prashnna Gyawali, assistant professor, Lane Department of Computer Science and Electrical Engineering
  • Time: 6 p.m., Jan. 21
  • Location: Clark Hall, Room 208

Why this matters for your work

As models move closer to the bench, reliability and trust are non-negotiable. This session centers on how to build AI you can test, explain, and defend in peer review.

Expect a practical look at how to move from neural networks to scientific results that hold up under replication.

Key concerns to bring to the discussion

  • Data quality: controls, drift, and documentation
  • Model trust: calibration, uncertainty estimates, and interpretability (a minimal calibration check is sketched after this list)
  • Reproducibility: versioning, pipelines, and audit trails
  • Integration: pairing predictions with experimental design and validation
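
For a concrete sense of what a calibration check involves, here is a minimal sketch of expected calibration error in Python. It is not from the talk; the variable names, example data, and binning choice are illustrative assumptions, and it presumes you already have held-out confidences and correctness labels for your model.

```python
# Minimal sketch of a calibration check (expected calibration error, ECE).
# Assumes you already have model confidences and 0/1 correctness labels
# for a held-out set; everything here is illustrative, not from the talk.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Confidence-weighted average gap between predicted confidence and observed accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece

# Toy example: confidences that roughly track accuracy give a small ECE.
conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7])
hit = np.array([1, 1, 1, 0, 1])
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A low ECE on held-out data is one piece of evidence that a model's confidence scores can be taken at face value when prioritizing experiments.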

Make the most of the session

  • Bring one active research question where AI could speed analysis or guide experiments.
  • Outline how you would verify model outputs in your lab (ground truth, orthogonal methods, or blinded tests).
  • List failure modes you worry about (overfitting, bias, sample leakage) and ask how to test for them; a leakage-aware data split is sketched below.
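
As one illustration of guarding against sample leakage, the sketch below keeps all samples from the same source (the same batch, patient, or specimen) on one side of the train/test split. It uses scikit-learn's GroupShuffleSplit; the group labels and sizes are made up for the example and are not from the talk.

```python
# Minimal sketch of a leakage-aware split: samples that share a source
# group never appear on both sides of the split. Group labels are illustrative.
from sklearn.model_selection import GroupShuffleSplit

samples = list(range(10))                 # placeholder sample indices
groups = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]   # e.g., which experiment each sample came from

splitter = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, test_idx = next(splitter.split(samples, groups=groups))

# No group appears on both sides, so the test set is not contaminated
# by near-duplicates of the training data.
assert not {groups[i] for i in train_idx} & {groups[i] for i in test_idx}
print("train:", sorted(train_idx), "test:", sorted(test_idx))
```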

Plan to attend

Put this on your calendar and plan to arrive a few minutes early. If you're working at the edge of chemistry, materials, biology, or engineering, this conversation will help you tighten your methods and get cleaner results from your models.

