Meet Deepshikha Bhati, the Kent State Stark lecturer making AI transparent and helping first-gen students thrive

Kent State at Stark lecturer Deepshikha Bhati focuses on human-centered, explainable AI for education, health care, and industry. She blends clarity, trust, and hands-on teaching.

Published on: Dec 29, 2025

Meet Deepshikha Bhati: A Kent State at Stark Lecturer Focused on Human-Centered AI

Deepshikha Bhati is a full-time lecturer in the Computer Science Department at Kent State University at Stark. Her work centers on explainable AI and how to build systems people can trust. She is especially interested in applying AI across education, health care, and industry, with clarity, accountability, and real value.

From Computer Vision to Explainable AI

Bhati's path started with a simple curiosity: how do complex systems actually work, and how can they solve everyday problems? Graduate work in computer vision and face recognition pulled her deeper into AI. Over time, she zeroed in on explainable AI (XAI) to make models transparent and understandable to non-experts. That thread runs through her research and teaching today: useful AI that respects people and context.

Academic Roots and Roles

Originally from India and now 34, Bhati graduated from Vivekanand School in Anand Vihar, Delhi. She earned both a bachelor's and a master's degree from Dr. A.P.J. Abdul Kalam Technical University in Lucknow and is pursuing a doctorate in computer science at Kent State. Her Kent State journey began in 2017 as a teaching assistant, followed by roles as a graduate assistant (2018-2021) and part-time instructor (2021-2022).

Since fall 2022, she has taught full time at Kent State Stark as a lecturer. Along the way, she received the John and Fonda Elliot Design Innovation Faculty Fellowship and the Teaching Scholars Program Fellowship, opportunities that let her blend AI, hands-on teaching, and cross-disciplinary projects.

Where AI Will Help Most in the Next Five Years

Bhati points to three areas where AI will move the needle: education, health care, and industry.

  • Education: Scalable personalization, including adaptive feedback, tutoring support, and tools that build genuine problem-solving skills instead of grade-chasing. Educators stay in control while AI handles the busywork and highlights learning gaps.
  • Health care: Better notes, triage support, and earlier detection, always with clinicians in the loop. Ethical guardrails and clear explanations matter here. For context on standards and safety, see guidance from the World Health Organization on AI in health.
  • Industry: AI copilots for coding, workflow automation, and cybersecurity. Expect AI woven into customer support, analytics, and operations, with a focus on speed, security, and access.

Underneath all of this is a simple rule: human-centered AI is transparent, accountable, and trustworthy. For principles on explainability, the NIST AI Risk Management Framework is a solid starting point.

The Next Big Shift: Multimodal and Responsible by Default

Bhati expects multimodal AI, with text, images, audio, and video working together, to become standard. That opens up more natural interfaces and stronger problem-solving. She also sees interpretability and ethics moving from "add-on" to "built-in," especially in sensitive domains.

Teaching That Builds Confidence

The most rewarding part of her job: watching students believe in themselves. Many of her students are first-generation, balancing school, work, and family. Seeing them grow from hesitant to capable, and then lead research or project work, is the reason she teaches. Mentorship and project-based learning play a big role in that growth.

Life Outside Work

Travel, painting, and gardening help her reset. Cooking gives her a clean break from the screen. Ironically, some of her best ideas show up when she steps away from the desk.

Practical Takeaways for Educators and Researchers

  • Bake explainability into assignments: ask students to justify model outputs for real users, not just hit accuracy targets.
  • Pilot AI for feedback at scale: code review, rubric-aligned comments, and formative checks that free you to coach higher-level thinking.
  • Keep humans in the loop for health and safety projects; document assumptions, data lineage, and failure modes.
  • Use AI copilots for coding and documentation, but set usage policies, reviewer workflows, and audit trails.
  • Evaluate fairness and drift regularly; treat monitoring as part of the research design, not an afterthought.
  • Lean into cross-disciplinary teams: pair technical talent with domain experts and end users early.
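To make the drift-monitoring takeaway concrete, here is a minimal sketch in Python using only the standard library. It flags when a feature's current values have shifted away from a baseline sample; the function name, data, and 0.5 threshold are illustrative assumptions, not a prescribed method, and real projects would use a proper statistical test and per-feature tuning.

```python
import statistics

def drift_score(baseline, current):
    """Standardized shift of the current sample mean relative to the
    baseline distribution (illustrative proxy for feature drift)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return 0.0 if statistics.mean(current) == mu else float("inf")
    return abs(statistics.mean(current) - mu) / sigma

# Hypothetical monitoring data: a training-time baseline vs. two
# batches seen in production.
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
stable = [1.0, 0.98, 1.02, 1.01]
shifted = [2.0, 2.1, 1.9, 2.05]

THRESHOLD = 0.5  # assumed tolerance; tune per feature in practice
print(drift_score(baseline, stable) < THRESHOLD)    # within tolerance
print(drift_score(baseline, shifted) > THRESHOLD)   # flag for review
```

Running a check like this on a schedule, and logging the scores alongside model predictions, treats monitoring as part of the research design rather than an afterthought.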
