Putting AI in Its Place: EMRC's Mysuru Workshop Champions Human Judgment in Research

UoM's EMRC held a workshop on AI in research, urging speed with human oversight. Use tools for search, code, and drafts; then verify, disclose, and keep ethics tight.

Published on: Dec 29, 2025

AI and Research Tools Workshop at EMRC, University of Mysore: What Researchers Should Take Away

December 28, 2025, Mysore/Mysuru - A two-day workshop on Artificial Intelligence and Research Tools, organised by the Educational Multimedia Research Centre (EMRC), University of Mysore, in association with the Malaviya Mission Teacher Training Centre (MMTTC) and supported by Rashtriya Uchchatar Shiksha Abhiyan (RUSA), kicked off on Dec. 22 at the EDUSAT Conference Hall.

The sessions focused on practical use of AI in research, with a clear message: use AI to move faster, but keep the researcher in charge. Speakers underscored the value of critical thinking, validation, and ethical use as AI tools mature.

What the speakers emphasized

  • Prof. D.S. Guru (Department of Studies in Computer Science, UoM): AI tools are helpful but still early. They should support, not replace, the researcher. AI can process information, but it lacks common sense; scholars must evaluate and verify every output.
  • Chief Guest Prof. H.P. Jyothi (Director, MMTTC, UoM): AI use in research is necessary, but it won't make a weak project strong. She contrasted pre-AI research workflows with current tools that speed up review and analysis, while reminding scholars that a Ph.D. is a degree; research is a lifelong practice.
  • Prof. M.S. Sapna (Director, EMRC): The workshop aimed to give scholars direct exposure to AI tools and methods. EMRC serves both as a media production centre and a research centre, bridging researchers with emerging technologies.

Practical takeaways you can apply this week

  • Use AI for search and synthesis, then verify: Start literature scans with AI-assisted tools to map topics and find gaps. Always read primary sources and track claims back to papers.
  • Keep a human-in-the-loop standard: For any AI-generated summary, code, or statistic, add a manual check. Document what was AI-generated, what you edited, and final decisions.
  • Protect data and ethics: Don't paste sensitive data into public models. Follow your IRB and departmental policies; keep logs of prompts, versions, and datasets.
  • Prioritize reproducibility: Save prompts, model versions, parameters, and outputs alongside your code and datasets. This lets peers audit your process end to end.
  • Be explicit in manuscripts: If AI tools assisted with summarization, coding, or language edits, disclose how they were used and where you validated results.
  • Use AI for drafting and editing, not conclusions: Let models help with structure, clarity, and formatting. Reserve interpretation of results and claims for humans.
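The reproducibility and human-in-the-loop points above can be sketched as a tiny audit log. This is a minimal illustration, not an official tool from the workshop: the function name `log_ai_step` and the log fields are assumptions, chosen to capture what the takeaways recommend (prompt, model version, parameters, a hash of the output, and whether a human edited it).

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_step(logfile, prompt, model, params, output, edited_by_human):
    """Append one AI-assisted step to a JSON-lines audit log.

    Hypothetical sketch: records what was asked, which model/version
    answered, and a hash of the raw output so peers can audit the trail.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,            # model name + version string
        "params": params,          # e.g. temperature, max tokens
        "prompt": prompt,
        # Hash rather than store the full output if it may contain
        # sensitive data; keep the raw output in a separate secure store.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "edited_by_human": edited_by_human,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

A JSON-lines file like this can sit next to your code and datasets, giving reviewers the prompt/version/output trail the takeaways call for.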

Suggested tool categories to evaluate

  • Literature discovery and mapping: tools that surface related work, summarize abstracts, and suggest citations.
  • Reading and summarization: paper summarizers, highlight extractors, and Q&A on PDFs.
  • Reference management: citation managers with AI-aided note syncing and deduplication.
  • Data, code, and analysis: coding assistants for Python/R, notebook copilots, and unit-test generation, paired with manual review.
  • Writing and editing: style refiners, grammar aids, and formatting helpers that keep technical precision.
  • Transcription and media: speech-to-text for interviews and seminars; captioning for research talks.

Workflow that keeps you fast and accurate

  • Plan: Define the research question, scope, and exclusion criteria before you query any model.
  • Search: Use AI to generate keyword sets and paper lists; export to your reference manager.
  • Synthesize: Summarize clusters, compare methods, and log uncertainties that need manual checks.
  • Analyze: Let coding assistants propose pipelines or tests; validate with your own diagnostics.
  • Write: Draft sections with AI support for clarity; you own the arguments and conclusions.
  • Audit: Re-run key steps, verify citations line-by-line, and document AI involvement.
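The audit step above (verifying citations line by line) can be partly automated with a small offline check. This is a hedged sketch under assumptions: the function names are invented for illustration, the regex is a simplification of real DOI syntax, and it only flags duplicates and extracts candidates; each DOI still needs manual verification against the primary source.

```python
import re

# Simplified DOI pattern (an assumption; real DOI syntax is looser).
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(text):
    """Pull DOI-like strings from a manuscript draft for manual checking."""
    return DOI_RE.findall(text)

def audit_dois(text):
    """Count DOI candidates and flag duplicates; does NOT confirm they resolve."""
    dois = extract_dois(text)
    seen, duplicates = set(), []
    for d in dois:
        if d in seen:
            duplicates.append(d)
        seen.add(d)
    return {"total": len(dois), "unique": len(seen), "duplicates": duplicates}
```

Running this over a draft gives you a worklist of citations to verify by hand, which is the point of the audit step: the tool gathers, the human confirms.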

For policy guidance on responsible AI use in scholarly work, see COPE's guidance on AI in scholarly publishing and Nature's policy on AI and large language models.


Bottom line: AI can speed up literature work, coding, and drafting. Your edge comes from judgment: choosing the right questions, verifying results, and communicating them with clarity.

