Panjab University Workshop Prepares Students for Smart, Safe Use of AI

At Panjab University, an AI workshop gave PR and communications teams practical ways to work faster without losing judgment. The core message: verify everything, set policy, label synthetic media, and let humans make the final call.

Published on: Nov 23, 2025

AI Workshop at Panjab University: Practical Lessons for PR and Communications Teams

Chandigarh, 21 November 2025 - The School of Communication Studies, Panjab University, hosted an AI workshop in collaboration with Data LEADS under the ADiRA project (AI for Digital Readiness and Advancement). ADiRA trainer Jatin Gandhi led a direct, jargon-free session aimed at helping students and future communicators use AI responsibly and effectively. The program's tagline, "Training India's Workforce to be AI Ready", matched the agenda: clear, practical guidance over hype.

The core message was simple: AI boosts efficiency, but it doesn't replace human judgment, experience or emotional intelligence. For PR and communications teams, that means AI can speed up research and content pipelines, while humans remain accountable for decisions, ethics and brand outcomes.

What was covered

Gandhi broke down what AI is, how it works at a high level and why fluency matters for modern communication work. The session surfaced real use cases across design, editing, research and content creation. The focus stayed on practical workflows, not technical deep dives.

Equally important: the risks. AI can produce incorrect or misleading outputs. Deepfake videos, misuse of celebrity likenesses and misinformation were discussed in detail. The takeaway for communicators was clear: verification beats velocity.

Why it matters for PR and Communications

AI changes how fast teams can move, how much they can ship and how they manage reputation risk. If your team uses AI for briefs, drafts, visuals or analysis, you need standards: fact-checking, consent for synthetic media, clear disclosure and a plan for incident response.

Human oversight remains the last line of defense. Treat AI as a skilled assistant, not an authority.

Key takeaways for communication leaders

  • Set an AI policy: Define approved tools, disclosure rules for AI-generated content and guardrails for sensitive topics.
  • Adopt a verification checklist: Always cross-check facts, sources and attributions before publishing. Don't rely on a single AI output.
  • Manage synthetic media risk: Get consent for likenesses, label manipulated or AI-generated visuals and keep a record of prompts and edits.
  • Prepare for deepfake incidents: Create a rapid response protocol covering detection, escalation, legal and public statements.
  • Train your team: Teach prompt hygiene, bias awareness and ethical use. Make it recurring, not one-off.
  • Keep humans in the loop: Final reviews, tone checks and context decisions stay with experienced communicators.
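For teams that track these checks in a tool rather than on paper, the checklist above can be modeled as a simple gate in a publishing workflow. The sketch below is a hypothetical illustration only: the `DraftReview` class and its field names are assumptions for this example, not part of the workshop material.

```python
from dataclasses import dataclass

@dataclass
class DraftReview:
    """One pre-publish review record for an AI-assisted draft (hypothetical schema)."""
    facts_cross_checked: bool = False      # facts, sources and attributions verified
    synthetic_media_labeled: bool = False  # AI-generated visuals disclosed, with consent
    prompts_logged: bool = False           # record of prompts and edits kept
    human_final_review: bool = False       # an experienced communicator signed off

    def blockers(self) -> list[str]:
        """Return the checklist items that still block publication."""
        return [name for name, done in vars(self).items() if not done]

    def ready_to_publish(self) -> bool:
        """Publish only when every checklist item is satisfied."""
        return not self.blockers()

# Usage: a draft with an unlabeled AI-generated visual is held back.
draft = DraftReview(facts_cross_checked=True, prompts_logged=True,
                    human_final_review=True)
print(draft.ready_to_publish())  # False
print(draft.blockers())          # ['synthetic_media_labeled']
```

The point of the gate is that no single unchecked item can slip through: the draft ships only when every box is ticked, keeping the final call with a human reviewer.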

Practical use cases highlighted

  • Design and editing: Faster mockups, versioning and layout options to support campaign iterations.
  • Research support: Draft outlines, summarize long docs and compile source lists, then verify with trusted references.
  • Content creation: First drafts, variations and ideation for press notes, social copy and FAQs, followed by human refinement.

Risk and ethics, addressed head-on

The workshop stressed that convenience without responsibility is a liability. AI can fabricate citations, misstate facts or produce convincing but false multimedia. The group discussed how to spot red flags and slow down at critical moments.

For teams building policy, frameworks like the NIST AI Risk Management Framework and guidance such as Partnership on AI's synthetic media practices offer useful reference points.

Engagement and format

The session opened with QR-based registration and closed with a feedback form and group photograph. Students kept the discussion active with questions on responsible use, fact-checking routines and career impact.

What your team can do next

  • Audit your current AI use and close gaps in verification and disclosure.
  • Draft or update an AI policy: keep it concise, practical and reviewed quarterly.
  • Run a tabletop exercise for a deepfake or misinformation scenario to pressure-test your response plan.
  • Upskill your team with focused, role-specific training. If you need structured options, see curated paths by job at Complete AI Training.

Bottom line: AI can accelerate PR and communications work, but the brand still bears the risk. Combine smart tools with strong guardrails, and keep humans in charge of the final call.

