AI Ethics, Deepfake Governance & Responsible Communications: What Marketers Need to Act On Now
MSL, in partnership with the Advertising Standards Council of India (ASCI), hosted the AI in Advertising and Communications Summit 2026 in New Delhi as the official pre-summit to the India AI Impact Summit 2026 (February 16-20). The focus was simple and urgent: use AI to scale creativity and performance without eroding consumer trust.
The conversations moved past theory. AI is already embedded in creative, media planning, targeting, optimization, and measurement. The core question is execution: how to deploy AI responsibly while protecting brands and consumers.
Why this matters for marketers
- AI is now a day-to-day capability, not an experiment. Your teams need policy, training, and oversight.
- Deepfakes, synthetic endorsements, and manipulated media can damage trust fast. Prevention and disclosure must be built into your workflow.
- Regulators expect accountability, consent, and traceability. The bar is moving from "can we" to "prove we did it right."
What leaders said
"We aim for tangible outcomes that empower the Global South to actively shape AI solutions," said Mr. Mohammed Y. Safirulla K., Director, IndiaAI Mission, on the upcoming India AI Impact Summit. Collaboration, inclusion, and responsible scaling are core priorities.
"We're entering an operational phase of AI," said Mr. Amit Misra, CEO, MSL South Asia. He stressed AI's role in efficiency and effectiveness, paired with governance, ethics, disclosure, and inclusion.
"Consumer protection, transparency, and accountability must remain central," said Ms. Manisha Kapoor, CEO & Secretary General, ASCI. ASCI is testing frameworks to support responsible innovation while safeguarding public trust.
"Regulation should be thoughtful and proportionate," said Mr. Rohit Kumar Singh (Retd. IAS), Former Secretary, Department of Consumer Affairs. A balanced approach that protects privacy and enables experimentation will unlock value for consumers and the economy.
Operational best practices you can implement now
- Governance: Create an AI use policy that covers approvals, data sources, prompts, model selection, and human oversight. Train teams and vendors on it.
- Consent and data ethics: Use clear consent for data collection and personalization. Minimize data and apply purpose limitation.
- Disclosure: Label AI-generated or AI-edited ads and content where consumers could be misled without it.
- Authenticity markers: Use watermarking and content provenance signals across creative workflows. Consider standards such as the Coalition for Content Provenance and Authenticity (C2PA).
- Human-in-the-loop: Keep editorial review for claims, visuals, and high-risk outputs. Set thresholds for mandatory human checks.
- Bias and safety checks: Test models and outputs for bias, harmful content, and misinformation. Document test results and corrective actions.
- Model and vendor due diligence: Log model versions, data sources, licenses, and usage rights for assets and training data (a minimal logging sketch follows this list).
- Crisis readiness: Prepare a deepfake response playbook covering monitoring, a rapid takedown process, and public disclosure protocols.
- Measurement integrity: Audit attribution models and performance data for AI-generated content to prevent inflated or misleading metrics.
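To make the governance, disclosure, and due-diligence items above concrete, here is a minimal illustrative sketch (in Python) of an audit log for AI-assisted assets that captures the model version, data sources, consent basis, human-review sign-off, and consumer-facing disclosure label. All field names, the `AIAssetRecord` structure, and the `log_ai_asset` helper are illustrative assumptions, not a prescribed standard or any vendor's API.

```python
# Illustrative sketch only: a minimal audit log for AI-assisted creative assets.
# Field names and the log path are assumptions for illustration, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class AIAssetRecord:
    asset_id: str             # internal identifier for the creative asset
    model: str                # model name and version used to generate or edit it
    data_sources: list[str]   # datasets or asset libraries used, with licenses noted
    consent_basis: str        # e.g. "first-party opt-in" or "licensed stock"
    human_reviewed: bool      # True once an editor has signed off on claims and visuals
    disclosure_label: str     # consumer-facing label, e.g. "AI-generated imagery"


def log_ai_asset(record: AIAssetRecord, log_path: Path = Path("ai_asset_log.jsonl")) -> None:
    """Append one timestamped record to an append-only JSON Lines audit log."""
    entry = asdict(record)
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_ai_asset(AIAssetRecord(
        asset_id="campaign-042/hero-image",
        model="image-model-v3.1",
        data_sources=["licensed stock library (commercial license)"],
        consent_basis="licensed stock",
        human_reviewed=True,
        disclosure_label="AI-generated imagery",
    ))
```

Even a simple append-only log like this helps answer the "prove we did it right" question later: which model produced an asset, on what data, who reviewed it, and how it was labeled for consumers.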
Regulatory direction to watch
Deliberations highlighted India's evolving regulatory response, including draft amendments to the Information Technology Rules, 2021, which align with global norms on disclosure, watermarking, and provenance and sit within India's broader AI Governance Guidelines. Expect stricter requirements for transparency, source tracing, and consumer notice across ads and communications.
For ongoing guidance on responsible advertising, see ASCI.
Key panels and focus areas
- Safeguarding Consumers and Building Trust: Accountability, informed consent, disclosures, authenticity markers, and institutional oversight. Speakers included Mr. Rohit Kumar Singh (Retd. IAS), Ms. Hiral Gupta (Bharucha & Partners), and Ms. Manisha Kapoor (ASCI), moderated by Mr. Tushar Bajaj (Organic by MSL).
- Operational Best Practices for AI Integration: Responsible adoption across strategy, creativity, personalization, and execution, emphasizing governance, human oversight, ethical data use, and industry-led self-regulation. Speakers included Ms. Deeptie Sethi (PRCAI), Mr. Madhav Bissa (NASSCOM), and Mr. Arnab Ghosh (MSL), moderated by Mr. Ashraf Engineer (ASCI).
What to do before the India AI Impact Summit 2026
- Run a rapid AI risk review across your ad ops and content pipelines. Close gaps in disclosure, consent, and human review.
- Adopt content provenance and watermarking in your creative toolchain. Document your process.
- Brief leadership on regulatory trends and set a cross-functional AI governance group (marketing, legal, IT, procurement, agency partners).
- Pilot responsible AI use cases with clear success metrics: faster creative iteration, smarter targeting, and lower waste, without compromising trust.
The takeaway
The industry's future will be defined by responsible execution. Teams that build for transparency, auditability, and consumer respect will ship faster, earn more trust, and stand up to scrutiny.
Further learning for marketing teams
- Upskill your team on practical AI use in marketing, ethics, and workflow design: AI Certification for Marketing Specialists
- Browse AI courses by marketing roles and skills: Courses by Job