AI Deepfakes Spark Misinformation and Privacy Fears, Warns South African Regulator at M20 Summit
The South African Information Regulator warns that AI in media fuels disinformation, privacy breaches, and public distrust, and that deepfakes pose serious risks to democracy and journalism.

The South African Information Regulator Raises Concerns Over AI in Media
The South African Information Regulator has expressed serious concerns about the increasing use of artificial intelligence (AI) in media, warning that emerging technologies such as deepfakes are contributing to disinformation, fake news, and violations of privacy. These concerns were highlighted by Advocate Pansy Tlakula, Chairperson of the Information Regulator, on the second day of the M20 Summit, held at Melrose Arch in Johannesburg.
AI and the Challenge of Disinformation
The M20 Summit addresses the critical role of journalism and media in maintaining information integrity, a mission that AI-generated content complicates. Deepfake technology, which manipulates images, video, and audio, presents several risks:
- Violations of personal privacy when faces or voices are misused
- Creation of false narratives aimed at influencing public opinion
- Decline in trust toward journalism and democratic institutions
“Whoever’s image is used or voice, because image is personal information, the person whose image is used to spread disinformation has their privacy affected,” Tlakula emphasized.
Global Focus on Information Integrity
This issue is not unique to South Africa. Around the globe, governments, regulators, and media stakeholders face challenges such as:
- AI-generated fake news spreading misinformation rapidly
- Identity theft and politically motivated campaigns using deepfakes
- Growing public distrust in traditional journalism and online platforms
Tlakula praised the M20 Summit for prioritizing information integrity, calling it a critical topic worldwide. “Information integrity is something the whole world is talking about, and I am pleased that the M20 is focusing on it,” she said.
Why It Matters
AI-powered manipulation of media content carries significant social, political, and economic consequences:
- Undermining democracy: Disinformation campaigns can sway elections and policy debates.
- Threatening public trust: Citizens may lose confidence in news outlets and social media platforms.
- Challenging governance: Regulators and journalists struggle to distinguish fact from AI-generated fabrications.
Looking Ahead
The M20 Summit gathers journalists, regulators, and policymakers to discuss practical solutions, such as:
- Stronger privacy protections to secure personal data
- Ethical frameworks guiding AI use in journalism
- Cross-border cooperation to tackle AI-driven misinformation
As South Africa works to balance technological progress with privacy rights and reliable information, experts warn that restoring public trust will require a combination of policy, education, and technological safeguards.
For communications professionals who want to stay informed about AI's impact on media and learn how to address related challenges, targeted AI courses can be valuable. More information on AI training options is available at Complete AI Training.