AI, Disinformation, and Trust: What Morocco's Regulators Want You to Know
Artificial intelligence is no longer a side note in the disinformation problem: it's a force multiplier. Speaking at a conference in Rabat, Latifa Akherbach, President of Morocco's High Authority for Audiovisual Communication (HACA), warned that AI is amplifying media falsehoods and eroding trust in news, especially in the absence of strong legal controls and global governance.
"The increasing use of artificial intelligence within society and newsrooms has become a key factor in amplifying media disinformation and undermining trust in news," she said. Her point was direct: media systems are exposed, societies are vulnerable, and the legal response is lagging behind the technology.
How Moroccans Get Their News (and Why That Matters)
National research from the National Telecommunications Regulatory Agency (ANRT) shows most citizens now operate inside a permanent digital ecosystem. The mix is shifting, and that shift has consequences for how falsehoods spread.
- 66.3% rely on television for news
- 26% get news from social media
- 4.7% turn to online journalism
- Print and radio sit near 1% each
According to the Reuters Institute's Digital News Report, roughly 78% of Moroccan internet users access news via digital platforms, with YouTube (49%) and Facebook (47%) leading the pack, networks that can accelerate misleading content at scale. See the Reuters Institute's report overview and ANRT's published research for the underlying data.
Why the Newsroom Filter Still Matters
As professional journalism loses ground as the gateway to information, people miss out on editorial safeguards: fact-checking, prioritizing what matters, and putting news in context. That gap opens the door to influence strategies-domestic and foreign.
Akherbach noted that platforms now wield editorial authority through algorithms driven by commercial logic. She called this "unacceptable" from a human rights perspective: information is a public utility and should serve the public interest with responsibility, transparency, and pluralism.
Morocco's Disinformation Playbook: Recent Patterns
Morocco is facing more sophisticated disinformation campaigns. Examples include false notifications and conspiracy narratives during and after COVID-19, fabricated ballot papers during the 2021 elections, fake official statements about public affairs, and streams of misleading content tied to the Al Haouz earthquake.
Western Sahara is a key pressure point. Tactics range from fake statistics and distorted maps to positions falsely attributed to countries and international organizations, plus images pulled out of context, fabricated videos, and emotionally charged narratives. These messages are often amplified in multiple languages by automated or anonymous accounts.
What Legal, PR, and Communications Teams Should Do Now
Policy and Governance
- Publish an AI use policy: disclosure rules for AI-assisted content, provenance checks before publication, and clear no-go zones for synthetic media.
- Adopt content authenticity standards (e.g., C2PA-style provenance) and require partners to do the same; keep audit trails for claims and source material.
- Build a takedown playbook covering defamation, impersonation, privacy, IP, and election rules; pre-draft notices for major platforms.
- Clarify consent and rights for voice, image, and likeness to counter deepfakes of executives and public figures.
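The audit-trail idea above can be sketched in a few lines: before publication, record a cryptographic hash and timestamp for each piece of source material, so a claim can later be traced back to the exact bytes it was based on. This is a minimal illustration, not a C2PA implementation; the record format and field names are assumptions for the sketch.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(name: str, content: bytes) -> dict:
    """Build one audit-trail entry for a piece of source material.

    The SHA-256 digest lets anyone later verify that a published claim
    was based on exactly these bytes; the timestamp records when the
    provenance check was performed.
    """
    return {
        "name": name,
        "sha256": hashlib.sha256(content).hexdigest(),
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: log two (hypothetical) source files before publication.
log = [
    provenance_record("press-release.pdf", b"official statement text"),
    provenance_record("interview-audio.wav", b"\x00\x01\x02"),
]
print(json.dumps(log, indent=2))
```

In practice the log would be appended to tamper-evident storage and referenced from the takedown playbook when a claim is challenged.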
Monitoring and Response
- Stand up a cross-functional "disinfo desk" (legal, comms, security, policy) with 24/7 monitoring across social, messaging apps, and video platforms.
- Create an escalation matrix by severity and source (coordinated networks, anonymous accounts, state-linked narratives).
- Maintain pre-approved holding statements and Q&As; rehearse dark-site activation for crises.
- Deploy verification steps for suspected deepfakes: media forensics, reverse image/video search, context comparison, on-the-record confirmations.
- Protect your brands and leaders against impersonation: verified accounts, reporting channels, and spoofing detection.
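One of the verification steps above, reverse image matching, commonly rests on perceptual hashing: reduce an image to a compact fingerprint that survives resizing and recompression, then compare fingerprints by Hamming distance. The sketch below assumes images are already decoded into small grayscale grids (real pipelines use an imaging library such as Pillow for decoding and downscaling); it illustrates the average-hash idea, not a production forensic tool.

```python
from typing import List

def average_hash(pixels: List[List[int]]) -> int:
    """Compute a simple average-hash fingerprint of a grayscale grid.

    Each pixel contributes one bit: 1 if it is brighter than the grid's
    mean, else 0. Visually similar images yield similar bit patterns.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Example: a suspect copy versus a known original (toy 4x4 grids).
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# Re-encoded copy: same structure, small brightness noise.
suspect = [[190, 210, 20, 5],
           [205, 195, 15, 12],
           [8, 14, 198, 205],
           [12, 9, 210, 190]]

d = hamming_distance(average_hash(original), average_hash(suspect))
print(d)  # 0: fingerprints match despite the pixel-level noise
```

A small distance suggests the suspect image derives from the known original; a large one means the match claim needs other evidence, such as metadata or on-the-record confirmation.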
Coordination with Platforms and Regulators
- Engage with platform policy teams in advance: set up trusted flagger pathways and contacts for high-velocity incidents.
- Track ad libraries and recommendation trends around elections and national events; document evidence for faster action.
- Align with national regulators and independent bodies on lawful, proportionate responses that reduce harm without restricting freedoms.
Build Societal Resilience
Akherbach called for shared responsibility among citizens, educators, regulators, and platforms. Media education must be a long-term capability, not a one-off campaign.
- Fund and run media literacy programs with schools, universities, and NGOs; give people simple tools to analyze sources and spot manipulations.
- Partner with newsrooms and fact-checkers; co-produce explainers on verification, election integrity, and crisis information.
- Conduct drills that simulate coordinated disinformation surges around sensitive topics (public health, elections, disasters, national sovereignty).
Platforms, Policy, and the Way Forward
Akherbach urged digital platforms to adopt responsible policies now, given their direct impact on public debate and electoral integrity. The objective isn't to regulate technology itself, but to regulate its uses-protecting rights while enabling innovation.
The takeaway for Legal, PR, and Communications leaders is clear: reduce exposure, raise verification standards, and prepare your incident response before you need it. Build trust on purpose, and defend it with process, policy, and proof.