Nigeria's NITDA backs National Crisis Communication Hub to counter AI misinformation and deepfakes ahead of 2027 elections

NITDA backs a National Crisis Communication Hub to curb fake news and AI-driven misinformation ahead of the 2027 elections. The plan pairs training, platform partnerships, and a faster, more inclusive crisis response.

Categorized in: AI News PR and Communications
Published on: Jan 20, 2026

NITDA Backs National Crisis Communication Hub to Counter AI-Driven Misinformation

The Director-General of Nigeria's National Information Technology Development Agency (NITDA), Kashifu Inuwa, has endorsed a National Crisis Communication Hub (NCCH) to confront fake news, hate speech, and AI-driven misinformation. The move follows the maiden National Symposium on Digital Innovations in Crisis Communication and signals a push for a coordinated, nation-level response.

Inuwa warned that misinformation spreads faster than verified facts because of its "novelty factor." His point was simple: content that looks new or shocking outpaces the truth, and that gap hurts trust, markets, and public safety. The fix, he said, is credible action and stronger partnerships.

Why this matters for PR and communications

With the 2027 political season approaching, deepfakes and automated propaganda will likely intensify. That raises the bar for verification, speed, and ethics across media, brand, and government communications. A coordinated hub gives communicators a clearer pipeline for detection, escalation, and response, especially during sensitive periods.

What NITDA outlined

  • Skills and training: Strengthen digital literacy and professional development for journalists, media teams, and security spokespersons, including AI-content detection and ethical reporting. Platforms like Cisco NetAcad were highlighted as part of the plan.
  • Regional engagement: Expand crisis-communication conversations through symposiums across all six geopolitical zones to drive grassroots awareness and participation.
  • Platform collaboration: Build structured engagement with global tech companies for faster categorisation and takedown of content that poses national security risks.
  • Cybersecurity integration: Work closely with cyber units across critical institutions to establish multi-layer defence against digital threats.

How the proposed hub would operate

Major-General Chris Olukolade (Rtd.), Chairman of the Centre for Crisis Communication (CCC), said the hub would be an independent, multi-stakeholder platform. Its mandate: monitor and counter harmful content during high-risk periods like elections, while protecting democratic principles and freedom of expression.

He also backed citizen-facing tools, such as specialised mobile apps, to enable real-time reporting of crimes and emergencies. The goal is to turn social platforms into early-warning channels and practical public safety tools.

Inclusion is non-negotiable

NITDA and the CCC agreed that digital innovation must include persons with disabilities and other marginalised groups. Emergency alerts and crisis information should be accessible by design: in format, language, and channels.

Governance and next steps

NITDA and the CCC will set up a joint working team to document agreements and drive implementation throughout 2026. The aim is clear: position the NCCH as a cornerstone of Nigeria's digital resilience against misinformation and emerging information threats.

Key quotes

"There is a direct correlation between novelty and virality," Inuwa said. "Misinformation is often packaged as something new or shocking, which allows it to outpace accurate information. The way forward is to build public trust through credible government action and strong, strategic partnerships."

What PR leaders should do now

  • Stand up verification workflows: Formalise source checks, AI-detection steps, and escalation paths for suspect content.
  • Prepare a deepfake response plan: Pre-approve statements, designate spokespeople, and set thresholds for legal, platform, and law-enforcement escalation.
  • Pre-bunk and debunk: Build short, repeatable explainers on common false narratives. Publish early. Update often.
  • Tighten platform relationships: Identify contacts at major platforms for fast takedowns and labelling requests when policy violations occur.
  • Strengthen social listening: Track velocity, sentiment, and spread patterns. Set triggers for cross-functional war rooms.
  • Accessibility first: Ensure alerts and public guidance are available in multiple formats and languages, including options for low-bandwidth and assistive tech users.
  • Train your teams: Build muscle memory with tabletop exercises and scenario drills that include AI-generated content.
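To make the social-listening item above concrete, here is a minimal sketch of a velocity-based escalation trigger. Everything in it is illustrative: the `SpreadMonitor` class name, the rolling-window design, and the threshold values are assumptions for demonstration, not part of NITDA's or any platform's actual tooling. A real deployment would ingest mentions from a listening API and tune thresholds per channel.

```python
from collections import deque
from datetime import datetime, timedelta

class SpreadMonitor:
    """Illustrative rolling-window tracker: counts mentions of a narrative
    and flags when velocity crosses a pre-agreed war-room threshold."""

    def __init__(self, window_minutes: int = 60, trigger_per_window: int = 500):
        # Hypothetical defaults: 500 mentions per hour triggers escalation.
        self.window = timedelta(minutes=window_minutes)
        self.trigger = trigger_per_window
        self.mentions: deque = deque()  # timestamps, oldest first

    def record(self, timestamp: datetime) -> None:
        """Log one mention and evict anything older than the window."""
        self.mentions.append(timestamp)
        cutoff = timestamp - self.window
        while self.mentions and self.mentions[0] < cutoff:
            self.mentions.popleft()

    def velocity(self) -> int:
        """Mentions observed inside the current window."""
        return len(self.mentions)

    def should_escalate(self) -> bool:
        """True once velocity reaches the escalation trigger."""
        return self.velocity() >= self.trigger
```

In practice, a trigger like this would sit alongside sentiment and spread-pattern signals, and a `should_escalate()` hit would page the pre-designated cross-functional war room rather than act automatically.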

A note on AI and content provenance

As synthetic media scales, standards for disclosure and provenance will matter more. Consider aligning your internal guidelines with emerging industry frameworks, such as the Partnership on AI's responsible practices for synthetic media.

The bottom line

The signal here is strong: coordinated governance, faster collaboration with platforms, and hands-on training are moving from "nice to have" to core infrastructure. For communications leaders, the advantage will go to teams that practice now, document clearly, and respond with speed and integrity when the next wave hits.

