Sharing Harmful AI Content Can Land You in Court, Yadudu Tells Nigerians

Share harmful AI content in Nigeria and you could face prosecution, Yadudu warns. Move fast: preserve evidence, push takedowns, and target both the source and the spreaders.

Categorized in: AI News, Legal
Published on: Oct 26, 2025

Harmful AI Content Is Prosecutable in Nigeria - And Reposting It Can Land You in Court

At the 6th Kano Social Influencers Summit, Auwalu Yadudu, former Minister of Justice and Attorney-General of the Federation, delivered a blunt message: Nigerian law already provides enough hooks to prosecute people who create, broadcast, and rebroadcast AI-generated content that harms individuals or the country.

His point was clear: "Rebroadcasting" harmful content isn't harmless. If you share it, you can be liable.

AI Isn't a Legal Person. Owners Are Accountable.

Yadudu drew a firm line between human personality and AI. Humans have physical existence and legal personality. AI doesn't. What AI generates is ultimately tied to the entities that own and deploy it.

That accountability, however, gets complicated when those owners are foreign platforms. Holding a US or EU company responsible in Nigeria is possible, he argued, but costly and process-heavy.

Legal Hooks You Can Use Today

  • Cybercrimes (Prohibition, Prevention, etc.) Act 2015: Useful for harmful online conduct, including content that threatens, intimidates, or causes harm through computer systems. See the Act text: official PDF.
  • Criminal/Penal Codes and Defamation: False statements that damage reputation remain actionable. Many prosecutions have turned on publication and re-publication.
  • Data Protection: Unlawful processing and dissemination of personal data in AI outputs can trigger regulatory and civil consequences under current data protection norms.
  • Evidence Act (electronic evidence): Digital records are admissible if you get preservation and authenticity right.
  • Platform Compliance Codes: Nigeria's policy instruments for large platforms support takedown requests and cooperation. Use them to accelerate removal and preserve audit trails.

The Hard Problem: Cross-Border Enforcement

Can Nigeria hold foreign AI and platform companies liable here? Yes, in principle. In practice, expect jurisdictional fights, service out of jurisdiction, conflict-of-law issues, and steep costs. The process can be slow.

Yadudu pointed to a past incident where former President Muhammadu Buhari was misrepresented via AI content, and a platform ban was floated as a response. Blunt blocks rarely solve the evidence or accountability problem. Precision beats theatrics.

If You're Litigating: Move Fast and Preserve Everything

  • Preserve evidence: Capture URLs, timestamps (UTC), full-page screenshots, hashes, and metadata. Don't rely on links alone.
  • Takedown and preservation letters: Send to creators, reposters, and platforms. Demand preservation of logs, upload IPs, device fingerprints, and account data.
  • Interim relief: Seek targeted injunctions against further publication and orders for platform cooperation and disclosure.
  • Trace the chain: Map originators and amplifiers. Reposting can create separate liabilities and better local defendants.
  • Budget for cross-border steps: Consider parallel actions in the platform's home forum if leverage in Nigeria stalls.
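The evidence-preservation step above is the one most often botched in practice. As a minimal sketch (the URL, filenames, and field names here are hypothetical, not from any official template), a captured item can be logged with a UTC timestamp and a SHA-256 hash so you can later show the preserved copy is unaltered:

```python
import hashlib
import json
from datetime import datetime, timezone

def capture_evidence_record(url: str, content: bytes, note: str = "") -> dict:
    """Build a minimal evidence record for a captured item: the source URL,
    a UTC timestamp, and a SHA-256 hash of the captured bytes (for example,
    a full-page screenshot file). The hash supports later authenticity
    arguments under electronic-evidence rules."""
    return {
        "url": url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
        "note": note,
    }

# Hypothetical example: in practice, read the bytes of the saved
# screenshot or archived page, then store this record alongside it.
record = capture_evidence_record(
    "https://example.com/post/123",            # hypothetical URL
    b"...screenshot bytes would go here...",   # bytes of the saved capture
    note="Full-page screenshot of original repost",
)
print(json.dumps(record, indent=2))
```

Keeping the record as structured JSON, rather than a loose screenshot folder, makes it easier to hand a clean, verifiable bundle to counsel or a platform.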

If You're In-House: Reduce Exposure Before It Starts

  • Zero-tolerance policy on sharing unverified content: "Do not repost" is a rule, not a guideline.
  • Approval flows for AI-assisted content: Require human review, fact checks, and legal sign-off for sensitive topics.
  • Vendor and platform clauses: Logging, audit rights, takedown SLAs, and indemnities for harmful outputs.
  • Incident playbook: Evidence preservation, simultaneous legal and PR response, and pre-drafted notices.
  • Training: Teach staff that forwarding harmful content can be actionable. No "I just shared it" defense.

On AI Dependence and Accountability

Yadudu warned about over-reliance on AI, especially among students who outsource thinking to tools and then dodge accountability. The bigger problem: people using AI to misinform, misrepresent, and inflame, while others mindlessly amplify it.

His caution lands for the legal community too. Use AI with care, verify outputs, and document your review. If your work touches AI policy, risk, or compliance, staying current is part of the job.

Bottom Line

Nigerian law already covers harmful AI content. It also covers the people who click "share." The hardest fights are cross-border, but there's enough in the toolbox to act quickly, remove harmful content, and hold the right parties to account.

Be precise. Preserve evidence. Target the origin and the amplification. And build policies that keep your organization out of the firing line.

