"AI Can Make Mistakes": Why Media Literacy Matters in the Algorithmic Age
AI is useful, fast, and confident. It's also fallible. That's the point of UNESCO's "Artificial Intelligence Can Be Wrong" campaign and the focus of Global Media and Information Literacy (MIL) Week, headlined by the "Minds over AI: Media and Information Literacy in Digital Spaces" conference in Cartagena on 23-24 October.
The message is simple: treat AI like a powerful assistant, not a final authority. For people working in education, development, and communications, this is no longer optional; it's core practice.
What UNESCO Is Pushing For
UNESCO's MIL agenda sharpens critical thinking through digital literacy, source verification, and a clear view of how algorithms can fail. It encourages universities and organisations to appoint media literacy youth ambassadors, run workshops, and build academic-industry collaboration around fact-checking and accountability.
Resources and policy frameworks focus on access to information, responsible content creation, and sustainable adoption across schools and communities. See UNESCO's overview of MIL initiatives for context and tools.
UNESCO: Media and Information Literacy
The Hidden Risks You Don't See in the Feed
Bias can slip into AI systems through training data and design choices. That shows up in unexpected ways: hate-speech filters that flag disability terms more harshly than slurs, or translation models that wrongly label certain Arabic dialects as offensive.
The lesson: automated moderation and generation need human oversight. Source verification, dataset reviews, and continuous checks on model behavior should start on day one, not after an incident.
How Educators, Developers, and Comms Teams Can Act Now
- Educators
  - Make MIL a required component across disciplines, not a one-off elective.
  - Teach students to ask: "Who created this, why, and what evidence backs it?" A Harvard GSE panel stressed this simple habit over passive consumption. Harvard GSE: Curiosity and Media Literacy
  - Use assignments that require source logs, claim audits, and side-by-side comparisons of AI output vs. verified sources.
- IT and Development
  - Run bias and performance audits on datasets and models. Track false positives and negatives by group (a minimal sketch follows this list).
  - Implement human-in-the-loop review for moderation and high-impact decisions.
  - Document models with clear "model cards," data lineage, and known limitations. Build feedback loops into production.
  - Apply privacy-by-default: minimize PII, set retention limits, and restrict access to sensitive data.
- PR and Communications
  - Adopt a verification protocol: minimum two independent sources before publishing key claims.
  - Disclose AI-assisted content. Keep a review checklist for tone, fairness, and risk of misinterpretation.
  - Prepare a mis/disinformation playbook: monitoring triggers, escalation paths, and correction templates.
  - Use content provenance methods where feasible and archive evidence for high-stakes posts.
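To make "track false positives and negatives by group" concrete, here is a minimal Python sketch of a per-group error audit. The field names (group, label, prediction) and the sample records are illustrative assumptions, not part of any specific toolkit; the shape of the check is what matters.

```python
# Minimal sketch of a per-group error audit for a binary classifier.
# Field names ("group", "label", "prediction") and the sample records
# below are hypothetical; adapt them to your own evaluation data.
from collections import defaultdict

def error_rates_by_group(records):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["pos"] += 1
            if r["prediction"] == 0:
                c["fn"] += 1          # harmful content the model missed
        else:
            c["neg"] += 1
            if r["prediction"] == 1:
                c["fp"] += 1          # benign content wrongly flagged
    return {
        g: (
            c["fp"] / c["neg"] if c["neg"] else 0.0,
            c["fn"] / c["pos"] if c["pos"] else 0.0,
        )
        for g, c in counts.items()
    }

# Example: a moderation model's decisions on posts in two dialects.
sample = [
    {"group": "dialect_a", "label": 0, "prediction": 1},
    {"group": "dialect_a", "label": 1, "prediction": 1},
    {"group": "dialect_b", "label": 0, "prediction": 0},
    {"group": "dialect_b", "label": 1, "prediction": 0},
]
for group, (fpr, fnr) in error_rates_by_group(sample).items():
    print(f"{group}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

A gap between groups in either rate is exactly the kind of signal that should trigger the dataset reviews and human oversight described above.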
Policy and Governance: Set the Guardrails
"Minds over AI" isn't a slogan-it's a reminder that values drive outcomes. Transparency, accountability, and fairness should guide AI use in education and research.
- Publish clear rules for AI use by role and context (teaching, research, comms, student work).
- Cover privacy, equitable access, algorithmic transparency, academic integrity, and disclosure.
- Stand up an oversight group to review incidents, approve vendors, and update guidelines.
- Maintain a risk register and an incident response workflow for AI-related errors or bias reports.
What's Happening in the Region
Universities across the Arab region are moving MIL into practice through courses, workshops, and student-led initiatives. Some, such as Al-Hussein Bin Talal University in Jordan, have made MIL mandatory.
Community efforts include projects that train young "journalist-influencers" to fact-check and integrate ethical standards into content on Instagram, YouTube, Substack, and TikTok. Students can accelerate impact by organizing campus events, leading fact-checking clubs, and becoming youth ambassadors.
Professional Practices: Keep Humans in the Loop
Experts point to three core risks in education and media: bias, privacy violations, and misinformation from over-reliance on automation. AI can flag patterns, but it can't make fair judgments without human supervision.
Practical takeaway: pair technical skills with critical thinking and data protection. Integrate technical ethics and verification workflows into everyday work, not as an afterthought.
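As one way to picture "humans in the loop," here is a minimal sketch of a routing rule that never lets an automated moderation model act alone in the uncertain middle band. The thresholds and the route function are hypothetical assumptions; real systems would tune them against audits like the one sketched earlier.

```python
# Minimal sketch of a human-in-the-loop gate for automated moderation.
# The thresholds and review queue are hypothetical placeholders; the point
# is that uncertain or high-impact decisions always reach a human reviewer.
AUTO_APPROVE_BELOW = 0.2   # assumed threshold: low-risk content is published
AUTO_ESCALATE_ABOVE = 0.9  # assumed threshold: high-risk content is hidden pending review

def route(post_id: str, model_score: float) -> str:
    """Decide what happens to a post given the model's risk score (0..1)."""
    if model_score < AUTO_APPROVE_BELOW:
        return f"{post_id}: published (low risk)"
    if model_score > AUTO_ESCALATE_ABOVE:
        # High-risk items are hidden pending review, not silently deleted.
        return f"{post_id}: hidden pending urgent human review"
    # Everything in between is queued for a reviewer before any action.
    return f"{post_id}: queued for human review"

for pid, score in [("post-1", 0.05), ("post-2", 0.55), ("post-3", 0.97)]:
    print(route(pid, score))
```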
Build Skills That Stick
- For educators: MIL pedagogy, assessment design for AI-assisted work, and academic integrity enforcement.
- For developers: dataset curation, bias testing, documentation, and privacy engineering.
- For comms teams: verification methods, disclosure practices, and crisis response for false content.
If you're aligning training to job roles in education, development, or communications, explore practical AI course paths here: Courses by Job.
Bottom Line: Minds Over AI
AI will keep making mistakes. Our job is to make fewer of them matter.
Teach people to question sources, audit the systems they use, and keep humans accountable for final decisions. That's media literacy with real teeth, and the only way to make AI serve the public interest instead of undermining it.