Global South faces critical gaps in AI readiness, study warns
A new academic review identifies education and digital competence as the primary barriers to safe AI adoption in regions already marked by inequality. The study, published in World, argues that the real challenge isn't artificial intelligence itself; it's how unprepared institutions and citizens are to engage with it critically.
Researchers examined how societies can navigate AI-driven transformation without deepening social exclusion or eroding human agency. They found a consistent pattern: individuals and institutions with higher levels of critical and ethical digital competence are less vulnerable to AI-related risks such as uncritical reliance on automated systems, exposure to biased algorithms, and loss of autonomy.
Digital competence is now a frontline defense
The study defines digital competence as more than basic technical skills. It integrates algorithmic literacy, critical data awareness, ethical reasoning, and socio-emotional capacities.
This broader definition reflects a shift in how digital skills matter. Earlier frameworks focused on access and usability. But algorithmic systems now shape access to information, influence behavior, and automate decisions once made by humans. Without the ability to critically engage with these systems, users risk becoming passive participants in processes they neither understand nor control.
Digital competence functions as a protective mechanism. It enables individuals to maintain agency, evaluate the reliability of AI outputs, and resist manipulative design features embedded in digital platforms.
Policy rhetoric doesn't match implementation
While the importance of digital competence is widely acknowledged in policy discourse, the study finds a significant disconnect between what governments say and what they do. Many international and regional frameworks emphasize employability and technical skills while giving limited attention to ethical governance, algorithmic accountability, or socio-emotional learning.
This imbalance is particularly evident in the Ibero-American context, where rapid digital transformation coexists with persistent structural inequalities. Despite improvements in connectivity, disparities in educational quality, institutional capacity, and access to digital resources continue to shape how AI is adopted and experienced.
Educational systems may produce technically skilled users who lack the critical awareness needed to navigate complex AI-driven environments. This creates conditions where technology adoption outpaces the development of governance mechanisms and civic capacity.
Teacher education emerges as a critical weak point
Educators play a central role in shaping how students interact with digital technologies, yet many training programs remain focused on instrumental uses of technology. Ethical considerations, critical data practices, and AI-related risks are often underrepresented.
This gap limits the ability of educators to foster reflective and responsible engagement with AI among learners.
Without deliberate intervention, AI-driven systems may deepen dependency on external technologies and marginalize communities that lack the resources to engage with them critically.
A five-part framework targets specific risks
Researchers propose a framework that integrates five core dimensions of digital competence:
- Algorithmic literacy - helps users understand how AI systems function and recognize potential biases
- Critical data awareness - enables individuals to question data sources and understand privacy implications
- AI ethics and governance - introduces principles such as transparency, accountability, and fairness into decision-making processes
- Human-AI collaboration skills - focuses on maintaining human oversight and avoiding over-reliance on automated systems
- Civic and socio-emotional capacities - emphasizes empathy, ethical responsibility, and active participation in digital societies
The framework is designed to be relational rather than additive. These dimensions reinforce one another, creating an approach to digital competence that aligns with sustainable and inclusive AI integration.
Implementation requires curriculum and institutional change
The study outlines practical pathways for implementation: integrating AI-related topics into curricula, developing interdisciplinary training programs, and establishing institutional policies that promote ethical AI use.
In teacher education, researchers suggest structured modules that combine technical understanding with ethical reflection and collaborative learning.
The research also challenges the dominant emphasis on employability in digital skills discourse. While adaptability and technical proficiency remain important, they must be complemented by ethical and civic capacities.
The focus shifts from controlling AI to strengthening institutions
AI itself is not inherently harmful. The risks emerge when educational systems, governance structures, and institutional capacities fail to keep pace with technological change. This perspective shifts the focus from controlling AI to strengthening the human and institutional capabilities needed to guide its development and use.
The authors call for more empirical research to test and refine the proposed framework, and for intervention-based studies that examine how digital competence develops over time and how it influences behavior in real-world contexts.