AI systems in humanitarian aid create risks organizations aren't prepared for
Humanitarian organizations are adopting AI tools faster than they can manage the risks, according to research released this week. The haphazard integration of algorithmic systems into aid operations, driven by individual workers using large language models and NGOs deploying chatbots, is leaving vulnerable populations exposed during crises.
A new report examined how AI enters the humanitarian sector, largely unvetted. Researchers conducted more than 70 interviews across the humanitarian, public, academic, and private sectors to map adoption patterns. What they found: algorithmic systems are seeping into internal operations without adequate safeguards, legal frameworks, or human rights protections.
When AI systems fail, consequences are immediate
A U.S. nonprofit's chatbot recently went off-script after a product update activated unexpected AI features. The system began delivering misleading and harmful responses to vulnerable users. The same failure in a humanitarian chatbot used during a crisis or conflict could direct people seeking lifesaving information toward dangerous advice.
The risks extend beyond chatbots. AI-powered digital tools can enable surveillance, contribute to unlawful targeting of civilians in conflict, and automate life-or-death decisions without meaningful human oversight or recourse.
Tech companies are driving adoption, not organizations
Much of the AI uptake in humanitarian aid comes from two directions: aid workers using generative AI and LLMs for daily tasks, and tech companies pushing AI-enhanced features into existing products. Organizations often lack control over when and how these capabilities activate.
This creates what the research describes as "corporate capture": a pattern in which even large international organizations become dependent on cloud-based systems and locked into relationships with Big Tech vendors.
Smaller organizations face the steepest barriers
Global Majority aid workers and grassroots organizations are experimenting with AI on the front lines. Yet they lack the resources, legal expertise, and funding for IT infrastructure that larger organizations possess, deepening the existing divide between well-resourced agencies with direct access to tech companies and smaller organizations struggling to maintain principled operations and protect community rights.
Open-source systems and co-development with trusted providers appear to minimize algorithmic risks. But most humanitarian actors lack the budget and technical capacity for this approach.
What operations leaders should do
The research offers specific recommendations for actors across the sector:
- Donors should push for changes in tech governance and procurement processes
- Regulators should strengthen transparency and due diligence requirements across algorithmic supply chains
- Organizations should adopt governance frameworks that connect procurement, cybersecurity, protection, and human rights
- Tech companies should invest in humanitarian expertise and rebuild trust with aid organizations and communities
- Local actors should champion local tech solutions and conduct algorithmic audits on proposed systems
- Research and cybersecurity groups should analyze tech adoption through the lens of NGOs in low-income and conflict-affected areas
For operations professionals, the core issue is straightforward: bringing AI into operations requires deliberate institutional decisions, not reactive adoption. Without governance frameworks that prioritize human rights and due diligence, organizations risk the unintended consequences that emerge when algorithmic systems fail in the field.