Stop Development Project Failure: Use a People-Centered AI Playbook
Too many AI-for-development projects fail for the same reason: teams lead with the technology and skip the human work. The People-Centered AI Playbook from Dalberg Data Insights offers a clear fix. It starts with users, workflows, and organizations, not models or buzzwords. If you build for real conditions first, the tech choice becomes obvious.
The Tech-First Trap
Importing methods from high-resource contexts rarely works in low-resource environments. Bandwidth, devices, and data are part of the puzzle, but they're not the choke point. The real bottleneck is people: trust, capacity, incentives, and fit within existing systems. The playbook makes this explicit and puts the human research up front.
Six Phases That Put People First
- Discover: Weeks of user interviews, workflow mapping, and org assessment before any code. Find the actual constraints.
- Define: Pressure-test whether AI is even needed. Compare AI against simpler fixes: process changes, basic tools, training, or policy.
- Design: Prototype with real users in real contexts. Reduce friction and make the workflow obvious.
- Develop: Build the smallest viable system that integrates with current tools and data flows.
- Pilot: Test with clear success criteria, equity checks, and feedback loops. Kill ideas that don't meet the bar.
- Scale: Institutionalize, retrain, and adapt across contexts. Expansion is a systems change, not just "more users."
Three Critical Insights That Challenge Conventional Wisdom
1) AI readiness is about people systems. Most assessments obsess over infrastructure. The smarter move is "people readiness": willingness, skills, incentives, governance, and culture. Tools like Microsoft's responsible AI guidance and GSMA's AI ethics work help, but Dalberg's DART zeroes in on social impact settings. Government-led efforts that integrate with existing services consistently outperform standalone apps because they start with institutions, not features.
GSMA AI Ethics is a solid reference for teams building in sensitive contexts.
2) Problem definition beats solution invention. The Define phase asks a blunt question: is AI the right tool? If the task isn't high-volume, repetitive, or pattern-based, or if a simpler fix works just as well, don't build an AI system. In constrained settings, forcing AI adds cost, risk, and fragility. Let the problem choose the tool, not the other way around.
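To make that test concrete, here is a minimal triage sketch. Everything in it is hypothetical: the `UseCase` fields, the `ai_is_warranted` helper, and the volume threshold are one way to encode the Define-phase logic, not artifacts from the playbook.

```python
# Hypothetical Define-phase triage; names and thresholds are
# illustrative, not from the Dalberg playbook.
from dataclasses import dataclass

@dataclass
class UseCase:
    monthly_cases: int           # task volume
    is_repetitive: bool          # same steps every time?
    is_pattern_based: bool       # predictable from historical data?
    simpler_fix_available: bool  # process change, training, basic tooling

def ai_is_warranted(uc: UseCase, volume_threshold: int = 1_000) -> bool:
    """AI clears the bar only if no simpler fix works and the task
    is high-volume, repetitive, and pattern-based."""
    if uc.simpler_fix_available:
        return False  # the cheaper, less fragile option wins by default
    return (uc.monthly_cases >= volume_threshold
            and uc.is_repetitive
            and uc.is_pattern_based)

# A low-volume task with a simpler fix available fails the test.
print(ai_is_warranted(UseCase(200, False, True, True)))  # False
```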
3) Scaling means building resilient systems. Scale isn't about vanity metrics. It's about policies, training, data stewardship, and continuous adaptation across languages, geographies, and workflows. The playbook pushes teams to move from activity tracking to credible impact evaluation (performance, equity, and cost-effectiveness) before expanding. That's why serious guidance stresses iterative learning over one-off deployments.
Cross-Cutting Enablers: Where the Real Work Happens
- People: Trust, leadership buy-in, skills, and change management. No adoption, no impact.
- Equity & Inclusion: Who is represented in data and testing? Who faces barriers such as connectivity, literacy, language, or device access? Bake inclusion into every phase.
- Data Governance: Quality, access, privacy, security, and compliance from day one. Ethics is an operating requirement, not an add-on.
Implementation Reality Check
Most teams don't have every skill in-house. That's normal. The playbook recommends partnering smartly and keeping core knowledge internal. Outsource short-term, specialized tasks. Build durable capability where it matters.
- Partner with universities, local tech-for-good groups, or global networks for research, evaluation, and specialized modeling.
- Outsource data labeling and short-run engineering sprints; retain product ownership and user research internally.
- Adopt practical templates: user personas, problem statements, use-case definitions, and feasibility assessments to make decisions fast.
- Set decision gates: adoption targets, accuracy thresholds, bias checks, cost per outcome, and rollback plans.
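As one way to operationalize those gates, consider the small sketch below. The `PilotResults` fields, the `passes_gate` function, and every threshold value are hypothetical placeholders; real gates should come from your own pilot design.

```python
# Hypothetical pilot decision gate; every threshold below is a
# placeholder your team would set, not a number from the playbook.
from dataclasses import dataclass

@dataclass
class PilotResults:
    adoption_rate: float          # share of target users active weekly
    accuracy: float               # task accuracy on held-out cases
    subgroup_accuracy_gap: float  # worst-vs-best group accuracy delta
    cost_per_outcome: float       # total cost / successful outcomes, USD

def passes_gate(r: PilotResults) -> bool:
    """Scale only if every gate passes; otherwise iterate or roll back."""
    return (r.adoption_rate >= 0.60
            and r.accuracy >= 0.85
            and r.subgroup_accuracy_gap <= 0.05   # equity check
            and r.cost_per_outcome <= 3.00)

pilot = PilotResults(adoption_rate=0.72, accuracy=0.88,
                     subgroup_accuracy_gap=0.04, cost_per_outcome=2.40)
print("scale" if passes_gate(pilot) else "iterate or roll back")
```

A hard pass/fail gate like this keeps the kill decision honest: if any threshold fails, the default is rollback, not renegotiating the bar.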
Interdisciplinary teams win. Pair domain experts with data scientists, ethicists, and service designers. As Stanford HAI has shown, this mix produces solutions that last.
What This Means for Your Practice
You don't need to become an AI cheerleader or a skeptic. You need a method. Start human-first, test whether AI is warranted, build the smallest system that fits the workflow, and scale through institutions with evidence. Treat AI as one tool among many, and always justify it against simpler options.
If your team needs role-specific upskilling to execute this approach, browse our curated learning paths by job function: Courses by Job. Or scan the latest programs to fill targeted skill gaps: Latest AI Courses.
Do the human work first. The tech will follow.