Capgemini builds agentic AI assistant with UNICEF to connect Brazilian youth to green careers
Capgemini has developed an AI-based career assistant that gives young people in Brazil round-the-clock guidance for finding green learning paths and local sustainability jobs. It uses a multi-agent system to interview users, infer their skills, detect gaps, and match them with verified opportunities.
The tool emerged from Capgemini's Global Data Science Challenge and will support UNICEF's Green Rising initiative, which has already mobilized millions of youth with practical climate skills. The winning solution will be released under an open-source license to encourage reuse and local adaptation.
Why it matters for builders
Most young people want climate-focused work, yet only 44% feel prepared. A focused, verifiable AI assistant can close that gap by connecting intent to local training and real jobs, without a bloated UI or vague advice.
Early evaluation using AI-simulated personas shows close to 80% success in matching users to relevant jobs and training paths, with strong UX feedback. That's a promising signal for a production pilot.
What's inside the assistant
- Multi-agent orchestration: Roles for intake interviewing, skills/interest inference, recommendation, verification, and safety. Each agent has a clear contract to reduce ambiguity and drift.
- Knowledge graph + generative dialogue: Natural conversation feeds structured facts into a knowledge graph, improving traceability and ranking of matches. The system exposes its reasoning so users see why roles are suggested.
- Local data grounding: Green job listings and training programs validated by UNICEF Brazil anchor recommendations to real options, not generic templates.
- Infra and models: AI models from Mistral AI and cloud orchestration from AWS were used in the challenge. The stack favors modular services for retrieval, scoring, and guardrails.
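The agent contracts described above can be sketched in a few lines. This is an illustrative sketch only: the agent names, dataclasses, and overlap-based scorer are assumptions for demonstration, not the actual Capgemini/UNICEF implementation.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    """Typed contract handed from the intake agent to the recommender."""
    skills: list[str]
    interests: list[str]


@dataclass
class Match:
    job_id: str
    score: float
    reasons: list[str]  # surfaced to the user as "why this match"


def intake_agent(transcript: list[str]) -> Profile:
    """Toy intake: pull tagged facts out of a dialogue transcript."""
    skills = [u.split(":", 1)[1].strip() for u in transcript if u.startswith("skill:")]
    interests = [u.split(":", 1)[1].strip() for u in transcript if u.startswith("interest:")]
    return Profile(skills=skills, interests=interests)


def recommender_agent(profile: Profile, jobs: dict[str, set[str]]) -> list[Match]:
    """Rank jobs by skill overlap, keeping the reasons behind each score."""
    matches = []
    for job_id, required in jobs.items():
        overlap = required & set(profile.skills)
        if overlap:
            matches.append(Match(job_id, len(overlap) / len(required),
                                 [f"you listed '{s}'" for s in sorted(overlap)]))
    return sorted(matches, key=lambda m: m.score, reverse=True)
```

Because each handoff is a typed object rather than free text, a verifier or safety agent can inspect the chain at any step without re-parsing model output.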
For IT and development teams: how to build something similar
- Define the ontology: Map green job families, skills, certifications, and training providers. Keep it small, auditable, and versioned.
- Ground the data: Ingest local job feeds and training catalogs. Add human validation hooks with program partners to keep entries current.
- Design agent roles: Split concerns: intake, profiling, recommender, verifier, safety. Use function calling and typed schemas for stable handoffs.
- Verifiability by default: Store recommendation chains (source docs, ranking scores, reasoning summaries). Return "why this match" to the user.
- Evaluation harness: Create synthetic personas and scripted flows to test precision/recall, fairness across regions, and UX satisfaction. Track regressions with CI.
- Safety and privacy: Minimize PII, apply consent gates, and restrict storage duration. Add toxicity filters, age checks, and escalation paths to human support.
- Localization: Prioritize Portuguese, regional terms, and offline-friendly UX for low-connectivity areas. Provide fallback routes when no match exists.
Results so far
The prototype shows nearly 80% success at matching users to relevant green jobs and training paths in simulated testing. It also surfaces skill gaps and offers local learning routes to close them.
The team combined conversational AI with a knowledge graph to keep dialogue natural while making decisions traceable. That blend avoids opaque suggestions and helps build trust with young users and program partners.
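The pattern of feeding structured facts from free dialogue into a graph can be shown with a minimal triple store. This sketch assumes an upstream extractor emits (subject, predicate, object) triples; all names and predicates here are hypothetical, not the system's actual schema.

```python
# In-memory triple store standing in for the knowledge graph.
triples: set[tuple[str, str, str]] = set()


def record_fact(subject: str, predicate: str, obj: str) -> None:
    """Persist one structured fact extracted from the conversation."""
    triples.add((subject, predicate, obj))


def explain_match(user: str, job: str, requirements: set[str]) -> list[str]:
    """Trace a recommendation back to stored facts instead of opaque model output."""
    user_skills = {o for s, p, o in triples if s == user and p == "has_skill"}
    return [f"{job} requires '{skill}', which {user} has"
            for skill in sorted(requirements & user_skills)]
```

Because every explanation is derived from recorded triples, the same store that ranks matches also answers "why this match" for users and auditors.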
What's next
Once refined and tested with real users, the assistant will support the scale-up of UNICEF's Green Rising, which has already engaged tens of millions of young people. The open-source release should enable NGOs, schools, and municipalities to adapt the system to local markets.
For teams planning similar builds, invest early in data validation loops, transparent recommendations, and measurable outcomes. It shortens iteration cycles and builds institutional trust.
Developer takeaway
- Agentic systems deliver real value when they are grounded in verified data and return explanations, not guesses.
- Knowledge graphs still matter, especially for skill/job ontologies and stable matching logic.
- Evaluation with synthetic personas is a fast, cheap way to uncover failure modes before pilots.
Upskilling resources
If you're leading an internal build of agent-based assistants, you may find curated training paths useful, such as AI courses from leading AI companies.