GSMA Launches Global AI Telco Troubleshooting Challenge to Cut Outages with Explainable, Edge-Ready LLMs

GSMA, ETSI, IEEE GenAINet, ITU and TM Forum launch an AI telco challenge for faster, more accurate RCA with LLMs. Focus: reliability, cost, edge, and explainability.

Published on: Nov 27, 2025
GSMA Launches Global AI Telco Troubleshooting Challenge: What Leaders Need to Know

Network faults drain budgets, erode customer trust, and slow growth. The GSMA's new AI Telco Troubleshooting Challenge, launched with ETSI, IEEE GenAINet, ITU and TM Forum, targets the core issue: faster, more accurate root cause analysis (RCA) using large language models.

If you lead operations, strategy, or transformation, this is worth your attention. The challenge is built to surface practical AI models you can deploy across live networks, with a clear focus on results, efficiency, and explainability.

What the challenge covers

The competition invites submissions across categories that map directly to executive priorities: reliability, cost control, and deployment readiness. It addresses both cloud and edge environments, with an emphasis on clear reasoning, not just predictions.

  • Generalisation to New Faults: Assess LLMs that diagnose previously unseen network issues and provide RCA that holds up in production.
  • Small Models at the Edge: Evaluate compact, efficient models that can run on edge infrastructure where latency and cost matter.
  • Explainability & Reasoning: Prioritise systems that show their work so engineers and auditors can trust decisions.
  • Security & Edge-Cloud Deployments: Additional categories focus on secure architectures for distributed AI.
  • Enablement for Developers: Accelerators that let teams build and ship AI services faster.

Why it matters for executives

  • Fewer outages, faster recovery: Better RCA reduces mean time to detect and repair, lifts SLA performance, and cuts penalties.
  • Lower OPEX: Automating detection and diagnosis saves engineer hours and improves first-time fix rates.
  • Edge economics: Small, efficient models enable local inference where bandwidth, latency, and privacy constraints apply.
  • Auditability: Explainable models simplify sign-off with operations, compliance, and regulators.
  • Vendor clarity: Head-to-head benchmarking lowers buying risk and speeds up internal approvals.

Judging criteria

Submissions will be evaluated on accuracy, efficiency, reasoning capability, and security considerations. This is not a demo or theory contest: models must deliver measurable outcomes and robust reasoning that teams can validate.

Who's involved

The challenge is led by ETSI, GSMA, IEEE GenAINet, ITU and TM Forum, with headline support from Huawei, InterDigital, NextGCloud, RelationalAI, xFlowResearch, and technical advisors from AT&T. It builds on curated datasets like TeleLogs and benchmarking work from the GSMA Open-Telco LLM Benchmarks community, which tracks model performance on telco-specific tasks.

Industry leaders are aligned on the signal here: generalisation to unseen faults, explainability, and edge efficiency are the make-or-break factors for AI-native network operations. Progress on these fronts translates directly into uptime and margin.

Notable perspectives from partners

ETSI highlights the importance of small language models that run effectively at the edge: lower cost, easier deployment, and fewer barriers across diverse environments. IEEE GenAINet underscores generalisation, interpretability, and edge-efficient AI as the path to autonomous, adaptive networks.

ITU points to the value of global AI challenges in connecting teams with compute, datasets, and mentorship to move ideas to impact. TM Forum frames this as a step toward production-grade network autonomy, guided by its AI-Native Blueprint.

GSMA stresses that RCA is a major pain point with clear ROI upside. AT&T adds momentum with results from a 4B-parameter small language model that topped the GSMA Open-Telco LLM Benchmarks TeleLogs RCA task, outperforming larger frontier models in that evaluation.

Key dates

  • Submissions open: 28 November 2025
  • Submissions close: 1 February 2026
  • Winners announced: Prize-giving at MWC26 Barcelona

What to do now

  • Nominate a team: Combine network operations, data, and security leads with an internal AI group or an external partner.
  • Prep data access: Identify relevant logs (e.g., alarm, KPI, topology), define privacy constraints, and set a safe sandbox for evaluation.
  • Set the scorecard: Standardise success metrics such as MTTR reduction, RCA accuracy, false-positive rate, inference cost, and explainability quality.
  • Pilot at the edge: Prioritise lightweight models for high-traffic sites where latency and bandwidth savings matter most.
  • Plan for security: Map threat models for edge-cloud, API access, and data residency before any scale-up.
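To make the scorecard idea concrete, here is a minimal sketch of how a team might compute three of the suggested metrics (RCA accuracy, false-positive rate, and MTTR reduction) from labelled incident records. The `Incident` record and all field names are hypothetical, not part of the challenge's specification:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    # Hypothetical incident record: the true vs. model-predicted root cause,
    # whether the model raised an alarm, whether a real fault existed, and
    # repair times (minutes) with and without AI-assisted RCA.
    true_cause: str
    predicted_cause: str
    alarm_raised: bool
    fault_present: bool
    baseline_mttr: float
    assisted_mttr: float

def scorecard(incidents: list[Incident]) -> dict[str, float]:
    """Compute simple scorecard metrics over a set of evaluated incidents."""
    faults = [i for i in incidents if i.fault_present]
    healthy = [i for i in incidents if not i.fault_present]

    # RCA accuracy: fraction of real faults where the predicted cause matched.
    rca_accuracy = sum(i.predicted_cause == i.true_cause for i in faults) / len(faults)

    # False-positive rate: fraction of healthy periods where an alarm fired.
    false_positive_rate = sum(i.alarm_raised for i in healthy) / len(healthy)

    # MTTR reduction: relative drop in total repair time with AI assistance.
    mttr_reduction = 1 - sum(i.assisted_mttr for i in faults) / sum(i.baseline_mttr for i in faults)

    return {
        "rca_accuracy": rca_accuracy,
        "false_positive_rate": false_positive_rate,
        "mttr_reduction": mttr_reduction,
    }
```

Feeding in a handful of labelled incidents yields one comparable number per metric, which makes head-to-head vendor or model comparisons straightforward; inference cost and explainability quality would need their own measurement pipelines.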

Strategic takeaway

This challenge gives you a direct way to test what actually works on your networks before you commit to large rollouts. If you're under pressure to improve uptime and reduce OPEX without adding headcount, this is a practical lever.

Use the results to guide vendor selection, refine your edge AI strategy, and build internal confidence with transparent, auditable AI that your engineers can trust.

