MWC 2026 signals shift to agentic AI as core operating model for telecom networks

Telecom carriers are moving from AI chatbots to autonomous network operations, with MWC 2026 announcements centering on agentic AI that executes tasks, not just advises. Nokia, AWS, NTT Docomo, and NEC all showed working demos.

Published on: Apr 03, 2026

Telcos Shift From AI Assistants to Autonomous Network Operations

Communications service providers are moving beyond chatbots and decision-support tools. At Mobile World Congress 2026 in Barcelona, the industry's major announcements pointed to agentic AI as the next operating model for networks - not as a conversational layer, but as an execution engine that runs network operations with human oversight.

This marks a fundamental change in how telcos approach automation. Rather than replacing engineers, agentic AI formalizes closed-loop operations patterns: alarm correlation, ticket enrichment, and policy tuning for differentiated services. Nokia and Amazon Web Services demonstrated this with 5G-Advanced network slicing; NTT Docomo and NEC showed automated 5G core construction.

Bounded autonomy replaces brittle automation

Earlier automation programs failed because they were rigid, isolated, and overly deterministic. Agentic approaches can coordinate across tools and handle edge cases - but only if vendors keep actions within strict limits and base decisions on accurate network data.

The practical wins will be narrow and reversible. An agent can enrich a ticket with context or suggest a policy adjustment, but humans retain control over execution. This is where agentic automation differs from past initiatives: the agent works within guardrails tied to measurable outcomes.
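The guardrail pattern described above can be sketched in a few lines. This is an illustrative assumption of how a bounded-autonomy layer might work, not any vendor's actual implementation: safe, additive actions (ticket enrichment, alarm correlation) run automatically, riskier policy changes queue for human approval, and anything outside the declared scope is refused outright.

```python
from dataclasses import dataclass

# Hypothetical bounded-autonomy guardrail: the agent proposes actions,
# but only allowlisted, additive ones execute automatically; policy
# changes wait for human sign-off. Action names are illustrative.

SAFE_ACTIONS = {"enrich_ticket", "correlate_alarms"}   # read-only / additive
GATED_ACTIONS = {"adjust_qos_policy", "resize_slice"}  # need human approval

@dataclass
class ProposedAction:
    name: str
    params: dict
    rollback: dict  # parameters needed to undo the change

class Guardrail:
    def __init__(self):
        self.pending: list[ProposedAction] = []  # awaiting human approval
        self.audit_log: list[str] = []           # every decision is recorded

    def submit(self, action: ProposedAction) -> str:
        if action.name in SAFE_ACTIONS:
            self.audit_log.append(f"auto-executed {action.name}")
            return "executed"
        if action.name in GATED_ACTIONS:
            self.pending.append(action)
            self.audit_log.append(f"queued {action.name} for approval")
            return "pending_approval"
        self.audit_log.append(f"rejected {action.name}: outside scope")
        return "rejected"
```

The key design choice is that the allowlists, not the agent's own judgment, define the autonomy boundary, and every gated action carries its rollback parameters so an approved change stays reversible.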

The primary risk is what researchers call "agent-washing" - autonomy narratives without production-grade controls, proofs, and metrics. CSPs and vendors that industrialize bounded autonomy with explicit governance and measurable KPIs should see durable advantage. Those that don't will see adoption stall.

AI factories emerge as a defensible business

Beyond network operations, CSPs are packaging AI infrastructure as a product. The thesis is straightforward: enterprises with sensitive workloads cannot or will not run them exclusively on hyperscalers. Telcos can offer sovereign, policy-governed AI capacity with data residency guarantees.

What distinguishes this from earlier "edge compute" hype is explicit commercial structure: GPU capacity, platform software, security governance, and managed services positioned closer to cloud economics than custom professional services.

Telenor and Red Hat demonstrated this with multi-tenant platforms that emphasize locality, compliance, and operational control. Success requires capacity planning, high GPU utilization, platform reliability, and ecosystem integration. The most likely winners will focus on regulated verticals and low-latency regional interconnect, then partner aggressively for elements they cannot differentiate.

Without proof of utilization, reliability, and repeatability, "AI factories" risk replaying the failed edge-compute cycles of the past.

Networks become programmable control surfaces

Telco roles extend beyond compute and transport. The network itself is being framed as a programmable instrument for AI applications. Nokia's Network as Code integration with Google Cloud's agentic AI stack positions network capabilities - quality, slicing, prioritization, security - as software surfaces that agents can request and tune programmatically.

This creates two opportunities. First, APIs can monetize differentiated network behavior for latency-sensitive or safety-critical applications. Second, agents can incorporate network state and policy into automated decisions in real time.
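What such a programmatic request might look like can be sketched as follows. The endpoint, field names, and profile names here are illustrative assumptions loosely modeled on quality-on-demand-style exposure APIs, not the actual Nokia Network as Code or CAMARA schema:

```python
# Hedged sketch of an agent-facing network API request: an application
# (or an AI agent acting for it) asks the operator for a time-bounded
# quality boost between a device and an application server. All field
# and profile names below are assumptions for illustration only.

def build_qod_request(device_ip: str, app_server_ip: str,
                      qos_profile: str, duration_s: int) -> dict:
    """Build the JSON body for a quality-on-demand session request."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return {
        "device": {"ipv4Address": device_ip},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,  # e.g. "low_latency" (illustrative name)
        "duration": duration_s,     # seconds the boosted treatment lasts
    }
```

In practice an agent would POST such a body to the operator's exposure gateway, monitor the session, and tear it down when the latency-sensitive workload finishes; the time bound is what makes the request billable and the behavior measurable.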

The challenge is commercialization. Network APIs fail when they are difficult to use, inconsistently implemented, or priced without clear linkage to business outcomes. Viable strategies narrow focus to outcome-based APIs, establish consistent governance, and ensure operator-to-operator portability. Standardization efforts like CAMARA and the GSMA Open Gateway will determine whether this scales.

Data center interconnection becomes strategic

As AI workloads sprawl across clusters and regions, the bottleneck shifts from compute to networking. Data center bandwidth, latency variance, congestion control, and power efficiency now determine whether distributed infrastructure behaves like a single compute domain.

Hyperscalers will continue to build their own dark fiber and custom fabrics where scale justifies the investment. But a large middle market - enterprises, sovereign clouds, and regional AI service providers - needs AI-grade interconnect without hyperscaler capex.

Telcos can win by productizing deterministic optical underlays, AI-aware routing, and managed dark-fiber operations for this non-hyperscaler market. The shift is from "backbone plumbing" to platform capability with measurable SLAs and automated operations.

Device orchestration becomes operational control

SGP.32, the GSMA's eSIM standard for IoT devices, emerged as critical infrastructure for distributed AI systems. Rather than a connectivity convenience, eSIM is becoming an operational control plane: standardized provisioning, profile management as code, and resilience in multi-operator deployments.

As enterprises push AI processing closer to where data is generated - sites, vehicles, retail operations - connectivity and device life cycle become intertwined with placement decisions. A telco's role is bundling SGP.32-driven onboarding with secure connectivity and regional edge compute options, governed by clear policies and predictable energy economics.
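"Profile management as code" can be made concrete with a small sketch. This is an illustrative assumption, not the SGP.32 protocol itself - real flows run through an eIM/IPA and the operator's SM-DP+ - but it shows the declarative idea: desired operator profiles are declared per region, and a reconciler computes which devices need a profile download or switch.

```python
# Illustrative fleet reconciler for an SGP.32-style deployment: given a
# declared region -> operator-profile policy, compute the provisioning
# actions needed to converge each device. Field names are assumptions.

DESIRED = {"eu-west": "operatorA", "us-east": "operatorB"}

def plan_changes(fleet: list[dict]) -> list[tuple[str, str]]:
    """Return (device_id, action) pairs to converge on the desired state."""
    actions = []
    for device in fleet:
        want = DESIRED.get(device["region"])
        if want is None:
            continue  # no policy declared for this region
        if want not in device["installed"]:
            actions.append((device["id"], f"download:{want}"))
        elif device["enabled"] != want:
            actions.append((device["id"], f"enable:{want}"))
    return actions
```

The appeal of this pattern is the same as infrastructure-as-code elsewhere: the policy is versioned and auditable, and onboarding a new site or switching operators becomes a diff against declared state rather than a manual per-device procedure.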

This also addresses a real constraint: moving inference outward reduces backhaul intensity and latency, but it creates new operational burdens. Powering, cooling, and managing many smaller AI footprints with consistent security requires repeatable deployment playbooks.

Operations teams must focus on execution discipline

For operations professionals, the takeaway is clear: agentic AI and AI infrastructure are no longer experimental. They are reshaping how networks run and how telcos compete.

Success depends on execution, not marketing. CSPs that industrialize bounded autonomy with explicit governance, prove measurable KPIs, and operationalize distributed infrastructure at scale will build durable advantage. Those that don't will find themselves repeating cycles of failed automation and edge-compute initiatives.

The competitive environment will remain unforgiving. Hyperscalers dominate generic GPU supply and extreme-scale interconnect. Telcos must win where they are structurally advantaged: regulated industries, data sovereignty, deterministic performance, and operational trust. That requires narrow, provable wins scaled through partnerships - not broad ecosystem narratives.

