SK Telecom's new CEO puts AI infrastructure at the center of the growth plan
SK Telecom has a new boss, Jai-hun Jung, and a clear direction: build national-scale AI infrastructure and let it fuel the next phase of growth. At the SK AI Summit 2025, the message was direct: compute, data centers, and network-edge integration will anchor the company's strategy.
For executives, this isn't a branding shift. It's a new operating model for a telco: move from connectivity provider to AI infrastructure platform, delivered with sovereign-grade reliability and returns that justify heavy investment.
Why it matters for operators and enterprise buyers
Telecom operators sit on the ingredients AI needs: spectrum, fiber, edge locations, and customer data (with consent and controls). Turning that into scalable AI infrastructure is a logical next step, provided the execution, security, and partnerships are tight.
SK Telecom's direction also hints at how the market will consolidate: fewer vendors, larger commitments, and integrated stacks that tie GPUs, storage, networking, and developer services together under one accountable SLA.
The strategy in three tracks
- Compute scale-up: Build or expand AI data centers with high-availability clusters, likely including next-gen accelerators such as NVIDIA Blackwell GPUs. Expect a mix of owned capacity and strategic leasing to manage supply, power, and demand volatility.
- Data pipelines and governance: Turn telco-grade datasets into compliant, high-value features. That means strict consent, privacy-preserving architectures, and retrieval systems that keep sensitive data local while enabling high-quality model outputs.
- Network-edge integration: Tie 5G/6G, MEC, and fiber assets to low-latency AI services. The edge becomes a placement decision: what runs in core vs. micro data centers vs. on-device, based on cost, latency, and regulation.
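To make that placement decision concrete, here is a minimal sketch of how a workload might be routed across on-device, micro data center, and core tiers based on latency, data-residency, and cost constraints. The tier table, thresholds, and workload fields are illustrative assumptions, not SK Telecom's actual policy.

```python
from dataclasses import dataclass

# Illustrative tiers only; real operators use far richer policy engines.
TIERS = {
    "on_device": {"typical_latency_ms": 10,  "cost_per_gpu_hour": 0.0, "residency_ok": True},
    "micro_dc":  {"typical_latency_ms": 30,  "cost_per_gpu_hour": 2.5, "residency_ok": True},
    "core_dc":   {"typical_latency_ms": 120, "cost_per_gpu_hour": 1.0, "residency_ok": False},
}

@dataclass
class Workload:
    name: str
    latency_budget_ms: int      # end-to-end latency the service can tolerate
    requires_local_data: bool   # e.g., regulated or consent-bound datasets
    gpu_hours: float            # rough monthly compute demand

def place(workload: Workload) -> str:
    """Pick the cheapest tier that satisfies latency and residency constraints."""
    candidates = []
    for tier, spec in TIERS.items():
        if spec["typical_latency_ms"] > workload.latency_budget_ms:
            continue  # tier cannot meet the latency budget
        if workload.requires_local_data and not spec["residency_ok"]:
            continue  # tier cannot keep regulated data local
        candidates.append((spec["cost_per_gpu_hour"] * workload.gpu_hours, tier))
    if not candidates:
        return "no_fit"
    return min(candidates)[1]

print(place(Workload("industrial_vision", latency_budget_ms=20,
                     requires_local_data=True, gpu_hours=100)))  # -> on_device
```

In practice an operator folds in many more signals (capacity headroom, energy price, SLA class), but the shape of the decision is the same: constraints first, then cost.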
Execution moves to watch
- CAPEX and energy posture: Multi-year spend for GPU clusters, high-speed interconnects, and liquid cooling. Look for targets like sub-1.2 PUE (power usage effectiveness; see the quick illustration after this list) and long-term renewable power purchase agreements (PPAs) to stabilize energy costs and meet ESG commitments.
- Partnerships and alliances: Silicon vendors, model providers, and regional operators (e.g., cross-border alliances for 6G and AI services) to accelerate go-to-market and share risk.
- Platform and APIs: Managed model services, vector databases, and developer tooling with transparent pricing. Expect Korean-language model strength, domain-tuned options, and private deployments for sensitive workloads.
- Security hardening: Post-incident resilience with segmentation, immutable backups, and a zero-trust architecture. In AI data centers, compromise containment and rapid restore matter as much as throughput.
- Governance: Clear policies for model risk, content provenance, and evaluation. Enterprises will ask for red-teaming results, audit trails, and incident response playbooks.
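For reference on the PUE target above: power usage effectiveness is total facility energy divided by IT equipment energy, so a sub-1.2 figure means less than 20% overhead for cooling, power conversion, and everything else. A back-of-the-envelope check with made-up numbers:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 10 GWh drawn by IT gear, 1.5 GWh of cooling and other overhead.
it_load = 10_000_000        # kWh consumed by servers, storage, networking
overhead = 1_500_000        # kWh for cooling, power conversion, lighting
facility_total = it_load + overhead

value = pue(facility_total, it_load)
print(f"PUE = {value:.2f}")               # 1.15
print("meets sub-1.2 target:", value < 1.2)
```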
Monetization paths SK Telecom is likely to prioritize
- AI infrastructure-as-a-service: Reserved and burst GPU capacity with QoS tiers, local data residency options, and enterprise-grade SLAs.
- Vertical solutions: Contact center AI, network automation, industrial vision, and geofenced analytics, bundled with connectivity and managed services.
- Sovereign AI projects: Government and critical-infrastructure contracts that require local compute, vetted models, and strict compliance.
- Interconnect and peering: High-speed routes between AI regions to support cross-border training and inference with predictable latency and cost.
Risks and leading indicators
- Supply constraints: Delivery timelines for GPUs, memory, and networking. Watch order lead times and allocation visibility.
- Power and permits: Capacity additions depend on grid access, renewable sourcing, and cooling approvals.
- Regulatory pressure: Data residency, model transparency, and content provenance rules that affect deployment choices.
- Utilization and ROI: The margin story lives and dies on high utilization and efficient scheduling across training and inference workloads.
- Vendor concentration: Overreliance on a single silicon or model stack can introduce cost and bargaining risk. Expect hedges and multi-vendor pilots.
What this means for your roadmap
If you buy AI at scale, expect bundled offers: compute, storage, networking, and managed services under one price. That simplifies procurement but raises switching costs. Negotiate for portability: data egress terms, model export paths, and open interfaces.
If you operate regulated workloads, push for sovereign options with verified isolation and clear incident response. Co-design governance with the provider: evaluation criteria, audit cadence, and retraining triggers should be documented up front.
Quick decision checklist for executives
- Reserve capacity early if your 2025-2026 AI plans hinge on guaranteed GPUs and latency targets.
- Ask for a capacity and energy roadmap: PUE targets, renewable mix, and backup power plans.
- Benchmark SLAs beyond uptime: inference latency, queueing guarantees, and restore objectives.
- Validate vendor lock-in exposure: model portability, vector DB formats, and API compatibility.
- Run a joint security exercise: simulated breach, data exfiltration test, and recovery drill.
- Align on total cost of ownership (TCO): include power, cooling, egress, support tiers, and retraining cycles.
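To ground that last item, the sketch below totals the cost lines named in the checklist over a contract term. Every figure is a placeholder assumption, not a quote from SK Telecom or any vendor; the point is to make the full cost stack visible, not just the GPU line, before signing.

```python
# Hypothetical three-year TCO for a reserved GPU commitment; all numbers are placeholders.
YEARS = 3

annual_costs = {
    "reserved_gpu_capacity": 2_400_000,  # committed compute spend
    "power_and_cooling":       380_000,  # bundled or pass-through energy fees
    "data_egress":             120_000,  # moving datasets and model outputs out
    "support_tier":            150_000,  # enterprise SLA / premium support
    "retraining_cycles":       300_000,  # periodic fine-tuning or retraining runs
}

tco = sum(annual_costs.values()) * YEARS
print(f"{YEARS}-year TCO: ${tco:,.0f}")
for item, cost in annual_costs.items():
    share = cost * YEARS / tco
    print(f"  {item:<24} {share:6.1%} of total")
```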
Context and signals
Industry activity points in the same direction: larger AI data centers, closer ties between telecom and AI vendors, and rapid adoption of next-gen accelerators. Regional partnerships in 6G and AI services suggest cross-border plays and shared infrastructure models. Sovereign AI agendas will favor providers that can meet local compliance and still compete on performance.
Bottom line
SK Telecom is positioning itself as a national-scale AI infrastructure platform. If the company hits its execution milestones (compute availability, low-latency edge services, hardened security, and clear economics), it will become a primary supplier for enterprises that need AI with guaranteed performance and local compliance.
For buyers, the move is timely. Lock in capacity where it makes sense, protect portability, and negotiate governance up front. The winners in this cycle will be the ones who secure supply, keep options open, and prove ROI fast.
Upskill your team: If you're building an internal capability to evaluate providers and deploy AI responsibly, explore executive-focused learning paths at Complete AI Training.