AI fuels decade-long memory upcycle for Samsung and SK Hynix, ClearBridge says

AI demand is turning memory into a growth engine, and Samsung and SK Hynix look set to benefit. HBM-fueled orders suggest a longer cycle, though yields and ramps bear watching.

Published on: Jan 17, 2026

AI Demand Puts Samsung and SK Hynix on a Longer Upswing

AI is turning memory from a cyclical afterthought into a core growth engine. The scale of AI workloads keeps surprising the market, and that gap matters for planning, budgets, and supplier decisions.

We're still early. Some US tech leaders suggest we're only in year two of a potential decade-long upgrade cycle. That framing is showing up in positioning and results: the ClearBridge SMASh Series EM Fund, which has outperformed 97% of peers over the past year, has held sizable stakes in Samsung Electronics and SK Hynix since 2015.

SK Hynix sits at the center of the AI buildout as the leading supplier of high-bandwidth memory used in Nvidia's accelerators. For context on why this matters, here's a primer on high-bandwidth memory (HBM) and its role in AI compute.

Samsung's momentum is also visible. Profit in the three months through December more than tripled to a record level as AI server demand drove memory pricing higher. For ongoing updates, check Samsung's investor relations portal: Samsung IR.

The common thread: AI is increasing memory intensity across data centers and, increasingly, across client devices. Despite strong gains through 2025, many Asian AI-linked semiconductor names still trade at more reasonable forward earnings multiples than US peers.

Risk management still applies. Holding both Samsung and SK Hynix can spread execution risk if one stumbles on yield, node transitions, or capacity ramps.

What this means for managers

  • Treat memory as strategic, not commodity. AI performance is frequently memory-bound; capacity and bandwidth drive outcomes.
  • Plan for a multi-year cycle. Budget with a 5-10 year horizon for servers, memory, networking, and facilities.
  • Dual-source where possible. Secure HBM supply from multiple vendors and negotiate volume tiers and delivery windows.
  • Watch lead times and capacity adds. HBM3E/next-gen ramps, yield trends, and packaging constraints can hit timelines.
  • Track valuation spreads. Compare forward P/E and EPS revisions for Asian memory leaders vs. US peers to guide allocation.
  • Connect AI goals to infrastructure limits. Prioritize workloads that benefit most from higher memory bandwidth and density.
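
The "memory-bound" point above can be made concrete with a simple roofline-style check. The hardware figures and the fp16 matrix-vector example below are illustrative assumptions for the sketch, not vendor specifications:

```python
# Minimal roofline-style check: is a workload memory-bound or compute-bound?
# All hardware numbers here are illustrative assumptions, not vendor specs.

def bound_by(flops: float, bytes_moved: float,
             peak_flops: float, peak_bandwidth: float) -> str:
    """Classify a workload under a simple roofline model."""
    intensity = flops / bytes_moved        # FLOPs per byte moved
    ridge = peak_flops / peak_bandwidth    # intensity where the roofline bends
    return "compute-bound" if intensity >= ridge else "memory-bound"

# Example: matrix-vector multiply, common in LLM inference decoding.
# Roughly 2 FLOPs per fp16 parameter (2 bytes) -> ~1 FLOP per byte.
# Assumed accelerator: 1000 TFLOP/s peak compute, 3.35 TB/s HBM bandwidth.
print(bound_by(flops=1.0, bytes_moved=1.0,
               peak_flops=1000e12, peak_bandwidth=3.35e12))
# The ridge point is ~298 FLOPs/byte, so at ~1 FLOP/byte the workload is
# limited by memory bandwidth, not compute.
```

Under these assumed figures, extra compute buys nothing until memory bandwidth rises, which is why HBM capacity and bandwidth, not raw FLOPs, often set the ceiling on AI serving throughput.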

Metrics to watch next quarter

  • HBM mix, pricing, and utilization at SK Hynix and Samsung
  • GPU roadmaps and orders from Nvidia/AMD and the knock-on effect for memory
  • Server buildouts at hyperscalers and enterprise AI pilots moving to production
  • Capex guidance, supply-demand balance, and any signs of over-ordering
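
Why HBM mix matters so much to the first metric above: even a modest shift toward HBM lifts blended pricing sharply. The share and per-bit premium figures in this sketch are hypothetical, chosen only to show the arithmetic:

```python
# How HBM mix shifts blended memory ASP. All inputs are hypothetical.

def blended_asp(hbm_share: float, hbm_asp: float, commodity_asp: float) -> float:
    """Weighted average selling price given HBM's share of output."""
    return hbm_share * hbm_asp + (1 - hbm_share) * commodity_asp

# Assume HBM sells at ~5x commodity DRAM per bit.
low_mix = blended_asp(0.15, 5.0, 1.0)    # 1.60 at 15% HBM mix
high_mix = blended_asp(0.30, 5.0, 1.0)   # 2.20 at 30% HBM mix

print(f"ASP uplift from mix shift: {high_mix / low_mix - 1:.0%}")
```

Doubling the HBM mix from 15% to 30% lifts the blended ASP by roughly a third in this toy model, which is why quarterly HBM mix disclosures move these stocks.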

Upskill your team

If you're setting AI budgets or vendor strategy, getting your team fluent in the tech and the business cases helps. See role-based learning paths here: Complete AI Training - Courses by Job.

Bottom line: Memory is becoming the heartbeat of AI infrastructure. Samsung and SK Hynix are positioned for a longer, steadier cycle, one that rewards leaders who secure supply, manage risk, and invest with a multi-year view.

