SK hynix Unveils AIN Family for AI, Grows HBF Ecosystem at OCP 2025

At OCP 2025, SK hynix unveiled AIN P, AIN B, and AIN D to speed AI inference with better efficiency and lower cost. Product teams can tune for performance, bandwidth, or PB-scale density.

Published on: Oct 27, 2025

SK hynix outlines AI-NAND strategy at OCP 2025: performance, bandwidth, and density for product teams

At the 2025 OCP Global Summit in San Jose (Oct 13-16), SK hynix introduced the AIN (AI-NAND) Family: three solution lanes built to meet AI inference demands, with AIN P for performance, AIN B for bandwidth, and AIN D for density. The company's message is clear: as inference scales, storage must move more data, faster, with better energy use and lower cost per bit.

For product development leaders, this creates a practical path to re-architect memory and storage tiers around real AI workloads (LLMs, retrieval, vector databases, and training-to-inference handoffs) without blowing up power budgets or BOM.

The AIN Family at a glance

  • AIN P (Performance): Built to process large AI inference datasets efficiently by reducing bottlenecks between storage and compute. SK hynix is designing new NAND and controller architectures and plans to sample by the end of 2026.
  • AIN D (Density): Targets petabyte-class capacity (from today's TB-class QLC SSDs) with low power and cost characteristics. The aim is a mid-tier storage option that blends SSD-like speed with HDD-like cost efficiency for AI data at scale.
  • AIN B (Bandwidth): Uses High Bandwidth Flash (HBF) concepts, vertically stacked NAND inspired by HBM-style stacking, to expand bandwidth. Options include co-locating with HBM to lift overall system capacity.

Why this matters for your roadmap

AI systems are bottlenecked as much by data movement as by compute. The AIN lineup suggests three levers you can pull: IOPS/latency (P), bytes per watt and per dollar (D), and sustained bandwidth near compute (B). Align the lever with the workload: token streaming and KV-cache feeding need performance and bandwidth, long-horizon datasets need density.

AIN P: Close the storage-compute gap

AIN P focuses on throughput and latency under large-scale inference. The goal is to serve models and embeddings without stalling GPUs or accelerators and to cut energy per query by reducing waste in the data path. Samples are planned by end of 2026, giving teams a window to model the gains and plan pilots.

  • What to evaluate: targeted latency/IOPS, queue depth behavior, sustained vs burst performance, and impact on node-level energy.
  • Integration planning: controller feature support, firmware hooks, telemetry, and compatibility with your existing storage stack.
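As a concrete starting point for the latency/IOPS evaluation above, the sketch below summarizes a sample of per-I/O latencies into P50/P99 and mean figures. It is a minimal illustration, not a vendor benchmark; the sample values and the nearest-rank percentile choice are assumptions, and a real evaluation would use a tool-driven workload generator at controlled queue depths.

```python
import statistics

def summarize_latencies(latencies_us: list[float]) -> dict:
    """Reduce per-I/O latencies (microseconds) to P50, P99, and mean."""
    ordered = sorted(latencies_us)

    def pct(p: float) -> float:
        # Nearest-rank percentile over the sorted sample.
        k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
        return ordered[k]

    return {
        "p50_us": pct(50),
        "p99_us": pct(99),
        "mean_us": statistics.fmean(ordered),
    }

# Synthetic sample with one tail outlier (illustrative values only).
sample = [90, 95, 100, 105, 110, 120, 150, 400]
print(summarize_latencies(sample))
```

Comparing these numbers at low vs high queue depth, and over short bursts vs sustained runs, is what separates cache-friendly peaks from the steady-state behavior your accelerators will actually see.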

AIN D: Push capacity to PB-scale

AIN D extends QLC-era economics to PB-class footprints while holding power in check. For product teams running retrieval, feature stores, logs, and long-term embeddings, this tier can compress TCO without falling back to HDD speeds.

  • What to evaluate: cost per GB, watts per TB, rebuild/repair times, and data placement policies across hot/warm/cold tiers.
  • Workload fit: bulk AI corpora, fine-tune datasets, and archives that still need decent read performance.
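The cost-per-GB and watts-per-TB comparisons above can be modeled with a few lines before any hardware arrives. The sketch below compares a QLC SSD tier against HDD for a 1 PB footprint; every price, capacity, and power figure is a placeholder assumption, not a vendor quote, so substitute real numbers when you run the exercise.

```python
import math
from dataclasses import dataclass

@dataclass
class StorageTier:
    name: str
    capacity_tb: float   # usable capacity per device
    price_usd: float     # device price (placeholder, not a quote)
    active_watts: float  # typical active power per device

    def cost_per_gb(self) -> float:
        return self.price_usd / (self.capacity_tb * 1000)

    def watts_per_tb(self) -> float:
        return self.active_watts / self.capacity_tb

def devices_for_petabytes(tier: StorageTier, petabytes: float) -> int:
    """Device count needed to reach the target raw footprint."""
    return math.ceil(petabytes * 1000 / tier.capacity_tb)

# Illustrative placeholder figures only.
qlc_ssd = StorageTier("QLC SSD", capacity_tb=61.44, price_usd=4000, active_watts=20)
hdd = StorageTier("HDD", capacity_tb=24, price_usd=450, active_watts=9)

for t in (qlc_ssd, hdd):
    n = devices_for_petabytes(t, petabytes=1)
    print(f"{t.name}: {n} devices/PB, ${t.cost_per_gb():.4f}/GB, {t.watts_per_tb():.2f} W/TB")
```

Extending the model with rebuild times, slot/rack overhead, and read-performance floors is what turns this from a price sheet into the TCO case AIN D is aimed at.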

AIN B (HBF): Bring bandwidth closer to compute

HBF stacks multiple NAND dies vertically, similar in spirit to HBM stacks, to lift bandwidth where models need it. With larger LLMs and rising context sizes, bridging the gap between HBM and conventional SSDs helps relieve I/O stalls and boosts effective accelerator utilization.

  • Ecosystem moves: SK hynix and SanDisk signed an MOU on HBF standardization and co-hosted "HBF Night" during OCP to rally architects and engineers around interface and packaging standards.
  • Design angle: evaluate co-packaging or close-proximity placement with HBM, and define tiering rules across HBM, DRAM, HBF, and SSD.
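The tiering rules mentioned above can be prototyped as a simple placement policy long before HBF hardware exists. The toy function below routes a data object across a four-level HBM/DRAM/HBF/SSD hierarchy; the thresholds and the tier characteristics they imply are invented for illustration and are not SK hynix guidance.

```python
def place(object_gb: float, reads_per_sec: float, bandwidth_gbps_needed: float) -> str:
    """Toy placement policy across HBM -> DRAM -> HBF -> SSD.

    All thresholds are assumptions for illustration only.
    """
    if bandwidth_gbps_needed > 500 and object_gb <= 16:
        return "HBM"   # hottest, smallest working sets next to the accelerator
    if bandwidth_gbps_needed > 100:
        return "DRAM"
    if reads_per_sec > 1000 or bandwidth_gbps_needed > 10:
        return "HBF"   # bandwidth tier between DRAM and the density tier
    return "SSD"       # density tier for warm/cold bulk data

print(place(8, 50_000, 800))   # small, hot KV-cache-like object
print(place(200, 5_000, 40))   # frequently read embedding shard
print(place(5_000, 10, 0.5))   # bulk corpus archive
```

Encoding the rules this way makes them testable against traces of your real workloads, so the tier boundaries become measured decisions rather than guesses.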

Collaboration and ecosystem

SK hynix presented AIN during the Executive Session and convened "HBF Night" near the summit to engage industry leaders across architecture and engineering. The company emphasized close collaboration with customers and partners to accelerate NAND innovation for AI workloads.

Event context: OCP Global Summit 2025 in San Jose underscores open, standards-driven hardware design. If you're aligning to open hardware directions, keep an eye on working groups and specs coming out of OCP.

Action checklist for product teams

  • Profile your inference I/O: sequence lengths, token rates, working-set sizes, and queue depth behavior.
  • Model tiering: define data residency across HBM, DRAM, HBF-like bandwidth tiers, and SSD density tiers.
  • Set targets: cost per query, energy per query, P99 latency, and throughput per accelerator.
  • Plan pilots: align AIN P samples (by end of 2026) with a controlled deployment to validate KPIs.
  • Track standards: participate in forums that shape HBF interfaces and packaging to avoid lock-in.
  • Firmware and observability: ensure you can surface telemetry for real-time placement, prefetching, and throttling policies.
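Two of the targets in the checklist, cost per query and energy per query, fall straight out of node-level measurements. The sketch below derives both from queries per second, node power, and node cost per hour; the example figures (250 QPS, a 1.2 kW node at $12/hour) are placeholders, not benchmarks.

```python
def per_query_kpis(queries_per_sec: float, node_watts: float,
                   node_usd_per_hour: float) -> dict:
    """Derive per-query energy and cost from node-level measurements."""
    energy_j = node_watts / queries_per_sec               # joules per query
    cost_usd = node_usd_per_hour / (queries_per_sec * 3600)
    return {"energy_j_per_query": energy_j, "usd_per_query": cost_usd}

# Placeholder numbers: 250 QPS on a 1.2 kW node billed at $12/hour.
print(per_query_kpis(250, 1200, 12.0))
```

Tracking these two numbers alongside P99 latency before and after an AIN P pilot gives you a direct, comparable read on whether the storage tier change paid off.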

Notes and definitions

  • QLC: Stores four bits per cell. Higher bits per cell increase density at the cost of endurance and, typically, write performance.
  • HBF (High Bandwidth Flash): A NAND approach that stacks dies vertically, similar in concept to how HBM stacks DRAM, to raise bandwidth.
  • LLM (Large Language Model): Models trained on massive datasets for natural language tasks; larger contexts and parameter counts drive higher memory and storage demands.


SK hynix stated it will keep working closely with customers and partners to play a key role in next-generation NAND storage. Given the direction set at OCP 2025, now is the time to benchmark, model, and prepare your storage stack for AI-scale data movement.

