Fidelity Leads $140M+ Series D in Empower Semiconductor to Boost AI Data Center Efficiency

Empower Semiconductor raised a $140M+ Series D led by Fidelity to scale AI-grade energy delivery. Its CMOS IVRs with FinFast technology boost transient response and cut energy use in data centers.

Published on: Sep 23, 2025

Empower Semiconductor Secures $140M+ Series D to Accelerate AI-Grade Energy Delivery

Empower Semiconductor closed over $140 million in Series D financing led by Fidelity Management & Research Company. The round included Maverick Silicon, CapitalG, Atreides Management, Socratic Partners, Walden Catalyst Ventures, Knollwood, and a wholly owned subsidiary of the Abu Dhabi Investment Authority (ADIA).

Leadership framed the syndicate as a signal of strong technology, market fit, and customer traction. Maverick Silicon underscored the company's position in AI and data center infrastructure, citing its role in relieving a critical bottleneck in modern compute.

Why this round matters

AI training and inference are straining facility electrical budgets and rack density plans. Independent analyses, including IEA guidance on data center electricity use, point to a steep climb over the next few years, raising the stakes for efficient power delivery and conversion across boards and packages.

Empower focuses on tighter, faster voltage regulation near the load, which directly affects compute throughput, thermal headroom, and total cost of ownership (TCO). The thesis: better delivery dynamics translate into higher useful performance per watt and fewer overbuilds.

Inside the tech: FinFast and IVRs

The company's FinFast technology pairs with integrated voltage regulators (IVRs) built entirely on advanced CMOS. IVRs consolidate discrete supply components into a single device, shrinking PCB area, height, and component count.

This approach enables vertical delivery with high density and clean signal behavior at the silicon edge. Because the regulators are CMOS-based, they can be co-packaged with a System-on-Chip, further tightening the electrical path and reducing energy loss.
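To see why a shorter electrical path matters, here is a minimal sketch comparing conduction loss for a board-level regulator and a co-packaged IVR using the basic I²R relationship. The current and parasitic resistance values are illustrative assumptions, not Empower specifications.

```python
# Illustrative I^2 * R conduction-loss comparison for two power delivery paths.
# Current and resistance values are assumptions for illustration, not vendor data.

def conduction_loss_watts(current_a: float, path_resistance_ohm: float) -> float:
    """Power dissipated in the delivery path: P = I^2 * R."""
    return current_a ** 2 * path_resistance_ohm

CURRENT_A = 500.0            # assumed accelerator core current draw (amps)
BOARD_PATH_OHM = 200e-6      # assumed board-level PDN resistance (200 micro-ohm)
PACKAGE_PATH_OHM = 50e-6     # assumed co-packaged IVR path resistance (50 micro-ohm)

board_loss = conduction_loss_watts(CURRENT_A, BOARD_PATH_OHM)
package_loss = conduction_loss_watts(CURRENT_A, PACKAGE_PATH_OHM)

print(f"Board-level path loss: {board_loss:.1f} W")
print(f"Co-packaged path loss: {package_loss:.1f} W")
print(f"Loss reduction:        {(1 - package_loss / board_loss):.0%}")
```

The real split between conduction, conversion, and switching losses is more involved; the sketch only illustrates why resistance in the last few millimeters matters at hundreds of amps.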

Measured advantages vs. traditional PMICs

  • Higher transient accuracy with up to 100x faster settling times.
  • Nanosecond Dynamic Voltage Scaling (DVS) enabling up to 50% energy savings.
  • DVS reported to be more than 1000x faster than current best-in-class parts, allowing near-instant voltage state changes and minimizing excess voltage (a rough model follows this list).
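
As a rough way to picture where the claimed DVS savings come from, the sketch below models dynamic power as proportional to V² and compares a rail that can retarget in nanoseconds against one that must stay parked at its burst voltage because it cannot settle in time. All workload fractions and voltages are illustrative assumptions, not measured figures.

```python
# Rough model of dynamic-power savings from fast dynamic voltage scaling (DVS).
# Dynamic CMOS power scales roughly with V^2 at a fixed frequency per phase;
# every number below is an assumption chosen for illustration.

V_HIGH = 0.80          # assumed supply needed during compute bursts (volts)
V_LOW = 0.60           # assumed supply sufficient between bursts (volts)
BURST_FRACTION = 0.4   # assumed fraction of time spent in compute bursts

# Slow DVS: bursts arrive faster than the regulator can retarget,
# so the rail is held at V_HIGH the whole time.
slow_energy = V_HIGH ** 2

# Fast (nanosecond) DVS: the rail tracks the workload phase by phase.
fast_energy = BURST_FRACTION * V_HIGH ** 2 + (1 - BURST_FRACTION) * V_LOW ** 2

print(f"Relative energy, slow DVS: {slow_energy:.3f}")
print(f"Relative energy, fast DVS: {fast_energy:.3f}")
print(f"Savings: {(1 - fast_energy / slow_energy):.0%}")
```

With these illustrative numbers the model returns roughly a quarter of the energy; the up-to-50% figure above is the vendor's claim, and actual savings depend on workload burstiness and how much guard margin a slower regulator must carry.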

What finance leaders should watch

  • Energy cost: Faster point-of-load regulation limits overshoot and waste, cutting electricity overhead at scale (see the back-of-envelope sketch after this list).
  • Capex deferral: Higher compute density per rack reduces the need for incremental buildouts and distribution upgrades.
  • TCO: Fewer components and smaller footprints lower BOM, assembly time, and field failure exposure.
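
To make the energy-cost line concrete, here is a back-of-envelope estimate of annual savings for a hypothetical fleet. Rack count, power draw, delivery-loss share, recoverable fraction, and electricity price are all assumptions for illustration, not reported figures.

```python
# Back-of-envelope annual electricity savings for a hypothetical fleet.
# Every input below is an assumption, not a reported or vendor figure.

RACKS = 1_000                # assumed number of racks
KW_PER_RACK = 40.0           # assumed average IT power per rack (kW)
HOURS_PER_YEAR = 8_760
DELIVERY_LOSS_SHARE = 0.10   # assumed share of IT power lost in delivery/conversion
LOSS_REDUCTION = 0.30        # assumed fraction of that loss recovered by tighter regulation
PRICE_PER_KWH = 0.08         # assumed electricity price (USD/kWh)

fleet_kwh = RACKS * KW_PER_RACK * HOURS_PER_YEAR
loss_kwh = fleet_kwh * DELIVERY_LOSS_SHARE
saved_kwh = loss_kwh * LOSS_REDUCTION
saved_usd = saved_kwh * PRICE_PER_KWH

print(f"Fleet IT energy:   {fleet_kwh:,.0f} kWh/year")
print(f"Delivery loss:     {loss_kwh:,.0f} kWh/year")
print(f"Estimated savings: {saved_kwh:,.0f} kWh/year (~${saved_usd:,.0f}/year)")
```

With these assumed inputs, a 40 MW-class fleet recovers on the order of 10 GWh and roughly $0.8M per year; the point is how the savings scale with fleet size, not the specific figures.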

What product and data center teams gain

  • Design simplicity: Integration removes multiple discretes, easing layout and reducing PCB area.
  • Performance headroom: Near-load regulation improves transient response, enabling tighter voltage margins for accelerators and CPUs.
  • Mechanical freedom: Reduced height and component count help meet strict form-factor and airflow constraints.

Market context and outlook

As AI models scale, delivery speed and precision at the package level become a gating factor. Investors highlighted this point, noting that Empower's approach addresses a core constraint for next-gen compute.

Expect the new capital to support production ramp, advanced packaging integrations, and deeper engagements with accelerator vendors, server OEMs, and hyperscalers. The near-term metric to watch: measurable energy savings at the rack and fleet levels without sacrificing performance.

Company snapshot

  • Headquarters: Silicon Valley
  • Focus: CMOS-based integrated voltage regulation and FinFast technology
  • Use cases: AI accelerators, CPUs, memory subsystems, and dense data center deployments

If you're building talent pipelines around AI infrastructure and systems efficiency, explore curated learning paths by role at Complete AI Training.