Jensen Huang says "trillions" in AI infrastructure are coming. Here's what finance should do about it
Nvidia CEO Jensen Huang sat down with BlackRock's Larry Fink at the World Economic Forum in Davos and didn't mince words: the AI surge has kicked off the largest infrastructure buildout in history. Despite hundreds of billions already spent, he expects "trillions of dollars of infrastructure" still ahead.
He added that 2025 was among the biggest years ever for venture funding, with most checks flowing to "AI-native" companies across healthcare, robotics, financial services, and other major industries. The takeaway: models are mature enough, and the application layer is where economic value concentrates.
Why this matters for capital allocation
AI is no longer a software-only story. It's compute, memory, networking, data centers, and power at global scale. That means multi-year capex cycles, fresh debt issuance, and new equity raises across the stack.
- Hardware: GPUs/accelerators, high-bandwidth memory, optical and Ethernet fabrics.
- Data center: new builds, retrofits, cooling, real estate, interconnects.
- Power: generation, grid upgrades, long-dated PPAs, and storage.
- Software: model serving, inference optimization, security, and the application layer where pricing power can persist.
Energy is the bottleneck
Huang underscored Europe's need to invest in energy supply to power future AI systems. Expect more grid tie-ups, onsite generation, and long-term contracts as operators chase reliable, low-cost electricity.
For underwriting and asset allocation, the key variables are siting, permitting timelines, interconnection queues, and power price volatility. Keep an eye on policy shifts as governments court data center investment. The IEA's electricity outlook is a useful reference for load growth and regional stress points.
Where value accrues: the application layer
Huang's message was clear: models enable it, but the economic payoff shows up in applications. In financial services, that means risk modeling, fraud detection, underwriting, research automation, and client personalization, all use cases with measurable ROI.
- Infra margins compress over time; scale helps, but competition is relentless.
- Applications can protect pricing through domain data, workflow integration, and compliance footprints.
- Distribution and data moats matter more than model size alone.
VC and private markets: what the flow tells you
With 2025 one of the largest VC years on record and most dollars targeting AI-native firms, expect a rich pipeline for growth equity, secondaries, and later-stage rounds. Corporate venture arms are active, and strategic partnerships are accelerating go-to-market.
For LPs and allocators, this points to shorter time-to-revenue for vertical apps, more infrastructure rollups, and a busy M&A tape as incumbents buy distribution or specific capabilities.
Signals to track
- Hyperscaler AI capex guidance and mix (training vs. inference).
- GPU and memory lead times, networking backlogs, and pricing trends.
- Data center absorption, power PPAs signed, and interconnection wait times.
- Inference unit economics: cost per token/query and model efficiency gains (a worked cost-per-token sketch follows this list).
- Regulatory movement on data, privacy, and model accountability.
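To make the inference-economics signal concrete, here is a minimal Python sketch of a cost-per-token calculation. Every input (GPU hourly cost, throughput, utilization) is a hypothetical placeholder for illustration, not a figure from Huang's remarks or this article.

```python
# Minimal sketch: serving cost per million output tokens.
# All numbers below are illustrative assumptions; replace with your own vendor quotes.

def cost_per_million_tokens(
    gpu_hourly_cost: float,    # blended $/GPU-hour (cloud rate or amortized on-prem)
    tokens_per_second: float,  # sustained throughput per GPU for the model served
    utilization: float = 0.6,  # share of each hour actually spent serving traffic
) -> float:
    """Serving cost, in dollars, per one million output tokens."""
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return gpu_hourly_cost / tokens_per_hour * 1_000_000


if __name__ == "__main__":
    # Illustrative scenario: $2.50/GPU-hour, 900 tokens/s, 60% utilization.
    cost = cost_per_million_tokens(gpu_hourly_cost=2.50, tokens_per_second=900)
    print(f"~${cost:.2f} per million tokens")  # roughly $1.29 in this scenario
```

Tracking how this number moves as hardware, model efficiency, and utilization improve is a cleaner margin signal than headline API pricing.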
Portfolio implications
Diversify across the stack but prioritize durable cash flows and clear demand visibility. Infra and energy can offer long-duration contracts; software captures ongoing value if it sits deep in workflows and reduces unit costs.
- Picks-and-shovels: components with tight supply or high switching costs.
- Energy and grid: utilities with growth capex, IPPs with credible build plans, and storage players.
- Applications: vertical AI with strong data rights and compliance baked in.
- Financial adopters: institutions showing real productivity gains, not just pilots.
Risk set: model commoditization, supply constraints, power scarcity, permitting delays, and rate sensitivity for capex-heavy names. Keep duration in check and stress-test funding paths.
Next steps for finance teams
- Build a working thesis map across compute, memory, networking, data centers, and power.
- Update screening to capture AI-driven revenue mix shifts and capex payback timelines (see the payback sketch after this list).
- Underwrite power as a first-class input in data center and AI exposure models.
- Model inference cost curves and their impact on pricing and margins for software holdings.
- Pilot AI inside your own workflows to pressure-test vendor claims and quantify ROI.
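As a starting point for the screening and power-underwriting items above, here is a minimal payback sketch that treats electricity price as an explicit input. The scenario values (capex, revenue, load, opex) are hypothetical assumptions for illustration, not data from the article.

```python
# Minimal sketch: data center capex payback with power as a first-class input.
# All scenario values are illustrative assumptions, not sourced figures.

def payback_years(
    capex: float,                # upfront build cost ($)
    annual_revenue: float,       # contracted or expected revenue ($/year)
    power_mw: float,             # average IT load (MW)
    power_price_per_mwh: float,  # blended electricity price ($/MWh)
    other_opex: float,           # staffing, maintenance, network ($/year)
) -> float:
    """Simple payback period in years; inf if annual cash flow is negative."""
    annual_power_cost = power_mw * power_price_per_mwh * 8760  # hours per year
    annual_cash_flow = annual_revenue - annual_power_cost - other_opex
    return float("inf") if annual_cash_flow <= 0 else capex / annual_cash_flow


if __name__ == "__main__":
    # Stress-test payback across power price scenarios ($/MWh).
    for price in (40, 70, 100):
        yrs = payback_years(
            capex=500e6, annual_revenue=180e6,
            power_mw=50, power_price_per_mwh=price, other_opex=30e6,
        )
        print(f"${price}/MWh -> payback of about {yrs:.1f} years")
```

Layering in permitting and interconnection delays (revenue start pushed out by a year or two) is usually where a build thesis gets stressed the hardest.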
If you're evaluating practical tools for desks and ops, here's a curated list of AI tools for finance that can help with research, modeling, and automation.
Bottom line: the capex wave is real, energy is the choke point, and the application layer is where margins stick. Position ahead of the spend, and measure everything against unit economics, not the hype cycle.