Nvidia commits 1GW of Vera Rubin chips and investment to OpenAI rival Thinking Machines Labs

Nvidia will supply Thinking Machines Labs with 1 GW of Vera Rubin chips and will invest in the company, with rollout slated for early next year. Finance leaders should watch capex, optics, and supplier risk.

Categorized in: AI News, Finance
Published on: Mar 11, 2026

Nvidia commits 1 GW of AI chips to Thinking Machines Labs - plus a strategic investment. Here's what finance leaders should watch.

Nvidia will supply Thinking Machines Labs with upwards of 1 gigawatt of its next-gen Vera Rubin chips, with deployment slated for early next year. The two companies will also co-design training and serving systems optimized for Nvidia architectures, aiming to expand enterprise and research access to frontier and open models.

Alongside the supply deal, Nvidia is making a significant - but undisclosed - equity investment in Thinking Machines to support long-term growth. This pairing blends capital, compute, and go-to-market alignment in a single move.

Who's behind Thinking Machines

Thinking Machines CEO Mira Murati founded the company in 2025 after leaving her post as OpenAI's CTO in late 2024. She also briefly stepped in as OpenAI's interim CEO during the November 2023 board turmoil before Sam Altman was reinstated. Her track record signals serious technical and operational intent.

Why "1 gigawatt" matters for finance

A commitment at this scale isn't just about GPUs. It implies major spend on power, cooling, networking optics, facilities, and long-term capacity planning. Expect multi-year capex schedules, structured supplier agreements, and potential power purchase arrangements to control opex volatility.
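To make the power implication concrete, here is a back-of-the-envelope sketch of what 1 GW of IT load can mean for annual energy opex. Every input (PUE, utilization, electricity price) is an illustrative assumption, not a figure from the deal.

```python
# Back-of-the-envelope energy opex for a 1 GW AI commitment.
# All inputs below are illustrative assumptions, not deal data.
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_mw, pue, utilization, price_per_mwh):
    """Annual electricity cost in dollars for a given IT load."""
    facility_mw = it_load_mw * pue          # PUE scales IT load to total facility draw
    mwh = facility_mw * HOURS_PER_YEAR * utilization
    return mwh * price_per_mwh

# Hypothetical scenario: 1 GW of IT load, PUE 1.3, 80% utilization, $60/MWh
cost = annual_energy_cost(1000, 1.3, 0.80, 60)
print(f"${cost / 1e9:.2f}B per year")
```

Under those assumptions the electricity bill alone lands around half a billion dollars a year, before any hardware, networking, or facilities capex - which is why power purchase arrangements show up in deals at this scale.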

For Nvidia, pre-selling capacity de-risks utilization and strengthens pricing power. For Thinking Machines, securing supply early can compress model training timelines and pull forward product milestones - but it concentrates supplier risk.

The investment angle: vendor financing or smart alignment?

This deal will draw questions about circular investing - where chipmakers invest in AI startups that then buy more chips. The concern: artificial demand signals and feedback loops that inflate infrastructure orders. Similar patterns exist in vendor financing, though Nvidia and peers push back on the premise.

For investors, the key is cash quality: watch how much of Nvidia's data center growth is tied to equity-linked customers versus independent buyers. For CFOs at AI startups, expect more chip suppliers to pair capacity with capital, board observer rights, and roadmap lock-in.

Nvidia's broader deal momentum

On March 2, Nvidia announced agreements with Coherent and Lumentum to advance optics - a critical bottleneck for data center scaling. In February, it entered a multiyear, multi-generational partnership with Meta. OpenAI also said Nvidia would invest $30 billion as part of its $110 billion round.

This is a consistent playbook: secure demand with strategic partners, back it with cash, then reinforce the supply chain from optics to systems.

Earnings context to frame the risk/reward

Nvidia's most recent Q3 came in hot: EPS of $1.30 on $57.01 billion in revenue, with data center sales at $51.2 billion versus $49.3 billion expected. Q4 revenue guidance landed at $65 billion, plus or minus 2%, ahead of the Street's $62 billion.

If guidance holds and supply chain expansions land on time, the demand story remains intact. Any cracks would likely show up first in lead times, optics constraints, or deferred revenue trends. For a baseline, track updates on Nvidia Investor Relations.

What finance leaders should do next

  • Model total cost of AI ownership with realistic power and networking assumptions. Stress test opex sensitivity to energy prices and cooling efficiency.
  • Scrutinize supplier concentration. Negotiate commitments that balance capacity guarantees with escape hatches if pricing or performance shifts.
  • Map depreciation schedules to model cycles. Shorten useful life where refresh cadence is accelerating to avoid stranded assets.
  • Evaluate vendor-linked capital offers. Equity or warrants tied to chip purchases can be attractive, but watch governance terms and future pricing constraints.
  • For investors: disaggregate demand. Separate revenue tied to equity-linked customers from independent buyers to judge durability.
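The first three items above can be sketched as a single sensitivity check: straight-line depreciation plus energy plus other opex, stress-tested across power prices and refresh cadences. All inputs are hypothetical planning assumptions for illustration only.

```python
# Sketch of a total-cost-of-ownership sensitivity check for an AI cluster.
# Every input below is a hypothetical planning assumption, not deal data.

def annual_tco(capex, useful_life_years, it_load_mw, pue,
               utilization, price_per_mwh, other_opex):
    """Annual cost: straight-line depreciation + energy + other opex."""
    depreciation = capex / useful_life_years
    energy = it_load_mw * pue * 8760 * utilization * price_per_mwh
    return depreciation + energy + other_opex

base = dict(capex=5e9, useful_life_years=5, it_load_mw=100,
            pue=1.3, utilization=0.8, price_per_mwh=60, other_opex=2e8)

# Stress test: sensitivity to energy price and to a faster refresh cadence
# (a shorter useful life raises annual depreciation).
for price in (40, 60, 90):
    for life in (3, 5):
        cost = annual_tco(**{**base, "price_per_mwh": price,
                             "useful_life_years": life})
        print(f"${price}/MWh, {life}y life: ${cost / 1e9:.2f}B/yr")
```

In a grid like this, depreciation typically dominates energy for GPU-heavy clusters, which is why shortening useful life to match model refresh cycles moves the annual number far more than a swing in power prices.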

Strategy takeaway

The Nvidia-Thinking Machines partnership fuses compute access, capital, and architecture alignment into one package. If you manage budgets or allocate capital, treat these mega-deals as signals: capacity is king, optics is the bottleneck, and financing is becoming part of the product.


