Brookfield's Radiant: a lower-cost path to AI compute for enterprises
Updated: 04:12 EST / January 01, 2026
Brookfield Asset Management is preparing to launch a cloud business, Radiant, that rents AI chips directly to customers. According to The Information, the model is built to cut the cost of building and operating AI data centers by controlling more of the infrastructure stack, especially power.
Radiant ties into Brookfield's $100 billion AI infrastructure program announced in November. Its Artificial Intelligence Infrastructure Fund has already committed $10 billion, with participation from industry partners including Nvidia and the Kuwait Investment Authority. Early capacity will come from new data center builds in France, Qatar, and Sweden, to which Radiant reportedly holds first rights of use. Any excess will be leased to third-party cloud operators.
Why managers should care
AI demand is outpacing supply, and GPU access remains the bottleneck. If Radiant offers reliable capacity at lower prices, it changes how you plan budgets, timelines, and vendor strategy for 2026-2028 initiatives.
Brookfield's edge is its global energy footprint. If it can secure cheaper, more predictable power and integrate it tightly with data center operations, total cost of compute drops. That puts pressure on AWS and Microsoft Azure to improve energy logistics and pricing for GPU instances.
What Radiant likely offers (and what it doesn't)
- Direct access to AI accelerators with high-speed interconnects for training and inference at scale.
- Priority access to new capacity from Brookfield-backed facilities, with potential discounts for longer commitments.
- Stronger linkage between power sourcing and compute pricing, which could reduce volatility.
- Fewer bundled managed services than hyperscalers. Expect an infrastructure-first model, with you or your partners owning more of the software stack and MLOps tooling.
Competitive implications for AWS and Azure
Hyperscalers still own the developer experience, managed services, and ecosystem depth. But if Radiant undercuts on cost for large, steady GPU blocks, expect more reserved pricing options, energy-backed offers, and expanded colocation-style models from the incumbents.
How to decide: a simple framework
- Large, sustained training runs: Favor capacity blocks with firm terms and energy-backed pricing (Radiant may fit). Model savings at 70-90% utilization; see the cost sketch after this list.
- Spiky, experiment-heavy workloads: Hyperscalers can be easier due to elasticity and managed services. Use spot or flexible commitments.
- Hybrid approach: Train on the lowest-cost, energy-optimized capacity; deploy inference close to users on your existing cloud footprint.
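To make the utilization math concrete, here is a minimal sketch comparing the effective cost per useful GPU-hour of a reserved block against on-demand. The rates (`RESERVED_RATE`, `ON_DEMAND_RATE`) are hypothetical placeholders for illustration, not quoted Radiant or hyperscaler prices.

```python
# Effective cost per useful GPU-hour: reserved block vs. on-demand.
# All rates are hypothetical placeholders, not quoted prices.

RESERVED_RATE = 2.10   # $/GPU-hour, billed whether or not the GPU is busy
ON_DEMAND_RATE = 3.50  # $/GPU-hour, billed only for hours actually used

def effective_reserved_rate(utilization: float) -> float:
    """Cost per useful GPU-hour when idle reserved hours are still billed."""
    return RESERVED_RATE / utilization

for u in (0.70, 0.80, 0.90):
    eff = effective_reserved_rate(u)
    savings = 1 - eff / ON_DEMAND_RATE
    print(f"utilization {u:.0%}: effective ${eff:.2f}/GPU-h, "
          f"{savings:+.0%} vs. on-demand")
```

With these placeholder rates, the reserved block saves roughly 14% at 70% utilization and 33% at 90%, which is why the sustained-vs.-spiky distinction above drives the provider choice.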
Procurement and risk: questions to ask Radiant (and any GPU provider)
- Capacity and roadmap: What accelerators are available now and next? What are the lead times and allocation rules?
- SLA and performance: Uptime guarantees for GPUs and interconnects; network topology; job preemption policies.
- Data and compliance: Residency options (EU, Middle East), ISO/SOC reports, GDPR controls, customer-managed keys, audits.
- Security model: Bare-metal isolation, multi-tenant boundaries, firmware patch cadence, supply chain assurance.
- Energy and sustainability: Source mix, PPAs, carbon intensity reporting, demand-response participation, price pass-through terms.
- Commercials: On-demand vs. reserved pricing, prepay discounts, egress fees, termination rights, expansion options, and migration support.
TCO checks your CFO will ask for
- All-in cost per trained model: (GPU-hours required × hourly rate ÷ utilization) + storage + networking + ops overhead. Note that utilization divides the compute term, since idle reserved hours are still billed. A worked sketch follows this list.
- Contract structure: Compare 12-36 month blocks vs. on-demand. Model savings from predictability vs. the cost of underutilization.
- Energy linkage: If pricing reflects cheaper power, how durable is that advantage? What happens if power markets shift?
- Team costs: If you get less platform tooling, factor extra engineering and support into your comparison with hyperscalers.
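As a worked version of the first check above, here is a minimal sketch of the all-in calculation. Every input is an assumption chosen for illustration; substitute your own GPU-hours, rates, and overheads.

```python
# All-in cost per trained model. Every input is a hypothetical assumption.

gpu_hours_required = 50_000  # GPU-hours the training run actually consumes
hourly_rate = 2.10           # $/GPU-hour for the reserved block
utilization = 0.80           # share of billed reserved hours doing useful work
storage = 40_000             # $: datasets and checkpoints over the run
networking = 15_000          # $: interconnect/egress attributable to the run
ops_overhead = 60_000        # $: engineering and support time for the run

# Idle reserved capacity is still billed, so divide compute by utilization.
compute = gpu_hours_required * hourly_rate / utilization
all_in = compute + storage + networking + ops_overhead

print(f"compute ${compute:,.0f}; all-in ${all_in:,.0f} per trained model")
```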
Regulatory and geographic considerations
With capacity coming online in France, Qatar, and Sweden, you'll have options for data locality and latency. Validate certifications, cross-border data flows, and any sector-specific obligations before committing workloads.
Action plan for 1H 2026
- Issue an RFI/RFP to Radiant and two incumbent cloud providers with identical workload profiles and SLA targets.
- Run a 60-90 day pilot on a priority use case (e.g., model training requiring thousands of GPU hours). Track cost, queue times, and failure handling.
- Negotiate capacity "escalators" that secure more GPUs at the same rate if you hit usage milestones.
- Split training and inference across providers to reduce vendor concentration risk and increase negotiation leverage.
- Align finance and engineering on the utilization threshold that triggers reserved commitments vs. on-demand; see the break-even sketch below.
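For that last item, a short break-even check: under the simplifying assumption that reserved capacity is billed regardless of use, a reserved block beats on-demand once sustained utilization exceeds the ratio of the two rates. Both rates here are placeholders.

```python
# Break-even utilization for reserved vs. on-demand commitments.
# Assumes reserved capacity is billed regardless of use; rates are placeholders.

RESERVED_RATE = 2.10   # $/GPU-hour, hypothetical
ON_DEMAND_RATE = 3.50  # $/GPU-hour, hypothetical

# Reserved wins once: utilization * ON_DEMAND_RATE > RESERVED_RATE
breakeven = RESERVED_RATE / ON_DEMAND_RATE
print(f"commit to reserved once sustained utilization exceeds {breakeven:.0%}")
```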
The strategic takeaway
If Brookfield executes, GPU supply becomes less about luck and more about contracts tied to power and physical capacity. That's good for planning. You get clearer pricing, stronger commitments, and another serious vendor to keep hyperscalers honest.
Keep a close eye on how Radiant packages energy, capacity, and SLAs. The provider that turns power and logistics into consistent, lower compute costs will win a big share of enterprise AI budgets.