Benchmark: Bitdeer's in-house AI data centers aim to lift margins and speed up revenue
Benchmark is bullish on Bitdeer (NASDAQ: BTDR). The firm expects Bitdeer's shift to fully self-developed AI data centers to improve profit margins and help the company recognize revenue faster.
Two buildouts are in focus: a 570 MW campus in Ohio targeting operations by the end of 2026 (ahead of earlier timelines) and 200 MW of AI compute planned in Norway. Benchmark set a target price of $38, valuing Bitdeer at 6x projected 2026 revenue.
The buildout at a glance
- Ohio: 570 MW campus, targeting operations by the end of 2026, ahead of earlier timelines.
- Norway: plan to deploy 200 MW dedicated to AI workloads.
- Valuation view: price target of $38, based on 6x 2026 revenue.
Why in-house data centers matter for technical teams
Owning the stack reduces third-party markups and gives tighter control over design, power density, cooling, and network fabric. That control can shorten deployment cycles, standardize hardware/firmware baselines, and cut friction in capacity expansion.
For AI workloads, consistency at the rack and cluster level pays off: predictable thermals, cleaner driver stacks, and fewer surprises when scaling jobs across thousands of accelerators.
Practical takeaways for IT and development
- Capacity planning: align training vs. inference pools early. Reserve headroom for checkpoint spikes, data ingest, and model evals so jobs don't starve during peak windows.
- Networking: budget for high-throughput east-west traffic and strict QoS. Congestion control and job-aware scheduling matter more as clusters grow.
- Storage: optimize for large sequential writes (checkpoints) and many parallel reads (validation, fine-tuning). Blend object storage with fast local/parallel tiers.
- Orchestration: enforce quotas and isolation across teams. Use clear SLAs for GPU hours, preemption rules, and queue priority to avoid internal gridlock.
- Observability: track end-to-end utilization: GPU duty cycle, interconnect saturation, I/O hot spots, and energy metrics. Tie cost back to projects and models.
- Compliance and data locality: US (Ohio) vs. Europe (Norway) gives options for regional data rules and latency-sensitive workloads.
- Procurement reality: long lead times for transformers, switchgear, and accelerators can bottleneck delivery. Lock forecasts early.
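The storage point above lends itself to a quick back-of-envelope check: checkpoint size sets the aggregate sequential write bandwidth a cluster must sustain. A minimal sketch; the model size, optimizer multiplier, interval, and write window below are illustrative assumptions, not figures from the article:

```python
def checkpoint_plan(model_params_b: float,
                    bytes_per_param: int = 2,
                    optimizer_multiplier: float = 3.0,
                    interval_s: float = 1800.0,
                    write_window_s: float = 120.0) -> dict:
    """Rough checkpoint sizing for a training job.

    model_params_b: model size in billions of parameters.
    bytes_per_param: 2 for bf16/fp16 weights (assumed).
    optimizer_multiplier: extra state relative to weights,
        e.g. ~3x for Adam-style moments plus master weights (assumed).
    interval_s: how often a checkpoint is taken.
    write_window_s: how quickly the full checkpoint must land on storage.
    """
    size_gb = model_params_b * bytes_per_param * (1 + optimizer_multiplier)
    return {
        "checkpoint_gb": size_gb,                      # one full checkpoint
        "write_gbps": size_gb / write_window_s,        # aggregate GB/s needed
        "overhead_pct": 100.0 * write_window_s / interval_s,  # if writes block training
    }

# Example: a hypothetical 70B-parameter model in bf16
plan = checkpoint_plan(model_params_b=70)
```

With these assumptions a 70B model produces a 560 GB checkpoint, which needs roughly 4.7 GB/s of sustained sequential write bandwidth to finish inside a two-minute window; that is the kind of number that justifies a fast parallel tier in front of object storage.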
What to watch next
- Grid interconnects and permitting: timelines depend on utility upgrades and approvals.
- Thermal strategy: air vs. liquid cooling choices will shape achievable power density and operating costs.
- GPU and networking supply: availability will influence how quickly capacity turns into billable AI compute.
- Energy mix in Norway: access to low-carbon power can improve cost and sustainability profiles for EU workloads; the IEA publishes background on Norway's electricity system.
- Commercial model: how much capacity is sold as dedicated vs. shared affects utilization and revenue timing.
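The thermal point above has direct arithmetic consequences: the same power feed supports very different rack counts depending on cooling choice. A rough sketch using the article's 570 MW Ohio figure; the per-rack densities and PUE are assumed values, not disclosed specifications:

```python
def racks_supported(campus_mw: float, rack_kw: float, pue: float = 1.25) -> int:
    """Racks a campus power feed can support at a given rack density.

    campus_mw: total facility power (570 MW in the article's Ohio example).
    rack_kw: assumed IT load per rack.
    pue: assumed power usage effectiveness; cooling and facility overhead
         consume the rest of the feed.
    """
    it_kw = campus_mw * 1000 / pue   # power left for IT load
    return int(it_kw // rack_kw)

# Assumed densities: ~40 kW/rack air-cooled vs. ~120 kW/rack direct liquid
air_cooled = racks_supported(570, rack_kw=40)
liquid_cooled = racks_supported(570, rack_kw=120)
```

Under these assumptions, liquid cooling cuts the rack count (and floor space, cabling runs, and network hops) to roughly a third of the air-cooled layout for the same IT load, which is why the cooling decision shapes both density and operating cost.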
If you follow BTDR, you can track the ticker details on Nasdaq.
Level up your team's AI infra skills
Building and operating AI clusters is a different sport than traditional web workloads. For hands-on upskilling in MLOps, orchestration, and AI tooling, see our curated programs at Complete AI Training.
Disclaimer: This content is for informational purposes only and is not investment advice.