GIBO Unveils 30MW, 14,000-GPU AI Data Center to Accelerate Malaysia's Emergence as an Asia-Pacific AI Hub

GIBO is building a 30MW, 14,000-GPU AI data center in Malaysia for LLM training and large-scale inference. It's a first step to 200MW and an interconnected national network.

Categorized in: AI News, IT and Development
Published on: Dec 12, 2025

GIBO to Build 30MW, 14,000-GPU AI Data Center in Malaysia

GIBO Holdings Ltd. (NASDAQ: GIBO) announced the first phase of a high-performance AI data center in Malaysia: a 30MW site paired with a 14,000-GPU cluster. The facility is geared for LLM training, small and mid-sized model development, and large-scale inference across commercial and public workloads.

The goal is clear: give engineering teams a dependable, high-density environment for deep learning, multimodal systems, simulation, and next-gen AI applications, without the usual constraints around capacity and scale.

What's in the first phase

  • 30MW initial deployment with a 14,000-GPU supercomputing cluster.
  • Support for end-to-end model lifecycles: training (including trillion-parameter scale), fine-tuning, and inference.
  • Targeted industries include mobility, advanced manufacturing, fintech, healthcare, cybersecurity, agriculture, creative media, robotics, and sustainability tech.
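As a quick back-of-the-envelope check on the headline figures, the per-GPU power budget follows from the announced numbers. The PUE of 1.2 below is an assumption (the announcement gives no efficiency figure), so treat this as a sketch rather than a spec:

```python
# Rough per-GPU power budget for the announced phase-one figures.
# Assumption: the full 30 MW serves the 14,000-GPU cluster, with an
# assumed PUE of 1.2 covering cooling and facility overhead.

SITE_POWER_MW = 30
GPU_COUNT = 14_000

def per_gpu_budget_kw(site_power_mw: float, gpus: int, pue: float = 1.2) -> float:
    """Split site power into an IT share (via an assumed PUE), divide by GPU count."""
    it_power_kw = site_power_mw * 1000 / pue  # kW left for IT load
    return it_power_kw / gpus

budget = per_gpu_budget_kw(SITE_POWER_MW, GPU_COUNT)
# ~1.8 kW per GPU at PUE 1.2 -- consistent with high-end accelerators
# plus their share of host, network, and storage power.
```

If all 30 MW were pure IT load (PUE 1.0), the budget rises to about 2.1 kW per GPU; either way the numbers are in the range current flagship accelerators and their host systems actually draw.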

Roadmap: From 30MW to 200MW and a national AI network

This is step one in a larger plan: scale from 30MW to a 100MW multi-zone AI campus, then to a 200MW regional flagship. Beyond a single site, GIBO plans an interconnected network across Sarawak, Johor, Penang, and Greater Kuala Lumpur.

The network is intended to act as an "ASEAN-to-North Asia AI Compute Highway," linking Malaysia with Singapore, Indonesia, Thailand, Japan, Korea, and Greater China.

Sustainability and cooling strategy

The facility is being engineered for tropical conditions with liquid or immersion cooling to improve efficiency and resilience. For teams planning data center integrations or colocation, this points to higher rack densities and tighter thermal envelopes than air-cooled norms.

If you're assessing thermal and facility design trade-offs, ASHRAE's guidance on liquid and immersion cooling is a solid reference point. See ASHRAE Datacom resources.

What this means for architects, MLOps, and platform teams

  • Networking: Expect low-latency, high-bandwidth fabrics for distributed training. Plan for model/tensor parallelism, sharding, and collective ops at scale.
  • Scheduling and tenancy: Multi-team, multi-workload environments need strong quota policies, fair scheduling, and priority preemption. Coordinate across Slurm/Kubernetes-style schedulers, job queues, and artifact registries.
  • Storage and data pipelines: Training-ready pipelines must support fast ingest, high-throughput checkpoints, and reliable lineage. Build for multi-petabyte datasets, snapshotting, and tiered storage.
  • Security and governance: Cross-region deployments require strict data locality, tenancy isolation, encryption in transit/at rest, and auditable access controls, especially for healthcare and financial data.
  • Observability: Treat GPU utilization, interconnect health, thermal headroom, and energy metrics as first-class signals. Integrate them into your SLOs, not just cluster dashboards.
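For the networking point above, a bandwidth-optimal ring all-reduce gives a useful lower bound on gradient synchronization time when planning distributed training. The model size, precision, and link speed below are illustrative assumptions, not figures from the announcement:

```python
def allreduce_time_s(param_count: float, bytes_per_param: int,
                     n_gpus: int, link_gbps: float) -> float:
    """Lower-bound time for one ring all-reduce of the full gradient set.

    In a bandwidth-optimal ring, each GPU sends and receives
    2*(n-1)/n times the gradient payload; latency terms and
    overlap with compute are ignored.
    """
    payload_bytes = param_count * bytes_per_param
    per_gpu_bytes = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8
    return per_gpu_bytes / link_bytes_per_s

# e.g. a 70B-parameter model, fp16 gradients, 8 GPUs, 400 Gb/s links:
t = allreduce_time_s(70e9, 2, 8, 400)  # ~4.9 s per full synchronization
```

Estimates like this motivate gradient bucketing, compression, and hierarchical collectives: a multi-second synchronization per step is only tolerable if it overlaps with backward-pass compute.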
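The quota-and-preemption behavior described in the scheduling bullet can be sketched as a toy admission loop. Real schedulers (Slurm, Kubernetes) add queues, fair-share accounting, and checkpoint-aware eviction; this only illustrates the core admit-or-preempt decision:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int   # lower number = more urgent
    gpus: int

class MiniScheduler:
    """Toy model of priority preemption: admit while capacity lasts, and
    evict the least-urgent running job to make room for a more urgent one."""

    def __init__(self, total_gpus: int):
        self.free = total_gpus
        self.running: list[Job] = []
        self.preempted: list[str] = []

    def submit(self, job: Job) -> bool:
        # Preempt strictly less-urgent running jobs until the new job fits.
        while job.gpus > self.free:
            victims = [j for j in self.running if j.priority > job.priority]
            if not victims:
                return False  # cannot fit: caller would queue or reject
            victim = max(victims, key=lambda j: j.priority)
            self.running.remove(victim)
            self.free += victim.gpus
            self.preempted.append(victim.name)
        self.running.append(job)
        self.free -= job.gpus
        return True
```

For example, a node-filling batch job gets evicted when an urgent inference job arrives, which is exactly why checkpoint portability (below, under resilience) matters: preempted training must resume, not restart.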
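Treating GPU utilization and thermal headroom as first-class signals, per the observability bullet, can start as simple threshold checks over per-GPU samples; the metric names and thresholds here are illustrative, not from any specific exporter:

```python
def gpu_health_alerts(samples: list[dict], util_floor: float = 0.85,
                      temp_ceiling_c: float = 85.0) -> list[str]:
    """Flag GPUs that are underutilized (possible input starvation) or
    running too close to their thermal limit. Thresholds are illustrative."""
    alerts = []
    for s in samples:
        if s["util"] < util_floor:
            alerts.append(f'{s["gpu"]}: utilization {s["util"]:.0%} below floor')
        if s["temp_c"] > temp_ceiling_c:
            alerts.append(f'{s["gpu"]}: temp {s["temp_c"]}C over ceiling')
    return alerts

metrics = [
    {"gpu": "node1/gpu0", "util": 0.93, "temp_c": 71.0},
    {"gpu": "node1/gpu1", "util": 0.41, "temp_c": 88.0},  # starved and hot
]
alerts = gpu_health_alerts(metrics)  # two alerts, both for node1/gpu1
```

In production these checks would hang off a metrics pipeline (e.g. DCGM-style exporters into Prometheus) and feed SLO burn-rate alerts rather than ad-hoc lists, but the signal set is the same.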

Why Malaysia is a smart fit

Malaysia combines a strategic location, a maturing digital ecosystem, pro-AI national policies, and a growing technology talent pool. The project is expected to attract global partners and AI-focused ventures, creating more options for teams that need dependable regional GPU capacity.

For policy and ecosystem context, review the national digital initiatives at MyDIGITAL.

How to plug this into your roadmap

  • Capacity planning: Treat this as an additional option for training and large-scale inference in Southeast Asia, with potential cost and latency advantages for regional users.
  • Workload placement: Place pretraining close to data sources; use the networked sites for fine-tuning and inference to balance egress, latency, and compliance.
  • Resilience: As the campus scales to multi-zone, design active-active or active-standby strategies with checkpoint portability and reproducible builds.
  • Sector pipelines: Expect domain-specific digital pipelines (e.g., healthcare, manufacturing). Align your data models, schemas, and validation with those pipelines early.

Upskill your team for LLMOps and AI platform work

If you're building skills for AI infrastructure, orchestration, and model operations, explore role-based programs here: AI courses by job.

Forward-looking note

Plans, timelines, and outcomes described here are forward-looking and based on current expectations. Actual results may differ due to risks, uncertainties, and changes in market or execution conditions. The company has no obligation to update these statements unless required by law.

More information

For updates from GIBO, visit https://www.globalibo.com/gibo-click/.

