South Korea to Allocate 10,000 Nvidia GPUs to SMEs, Startups, and AI Projects Starting in February

South Korea will start allocating 10,000 Nvidia GPUs to SMEs, startups, academics, and public AI projects in February. Apply by Jan 28; up to 256 H200 or 128 B200 per project.

Published on: Dec 19, 2025

South Korea will begin allocating about 10,000 Nvidia GPUs to small and mid-sized enterprises, startups, academic teams, and state AI projects from February. The move follows a ministers' meeting chaired by Science Minister Bae Kyung-hoon and is part of a broader push to build a strong domestic AI ecosystem and national compute capacity.

The government recently invested 1.4 trillion won (about US$947.2 million) to procure the first batch of GPUs, with a plan to reach 50,000 units by 2030. The hardware will be configured as a large-scale cluster to support high-speed training and inference.

Key details at a glance

  • Application window: online submissions are open now through January 28.
  • Per-project support: up to 256 H200 GPUs or 128 B200 GPUs.
  • National models: around 6,000 B200 GPUs, arriving later, are earmarked for homegrown foundation model development.
  • Long-term build-out: the state will use about 50,000 GPUs to create a sovereign AI platform by 2030.
  • Industry context: Nvidia CEO Jensen Huang announced a plan to deploy 260,000 GPUs to South Korea in partnership with the government and major firms. Under the plan, Samsung Electronics, SK Group, and Hyundai Motor Group are each set to receive around 50,000 units, and Naver 60,000.

What this means for government teams

This cluster gives public agencies and their partners the compute headroom to tackle high-impact work: multilingual public service models, disaster response simulation, health analytics, smart mobility, and more. It also lowers entry costs for pilots that would otherwise be out of reach.

Expect shared-governance controls: priority queues, quotas, data retention rules, and regular reporting. Before moving sensitive data, confirm compliance with privacy law and security baselines.

How to submit a strong application by January 28

  • State the public outcome and beneficiary: the policy goal, program impact, or service improvement.
  • Detail datasets: provenance, consent, cleaning plan, and security controls.
  • Match the GPU ask to the workload: model size, batch sizes, memory needs, and expected training hours (see the sizing sketch after this list).
  • Outline methods: training vs. fine-tuning, inference plan, evaluation criteria, and baselines.
  • Include delivery milestones: 30/60/90-day checkpoints and what you'll ship at each stage.
  • Show team capacity: roles, prior work, and any external partners.
  • Address risks: bias, safety testing, red-teaming, and content provenance.
  • Plan for sustainability: energy use, scheduling windows, and checkpointing to reduce reruns.
  • Add collaboration letters: cross-agency or regional partnerships get extra credit.
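
To ground the sizing bullet above, a back-of-envelope estimate is usually enough at the application stage. The sketch below uses the common ~6 × parameters × tokens FLOPs approximation for dense transformer training; the model size, token budget, and sustained-throughput figure are placeholder assumptions to replace with your own profiled numbers.

```python
import math

def estimate_gpu_request(tokens: float, params: float,
                         sustained_tflops: float, wall_clock_days: float) -> dict:
    """Rough GPU count from the ~6 * params * tokens training-FLOPs rule."""
    total_flops = 6 * params * tokens                   # dense-transformer estimate
    flops_per_gpu_day = sustained_tflops * 1e12 * 86_400
    gpu_days = total_flops / flops_per_gpu_day
    return {"gpu_days": round(gpu_days, 1),
            "gpus_for_deadline": math.ceil(gpu_days / wall_clock_days)}

# Example: fine-tuning a 7B-parameter model on 50B tokens, assuming ~400
# sustained TFLOP/s per GPU (an assumption; profile your own stack) and a
# two-week window.
print(estimate_gpu_request(tokens=50e9, params=7e9,
                           sustained_tflops=400, wall_clock_days=14))
# -> {'gpu_days': 60.8, 'gpus_for_deadline': 5}
```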

Choosing between H200 and B200

The program caps are clear: up to 256 H200 or 128 B200 per project. As a rule of thumb, H200 suits large fine-tuning and high-throughput inference, while B200 fits ambitious pretraining or very large model upgrades.

Right-size requests by profiling smaller runs first, estimating memory per GPU, and projecting steps per day. Avoid over-asking: idle allocations count against throughput for everyone.
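
As a starting point for that profiling, here is a minimal memory sketch, assuming mixed-precision training with Adam and fully sharded training state. The per-parameter byte counts and activation overhead are rules of thumb, not measurements of your workload; the 141 GB (H200) and 192 GB (B200) figures are the published HBM capacities.

```python
import math

HBM_GB = {"H200": 141, "B200": 192}  # published per-GPU memory capacities

def training_memory_gb(params: float,
                       bytes_per_param: float = 16,
                       activation_overhead: float = 1.3) -> float:
    """Memory for bf16 weights (2 B) + grads (2 B) + fp32 Adam state (12 B),
    scaled by an assumed activation/fragmentation overhead (measure this
    on a small run rather than trusting the default)."""
    return params * bytes_per_param * activation_overhead / 1e9

def min_gpus(params: float, gpu: str) -> int:
    """Smallest GPU count that holds the fully sharded training state."""
    return math.ceil(training_memory_gb(params) / HBM_GB[gpu])

# Example: a 70B-parameter model with state fully sharded (ZeRO-3 style).
print(min_gpus(70e9, "H200"))  # ~11 GPUs for model state alone
print(min_gpus(70e9, "B200"))  # ~8 GPUs
```

Note this bounds only the model and optimizer state; longer sequence lengths and larger batches push the activation term well past the default overhead factor.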

Operational considerations before you start

  • Data ingress/egress: budget time for secure transfer and validation. Minimize unnecessary movement.
  • Packaging: containerize jobs and pin versions for CUDA, compilers, and libraries.
  • Scheduling: be ready for batch systems; design jobs to preempt gracefully and resume from checkpoints (see the sketch after this list).
  • Observability: log GPU hours, failure rates, and model metrics; report them in your monthly updates.
  • Security: isolate secrets, use KMS for keys, and restrict outbound access during training.
  • Governance: document datasets, model cards, and evaluation reports before release.
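
For the scheduling bullet, a minimal preempt-and-resume pattern might look like the following, assuming PyTorch and a batch scheduler that sends SIGTERM before killing a job. The checkpoint path, save interval, and loss computation are placeholders, not program requirements.

```python
import os
import signal

import torch

CKPT_PATH = "/scratch/project/ckpt.pt"   # hypothetical shared-storage path

stop_requested = False

def _on_preempt(signum, frame):
    """Batch schedulers commonly send SIGTERM shortly before a hard kill."""
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGTERM, _on_preempt)

def train(model, optimizer, dataloader, total_steps, save_every=500):
    step = 0
    if os.path.exists(CKPT_PATH):        # resume instead of restarting
        state = torch.load(CKPT_PATH)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optim"])
        step = state["step"]
    for batch in dataloader:             # real jobs should also restore
        if step >= total_steps:          # the data position on resume
            break
        loss = model(batch).mean()       # stand-in for a real loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        step += 1
        if step % save_every == 0 or stop_requested:
            torch.save({"model": model.state_dict(),
                        "optim": optimizer.state_dict(),
                        "step": step}, CKPT_PATH)
            if stop_requested:
                break                    # exit cleanly; scheduler requeues the job
```

Writing the optimizer state alongside the model weights is what makes resumption exact rather than approximate.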

Context for the broader ecosystem

The allocation follows Nvidia's late-October statement about a large GPU build-out in Korea with government and major firms. Nvidia has also introduced open AI models and tools for areas like autonomous driving and robotics, signaling continued momentum across research and industry.

Timeline and next steps

  • Now through January 28: Submit applications through the government portal.
  • February: Cluster access begins for approved projects.
  • Later deliveries: About 6,000 B200 units will support national foundation model work.
  • Through 2030: Government platform scales to around 50,000 GPUs.

Watch ministry notices for final eligibility rules, FAQs, and onboarding guidance.

