National AI: Stop Auditions. Start Acceleration.
South Korea's "National AI" process is stuck in slow motion. Naver Cloud was disqualified for partial use of open-source software, NCSoft fell short on scoring, and the government has reopened applications. The selection alone takes over two years. That's time the market won't give us.
Meanwhile, global players are moving with scale and speed. Microsoft and OpenAI are tied to a project reported at roughly $100 billion, and China is investing over 100 trillion won annually through public-private partnerships. We're preparing to back two finalists with infrastructure support worth just over 200 billion won, well under one percent of the reported Microsoft-OpenAI figure at current exchange rates. The gap is obvious.
What went wrong
- Originality rules penalized practical use of open-source software. In AI, reuse is a strength, not a flaw.
- An audition-style contest assumes certainty. AI rewards iteration and speed.
- Process beats outcomes: a two-year selection timeline signals caution over delivery.
- Picking winners too early concentrates risk. Enabling many teams increases the odds of success.
Rethink "originality"
Insisting on "100% originality" ignores how modern AI is built: on shared foundations, with proprietary value layered on top. France's Mistral used open-source momentum to scale fast and compete globally. That is a playbook worth studying.
The global pace and stakes
Scale matters. Compute, data, chips, and energy capacity decide who ships useful models. Others are funding at national scale while we run a contest that adds delay and uncertainty. We can't match every dollar, but we can remove friction and direct resources where they move the needle.
Regulation: add oxygen, then add rules
The AI Basic Act takes effect on the 22nd with a vague "high-risk AI" category. Startups fear compliance drag without clear thresholds. Sequence matters: enable experimentation, then regulate what proves risky.
- Define "high-risk" by context and impact, not model type.
- Introduce time-bound sandboxes with clear exit criteria.
- Issue safe-harbor guidelines for startups under set size and revenue.
- Adopt proportional oversight tied to deployment scale and sensitivity.
What government should fund now
- Compute capacity and energy access: bulk contracts for GPUs and colocation; credits usable by startups, SMEs, and research labs.
- Shared, compliant datasets: defense, healthcare, and public services with strict privacy controls and auditable access.
- Reference architectures: secure templates for training, evaluation, and deployment to cut setup time and reduce risk.
- Sovereign data first: prioritize independence in defense and healthcare data pipelines over owning every core model component.
- Demand-side pull: pre-commercial procurement and prize challenges that fund milestones, not paperwork.
- Talent pipelines: fellowships and retraining tied to real projects within ministries and critical industries.
- Delivery muscle: a cross-ministry AI PMO with budget authority to clear blockers within days, not months.
A 90-day action plan
- Pause the audition. Replace it with rolling, milestone-based grants and compute credits for 50-100 teams.
- Update originality rules: allow open-source foundations with clear disclosure and differentiated value on top.
- Publish a tight definition of "high-risk AI," focused on outcomes that affect safety, rights, and critical infrastructure.
- Launch two sandboxes: healthcare data services and public-service copilots, with ministry sponsors and fixed timelines.
- Issue three pre-procurement challenges: defense decision support, medical imaging triage, and government document copilots.
- Release an RFP for national compute with reserved startup capacity and transparent pricing.
- Start an open-source participation program: pay maintainers, fund security audits, and require compliance tooling by default.
Metrics that matter
- Time-to-compute: days from approval to GPU access.
- Teams funded and graduation rate to paid pilots.
- Public-service deployments live within 6-9 months.
- Compliance cost per startup under the Act.
- Export wins and cross-border pilots.
- Safety incidents per deployment, with transparent postmortems.
Answers to common concerns
- "Open-source weakens security." It can improve security when paired with audits, red-teaming, and signed releases. The risk is unmanaged code, not open code.
- "We need original IP." Yes-for differentiators. But forcing originality across the stack burns time and money where reuse is smarter.
- "Selection ensures quality." Support plus verification ensures outcomes. Fund many, test hard, scale the few that deliver.
Bottom line
Act as a platform, not a judge. Fund compute, enable data access, clarify rules, and buy what works through outcome-based procurement. That's how we close the gap: by helping teams ship faster and safer, not by running a talent show.
If your team needs structured upskilling to run pilots and evaluate vendors, see practical programs here: AI courses by job.