South Korea opens search for one more contender to build homegrown AI foundation models
South Korea's Ministry of Science and ICT has opened a new round to add one more contender to its state-run push to develop local AI foundation models. The move comes after the government last week shortlisted three consortia, led by SK Telecom, LG AI Research, and Upstage, and dropped proposals from Naver Cloud and NC AI.
The program will select two final winners before year-end and fund and support their work. The goal is straightforward: strengthen national AI capacity and move South Korea closer to being a top-three AI player globally.
What changed this week
The ministry reopened the door for an additional applicant among major local tech firms. Teams cut in previous rounds can apply again, though Naver and NC said they will not re-enter. Kakao and KT, removed earlier at the preliminary stage, also do not plan to participate.
At a Jan. 15 press briefing in Seoul, Second Vice Science Minister Ryu Je-myung underscored that the initiative is built for long-term competitiveness rather than short-term optics.
Who's still in - and who's out
- Shortlisted: SK Telecom, LG AI Research, Upstage
- Not reapplying: Naver Cloud, NC AI, Kakao, KT
Timeline and funding
Two final awardees will be selected before the end of the year. Winners will receive government funding and additional support to build and adapt large-scale models for a wide range of tasks.
In simple terms, a foundation model is a large system trained on broad data that can be adapted for many downstream uses. The program is meant to anchor national capability in this area and reduce strategic dependence.
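To make that "adapted for many downstream uses" idea concrete, here is a minimal sketch of the adaptation step: fine-tuning a pretrained checkpoint on a toy classification task with the open-source Hugging Face transformers and datasets libraries. The checkpoint name, labels, and data are illustrative placeholders, not anything specified by the program.

```python
# Minimal sketch: adapting a pretrained foundation model to a downstream
# classification task. Checkpoint and data are illustrative placeholders,
# not anything specified by the MSIT program.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

base_model = "klue/roberta-base"  # example public Korean-language checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Toy labeled data standing in for a real downstream dataset.
data = Dataset.from_dict({
    "text": ["Request to process a civil complaint", "Reporting a system error"],
    "label": [0, 1],
}).map(lambda x: tokenizer(x["text"], truncation=True,
                           padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # the pretrained "foundation" weights are adapted to the new task
```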
Why this matters for government stakeholders
Public agencies will be key users and integrators. Decisions made now on data access, safety, and interoperability will set the ground rules for how government deploys AI across services, infrastructure, defense, health, and education.
The ministry also signaled a focus on ecosystem health over quick wins. That implies sustained funding, recurring evaluations, and room for mid-course corrections as capabilities and risks evolve.
What program leads should probe during evaluation
- Compute and efficiency: Training scale, energy use, and cost of inference for government workloads (a cost sketch follows this list).
- Data governance: Lawful data sourcing, Korean-language depth and domain coverage, PII safeguards, and audit trails.
- Safety and reliability: Red-teaming, bias testing, content controls, and incident response processes.
- Security and sovereignty: Model and data residency, supply-chain assurances, and export-control compliance.
- Interoperability: APIs, on-prem and cloud options, and integration with existing government systems.
- Public value: Clear use cases in public service delivery and measurable outcomes.
- Ecosystem impact: Opportunities for SMEs, academia, and regional clusters to participate.
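On the first item, even a crude estimator makes competing bids comparable. The sketch below is a back-of-envelope monthly cost calculation; every rate and volume in it is a hypothetical placeholder, not a program figure.

```python
# Back-of-envelope inference cost estimator for a government workload.
# All rates and volumes are hypothetical placeholders for comparison only.

def monthly_inference_cost(requests_per_day: int,
                           in_tokens: int, out_tokens: int,
                           price_in_per_1k: float,
                           price_out_per_1k: float) -> float:
    """Return estimated monthly cost in the pricing currency."""
    daily = requests_per_day * (in_tokens / 1000 * price_in_per_1k
                                + out_tokens / 1000 * price_out_per_1k)
    return daily * 30

# Example: a contact-center assistant handling 50,000 queries per day.
cost = monthly_inference_cost(requests_per_day=50_000,
                              in_tokens=800, out_tokens=300,
                              price_in_per_1k=0.0005, price_out_per_1k=0.0015)
print(f"Estimated monthly inference cost: {cost:,.0f}")
```

Running the same calculation across bidders, with each vendor's quoted token prices and a shared workload profile, gives an apples-to-apples baseline before factoring in energy use and hosting model.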
Next steps for agencies and SOEs
- Map high-impact pilot use cases (contact centers, document processing, code assistance, citizen services) and define success metrics now.
- Align 2026-2027 budgets for compute, data pipelines, and safety tooling.
- Stand up a cross-ministry working group for evaluation, testing, and procurement templates.
- Prepare data-sharing agreements and de-identification standards for training and evaluation (a minimal redaction sketch follows this list).
- Launch workforce upskilling for AI product owners, policy, compliance, and MLOps roles.
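On de-identification, the sketch below shows the basic shape of a regex-based redaction pass. The patterns (resident registration number, mobile number, email) are illustrative only; a production pipeline would need validated patterns, human review, and audit logging.

```python
# Minimal de-identification sketch: regex redaction of common Korean PII
# patterns before text is shared for training or evaluation. Patterns are
# illustrative; production systems need validated, audited pipelines.
import re

PATTERNS = {
    "RRN":   re.compile(r"\b\d{6}-\d{7}\b"),          # resident registration number
    "PHONE": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(text: str) -> str:
    """Replace matched PII spans with bracketed type tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(deidentify("Contact: 010-1234-5678, hong@example.go.kr, RRN 900101-1234567"))
# -> Contact: [PHONE], [EMAIL], RRN [RRN]
```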
Context and references
The ministry framed this push against sustained competition from the U.S. and China, emphasizing long-term ecosystem strength over quick headline results. For background on foundation models and policy alignment, see the resources below.
- Ministry of Science and ICT (MSIT) - English portal
- Stanford Center for Research on Foundation Models