South Korea's 2026 AI Basic Law: High-Impact Rules, High Stakes for Startups
South Korea's AI Basic Law takes effect Jan 22, 2026, with a one-year grace period. Expect high-impact self-assessment, 10^26 FLOPs guardrails, and up to 3-month confirmations.

South Korea's AI Basic Law: What Legal Teams Need to Know Before 2026
South Korea is set to enforce its AI Basic Law on January 22, 2026, making it the second jurisdiction with a comprehensive AI framework, after the EU. The aim is clear: build trust and safety. The worry from startups and academics is equally clear: unclear definitions and heavy compliance could slow product cycles and deter investment. Your job is to reduce uncertainty before it lands on your balance sheet.
Key dates and scope
- Draft enforcement decree, two ordinances, and five guidelines: legislative notice in October 2025; finalization by December 2025.
- Law effective January 22, 2026, with a one-year grace period delaying fines.
- Scope includes "high-impact AI" duties and stricter guardrails for "high-performance AI."
"High-impact AI" is the risk pivot
Companies must self-assess whether a system is high-impact. Government confirmation may take up to three months, injecting delay into launches and fundraising. If designated high-impact, operators face a dense set of obligations for up to five years:
- Risk management and mitigation
- Explainability and user protection
- Human oversight/supervision
- Documentation, disclosure, and sustained recordkeeping
Generative and open-source models add another wrinkle: a single high-impact use case can trigger full compliance, even if the base model is general-purpose.
"High-performance AI": 10^26 FLOPs threshold
The draft defines high-performance AI as systems trained with 10^26 FLOPs or more of cumulative compute, tightening safety measures on frontier models. This draws a bright line for large-scale training runs and may capture global models integrated into Korean services. Expect enhanced scrutiny around safety testing, monitoring, and safeguards before and after deployment.
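To get a feel for where that line falls, here is a back-of-the-envelope screen in Python. It relies on the widely cited ~6 x parameters x tokens estimate for dense transformer training compute; that heuristic, the example model sizes, and the "within 2x" review margin are illustrative assumptions, not the decree's counting method.

```python
# Rough screen against the draft 10^26 FLOPs threshold.
# Assumption: training compute ~ 6 * parameters * tokens, the common
# heuristic for dense transformers; the decree's own counting rules
# may differ and are not specified here.
THRESHOLD_FLOPS = 1e26
NEAR_MARGIN = 0.5  # flag runs within 2x of the line (illustrative choice)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense model."""
    return 6.0 * params * tokens

runs = {
    "70B params / 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "500B params / 33T tokens": training_flops(500e9, 33e12),  # ~9.9e25
}

for name, flops in runs.items():
    status = ("flag for review" if flops >= NEAR_MARGIN * THRESHOLD_FLOPS
              else "below threshold")
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```

On that estimate, only the largest frontier-scale runs approach the line, but teams integrating such models should document the basis for whatever estimate they rely on.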
Global context: stricter than many Asian peers
Korea follows the EU's lead, but with criteria that could be tougher than those of regional peers like Singapore and Japan. For founders and investors, the question is whether the added friction is offset by legal clarity and market access.
For comparison, see the EU's approach under the AI Act, which tiers obligations by risk and sets out detailed conformity assessment processes (see the European Commission's overview).
What startups and academics are asking the government to fix
- Carve-outs for non-risk use cases, modifiable outputs, and academic/artistic purposes.
- Replace company self-assessment with a government checklist; add a right to appeal or request reevaluation.
- Narrow the definition of "AI system" with a threshold for autonomy and adaptability.
- Startup-friendly compliance templates and simplified processes.
- Differentiate obligations for developers vs. service providers.
- Clarify "user" in B2B2C chains to prevent overlapping duties.
- Allow exceptions to transparency/watermarking where technical feasibility is limited.
As Professor Kim Hyun-kyung of SeoulTech noted, forcing companies to self-assess high-impact status creates uncertainty, and long confirmation windows can derail business schedules.
Counsel's action list for the next 12 months
- Inventory and classify AI systems now. Map use cases that could be high-impact. Flag any frontier-scale training or integration of models near the 10^26 FLOPs mark (a sample inventory record is sketched after this list).
- Stand up a pre-deployment risk screen. Create a short, auditable checklist aligned to the draft obligations to reduce surprises during government confirmation (a sample screen follows this list).
- Build documentation baselines: model cards, data provenance logs, evaluation results, incident/risk registers, explainability notes, and a retention plan for five years.
- Contract for compliance: allocate duties between developer and service provider, define "user" responsibilities in B2B2C flows, add audit rights, reporting SLAs, and indemnities tied to misclassification or disclosure failures.
- Data and transparency: confirm lawful data sources, retention, and deletion; prepare transparency statements and watermarking plans, with documented feasibility limits.
- Generative/open-source governance: track downstream applications; document intended use, prohibited use, and safeguards; maintain a change log for model updates and fine-tunes.
- Plan for the three-month confirmation lag: bake contingencies into product roadmaps, launch gates, and term sheets. Use phased releases to manage regulatory risk.
- Appeals and reevaluation: designate an internal process owner and prepare an evidence package and timelines to contest or update high-impact status.
- Engage in the consultation: coordinate comments with industry groups to press for clearer definitions, lighter startup processes, and risk-based proportionality.
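As promised in the first action item, here is a minimal sketch of an inventory record. The schema and field names are hypothetical, invented for illustration; nothing here comes from the draft decree.

```python
# Hypothetical inventory schema for classifying AI systems ahead of
# the confirmation process; all field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    use_case: str                     # e.g., "credit scoring", "chat support"
    role: str                         # "developer" or "service_provider"
    candidate_high_impact: bool       # result of the internal self-assessment
    est_training_flops: float | None  # flag models near the 1e26 mark
    evidence: list[str] = field(default_factory=list)  # docs supporting the call
    assessed_on: date = field(default_factory=date.today)
    review_due: date | None = None    # re-check once the final decree lands

rec = AISystemRecord("chat-support-v2", "customer service chatbot",
                     "service_provider", candidate_high_impact=False,
                     est_training_flops=None)
```

Even a lightweight record like this gives counsel a single place to trace who made the classification call, on what evidence, and when it needs revisiting.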
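And a sketch of the pre-deployment risk screen from the second item. The questions paraphrase the high-impact duties summarized earlier in this article; they are not official criteria, and a real screen should track the final guidelines once published.

```python
# A minimal, auditable pre-deployment screen; questions are paraphrased
# from the draft's high-impact duties as summarized above, not official.
SCREEN = [
    "Documented risk assessment and mitigations?",
    "Explainability notes suitable for affected users?",
    "Named human overseer with authority to halt the system?",
    "Model cards, data provenance, and eval results archived?",
    "Five-year retention plan in place?",
]

def run_screen(system: str, answers: list[bool]) -> dict:
    """Return a pass/fail record listing any unmet checklist items."""
    gaps = [q for q, ok in zip(SCREEN, answers) if not ok]
    return {"system": system, "pass": not gaps, "gaps": gaps}

print(run_screen("chat-support-v2", [True, True, False, True, True]))
# -> flags the missing human-oversight owner before launch
```

The point is less the tooling than the audit trail: a dated, answerable checklist is exactly the kind of good-faith evidence regulators will look for during the grace period.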
Enforcement and exposure
The one-year grace period delays fines, not expectations. Regulators will still look for good-faith preparation: risk controls, documentation, and clear ownership. Treat 2025 as your build year for governance and evidence.
Strategic outlook
Korea's framework can strengthen trust if the final decree delivers clarity, proportionate duties, and predictable confirmation timelines. Without those, high-impact classification risk and documentation drag will weigh on early-stage companies and their investors.
The opportunity for legal teams is to install lean, evidence-ready processes now. If the government refines the rules along the lines proposed by industry and academia, Korea could become a strong venue for AI growth. If not, talent and capital will look elsewhere.