South Korea's AI Basic Act: What Legal Teams Need to Know Now
South Korea has enacted a sweeping AI Basic Act, billed domestically as the first fully enforced national AI framework. It took effect in late January 2026 and aims to promote the domestic AI industry while putting guardrails on high-risk use. The law is already drawing fire from both sides: startups say it overreaches; civil society says it under-protects.
The core obligations at a glance
- Content labelling: AI services must add invisible watermarks to clearly artificial outputs (e.g., cartoons, artwork). For realistic deepfakes, visible labels are mandatory.
- High-impact AI duties: Systems used in medical diagnosis, hiring, and loan approvals require documented risk assessments and explainability records. If a human makes the final decision, the system may sit outside this category.
- Very large models: "Extremely powerful" models must file safety reports, but the threshold is set so high that no known model currently qualifies, according to officials.
- Penalties and grace period: Fines of up to 30 million won (about £15,000). The government has promised at least a one-year grace period before penalties apply.
Key friction points for counsel
- Self-classification risk: Companies must determine if their system is "high-impact." This invites inconsistent interpretations and prolonged internal reviews. Expect auditors and counterparties to ask for your rationale.
- Human-in-the-loop ambiguity: If a human signs off, some systems may avoid high-impact status. That creates a potential loophole and a design temptation to add nominal human review. Regulators may scrutinize whether oversight is substantive.
- Competitive asymmetry: All Korean companies are covered regardless of size, while foreign providers trigger duties only above certain thresholds (think large platforms). Domestic SMEs may view this as a burden that disadvantages them locally.
- Watermark practicality: Invisible watermarks can degrade through editing or compression. Visible labels on realistic deepfakes can be evaded by bad actors outside the law's reach. Expect enforcement to lean on platforms and distributors.
Why the law exists, and why critics say it falls short
South Korea has dealt with severe harms from AI-generated sexual imagery. A 2023 report estimated the country accounted for 53% of global deepfake pornography victims. In 2024, investigators exposed large Telegram networks producing and distributing AI sexual content, foreshadowing later scandals involving popular chatbots.
Civil society groups argue the Act centers on institutional "users" (hospitals, banks, public bodies), not the individuals affected by AI. They note there are no outright bans on specific AI systems, and human-involvement carve-outs weaken accountability. The national human rights commission has also flagged vague definitions around "high-impact" use cases.
The government's stance
Officials say the framework is 80-90% focused on industry promotion, with regulation calibrated to reduce legal uncertainty. The Ministry of Science and ICT has promised further guidance and iterative updates to the enforcement decree and notices as issues surface.
Experts emphasize Korea is taking a principles-based path distinct from the EU's strict risk tiers, the US/UK's sector-led approaches, and China's service-specific controls. Melissa Hyesun Yoon describes it as "trust-based promotion and regulation" designed to evolve with practice.
Comparison snapshot
- EU: Prescriptive, risk-based model with defined prohibited practices and tiered obligations. See the EU AI Act overview.
- US/UK: Heavier reliance on existing sector regulators and market remedies.
- China: Detailed rules targeted at generative services and recommendation systems, aligned with broader industrial policy.
- South Korea: Principles-led, promotion-heavy, with selective hard requirements (labels, risk assessments) and room for iterative guidance.
Immediate action items for legal and compliance teams
- Map use cases: Inventory AI features touching healthcare, employment, and credit. Document whether a human truly makes final decisions, and what "meaningful oversight" looks like.
- Decide on high-impact status: Build a defensible self-assessment methodology with explicit criteria, thresholds, and sign-offs. Treat it like a living document (a minimal assessment-record sketch follows this list).
- Stand up risk assessments: Define model purpose, training data sources, known risks, and mitigations. Capture decision logic and explainability artifacts users can understand.
- Implement labelling: Add invisible watermarks for clearly artificial outputs and visible labels for realistic deepfakes. Test persistence across formats, edits, and platform uploads (a simple persistence check is sketched after this list).
- Contract for compliance: Update vendor and model-provider contracts to allocate duties for watermarks, disclosures, logs, and incident response.
- Prepare for audits: Centralize records, including versioned models, prompts and outputs (where lawful), risk logs, incident reports, and user-facing notices (a basic record-manifest sketch appears after this list).
- Plan for grace-period enforcement: Build a 6-12 month roadmap with milestones for policy, engineering, and training. Don't wait for penalties to start.
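For the self-classification item above, here is a minimal Python sketch of what a versioned, sign-off-backed assessment record could look like. The domains, criteria, and decision rule are illustrative assumptions, not the Act's statutory test for "high-impact" systems; the point is to capture the rationale in a form auditors and counterparties can review.

```python
# Minimal sketch of a versioned high-impact self-assessment record.
# The domains, criteria, and decision rule below are illustrative
# assumptions, not the Act's statutory definition of "high-impact".
from dataclasses import dataclass, field
from datetime import date

ILLUSTRATIVE_HIGH_IMPACT_DOMAINS = {"medical_diagnosis", "hiring", "credit_decisions"}

@dataclass
class HighImpactAssessment:
    system_name: str
    domain: str                      # e.g. "hiring"
    fully_automated: bool            # no substantive human review of outcomes
    affects_individual_rights: bool  # e.g. access to jobs, credit, treatment
    rationale: str                   # narrative reasoning auditors can follow
    reviewed_by: str                 # accountable sign-off
    review_date: date
    version: int = 1
    evidence: list[str] = field(default_factory=list)  # links to risk logs, model cards

    def is_high_impact(self) -> bool:
        """Illustrative decision rule: in-scope domain plus material effect on
        individuals, unless a human makes the substantive final decision."""
        in_scope = self.domain in ILLUSTRATIVE_HIGH_IMPACT_DOMAINS
        return in_scope and self.affects_individual_rights and self.fully_automated

# Example usage: a CV-screening tool where human review is only nominal.
assessment = HighImpactAssessment(
    system_name="cv-screening-v3",
    domain="hiring",
    fully_automated=True,
    affects_individual_rights=True,
    rationale="Model ranks candidates; recruiters rarely override the ranking.",
    reviewed_by="legal-counsel@example.com",
    review_date=date(2026, 2, 1),
    evidence=["risk-log-042", "model-card-v3"],
)
print(assessment.is_high_impact())  # True under these illustrative criteria
```

Keeping each assessment as a structured, versioned record makes it easier to show consistent application of one methodology across systems, which is the defensibility regulators and counterparties are likely to probe.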
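For the labelling item, the Act does not prescribe a watermarking method, and production systems will typically rely on specialist provenance tooling. The naive least-significant-bit sketch below, using Pillow, only illustrates why persistence testing matters: a fragile mark survives lossless saves but is usually destroyed by the lossy re-encoding that platforms commonly apply.

```python
# Demonstration of why invisible-watermark persistence must be tested:
# a least-significant-bit (LSB) marker survives a lossless PNG round trip
# but is typically destroyed by lossy JPEG re-encoding. Illustrative only;
# production labelling should use robust, standardised provenance tooling.
from PIL import Image

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # an 8-bit marker, purely illustrative

def embed(img: Image.Image, bits: list[int]) -> Image.Image:
    """Write each bit into the red-channel LSB of the first pixels of row 0."""
    out = img.convert("RGB").copy()
    for i, bit in enumerate(bits):
        r, g, b = out.getpixel((i, 0))
        out.putpixel((i, 0), ((r & ~1) | bit, g, b))
    return out

def extract(img: Image.Image, n: int) -> list[int]:
    """Read the marker back from the red-channel LSBs."""
    rgb = img.convert("RGB")
    return [rgb.getpixel((i, 0))[0] & 1 for i in range(n)]

source = Image.new("RGB", (64, 64), color=(120, 180, 200))
marked = embed(source, MARK)

marked.save("marked.png")              # lossless: marker survives
marked.save("marked.jpg", quality=75)  # lossy: marker usually destroyed

print(extract(Image.open("marked.png"), len(MARK)) == MARK)  # True
print(extract(Image.open("marked.jpg"), len(MARK)) == MARK)  # typically False
```

A persistence test suite along these lines (re-encoding, resizing, cropping, platform uploads) is what turns "we added a watermark" into evidence that the label actually reaches end users.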
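For the audit-preparation item, a simple sketch of an append-only record manifest. The field names and JSON Lines format are assumptions for illustration; the Act and its enforcement decree do not mandate a particular record format.

```python
# Minimal sketch of a centralised compliance-record manifest.
# Field names and structure are assumptions for illustration, not a
# format required by the Act or its enforcement decree.
import json
from datetime import datetime, timezone

record = {
    "system": "cv-screening-v3",
    "model_version": "2026-01-15",
    "classification": "high-impact (self-assessed)",
    "risk_assessment": "risk-log-042",
    "explainability_artifacts": ["feature-importance-report-v3"],
    "user_notice": "notices/hiring-ai-disclosure-ko.pdf",
    "incidents": [],
    "logged_at": datetime.now(timezone.utc).isoformat(),
}

# Append to a JSON Lines file so the history of classifications and
# artifacts stays reviewable over time.
with open("compliance_manifest.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```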
What remains unclear
- High-impact definitions: Expect further guidance on borderline functions like underwriting tools with human review or triage systems in clinical settings.
- Cross-border scope: How thresholds will capture foreign providers, and how enforcement will work for services accessed in Korea.
- Forensic standards: Which watermarking methods regulators deem acceptable, and how disputes over authenticity will be resolved.
- Liability allocation: How responsibility splits between model developers, integrators, and enterprise users in complex stacks.
Signals to watch
- Updated enforcement decrees and notices from the Ministry of Science and ICT. Track clarifications here: MSIT (English).
- Early investigations focused on labelling compliance and misclassified high-impact systems.
- Court challenges testing the human-in-the-loop carve-out and standing for individuals harmed by AI outputs.
- Potential amendments adding prohibited practices if harms persist, especially around synthetic sexual imagery.
Bottom line for legal teams
The Act is promotion-first, but real obligations bite in content labelling and high-impact governance. Self-classification is the immediate legal risk: build a clear standard, apply it consistently, and evidence your calls. Use the grace period to get policy, contracts, and technical controls production-ready.
If your organization needs structured upskilling for AI governance and risk, see our role-based resources: AI courses by job.