South Korea to enforce first comprehensive AI law next month as startups sound alarm

Korea's AI Framework Act takes effect Jan. 22, 2026, with safety, disclosure, and labeling duties. Legal teams should start now: audit AI use, and draft labeling plans and controls.

Categorized in: AI News, Legal
Published on: Dec 14, 2025

South Korea's AI Framework Act Takes Effect Jan. 22, 2026: What Legal Teams Need to Do Now

South Korea will enforce a comprehensive AI regulatory framework on Jan. 22, 2026. The law creates a national AI committee, mandates a basic three-year AI plan, and sets safety and transparency requirements, including disclosure duties for certain AI systems and mandatory watermarking of AI-generated content.

If maintained, this timeline would make Korea the first country to fully enforce an all-sector AI framework. By comparison, the EU's AI Act begins applying its major provisions in August 2026, with some rules sliding into 2027 amid industry pressure. See the EU's overview for context: European Commission: AI Act.

Why this matters for counsel and compliance

The enforcement decree may be finalized close to the effective date, which compresses preparation time for companies. Startups are especially exposed: in a recent survey of 101 local AI startups, 98% said they have no response system in place, with half unfamiliar with the law and the other half aware but unprepared.

Several firms warn they may need to change or suspend services after Jan. 22. There's also a growing push to launch in markets with lighter regimes, such as Japan's voluntary governance approach.

Core features of the Act (as indicated so far)

  • National AI Committee: Central oversight and coordination.
  • Three-year basic AI plan: Government-led planning cycle to set priorities and standards.
  • Safety and transparency requirements: Baseline obligations likely requiring process controls and documentation.
  • Disclosure duties for certain AI systems: Scope to be clarified in the enforcement decree.
  • Mandatory watermarking/labeling: AI-generated content must be labeled; implementation details and exceptions remain to be specified.
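Until the decree specifies formats, teams can prototype an interim labeling step so that switching to the mandated format later is a one-line change. The sketch below is an illustration only: the visible label text, metadata field names, and placement are assumptions, not requirements from the Act.

```python
# Interim labeling sketch for AI-generated text. The label string and
# metadata schema are placeholders; the enforcement decree will define
# the actual required format and any exemptions.
import json
from datetime import datetime, timezone

AI_DISCLOSURE = "[AI-generated content]"  # hypothetical visible label

def label_ai_output(text: str, model_id: str) -> dict:
    """Wrap generated text with a visible label and machine-readable metadata."""
    return {
        "display_text": f"{AI_DISCLOSURE} {text}",
        "metadata": {
            "ai_generated": True,          # machine-readable flag
            "model_id": model_id,          # which system produced the output
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_output("Quarterly summary draft...", model_id="internal-llm-v2")
print(json.dumps(record["metadata"], indent=2))
```

Keeping the label constant and the metadata schema in one place means a decree-mandated format (or an exemption) can be applied by editing a single function rather than every product surface.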

Practical risks to watch

  • Labeling ambiguity: Unclear thresholds and formats could create inconsistent practices and consumer confusion.
  • Market reaction: Labels may depress engagement even when human teams refine outputs.
  • Operational shock: Late-finalized rules can force abrupt product changes or pauses.
  • Multi-jurisdiction friction: Divergent approaches across Korea, the EU, and Japan complicate uniform rollout and vendor management.

30-day readiness checklist

  • Inventory your AI: List all models, APIs, and features that generate or materially shape content or decisions. Note user exposure and business criticality.
  • Map disclosure and labeling exposure: Identify user-facing outputs that may require tags or watermarks. Draft an interim labeling standard and escalation path.
  • Baseline safety controls: Document what you already do (testing, human review, abuse prevention, data controls). Flag gaps tied to transparency and safety.
  • Policy and notice updates: Prepare language for product UIs, ToS, and privacy notices to reflect disclosures and labeling without alarming users.
  • Vendor and model contracts: Add clauses covering watermarking support, logging, transparency artifacts, and change-notice obligations.
  • Governance quick-start: Stand up a cross-functional AI review group (legal, security, product). Define intake, documentation templates, and sign-off gates.
  • Change plans: Pre-draft kill switches and fallbacks for features most likely to be affected by late-breaking decree language.
  • Evidence file: Keep a dated record of decisions, risk rationales, and user communications. It will matter if regulators ask how you complied under time pressure.
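The inventory and exposure-mapping steps above can start as something as simple as a structured record per AI feature. This is a sketch under assumptions: the field names are illustrative choices, not a mandated schema.

```python
# Lightweight AI inventory record, per the checklist above.
# Field names are illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryItem:
    name: str                   # model, API, or feature
    vendor: str                 # "internal" or third-party supplier
    user_facing: bool           # does output reach end users?
    generates_content: bool     # may trigger labeling duties
    business_criticality: str   # e.g. "high" / "medium" / "low"
    notes: str = ""
    reviewed_on: date = field(default_factory=date.today)

inventory = [
    AIInventoryItem("chat-assist", "internal", True, True, "high",
                    notes="User-facing text generation; likely labeling exposure"),
    AIInventoryItem("fraud-scoring", "acme-ml", False, False, "medium",
                    notes="B2B decision support; monitor disclosure scope"),
]

# First-pass filter for features most likely in scope for labeling
labeling_exposure = [i.name for i in inventory
                     if i.user_facing and i.generates_content]
```

A dated export of this list doubles as the start of the evidence file: it records what you knew, when, and how you triaged it.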

Open questions to monitor as the decree lands

  • Which systems are "in scope" for disclosure duties, and what triggers apply?
  • Accepted standards for watermarking (format, durability, removal resistance) and any exemptions.
  • Consumer-facing versus B2B obligations, and treatment of open-source or third-party models.
  • Enforcement model: supervisory body roles, penalties, cure periods, and audit expectations.
  • Transitional relief or grace periods that could reduce immediate disruption.

Context: EU and Japan

The EU is phasing in obligations from August 2026, with certain pieces stretching to 2027, reflecting pressure from industry and global competition. Japan continues to favor voluntary governance, which is drawing interest from Korean startups that fear strict labeling and tight timelines.

Board-level guidance

  • Set a budget and owner for day-one compliance, with weekly checkpoints until the decree is final.
  • Scenario-plan for feature suspension in high-risk surfaces; socialize the trade-offs with marketing and customer support.
  • Assess go-to-market sequencing across Korea, the EU, and Japan to avoid fragmented product states and create a clear audit trail.
  • Stay close to industry groups; shared interpretations and templates will save time once the decree is public.

The National Assembly passed the framework act on Nov. 26, 2024. With the effective date fixed, the legal edge goes to teams that get their inventory, labeling policy, and documentation in order before the decree drops.

If your team needs structured upskilling on AI risk, tooling, and documentation practices, see role-based learning paths here: Complete AI Training - Courses by Job.
