Korea's AI Framework Act Takes Effect Jan 22, 2026: What Dev and Legal Teams Need to Do Now
Korea will enforce a comprehensive AI regulatory framework next month, making it the first country to put a broad AI law into effect. The act establishes a national AI committee, mandates a basic three-year AI plan, and introduces safety and transparency requirements, including disclosure rules for certain AI systems.
Concern is rising across startups and SMEs. The enforcement decree is expected to land close to the effective date, leaving little time to adjust. A recent survey showed 98% of 101 local AI startups have no response system in place. Nearly half said they're unfamiliar with the law, and another half said they're aware but still unprepared.
What's in the law (at a glance)
- National AI Committee to steer policy and oversight.
- Basic three-year AI plan to set priorities and direction.
- Safety and transparency obligations, including disclosure for some AI systems.
- Mandatory watermarking/labeling for AI-generated content under discussion.
- Global context: The EU passed its AI Act but plans to apply most rules starting August, with some parts potentially delayed to 2027. See the EU's overview here.
Why the timeline is tight
The final enforcement decree may be published shortly before Jan. 22. That compresses implementation into weeks. Some companies may need to pause or change services at the last minute if they can't meet labeling or disclosure obligations on day one.
There's also a competitive angle. More Korean AI startups are eyeing Japan's voluntary governance approach, which many see as lighter-touch. For reference, Japan's guidance is public via METI's AI governance materials here.
Immediate prep for engineering, product, and security
- Inventory AI use: list all models, APIs, third-party services, and user-facing AI features. Flag which generate content or decisions that affect users.
- Labeling/watermarking: add feature flags for "AI-generated" markers. Evaluate content provenance standards (e.g., cryptographic signing) so you can switch methods if rules clarify.
- Disclosure-ready UX: prepare short, plain-language notices for AI features (what it does, limitations, human oversight, data use).
- Traceability: log model/version, inputs, prompts, system settings, training data sources (at a high level), and output decisions where feasible.
- Safety testing: run red-teaming and abuse tests for prompt injection, harmful output, privacy leakage, and bias. Document test coverage and mitigations.
- Guardrails: rate limits, content filters, feedback/report mechanisms, and kill switches for risky features.
- Data governance: map personal/sensitive data flows, retention, and deletion. Confirm licensing/consents for training and fine-tuning datasets.
- Vendor posture: collect model cards, eval summaries, and security attestations from providers. Bake disclosure requirements into contracts.
Immediate prep for legal, compliance, and policy
- Accountability: name an internal owner for AI compliance and a cross-functional working group (legal, security, product, infra).
- Monitor the decree: set alerts for the final enforcement decree and any FAQs/guidance from relevant ministries. Plan a same-day impact review.
- Draft disclosures: prepare templates for system-level transparency statements and user-facing notices so teams aren't writing them under deadline.
- Terms and privacy: update ToS/Privacy Policy to reflect AI features, data use, and user choices. Add content labeling language where relevant.
- Risk screen: rate features by potential harm/impact and document mitigations. Prioritize changes for high-impact systems first.
- Contingency plans: define criteria to pause or geofence features if requirements land in a stricter form than expected.
- Portability: where practical, align with elements of the EU AI Act approach (risk documentation, testing, transparency) to reduce rework across markets.
Watermarking: the sticking point
Mandatory watermarking is the hot-button issue. Labels may deter users even when many people contribute to AI-assisted content. Expect edge cases: partial AI edits, mixed human+AI workflows, screenshots, transcodes, or formats that strip metadata.
- Define "AI-generated" for your products (thresholds, transformations, human-in-the-loop).
- Decide on visible labels vs. embedded signals, and prepare fallback methods if metadata is removed.
- Document exceptions and escalation paths for ambiguous cases.
Open questions to track
- Will there be grace periods or a sandbox for smaller firms?
- Exact scope of "disclosure obligations" by system type and risk.
- Accepted watermarking methods and verification standards.
- How the national AI committee will prioritize oversight and guidance.
30-day action plan (lightweight)
- Week 1: System inventory, owner assigned, decree monitoring set up.
- Week 2: Draft disclosures and labeling policy; vendor document requests sent.
- Week 3: Implement flags for labels, logging, and a basic safety test suite.
- Week 4: Policy/ToS updates, contingency criteria, go/no-go on high-risk features.
What this means for strategy
- Ship fewer features, with better documentation and clearer controls.
- Treat transparency and safety checks as product requirements, not nice-to-haves.
- Design for portability: one compliance spine, multiple jurisdictions.
Helpful resources
- EU AI Act overview (for cross-market alignment): European Commission
- Japan's voluntary approach (context): METI AI Governance
Skill up your team
If you need focused upskilling for product, engineering, or legal teams working with AI, explore curated training paths by job role here: Complete AI Training.
Your membership also unlocks: