Korea's AI Basic Act Takes Effect with a Light Touch and a Yearlong Grace Period

Korea's AI Basic Act takes effect Jan 22 with a light-touch first phase: label AI content, give prior notice for high-impact uses, and set a 30-day review flow. Start now.

Published on: Dec 25, 2025

AI Basic Act to Take Effect Jan 22: What Government Leaders Need to Do Now

On December 24, the Ministry of Science and ICT reaffirmed a minimum-regulation approach for Korea's AI Basic Act, which comes into force on January 22. The focus is clear: support AI growth while applying only essential guardrails during the law's first phase.

The law rests on three core elements: measures to foster the AI industry, a prior-notice obligation for high-impact AI that affects fundamental rights, and source labeling for AI-generated content. Implementation will start cautiously, with room to adjust as global policy evolves.

Grace Period: Caution First, Enforcement Later

The government emphasized a grace period of at least one year, with the option to extend. During this time, fact-finding investigations will be rare and reserved for severe cases, such as loss of life or serious human rights violations.

This stance aligns with signals from abroad: the EU is weighing a delay of parts of its AI Act timeline to December 2027 amid competitiveness concerns. For current context, see the EU institutions' AI Act overview.

High-Impact AI: Narrow Scope, Clear Timelines

The scope of "high-impact AI" will be set conservatively and revisited as the technology evolves. Agencies will have 30 days to determine whether a system meets the "high-impact" threshold, with one possible extension to ease the administrative load.

Expect more clarity from the enforcement decree and detailed guidelines before the law takes effect. A temporary AI Safety and Trust Support Desk will answer interpretation and compliance questions through an online FAQ rather than a physical office.

What Public-Sector Organizations Should Do Before Jan 22

  • Inventory AI systems. List every tool, model, and vendor product that influences decisions, services, or public communications. Flag anything that could affect fundamental rights (welfare eligibility, recruitment, policing, healthcare triage, credit-like scoring, student assessment).
  • Draft a prior-notice template. Keep it short, consistent, and ready for systems that could fall under "high-impact." Include purpose, context of use, oversight, and contact point.
  • Set an AI-generated content label. Decide wording, placement, and exceptions. Update CMS templates, email footers, and report formats so teams don't guess per case (see the labeling sketch after this list).
  • Name accountable roles. Assign a responsible officer, create a small review group, and designate a point person to interface with the Support Desk.
  • Document data and model provenance. Keep version histories, training data sources where applicable, and change logs. This reduces the scramble when questions arise.
  • Establish a 30-day "high-impact" assessment flow. Define who triggers it, who reviews, what evidence is required, and how to use the one-time extension if needed (a deadline-tracking sketch follows this list).
  • Update vendor clauses. Require disclosure of AI use, cooperation on high-impact assessments, labeling support, and timely responses aligned with the 30-day clock.
  • Prepare a serious-harm playbook. Specify thresholds for escalation (e.g., risk to life, severe rights violations), response steps, and reporting lines.
  • Brief managers and frontline staff. Keep it simple: what must be labeled, when to file prior notice, how to escalate incidents, and where to ask questions.
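
To make the labeling item concrete, here is a minimal sketch of a shared label helper. The `label_ai_content` function and both label strings are hypothetical: the Act does not prescribe specific wording, so treat every name and string below as illustrative.

```python
# Minimal sketch of a centralized AI-content label helper (hypothetical).
# Keeping the approved wording in one place means CMS templates, email
# footers, and reports stay consistent instead of improvising per case.

AI_LABEL_KO = "이 콘텐츠는 인공지능으로 생성되었습니다."  # example wording only
AI_LABEL_EN = "This content was generated with AI."        # example wording only

def label_ai_content(text: str, lang: str = "ko") -> str:
    """Append the organization's approved AI-generation label to content."""
    label = AI_LABEL_KO if lang == "ko" else AI_LABEL_EN
    return f"{text}\n\n[{label}]"

print(label_ai_content("Draft welfare notice for 2026...", lang="en"))
```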
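
Likewise, the 30-day assessment flow benefits from an explicit deadline tracker. Below is a minimal sketch assuming calendar days and an illustrative 30-day extension; the enforcement decree may count days differently and set a different extension length, so the constants and class here are assumptions, not the statutory rule.

```python
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_DAYS = 30      # statutory review window (calendar days assumed)
EXTENSION_DAYS = 30   # illustrative; the decree may set a different length

@dataclass
class HighImpactAssessment:
    system_name: str
    opened: date
    extended: bool = False  # only one extension is available

    @property
    def due(self) -> date:
        extra = EXTENSION_DAYS if self.extended else 0
        return self.opened + timedelta(days=REVIEW_DAYS + extra)

    def extend_once(self) -> None:
        if self.extended:
            raise ValueError("The one-time extension has already been used.")
        self.extended = True

# Example: a system flagged on the day the law takes effect.
case = HighImpactAssessment("welfare-eligibility-scorer", date(2026, 1, 22))
print(case.due)   # 2026-02-21 without an extension
case.extend_once()
print(case.due)   # 2026-03-23 after the single extension
```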

How to Use the Grace Period Wisely

  • Focus on systems closest to citizens and rights. Start with programs that influence eligibility, safety, or freedom of movement.
  • Pilot your labeling and notice process on one service. Fix friction points before you roll out widely.
  • Run a monthly review. Track open assessments, vendor responses, and any citizen feedback tied to AI use.
  • Watch international shifts. If EU timelines or definitions move, expect adjustments locally. Keep your process flexible.

Support and Next Steps

Final enforcement details and guidance will be issued before January 22. The Support Desk will provide centralized answers and a public FAQ to reduce uncertainty for on-the-ground teams.

If your team needs structured upskilling for AI policy, governance, and practical use, consider curated resources for specific roles, such as AI courses by job function.

Quick Checklist

  • Have you identified systems that may affect fundamental rights?
  • Do you have an approved label for AI-generated content across all channels?
  • Is your 30-day "high-impact" assessment workflow defined and staffed?
  • Are contract terms with vendors aligned to disclosure and assessment needs?
  • Do teams know when and how to escalate severe harm cases?
  • Have you designated a point of contact for the AI Safety and Trust Support Desk?

The policy intent is straightforward: keep innovation moving while ensuring clear responsibility where it matters most. Use this window to set light, durable processes that your teams can follow without confusion.

