World's First AI Act Enforcement: Expectations, Concerns, and What Counsel Must Do Now
On January 22, 2026, the Basic Act on the Development of Artificial Intelligence and the Creation of a Trust Foundation took effect. It is the world's second comprehensive AI statute, after the EU AI Act, but the first to be fully enforced. The law blends promotion with regulation, yet the market is fixated on the teeth: investigations, suspension orders, corrective measures, and administrative penalties.
The central tension is scope. Obligations center on transparency (Article 31), safety (Article 32), and heightened responsibilities for high-impact AI operators (Article 33). High-impact operators also face a voluntary (but practically expected) assessment of effects on basic rights (Article 35), plus the cost of staffing, governance, and controls.
Extraterritorial reach
The Act applies to foreign activities that affect Korea (Article 4(1)). If you provide AI-enabled services into the Korean market, you may be in scope even without a local entity.
Operator vs. user: draw this line first
"AI operators" are regulated entities; "users" generally are not. A company that buys an off-the-shelf AI interview tool and uses it in hiring is a user. A company that customizes a video/voice model into an interview system and offers it as a service is an operator, and should continue through the analysis below.
The 10 high-impact areas
Use in one of these areas is a threshold condition for "high impact." Outside these, you are likely out of the high-impact regime:
- Energy
- Drinking water
- Healthcare
- Medical devices
- Nuclear materials and facilities
- Criminal investigations
- Recruitment and loan screening
- Transportation
- Facilities and systems affecting essential services
- Public services tied to qualification verification and cost collection
Right now, most questions cluster around loan underwriting, corporate hiring, and digital medical devices. These are AI-ready and already tightly regulated, so they draw attention.
What makes an AI system "high-impact"
Even within those areas, a system must be capable of significantly affecting a person's life, body, or basic rights (Article 2(4)). Regulators published a 200+ page High Impact AI Judgment Guideline with area-specific flowcharts to help you self-assess.
The core signal is control. If AI decisions can be controlled and corrected through human intervention, the system is less likely to be "high impact." Fully automated decisions, with no meaningful human in the loop, push you into high-impact territory.
Examples from the guideline: fully autonomous vehicles are treated as high-impact by default, as are Grade 4 high-risk medical devices. Lower device grades depend on whether clinicians can intervene and whether AI errors create serious risks to life or health.
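The two-step screen described above (is the system in one of the 10 areas, and can it significantly affect life, body, or basic rights without meaningful human control?) can be sketched as a simple triage function. This is a minimal illustration, not the regulator's methodology; the field names, area labels, and outcome strings are my own assumptions.

```python
from dataclasses import dataclass

# Paraphrased labels for the 10 high-impact areas listed above (illustrative).
HIGH_IMPACT_AREAS = {
    "energy", "drinking_water", "healthcare", "medical_devices",
    "nuclear", "criminal_investigation", "recruitment_or_lending",
    "transportation", "essential_services", "public_qualification_services",
}

@dataclass
class AISystem:
    area: str                     # domain the system operates in
    affects_life_or_rights: bool  # can it significantly affect life, body, or basic rights? (cf. Art. 2(4))
    human_can_intervene: bool     # is there meaningful human control and correction?

def assess_high_impact(system: AISystem) -> str:
    """Illustrative triage mirroring the two-step screen in the text."""
    if system.area not in HIGH_IMPACT_AREAS:
        return "out of high-impact regime"
    if not system.affects_life_or_rights:
        return "in-area but likely not high-impact"
    if system.human_can_intervene:
        return "borderline: document human oversight"
    return "likely high-impact: full Art. 33/35 program"

# Example: a fully automated loan-screening model
print(assess_high_impact(AISystem("recruitment_or_lending", True, False)))
# → likely high-impact: full Art. 33/35 program
```

The point of the sketch is the ordering: area membership is only a threshold condition, and meaningful human intervention is what pulls an in-area system back from the high-impact default.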
Enforcement posture to expect
If violations are found, or complaints land, regulators can conduct administrative investigations. They can suspend the relevant activities, order corrective measures, and impose administrative penalties. With no global precedent, expect conservative scrutiny, especially where outcomes affect people's rights or safety.
Counsel's quick checklist
- Role mapping: For each AI use, decide if you are an operator or a user. Note any Korea-facing exposure, including cross-border service delivery.
- Area triage: Inventory systems that touch the 10 areas. Flag fully or largely automated decisions.
- Human-in-the-loop by design: Where feasible, place final decisions with humans. Build documented override and review mechanisms.
- Transparency (Art. 31): Prepare purpose statements, data sources, known limitations, and user-facing notices. Keep them current.
- Safety (Art. 32): Run structured risk assessments, testing, monitoring, and incident response. Track model changes and data shifts.
- High-impact governance (Art. 33): If you may be high-impact, assign accountable personnel, set up oversight committees, enforce logging, QA, and vendor controls.
- Basic rights assessment (Art. 35): For potential high-impact systems, plan and document an impact assessment methodology and remediation steps.
- Investigations playbook: Define intake and response for complaints, evidence preservation, regulator communications, and corrective actions.
- Contracts: Flow down obligations to vendors and integrators. Secure audit rights, incident notice, and model change disclosures.
- Hot zones: Give extra attention to lending, hiring, and digital medical device pipelines, since these are already in the spotlight.
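The first two checklist items (role mapping and area triage) amount to an inventory pass over every AI use case. A minimal sketch of that record and filter follows; the field names and example entries are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    role: str                 # "operator" or "user" (draw this line first)
    korea_facing: bool        # any Korea-facing exposure, incl. cross-border delivery
    areas: list[str] = field(default_factory=list)  # which of the 10 areas it touches
    fully_automated: bool = False                   # no meaningful human in the loop

def triage(inventory: list[AIUseCase]) -> list[str]:
    """Flag use cases that warrant the deeper high-impact analysis."""
    return [
        u.name for u in inventory
        if u.role == "operator" and u.korea_facing
        and u.areas and u.fully_automated
    ]

inventory = [
    AIUseCase("resume screener", role="user", korea_facing=True,
              areas=["recruitment"], fully_automated=True),
    AIUseCase("credit model", role="operator", korea_facing=True,
              areas=["loan_screening"], fully_automated=True),
]
print(triage(inventory))  # → ['credit model']
```

Note that the off-the-shelf resume screener drops out at the role-mapping step: per the operator/user distinction above, a mere user is generally outside the regulated perimeter even in a hot-zone area.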
Your safest lever: human decision-making
Designing systems so humans make the final call cuts a large slice of regulatory risk. That's the clear throughline in the guideline. Keep oversight real, document it, and make intervention possible at critical moments.
Context and further reading
For comparison points, see the EU AI Act. If you're building compliance capability and need structured training, consider AI for Regulatory Affairs Specialists.