South Korea's AI Basic Act takes effect January 22: legal teams face an unclear playbook
South Korea's AI Basic Act goes live next month, and the industry says it's not ready. Startups and large firms alike point to vague definitions and unclear thresholds, especially around "high-impact AI." A one-year suspension of fines is on deck, but companies argue it doesn't fix the uncertainty that's stalling product launches and compliance planning.
The result: legal and compliance teams are being asked to build a program without a final blueprint. With less than a month to go, the immediate need is structure, triage, and documented judgment calls.
What "high-impact AI" likely covers
The law flags systems that can materially affect life, safety, or fundamental rights. Examples include energy supply, the use of biometric data in criminal investigations, and other services tied to physical safety or core civil liberties. Providers in scope will face pre-assessment duties, ongoing risk management, and disclosure when content is AI-generated.
The catch: firms must self-assess whether they fall into "high-impact," but the categories and thresholds remain broad. That ambiguity compounds risk for teams trying to set up a defensible program in weeks, not quarters.
Startups: the highest exposure with the least capacity
Early-stage companies say the law's bar sits well above "general AI," pushing them to avoid sensitive sectors. Health care and education, where many startups operate, can slide into high-impact territory quickly. In a recent survey, only about 2% of startups had concrete response plans.
For counsel, that means triage first: identify where product features cross into safety, rights, or biometric territory, and lock down a risk file before expansion.
Large firms: Korea-only frameworks and paused launches
Enterprises are bracing for Korea-specific compliance stacks, separate from global programs. That fragmentation raises cost and can delay releases. Several firms are pausing launches in-country until definitions and enforcement posture firm up.
Labeling AI-generated content is another friction point. The question isn't whether labels exist but whether they protect users in practice, what "clear disclosure" looks like at scale, and how to handle content that blends human and AI-generated material.
Grace period: fines paused, uncertainty isn't
The government plans to suspend fines under the Act for a year to soften the landing. Industry response: helpful, but not enough. The risk of complaints, investigations, and operational drag still looms, and the opportunity cost of delaying products is real.
30-60 day action plan for legal and compliance
- Inventory your AI systems and features. Catalog models, data sources, use cases, affected users, and decision impact. Flag anything touching safety, rights, or biometric data (an inventory sketch follows this list).
- Run a fast-track classification. Draft criteria for "high-impact" based on the Act's examples and internal risk thresholds, and crosswalk where possible with the EU AI Act to reuse controls and evidence across regimes (reference: EU AI Act overview). A rough triage pass is included in the sketch below.
- Stand up a risk management plan. Define human oversight, safe-ops defaults, incident response, red-teaming cadence, and rollback conditions. Document assumptions and residual risk.
- Labeling and disclosure. Decide where and how disclosures appear (UI, watermarking, metadata, file headers). Create an SOP for generated text, images, video, and mixed content (a labeling sketch follows this list).
- Data and rights. For biometric or sensitive data, lock down legal bases, necessity, proportionality, retention, and audit logs. Coordinate with privacy leads to avoid conflicts with sector laws.
- Third-party controls. Add contract clauses for provenance, model change notices, audit rights, and incident timelines. Keep a vendor register with model versions and dependencies.
- Governance and accountability. Appoint an accountable owner, set approval gates for launches, and require a "go/no-go" memo for high-impact features. Maintain a central record of assessments.
- Geofence and sandbox. If scope is unclear, geofence features, run limited pilots, and collect safety metrics during the grace period (see the geofence sketch below).
- Board and regulator engagement. Brief leadership on exposure, timelines, and trade-offs. Prepare Q&A for likely regulator asks.
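To make the first two items concrete, here is a minimal sketch of an inventory record plus a rough triage pass. Every field name, threshold, and label below is an illustrative assumption, not language from the Act; counsel still makes the final classification call.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative fields, not statutory terms)."""
    name: str
    owner: str                      # accountable team or person
    use_case: str                   # e.g. "resume screening", "chat support"
    data_sources: list[str] = field(default_factory=list)
    affected_users: str = ""        # e.g. "job applicants in KR"
    touches_safety: bool = False    # physical-safety impact
    touches_rights: bool = False    # fundamental-rights impact (employment, credit, etc.)
    uses_biometrics: bool = False   # biometric identification or analysis

def fast_track_triage(rec: AISystemRecord) -> str:
    """Rough first-pass classification; legal makes the final call."""
    if rec.uses_biometrics or rec.touches_safety:
        return "LIKELY HIGH-IMPACT: full pre-assessment before launch"
    if rec.touches_rights:
        return "REVIEW: legal sign-off and documented rationale required"
    return "LOW: standard controls, re-check on feature changes"

# Example: a health-adjacent chatbot flagged for review
rec = AISystemRecord(
    name="symptom-helper",
    owner="product-health",
    use_case="triage suggestions for patients",
    data_sources=["user chat logs"],
    affected_users="patients in KR",
    touches_rights=True,
)
print(fast_track_triage(rec))
```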
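For the labeling SOP, one pragmatic pattern is to attach the disclosure at generation time in both human-readable and machine-readable form, so every output path carries it. The wording and metadata keys below are placeholders pending official guidance, not a sanctioned format.

```python
from datetime import datetime, timezone

DISCLOSURE_TEXT = "This content was generated by AI."  # placeholder wording

def label_output(text: str, model_id: str) -> dict:
    """Wrap generated text with a visible notice and machine-readable metadata."""
    return {
        "content": text,
        "disclosure": DISCLOSURE_TEXT,           # shown in the UI
        "metadata": {                            # carried through APIs and exports
            "ai_generated": True,
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labeled = label_output("Sample answer...", model_id="gen-model-v3")
print(labeled["disclosure"])
```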
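Geofencing can start as a country-aware gate in front of anything that might be high-impact. The feature names below are hypothetical; a real deployment would read region and flags from your own infrastructure.

```python
# Hypothetical feature gate: hold possibly high-impact features in KR
# until classification criteria are settled.
HELD_IN_KR = {"biometric-login", "auto-triage"}

def feature_enabled(feature: str, country_code: str) -> bool:
    """Disable flagged features for KR users during the grace period."""
    if country_code == "KR" and feature in HELD_IN_KR:
        return False
    return True

assert feature_enabled("biometric-login", "KR") is False
assert feature_enabled("biometric-login", "US") is True
```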
Key questions to resolve with regulators
- What are the operative thresholds for "major risk to life, safety, or fundamental rights" in common scenarios?
- Does "high-impact" attach to the model, the use case, or both? How are general-purpose models treated when embedded in high-impact services?
- What satisfies "clear labeling" for multimodal outputs, API responses, and user-shared content?
- How does this interact with sector rules (medical devices, energy, law enforcement data)? Which law governs conflicts?
- During the fine suspension, what enforcement tools will still be used (inspections, corrective orders, complaint handling)?
Documentation to start now
- System cards/model cards with purpose, data, risks, and mitigations (a template sketch follows this list)
- Risk assessments for safety, bias, and rights impact
- Data lineage, training/evaluation datasets, and consent records where applicable
- Test plans, red-team reports, and known limitations
- Labeling and disclosure SOPs with UI examples
- Incident response playbooks and escalation matrix
- Vendor register, contracts, and audit evidence
- Training records for engineers, product, and support staff
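A system card can begin as a simple structured record that engineering completes per release. The schema below is an illustrative minimum, not an official template; every value is a fictional placeholder.

```python
# Illustrative system card; all values are fictional placeholders.
system_card = {
    "system": "symptom-helper",
    "version": "2.1.0",
    "purpose": "Suggest next steps for common symptoms; not a diagnosis.",
    "training_data": ["licensed medical Q&A corpus", "synthetic dialogues"],
    "known_risks": ["over-reliance by users", "outdated clinical guidance"],
    "mitigations": ["visible AI disclosure", "escalation to human support"],
    "evaluation": {"last_red_team": "2025-12-15", "open_findings": 2},
    "owner": "product-health",
}

print(system_card["purpose"])
```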
Strategy: reduce uncertainty while you wait for guidance
- Align your control set with the EU AI Act where feasible to avoid duplicate work later.
- Sequence releases: ship low-risk features first; hold high-impact until criteria are clearer.
- Build "off switches" and monitoring so you can pivot without a full rollback.
- Join industry groups to push for concrete definitions and practical labeling standards.
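An "off switch" in practice is a remotely togglable kill flag checked on every request, failing closed when in doubt. A minimal sketch, assuming a shared flag store that a module-level dict stands in for here:

```python
import logging

log = logging.getLogger("ai-controls")

# Stand-in for a shared flag store (config service, feature-flag system, etc.).
KILL_SWITCHES = {"symptom-helper": False}

def run_model(prompt: str) -> str:
    """Placeholder for the real model call."""
    return f"(model output for: {prompt})"

def generate_with_kill_switch(feature: str, prompt: str) -> str:
    """Refuse AI generation when the feature's kill switch is on."""
    if KILL_SWITCHES.get(feature, True):  # unknown feature -> fail closed
        log.warning("kill switch active for %s; serving fallback", feature)
        return "This feature is temporarily unavailable."
    return run_model(prompt)

print(generate_with_kill_switch("symptom-helper", "summarize my claim"))
```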
Upskill your team
If your engineers and product owners need a shared baseline on AI risk and governance, consider short, role-specific training to cut rework and improve documentation quality. A curated option by role is available here: Complete AI Training - courses by job.
The headline hasn't changed: the Act arrives January 22, the fines can wait, and ambiguity remains. Legal teams that move first on classification, documentation, and labeling will buy their companies time and options when enforcement starts in earnest.