South Korea enacts first comprehensive AI safety law: what legal teams need to know
South Korea has passed the "Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trust" (AI Basic Act), and it has taken effect. The law establishes duties for companies and developers to curb deepfakes and disinformation and gives the government clear enforcement powers.
For counsel advising on AI products or distribution into Korea, this is now a live compliance regime with new disclosures, watermarking, and local representation requirements for qualifying providers.
Core obligations at a glance
- Accountability for synthetic content: Companies and AI developers are responsible for preventing and responding to deepfakes and disinformation created with their systems.
- Enforcement authority: The government may impose fines and open investigations for violations.
- High-risk AI: Systems that generate content likely to significantly affect people's lives or safety require user warnings, and companies are responsible for safety measures.
- Watermarking: All AI-generated content must be watermarked. The Ministry of Science and ICT describes these as baseline safeguards that may evolve (an illustrative labeling sketch follows this list).
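The Act requires watermarking but does not prescribe a particular technique, and detailed standards are still expected (see "What to watch next" below). The sketch that follows is one minimal, illustrative approach for text outputs, assuming a provider-held signing key: prepend a human-readable disclosure and attach a signed provenance manifest that can later be verified. The function names, key handling, and manifest fields are hypothetical, not requirements from the Act.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the provider; in practice this would live
# in a key-management system, not in source code.
PROVENANCE_KEY = b"replace-with-managed-secret"

def label_generated_text(text: str, model_id: str) -> dict:
    """Attach a human-readable disclosure and a signed provenance manifest
    to AI-generated text. Illustrative only; the Act does not mandate this format."""
    disclosure = "[Notice: this content was generated by an AI system.]"
    manifest = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["signature"] = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": f"{disclosure}\n{text}", "provenance": manifest}

def verify_provenance(original_text: str, manifest: dict) -> bool:
    """Check that the manifest matches the content (without the disclosure line)
    and was signed with our key."""
    expected_hash = hashlib.sha256(original_text.encode("utf-8")).hexdigest()
    if manifest.get("content_sha256") != expected_hash:
        return False
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected_sig = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected_sig)
```

For images, audio, and video, providers more commonly rely on content-credential standards such as C2PA or on invisible watermarking schemes. Whichever method is used, documenting the verification path supports the evidence obligations in the action list further down.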
Local representative requirement for large providers
- Who must appoint: International AI companies with annual revenue of at least 1 trillion won (about $681 million), sales in Korea of at least 10 billion won, and at least 1 million daily users in Korea.
- Current scope: The thresholds currently capture only Google and OpenAI.
- Sanctions: Failure to appoint a local representative can lead to fines of up to 30 million won.
What "high-risk AI" means
The Act introduces "high-risk AI" as models or services that generate content with the potential to materially affect people's lives or safety. Users must be warned, and providers remain responsible for safety practices around deployment and use.
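How the warning is surfaced is left to providers. A common product pattern, sketched below under the assumption of a simple application flow, is to gate the high-risk feature behind an explicit warning and to record each acknowledgment so it can be produced during an inquiry. The function names, warning text, and log format are illustrative, not taken from the Act.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical acknowledgment log; in production this would be a durable,
# access-controlled audit store rather than a local file.
ACK_LOG = Path("high_risk_ack_log.jsonl")

HIGH_RISK_WARNING = (
    "This feature uses an AI system classified as high-risk. "
    "Outputs may affect safety-relevant decisions; review them before acting."
)

def record_acknowledgment(user_id: str, feature_id: str) -> str:
    """Append an acknowledgment record for the high-risk warning and return its ID."""
    ack = {
        "ack_id": str(uuid.uuid4()),
        "user_id": user_id,
        "feature_id": feature_id,
        "warning_text": HIGH_RISK_WARNING,
        "acknowledged_at": datetime.now(timezone.utc).isoformat(),
    }
    with ACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(ack) + "\n")
    return ack["ack_id"]

def run_high_risk_feature(user_id: str, feature_id: str, user_accepted: bool) -> dict:
    """Only invoke the high-risk feature after the warning has been accepted."""
    if not user_accepted:
        raise PermissionError("User has not acknowledged the high-risk AI warning.")
    ack_id = record_acknowledgment(user_id, feature_id)
    # ... call the underlying model here ...
    return {"ack_id": ack_id, "status": "feature_invoked"}
```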
Action list for legal and compliance teams
- Map AI systems and features deployed in Korea; flag any that could affect life, health, or safety.
- Implement watermarking for all AI-generated outputs, and keep evidence that marks were applied and can be detected.
- Draft and surface clear user warnings for high-risk use cases; log consent and acknowledgments where appropriate.
- Update incident response to address deepfake/disinformation misuse, including takedown and notification pathways.
- Review vendor and distribution contracts to allocate duties for watermarking, disclosures, and cooperation in investigations.
- Assess whether your organization meets the local-representative thresholds; if it does, appoint a representative and document their authority and contact points (a rough self-check sketch follows this list).
- Establish recordkeeping to demonstrate compliance and readiness for government inquiries.
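As a starting point for the threshold assessment above, the sketch below encodes the figures as this article summarizes them (all three criteria met, applying to international providers). It is a rough self-check only; confirm the exact criteria, how they combine, and the measurement periods against the Act and its enforcement rules before relying on it.

```python
from dataclasses import dataclass

# Thresholds as summarized in this article; verify against the Act and its
# enforcement decree before relying on them.
ANNUAL_REVENUE_THRESHOLD_KRW = 1_000_000_000_000   # 1 trillion won
KOREAN_SALES_THRESHOLD_KRW = 10_000_000_000        # 10 billion won
DAILY_KOREAN_USERS_THRESHOLD = 1_000_000

@dataclass
class OrgProfile:
    annual_revenue_krw: int
    korean_sales_krw: int
    daily_korean_users: int
    has_korean_establishment: bool  # the rule targets international providers

def needs_local_representative(org: OrgProfile) -> bool:
    """Rough self-assessment against the thresholds summarized above (all three met)."""
    if org.has_korean_establishment:
        return False
    return (
        org.annual_revenue_krw >= ANNUAL_REVENUE_THRESHOLD_KRW
        and org.korean_sales_krw >= KOREAN_SALES_THRESHOLD_KRW
        and org.daily_korean_users >= DAILY_KOREAN_USERS_THRESHOLD
    )
```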
Government policy cycle and support
The Act also includes measures to promote AI industry development. The Minister of Science and ICT must present an AI policy plan every three years, signaling ongoing updates to guidance and support programs.
What to watch next
- Secondary rules or technical standards on watermarking and provenance.
- Clarifications on what qualifies as "high-risk" across sectors.
- Early enforcement actions and investigation procedures.
For official updates and notices, monitor the Ministry of Science and ICT. For teams building internal capability in AI governance and compliance, see curated learning paths by role at Complete AI Training.