Government to Crack Down on AI-Generated Fake Doctors in Medical Ads
The government announced new measures to stop false and exaggerated advertising that uses AI and related technologies. The plan was adopted at the 7th National Policy Coordination Meeting, chaired by Prime Minister Kim Min-seok.
Officials cited the growing use of AI-generated experts and deepfakes in food and pharmaceutical advertising, noting the risks such ads pose to older adults and to market order. Because these ads spread quickly across social platforms, the government will act to block their creation and distribution and impose stronger penalties on violators.
What's in the plan
- Mandatory AI content labeling (from January 2026): Anyone who produces, edits, or posts AI-generated photos or videos must label them as AI-generated. Users are prohibited from removing or altering labels. Platforms must verify that providers meet labeling obligations. The Ministry of Science and ICT will issue implementation guidelines to support compliance and transparency.
- 24-hour review for high-risk sectors: Ads in food, pharmaceuticals, cosmetics, quasi-drugs, and medical devices will be subject to written review within 24 hours of request.
- Clear rules on AI "experts" in ads: Product recommendations made by AI are likely to be viewed as unfair labeling or advertising unless the recommender is clearly identified as a virtual human. In food and pharmaceutical categories, AI-generated "doctors" endorsing products may be treated as deceptive advertising.
- Stronger sanctions and deterrence: Punitive damages of up to five times actual damages will apply to the distribution of false or manipulated information over information and communications networks, and administrative fines for false or exaggerated ads will increase under the Labeling and Advertising Act. The Korea Communications Commission and the Fair Trade Commission will coordinate enforcement.
- Proactive monitoring: The Ministry of Food and Drug Safety and the Korea Consumer Agency will enhance monitoring and detection through inter-ministerial coordination to block false or exaggerated AI ads promptly.
Implications for public officials and platforms
- Regulators: Prepare sector-specific guidance aligning with labeling obligations, unfair advertising criteria, and review timelines. Define workflows for 24-hour reviews and evidence standards for AI-generated endorsements.
- Platform operators: Implement label verification, audit trails, and automated checks to detect removed or altered labels (a minimal sketch follows this list). Update terms of service to ban label tampering and require disclosure when AI or virtual humans are used.
- Advertisers and agencies: Avoid AI "doctor" or "expert" personas in restricted categories. Where virtual humans appear, disclose clearly and persistently within the creative and metadata.
- Consumer protection teams: Focus outreach on older adults and high-risk channels. Improve complaint intake, triage, and case-sharing with enforcement teams.
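For platform teams scoping the label-verification obligation, the sketch below shows one way a publish gate could work: it checks an uploaded creative for an AI-generation flag in its metadata and records an audit event when the disclosure is missing or has been stripped. The asset fields ("ai_generated", "disclosure_text") and the event schema are illustrative assumptions, not fields defined in the government plan.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MediaAsset:
    """An uploaded ad creative and the disclosure metadata attached to it."""
    asset_id: str
    metadata: dict = field(default_factory=dict)


@dataclass
class AuditEvent:
    """A single compliance decision, retained for later regulator audits."""
    asset_id: str
    decision: str          # "accepted" or "blocked"
    reason: str
    checked_at: str


def verify_ai_label(asset: MediaAsset) -> AuditEvent:
    """Gate publishing on the presence of an intact AI-generation disclosure."""
    now = datetime.now(timezone.utc).isoformat()
    meta = asset.metadata

    # No declaration at all: the provider has not met the labeling obligation.
    if "ai_generated" not in meta:
        return AuditEvent(asset.asset_id, "blocked", "missing AI-generation declaration", now)

    # Declared as AI-generated but the visible disclosure text has been removed.
    if meta["ai_generated"] and not meta.get("disclosure_text"):
        return AuditEvent(asset.asset_id, "blocked", "label removed or altered", now)

    return AuditEvent(asset.asset_id, "accepted", "disclosure intact", now)


if __name__ == "__main__":
    labeled = MediaAsset("ad-001", {"ai_generated": True, "disclosure_text": "AI-generated image"})
    tampered = MediaAsset("ad-002", {"ai_generated": True})  # disclosure text stripped
    for asset in (labeled, tampered):
        print(verify_ai_label(asset))
```

In practice the gate would sit in the ad-review pipeline before publication, with the audit events feeding the evidence trail regulators may request.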
Operational guidance and timeline
The labeling system is scheduled to take effect in January 2026. Before then, the Ministry of Science and ICT will release guidelines that platforms and AI businesses can implement and test at scale.
Agencies should set up rapid review procedures for the five priority sectors, with clear escalation to the Fair Trade Commission and Korea Communications Commission for sanction decisions. Data-sharing and joint monitoring with the Ministry of Food and Drug Safety and the Korea Consumer Agency will be essential to stop repeat offenders.
Action checklist
- Map current ad flows (owned, paid, influencer, affiliate) and flag any AI-generated personas or synthetic media.
- Build or procure labeling pipelines that persist across uploads, edits, and reposts; log compliance events for audits (see the sketch after this checklist).
- Deploy content detection and sampling to spot missing or tampered labels; establish penalties within platform policies.
- Update creative review standards for food, pharma, cosmetics, quasi-drugs, and medical devices to reflect the 24-hour review rule.
- Train policy, review, and enforcement teams on the new criteria and evidence collection for AI-generated endorsements.
- Budget for legal, technical, and monitoring capacity to meet the 2026 deadline and ongoing enforcement needs.
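For the pipeline and detection items above, here is a minimal Python sketch of one possible approach: a content fingerprint carries the recorded disclosure across reposts, and a sampling job flags copies whose label has gone missing or been altered, emitting compliance events for the audit log. The registry layout, field names, and sample rate are assumptions for illustration only, not requirements from the plan.

```python
import hashlib
import json
import random
from datetime import datetime, timezone

# Registry mapping a content fingerprint to its recorded disclosure state.
# In production this would be a database; a dict keeps the sketch self-contained.
LABEL_REGISTRY: dict[str, dict] = {}


def fingerprint(content: bytes) -> str:
    """Stable identifier for a creative, used to carry labels across reposts."""
    return hashlib.sha256(content).hexdigest()


def register_labeled_creative(content: bytes, disclosure: str) -> str:
    """Record the disclosure attached to a creative at first upload."""
    digest = fingerprint(content)
    LABEL_REGISTRY[digest] = {
        "disclosure": disclosure,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    return digest


def sample_and_check(reposts: list[dict], sample_rate: float = 0.2) -> list[dict]:
    """Randomly sample reposted creatives and flag missing or altered labels.

    Each repost is a dict with 'content' (bytes) and 'disclosure' (str or None).
    Returns compliance events suitable for an audit log.
    """
    events = []
    for repost in reposts:
        if random.random() > sample_rate:
            continue  # not sampled this pass
        digest = fingerprint(repost["content"])
        expected = LABEL_REGISTRY.get(digest)
        if expected and repost.get("disclosure") != expected["disclosure"]:
            events.append({
                "fingerprint": digest,
                "event": "label_mismatch",
                "expected": expected["disclosure"],
                "observed": repost.get("disclosure"),
                "detected_at": datetime.now(timezone.utc).isoformat(),
            })
    return events


if __name__ == "__main__":
    original = b"example-ad-video-bytes"
    register_labeled_creative(original, "AI-generated video")
    reposts = [{"content": original, "disclosure": None}] * 10  # label stripped on repost
    print(json.dumps(sample_and_check(reposts, sample_rate=1.0), indent=2))
```

Fingerprinting by content hash is one simple way to keep the label decision attached to a creative even when it is re-uploaded through a different account or channel; perceptual hashing would be needed to survive edits, which this sketch does not attempt.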
"Through these measures, we aim to address challenges associated with new technologies while maintaining market order in the AI era," Prime Minister Kim said. The government will advance the legislative and institutional changes while maintaining open communication with platforms and consumer groups.
If your team needs structured upskilling on AI transparency and compliance workflows, explore job-specific options here: AI courses by job.