Deepfake doctors are back - and they're selling supplements. Here's your PR response plan
An investigation has uncovered hundreds of AI-manipulated videos on TikTok and other platforms that impersonate real doctors to push supplements and spread health misinformation.
Clips use real footage of medical professionals, but the visuals and audio are altered so the speakers appear to endorse products like probiotics and Himalayan shilajit for menopause symptoms. Many of the videos direct viewers to a US supplement seller called Wellness Nest or a linked UK outlet.
What happened
Full Fact found large volumes of deepfakes that hijack doctors' likenesses and reputations to sell unproven remedies. The content spans TikTok, X, Facebook, and YouTube, often using conference or broadcast footage as source material.
Those targeted include Prof David Taylor-Robinson and Duncan Selbie, both depicted making menopause claims they never made. Some clips even inserted swearing and misogynistic lines into repurposed footage. TikTok removed several videos after complaints and said such content violates its rules, but takedowns took time.
Wellness Nest told investigators it had no connection to the deepfakes and does not use AI-generated content, saying it cannot control what its global affiliates post.
Why this matters for PR and communications
Deepfakes erase context and borrow authority. They can seed false health claims, damage reputations, and undermine trust in experts - fast.
For brands, the risks include impersonation, false endorsements, and association with unproven or harmful products. For institutions, it's a credibility tax: you're forced into reactive mode while the fake spreads.
What to watch for in suspect videos
- Misaligned expertise: a child health specialist suddenly giving menopause advice.
- Odd lip sync, flat affect, or audio artifacts that don't match room acoustics.
- Overlays and links pushing a single storefront across multiple accounts.
- Grand, specific promises ("deeper sleep in weeks," "fewer hot flushes") without sources.
- Identical scripts or visuals reposted by different pages, often low-follower or newly created.
- Out-of-character language, swearing, or sensational claims.
Immediate actions if your experts or brand are faked
- Collect evidence: screen recordings, URLs, account IDs, timestamps, and search terms used to find the content.
- File platform reports citing synthetic media/impersonation and harmful health misinformation. Prioritize the original uploads, then the largest accounts.
- Issue legal notices for false endorsement, trademark, and likeness misuse as appropriate.
- Publish a short, clear statement on official channels: what's fake, what's true, and where to find verified information.
- Alert partners and press with a single source of truth (FAQ, drive folder with verified clips) to prevent re-amplification.
- Track re-uploads with saved hashes, keywords (expert name + "supplement," "menopause," brand terms), and social listening; a keyword-watchlist sketch follows this list.
- Escalate to platforms via partner/agency contacts to speed removal cascades.
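Here is a minimal keyword-watchlist sketch in Python, assuming you keep plain lists of expert names, product terms, and brand terms; the lists and function name are illustrative, not taken from any monitoring tool. Paste the generated queries into platform search or your social listening tool.

```python
from itertools import product

# Illustrative lists; replace with your own experts, products, and brand terms.
EXPERTS = ["David Taylor-Robinson", "Duncan Selbie"]
PRODUCT_TERMS = ["supplement", "probiotic", "shilajit", "menopause"]
BRAND_TERMS = ["Wellness Nest"]

def build_watchlist(experts, product_terms, brand_terms):
    """Combine each expert name with product and brand terms into search queries."""
    queries = [f'"{name}" {term}' for name, term in product(experts, product_terms)]
    queries += [f'"{name}" "{brand}"' for name, brand in product(experts, brand_terms)]
    return sorted(set(queries))

if __name__ == "__main__":
    for query in build_watchlist(EXPERTS, PRODUCT_TERMS, BRAND_TERMS):
        print(query)
```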
Build a standing playbook before you need it
- Verification hub: list all official handles, websites, and spokesperson profiles on one public page you control (a machine-readable version is sketched after this list).
- Content authenticity: watermark long-form videos, keep an "official clips" folder for media to reference, and publish transcripts.
- Crisis templates: pre-approved language for community posts, press notes, and partner emails; include visuals of what the fake looks like.
- Influencer and affiliate clauses: ban synthetic endorsements and require takedown cooperation within hours, not days.
- Spokesperson prep: counsel experts on how to respond calmly and where to direct inquiries.
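If you want a machine-readable companion to that public verification page, one lightweight option is a static JSON file on a domain you control; the sketch below assumes that approach, and every name, handle, and URL in it is a placeholder.

```python
import json

# All names, handles, and URLs below are placeholders.
VERIFICATION_HUB = {
    "organisation": "Example Health Trust",
    "official_websites": ["https://www.example-health.org"],
    "official_handles": {
        "tiktok": ["@examplehealth"],
        "x": ["@examplehealth"],
        "facebook": ["ExampleHealth"],
        "youtube": ["@ExampleHealth"],
    },
    "spokespeople": [
        {
            "name": "Dr Example Name",
            "role": "Medical Director",
            "profile": "https://www.example-health.org/team/example-name",
        }
    ],
    "press_contact": "press@example-health.org",
}

# Publish the file alongside the public page so partners and platforms can check it.
with open("verification-hub.json", "w", encoding="utf-8") as fh:
    json.dump(VERIFICATION_HUB, fh, indent=2)
```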
Detection stack for comms teams
- Search tactics: combine names with product terms (e.g., "[Expert] + probiotic," "shilajit," "menopause"). Check video captions and hashtags.
- Visual checks: look for inconsistent lighting on faces, unnatural teeth or tongue movement, jittery eyeglass frames, or blurred earrings and hair.
- Audio tells: robotic sibilants, breaths that don't match mouth movement, or oddly clean audio in noisy rooms.
- Forensics basics: run reverse video/image searches and compare suspect frames against your library of verified footage, as sketched below.
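A minimal perceptual-hash comparison sketch, assuming the open-source Pillow and ImageHash Python libraries and a folder of stills exported from your verified footage; the paths and threshold are illustrative and should be tuned against your own library.

```python
from pathlib import Path

from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

VERIFIED_FRAMES = Path("verified_frames")  # stills exported from official footage
MAX_DISTANCE = 10                          # rough Hamming-distance threshold; tune it

def closest_verified_match(suspect_frame_path):
    """Hash a frame from a suspect clip and find the nearest verified frame."""
    suspect_hash = imagehash.phash(Image.open(suspect_frame_path))
    best = None
    for frame in VERIFIED_FRAMES.glob("*.png"):
        distance = suspect_hash - imagehash.phash(Image.open(frame))
        if best is None or distance < best[1]:
            best = (frame, distance)
    return best

if __name__ == "__main__":
    match = closest_verified_match("suspect_frame.png")
    if match and match[1] <= MAX_DISTANCE:
        print(f"Likely reuse of verified footage: {match[0]} (distance {match[1]})")
    else:
        print("No close match in the verified library.")
```

A low distance suggests the suspect clip reuses your verified footage, which strengthens an impersonation report; it does not by itself prove manipulation.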
Policy, platforms, and escalation
Map each platform's policy on synthetic media and deceptive practices, plus the fastest reporting route. Pre-store policy links in your playbook and cite them in reports to reduce back-and-forth.
- Full Fact - investigations, debunks, and guidance.
- TikTok synthetic media policy - rules and reporting guidance.
Training your team
Make deepfake response part of media training. Your team should know how to spot signals, file airtight reports, brief spokespeople, and publish clear corrections fast.
If you need structured upskilling for PR roles, see the practical AI and misinformation courses, organized by job role, at Complete AI Training.
Key takeaway
Treat synthetic endorsements as a brand safety incident, not a social media nuisance. Speed, clarity, and a ready-made playbook will protect your experts, your audience, and your reputation.