Thoughts on AI and the Law: Stations Should Conform to SAG-AFTRA Principles of Disclosure and Pay
AI is no longer a side project for broadcasters. Radio has moved early on automation and AI, from playlist curation to voice tracking and production edits. Several industry voices have called 2026 a tipping point, with AI stretching from back-office tools to on-air roles, including synthetic DJs.
That progress comes with legal and reputational stakes. As AI gets closer to the microphone, stations need clear rules on disclosure, consent, and compensation - and a plan for political content in a heated election year.
Operational upside, human downside
AI reduces repetitive work and trims costs. But as "fewer human voices and more algorithms" creep in, stations risk losing the judgment, community connection, and trust that win listeners. Keep investing in human talent where creativity and accountability matter most.
Election-year risk: misinformation and liability
The 2024 deepfake robocall that mimicked President Joe Biden showed how fast AI can fuel misinformation. Reports indicate foreign actors are already testing new tools to sow division. One sloppy airing can damage a brand that took decades to build.
Treat AI-sourced content as high-risk. Political programming needs added controls because stations face unique constraints - including the no-censorship rule of §315 and the reasonable-access requirement of §312(a)(7) for federal candidates.
SAG-AFTRA: disclosure, consent, and equal pay
SAG-AFTRA's position is clear: every person holds an inalienable right to their name, voice, and likeness. Use requires consent and just compensation. Synthetic or recreated performances should be paid on scale as if the person performed in person.
For stations and producers, that means no "silent" cloning, no buried consents, and no discounted rates because a model stood in. Bake disclosure, consent, and pay parity into your contracts, budgets, and on-air practices.
States move first: disclosure and "actual knowledge"
In 2025, all states and territories introduced AI bills, and 38 states adopted roughly 100 laws. Many touch advertising, political content, and rights of publicity - all squarely relevant to broadcasters.
- New York: requires ad disclosures when AI-generated performers are used and requires consent from heirs or executors for posthumous name, image, or likeness licensing.
- California: prohibits knowingly distributing deceptive AI-generated election material and requires disclosures on electoral ads using AI-generated or materially altered content.
After concerns were raised about broadcaster obligations given §315 and §312(a)(7), California refined broadcaster liability: stations are responsible when they have actual knowledge, and they must adopt a policy on political AI use and disclosure and communicate it to ad buyers.
Federal activity: bills and an executive order to watch
Congress has floated several bills, including the No Fakes Act (S.1367), addressing deceptive digital replicas of a person's voice or likeness. You can track it here: Congress.gov - S.1367.
At the executive level, the Dec. 11 Executive Order 14365 sets out to promote U.S. leadership in AI and reduce friction from conflicting state rules. It creates an AI Litigation Task Force to evaluate and challenge state laws viewed as inconsistent with a minimally burdensome national framework.
- Within 90 days, federal officials must identify state AI laws for referral to the task force.
- States with onerous AI laws may be deemed ineligible for certain BEAD non-deployment funds.
- The FCC is to consider a federal reporting and disclosure standard for AI models that would preempt conflicting state rules.
- The FTC is to issue a policy statement applying its unfair or deceptive acts or practices authority to AI models. For context on existing FTC thinking, see this business guidance post: FTC: Keep your AI claims in check.
Federal preemption could reshape station obligations. Until then, compliance is local - and urgent.
What legal teams should do now
- Adopt an AI policy anchored to SAG-AFTRA principles: explicit disclosure, documented consent, and pay parity for synthetic or recreated performances.
- Map state AI laws by market. Prioritize New York, California, and any jurisdiction imposing ad disclosures or political-content restrictions.
- Stand up a political-content protocol that respects §315 and §312(a)(7) and the "actual knowledge" standard where applicable.
- Require advertiser and programmer certifications disclosing AI use in spots or programming. Condition acceptance on accurate, complete disclosure.
- Standardize on-air and digital disclosure language for AI-generated or substantially altered content. Keep receipts: scripts, audio, visuals, and certification records.
- Update talent, freelancer, and vendor agreements to cover AI cloning, reuse, approvals, compensation parity, indemnities, and posthumous rights.
- Add high-risk review gates: political ads, synthetic voices, impersonations, content featuring public figures, and post-production "voice swaps."
- Deploy content provenance tools where feasible (watermark detection, metadata checks), and log results tied to each asset ID (a minimal logging sketch follows this list).
- Train sales, programming, news, and traffic on the policy. Make the ad policy part of every insertion order and renewal.
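To make the provenance-logging item above concrete, here is a minimal sketch in Python of what "log results tied to each asset ID" could look like. The file path, field names, and the check_watermark and read_metadata helpers are hypothetical placeholders for whatever detection tooling a station actually licenses, not references to a specific product.

```python
# Illustrative sketch only: the watermark and metadata checks are placeholders
# for a station's licensed provenance tooling.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("provenance_log.jsonl")  # hypothetical append-only log


def check_watermark(audio_path: Path) -> bool:
    """Placeholder: call the station's watermark-detection tool here."""
    return False  # assume "no watermark found" until a real detector is wired in


def read_metadata(audio_path: Path) -> dict:
    """Placeholder: extract any embedded provenance metadata, if present."""
    return {}


def log_provenance_check(asset_id: str, audio_path: Path) -> dict:
    """Run the checks and append one record per asset ID, as the policy requires."""
    record = {
        "asset_id": asset_id,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "file_sha256": hashlib.sha256(audio_path.read_bytes()).hexdigest(),
        "watermark_detected": check_watermark(audio_path),
        "embedded_metadata": read_metadata(audio_path),
    }
    with LOG_PATH.open("a") as log:
        log.write(json.dumps(record) + "\n")
    return record
```

Keeping one append-only line per asset makes it straightforward to produce "receipts" later if a spot or segment is challenged.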
Model policy components (quick-start)
- Scope: covers programming, ads, promos, podcasts, streams, and social posts distributed by the station.
- Definitions: "AI-generated," "substantially altered," "synthetic voice," "digital replica." Keep them consistent with state law where you operate.
- Human rights: no use of name, voice, or likeness without written consent and just compensation; synthetic performances paid on scale.
- Disclosure: clear, proximate, and understandable notices on-air and online when AI-generated or altered content is used.
- Political content: certifications required; no edits to candidate ads; actual-knowledge trigger procedures; immediate escalation paths.
- Recordkeeping: store consents, certifications, scripts, spots, logs, and airchecks for a defined retention period (a structural sketch follows this list).
- Enforcement: penalties for noncompliance, including refusal, takedown, and notification to affected parties.
- Review: legal review for high-risk items; quarterly audits; update policy as laws change.
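As a rough illustration of the recordkeeping component, the sketch below shows one way to capture the required materials as a single structured record per asset. The field names and the seven-year default are illustrative assumptions, not a statement of any legal retention requirement; set the period on counsel's guidance.

```python
# Hypothetical record layout; field names and the retention default are illustrative.
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class AIContentRecord:
    asset_id: str
    aired_on: date
    consents: list[str] = field(default_factory=list)        # references to signed consents
    certifications: list[str] = field(default_factory=list)  # advertiser/programmer certifications
    scripts: list[str] = field(default_factory=list)
    airchecks: list[str] = field(default_factory=list)
    retention_years: int = 7  # placeholder; set per counsel's guidance

    def eligible_for_disposal(self, today: date) -> bool:
        """True once the defined retention period has elapsed."""
        return today >= self.aired_on + timedelta(days=365 * self.retention_years)
```

Tying every consent, certification, and aircheck back to one asset ID keeps the quarterly audits in the review component from becoming a document hunt.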
Certification checklist for programmers and advertisers
- Was AI used to create or alter any voice, image, or content? If yes, describe the scope.
- List all individuals whose name, voice, or likeness appears. Attach consents and payment terms.
- If political or electoral content, confirm required disclosures are embedded and accurate for the relevant state(s).
- Confirm no deceptive impersonation or materially misleading alteration of a public figure.
- Acknowledge the station's right to reject content for policy violations and to rely on this certification.
Practical next steps
- Identify a cross-functional AI compliance lead (legal + programming + sales).
- Publish the policy internally and to advertisers. Include it in rate cards and insertion orders.
- Run a 60-day audit of political, advocacy, and celebrity-voice content. Fix gaps and chase missing consents.
- Prepare holding statements for AI-related incidents (impersonation, mislabeling) to preserve trust if something slips through.
AI can make your operation leaner. It can also test your licensee obligations and your brand. With clear consent, equal pay for synthetic performances, and honest disclosure, you can use the tech without losing the audience.
If your team needs structured upskilling to spot risks and set policy, here's a curated catalog organized by role: Complete AI Training - Courses by Job.