Ottawa readies new online harms bill as part of broader response to deepfakes and youth safety
The federal government is preparing online harms legislation, with Culture Minister Marc Miller expected to introduce the bill. Artificial Intelligence Minister Evan Solomon said the proposal will be separate from a privacy bill he's bringing forward and from a justice bill tabled last year that targets some deepfakes.
Details from Miller's office are still pending, but the direction is clear: stronger protections for children and clearer duties for platforms.
What's likely in scope
Solomon signalled that his upcoming privacy legislation could include a right to delete deepfakes: "We will be modernizing our privacy laws... I'm very interested in things like the right to deletion." He also pointed to "a suite of protections" to restore trust and protect Canadians from harmful online content.
The earlier Online Harms Act proposed in 2024 would have created an online regulator and required 24-hour takedowns for child sexual exploitation material and non-consensual intimate content, including deepfakes. That bill did not pass, and the government has indicated it won't return in the same form, but the core issues remain on the table.
There are also reports the new package could address children's access to social media, potentially setting an under-14 threshold. Miller did not confirm specifics when asked this week.
Unresolved questions government teams should watch
- Scope: Will the new bill recreate a regulator with enforcement powers, or distribute duties across existing bodies?
- Coverage gaps: Experts warn the justice bill may miss many non-consensual intimate deepfakes circulating on X. Will amendments close that gap?
- Timelines: Will there be 24-hour takedown obligations, safe-reporting mechanisms, or transparency requirements for platforms?
- Privacy interplay: How will a right to deletion for deepfakes be framed, and who carries compliance duties?
Why this matters for public servants
If enacted, these measures will trigger cross-departmental work spanning Canadian Heritage and its portfolio agencies, Justice, ISED, Public Safety, the RCMP, and provincial partners. Expect new compliance pathways for platforms, coordination on takedown protocols, and clear lines for complaints handling.
Policy, legal, and program teams should prepare for operational guidance, resourcing, and service design implications, especially where youth protections intersect with platform accountability and privacy rights.
Immediate steps to get ahead
- Map roles: Identify your department's potential authorities under online harms, privacy, and criminal law.
- Scenario plan: Draft playbooks for 24-hour takedown requests, evidence handling, and cross-jurisdiction escalation (a minimal tracking sketch follows this list).
- Data and privacy: Define processes for deepfake identification, user verification, and deletion requests; run privacy impact assessments (PIAs) where needed.
- Stakeholder lines: Establish points of contact with major platforms for expedited requests and transparency reporting.
- Youth safety: Align with provinces, school boards, and child protection groups on reporting and support pathways.
- Comms readiness: Prepare clear public guidance for victims of non-consensual content, including step-by-step reporting.
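To make the scenario-planning item concrete, here is a minimal sketch of how a team might track takedown requests against a deadline. It assumes a 24-hour obligation like the one in the 2024 proposal; the class names, content categories, and SLA constant are illustrative placeholders, not drawn from any bill or departmental system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum


class ContentCategory(Enum):
    # Hypothetical categories; actual definitions would come from the legislation.
    CHILD_SEXUAL_EXPLOITATION = "child_sexual_exploitation"
    NON_CONSENSUAL_INTIMATE = "non_consensual_intimate"  # would include deepfakes


# Assumed 24-hour service level, mirroring the 2024 proposal; subject to change.
TAKEDOWN_SLA = timedelta(hours=24)


@dataclass
class TakedownRequest:
    """Tracks one request from receipt to platform confirmation."""
    request_id: str
    platform: str
    category: ContentCategory
    received_at: datetime
    resolved_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        # The SLA clock starts when the report is received.
        return self.received_at + TAKEDOWN_SLA

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.resolved_at is None and now > self.deadline


def escalation_queue(requests: list[TakedownRequest]) -> list[TakedownRequest]:
    """Unresolved requests, most urgent (earliest deadline) first."""
    open_requests = [r for r in requests if r.resolved_at is None]
    return sorted(open_requests, key=lambda r: r.deadline)


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    reqs = [
        TakedownRequest("REQ-001", "platform-a", ContentCategory.NON_CONSENSUAL_INTIMATE,
                        received_at=now - timedelta(hours=30)),
        TakedownRequest("REQ-002", "platform-b", ContentCategory.CHILD_SEXUAL_EXPLOITATION,
                        received_at=now - timedelta(hours=2)),
    ]
    for r in escalation_queue(reqs):
        print(r.request_id, r.platform, "OVERDUE" if r.is_overdue(now) else "within SLA")
```

Recording the resolution time alongside the deadline also gives teams the raw numbers for any transparency reporting a future bill might require.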
Context and recent triggers
Solomon's comments come amid questions about sexualized deepfakes attributed to Grok on X and the resulting public backlash. Advocacy groups for women and children have renewed calls for a regulator similar to the 2024 proposal, describing online risks to children as a national emergency.
What to read next
Capacity-building for teams working on AI policy and safety
If your unit is updating skills for AI risk, content integrity, or policy design, see AI courses by job for structured upskilling paths.
Bottom line: watch for an online harms bill from Heritage, a privacy bill with potential deletion rights for deepfakes, and possible tweaks to the justice package. Align internal plans now so you can execute quickly once the details land.