Liberal AI bill leaves copyright questions hanging - what government needs to do now
The federal AI proposal is moving through Parliament, but one critique is loud and clear: it doesn't settle copyright. Solomon, a prominent voice on the file, argues the bill sidesteps how Canadian copyright law applies to AI training, outputs, and enforcement. If that reading holds, public servants will face policy gaps, procurement risk, and a tougher job guiding industry.
What the bill likely covers - and what it doesn't
The framework appears aimed at system risk, oversight, and accountability. Useful, but separate from the core copyright questions that creators, vendors, and agencies keep asking:
- Does training on copyrighted works require licences, or do existing exceptions cover it?
- Who owns AI-generated content, and can it be copyrighted?
- Who is liable if a model outputs infringing material?
- Should vendors disclose training data sources and provenance?
- How are creators compensated if their works were used in training?
Why this gap matters for government
Procurement teams risk buying tools trained on murky datasets, with legal exposure downstream. Creators push for compensation and transparency, while vendors want clear rules before they scale. Without clarity, compliance teams spend more time interpreting than implementing.
Legislative options Parliament could consider
- Spell out whether text-and-data mining for model training is permitted, licensed, or conditionally exempt.
- Require auditable disclosure on training data provenance and data governance practices.
- Create opt-out/opt-in mechanisms and consider collective licensing for broad, efficient rights clearance.
- Define liability rules for infringing outputs, with safe harbours tied to due diligence and rapid takedown.
- Back disclosure with enforcement tools: penalties for misrepresentation, repeat-offender escalations, and audit powers.
- Support watermarking/detection standards to help with tracing and remedial action.
What departments and agencies can do now
- Map use cases. Flag where models generate public-facing content or rely on third-party datasets (a minimal triage sketch follows this list).
- Update procurement. Require vendor attestations on data rights, training sources, and indemnities for IP claims.
- Prefer vendors with documented provenance, red-teaming for IP leakage, and an incident response plan.
- Run privacy and IP impact assessments for high-risk deployments; document decisions and mitigation steps.
- Establish an approval pathway for new AI tools with legal, privacy, and program leads at the table.
- Engage creators, rights collectives, academics, and industry to stress-test options before committee stage.
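
To make the "map use cases" step concrete, here is a minimal, illustrative Python sketch of how a team might structure a use-case inventory and auto-flag items for legal, IP, and procurement review. The field names, flag wording, and triage rules are assumptions for illustration, not a departmental standard or anything prescribed by the bill.

```python
# Illustrative only: a minimal triage sketch for an AI use-case inventory.
# Field names and flag rules are assumptions, not a departmental standard.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    public_facing: bool       # does the tool generate public-facing content?
    third_party_data: bool    # does it rely on third-party datasets?
    vendor_attestation: bool  # has the vendor attested to data rights/provenance?

def review_flags(uc: AIUseCase) -> list[str]:
    """Return the review flags a use case should carry before approval."""
    flags = []
    if uc.public_facing:
        flags.append("legal review: liability for infringing outputs")
    if uc.third_party_data:
        flags.append("IP review: training-data provenance")
    if not uc.vendor_attestation:
        flags.append("procurement: obtain vendor attestation and indemnity")
    return flags

if __name__ == "__main__":
    demo = AIUseCase("citizen-facing chatbot", public_facing=True,
                     third_party_data=True, vendor_attestation=False)
    for flag in review_flags(demo):
        print(flag)
```

Even a lightweight inventory like this gives compliance and program leads a shared record of which deployments need which reviews before the approval pathway signs off.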
What to watch next
Expect pressure for committee study, targeted consultations, and potential amendments or a parallel copyright process. Departments should prepare draft procurement clauses, template vendor questionnaires, and briefing notes that outline trade-offs between innovation, rights protection, and enforcement.
Bottom line: the AI bill can set guardrails for system risk, but copyright needs its own lane. Government can reduce near-term risk with smarter procurement and clear internal approvals while Parliament works through the policy details.