Ontario's AI Law Lacks Teeth, Privacy Watchdog Warns
Ontario's flagship artificial intelligence law is an "empty shell" without the regulations needed to enforce it, leaving public-sector organizations to guess how they should govern AI systems already in use.
Christopher Parsons, director of research and technology at Ontario's Information and Privacy Commissioner, made the warning at a NetDiligence cyber conference. The Enhancing Digital Security and Trust Act (EDSTA), passed in November 2024, sets out a framework for regulating AI in public-sector bodies but contains no binding safeguards.
"The legislation itself doesn't have those protections built in," Parsons said. "The key protections must come from standards or regulation emerging from the legislation."
EDSTA is meant to govern how public agencies use AI systems, including generative AI and agentic systems. But the rules that would define what "responsible" use means have not been drafted. That gap matters now: ministries and agencies are already experimenting with these tools.
Three AI Threats Insurance Needs to Understand
Parsons highlighted three AI-driven risks that affect how insurers assess cyber coverage and claims.
Prompt injection occurs when attackers embed hidden instructions into AI inputs to trick systems into ignoring safeguards or revealing confidential data. No traditional breach occurs-an AI simply does what it was tricked into doing. From an insurer's perspective, this creates ambiguity: does the incident count as a data breach if no hacking happened?
Data or model poisoning involves manipulating training data to corrupt how AI models behave. Poisoned models make bad automated decisions, corrupt records, and trigger cascading operational failures. But these failures lack the clear perimeter breach that traditional cyber policies are built around.
Excessive agency means giving AI systems too much autonomy. Agentic systems wired into email, databases, or benefits systems can collect, share, or modify personal information at scale without human oversight-creating privacy violations and data integrity problems that fall outside conventional cyber incident definitions.
What Organizations Should Do Now
Until Ontario's regulations materialize, Parsons said organizations should follow principles already issued by Canadian privacy commissioners: AI systems must be valid, reliable, safe, privacy-protective, transparent, and accountable.
Practically, this means:
- De-identify or use synthetic data when training models, so statistical value is preserved without exposing real individuals
- Use clear contracts with AI vendors about whether personal information can be used for training
- Conduct privacy and algorithmic impact assessments before putting systems into production
- Build cross-functional teams-technologists, lawyers, policy advisers, and communications staff-to assess AI together rather than leaving decisions to technologists alone
Parsons emphasized that AI governance cannot be a technical problem solved by engineers. "The choices we make now will shape how these technologies evolve and whom they ultimately serve," he said.
For insurers, that means understanding these three threat categories early. Traditional cyber policies may not cover losses from prompt injection, data poisoning, or excessive agency. As organizations deploy AI faster than regulations can catch up, the gap between coverage and actual risk will widen.
Learn more about Generative AI and LLM Courses and AI for Insurance to build expertise in these emerging risks.
Your membership also unlocks: