South Africa proposes AI insurance superfund to compensate victims of algorithmic harm
South Africa's draft National AI Policy includes a proposal to establish an AI insurance superfund that would pay out claims following financial or personal harm caused by AI-driven decisions.
The superfund model would operate similarly to existing compensation schemes in other sectors, creating a pooled fund to cover damages when AI systems cause injury or loss. This addresses a growing gap in traditional insurance frameworks, which have not yet adapted to cover AI-specific liability.
Why this matters for insurers
The proposal signals that regulators expect the insurance industry to play a central role in managing AI risk. Insurers will need to develop new underwriting standards, pricing models, and claims assessment procedures for AI-related incidents.
The superfund structure suggests the government may mandate industry participation rather than leaving AI liability entirely to voluntary market coverage. This could reshape how insurers price risk and structure policies around algorithmic decision-making.
What triggers a payout
The policy focuses on harm resulting from AI-driven decisions, such as loan rejections, medical diagnoses, or hiring determinations made by automated systems. Determining causation and quantifying damages in these cases will require new expertise within claims teams.
The draft does not yet specify thresholds for claims, how the fund would be capitalized, or which sectors would be covered first. These details will emerge as the policy moves through consultation phases.
Industry implications
Insurers should begin mapping their exposure to AI liability now. This includes reviewing client contracts, understanding how AI is deployed across their portfolios, and assessing whether current general liability policies adequately cover algorithmic harm.
The superfund proposal also suggests regulators expect transparency around AI systems. Insurers may need to require clients to disclose AI use, conduct audits of algorithmic systems, and establish reporting requirements for incidents.
South Africa joins other jurisdictions exploring AI liability frameworks, though few have moved beyond consultation stages. The country's approach could influence how other African regulators address the issue.