Implementation is the crucial next step for AI governance
Sri Lanka's digital economy contributes roughly 3-4% of GDP today. Hitting the $15 billion goal means lifting that share to about 12%, with AI expected to add $1.5-1.8 billion. The opportunity is real, but so are the risks.
AI is already embedded across health, retail, transport, finance, and e-commerce. Waiting for "later" is itself a risk. With digital public infrastructure expanding and pressure to deploy AI growing, governance must move from theory to practice.
Why now: AI is already here
According to Merl Chandana, Team Lead for Data, Algorithms, and Policy (DAP) at LIRNEasia, the country is past the experimentation phase. Systems are influencing decisions, spending, and public services today. Oversight is no longer optional; it is operational work.
As more data flows through public platforms and private services, absent or ad-hoc safeguards can create real harm: unfair decisions, opaque pricing, weak recourse. The fix is straightforward: start implementing governance using the tools we already have.
The core approach: phased soft law
Sri Lanka's draft AI strategy backs a phased soft law model. Rather than pushing a sweeping new statute at once, it applies existing laws to AI risks and adds targeted guidance where needed. This is a capacity-aware path that can move now and adapt as technology changes.
The EU's experience with the AI Act shows how hard it is to finalize one big law, especially after generative AI changed the conversation midstream. Sri Lanka does not need to repeat that. It can act faster by using and extending current instruments while building capability.
No legal vacuum: use what exists, clarify what's missing
We already have a workable base:
- Constitution: equality and rights protections that apply to AI-related harms.
- Personal Data Protection Act (PDPA): limits collection, reuse, and repurposing of personal data; addresses solely automated decisions with significant effects.
- Right to Information Act: a path to transparency for public-sector AI use.
- Consumer Affairs Authority Act: a venue to address unfair algorithmic pricing and faulty AI products.
- Electronic Transactions Act, Computer Crime Act, and the pending Cybersecurity Bill: additional coverage for digital conduct and security.
The priority is enforcement and interpretation. Apply these laws to AI cases, then document the gaps. Fill those with guidance, standards, and targeted amendments, only where truly needed.
How the PDPA anchors AI accountability
The PDPA is not an AI law, but it sets guardrails where AI touches personal data. It requires accuracy, completeness, and proper purpose limitation, which directly addresses biased training sets, outdated records, and questionable profiling.
Crucially, where decisions are made solely by automated means and have significant effects, individuals can request a human review and appeal refusals to the Data Protection Authority. That creates a pathway for oversight and redress without forcing full model disclosure.
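To see how a deployer might operationalize that pathway, here is a minimal sketch; the class names, fields, and flow are illustrative assumptions, not terminology from the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AutomatedDecision:
    """Record of a decision produced by an automated system.
    Names and fields are hypothetical, for illustration only."""
    decision_id: str
    subject_id: str            # the affected individual
    outcome: str               # e.g. "loan_denied"
    solely_automated: bool     # no meaningful human involvement
    significant_effect: bool   # legal or similarly significant impact
    made_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class ReviewRequest:
    decision: AutomatedDecision
    reason: str
    status: str = "pending"    # pending -> upheld | overturned
    reviewer: Optional[str] = None

def request_human_review(decision: AutomatedDecision, reason: str) -> ReviewRequest:
    """Open a human review; the PDPA-style right attaches only to
    solely automated decisions with significant effects."""
    if not (decision.solely_automated and decision.significant_effect):
        raise ValueError("Human-review right does not attach to this decision")
    return ReviewRequest(decision=decision, reason=reason)
```

If the review upholds the outcome, the individual can still appeal to the Data Protection Authority; tracking requests in a structure like this makes recourse auditable without disclosing the model itself.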
Read the PDPA (Act No. 9 of 2022)
Consumer protection: algorithmic pricing and faulty AI products
We need a Consumer Affairs Authority (CAA) that acts for the digital era. The Act already prohibits unfair trade practices, and that prohibition should be read to cover opaque or unjustified algorithmic discrimination, such as variable pricing based on location or inferred vulnerability without transparency or recourse.
This can be done through interpretation and guidance first, not a full rewrite. Clear rules of the game build trust and encourage responsible deployment.
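As a concrete starting point, a compliance team or regulator could run a basic disparity check over quoted prices. This is a minimal sketch; the 10% tolerance and the (location, price) data shape are illustrative assumptions, not anything the CAA has prescribed:

```python
from collections import defaultdict

def price_disparity_report(quotes, tolerance=0.10):
    """Flag locations whose average quoted price deviates from the
    overall mean by more than `tolerance` (10% here; an illustrative
    threshold, not a legal standard).

    `quotes` is an iterable of (location, price) pairs.
    """
    by_location = defaultdict(list)
    for location, price in quotes:
        by_location[location].append(price)

    all_prices = [p for prices in by_location.values() for p in prices]
    overall_mean = sum(all_prices) / len(all_prices)

    flags = {}
    for location, prices in by_location.items():
        gap = (sum(prices) / len(prices) - overall_mean) / overall_mean
        if abs(gap) > tolerance:
            flags[location] = round(gap, 3)  # fractional deviation
    return flags

# Example with made-up quotes from two areas
sample = [("Colombo", 100), ("Colombo", 105), ("Jaffna", 130), ("Jaffna", 128)]
print(price_disparity_report(sample))  # {'Colombo': -0.114, 'Jaffna': 0.114}
```

A flagged gap is not proof of unfairness; it is a trigger for the transparency and recourse questions the guidance should ask.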
Innovation vs. regulation: strike the right balance
Good rules don't kill innovation; they protect it. The real threats are unclear obligations, high fixed compliance costs for low-risk use, and inconsistent enforcement.
Soft law supports proportionate oversight. Start with principles, risk tiers, and sector guidance. Reserve heavy requirements for high-risk use. Keep the rest lean.
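To show what proportionality can look like in guidance, here is a sketch of a tier-to-obligation mapping; the tier names and duties are invented for illustration, not categories from the draft strategy:

```python
# Illustrative risk tiers and the obligations each might carry.
# These names and duties are assumptions for this sketch, not
# definitions from Sri Lanka's draft AI strategy.
RISK_TIERS = {
    "minimal": ["voluntary code of practice"],
    "limited": ["user notice that AI is in use"],
    "high": [
        "pre-deployment impact assessment",
        "human review channel",
        "independent audit with published summary",
    ],
}

def obligations_for(use_case: str, tier: str) -> list[str]:
    """Return the duties attached to a use case's assigned tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown tier: {tier}")
    return [f"{use_case}: {duty}" for duty in RISK_TIERS[tier]]

# Example: credit scoring would plausibly sit in the high tier.
for duty in obligations_for("credit scoring", "high"):
    print(duty)
```

The point is that most deployments stay in the cheap tiers, and only the high-stakes ones carry the heavy requirements.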
Fairness: practical steps to cut bias
Fairness starts with equality under the Constitution and PDPA requirements for data quality. If your inputs are skewed or stale, your outputs will be too.
Back that with sector-specific guidance. For example, in healthcare, define data standards, testing protocols, and escalation paths for harm. Use independent review where the stakes are high.
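For teams that want a first quantitative check, the demographic parity gap (the difference in favorable-outcome rates across groups) is a common starting point. A minimal sketch, not a complete audit methodology:

```python
def demographic_parity_gap(outcomes, groups):
    """Difference in favorable-outcome rates between the best- and
    worst-treated groups. 0.0 means identical rates; larger gaps
    suggest the model or its data warrants closer review.

    `outcomes` is a sequence of 0/1 decisions; `groups` labels each row.
    """
    rates = {}
    for group in set(groups):
        rows = [o for o, g in zip(outcomes, groups) if g == group]
        rates[group] = sum(rows) / len(rows)
    return max(rates.values()) - min(rates.values()), rates

# Example with made-up approval decisions for two districts
gap, rates = demographic_parity_gap(
    outcomes=[1, 1, 0, 1, 0, 0, 0, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # e.g. {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5, a large gap worth investigating
```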
Transparency that actually helps people
We don't need to expose source code to be transparent. We need clarity on where AI is used and how people can question outcomes that affect them.
- System-level transparency: publish a registry of public-sector AI use cases, including purpose, data types, and affected groups (a minimal entry sketch follows this list).
- Decision-level transparency: provide clear notices, factors that influenced a decision, and easy-to-use appeal mechanisms.
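As an illustration, a registry entry could capture fields like these, loosely following the kinds of information the UK's Algorithmic Transparency Recording Standard records. A minimal sketch; the field names and the example agency are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseEntry:
    """One public-sector AI use case in a transparency registry.
    Fields are illustrative, loosely inspired by the UK ATRS;
    they are not a fixed or official standard."""
    system_name: str
    owning_agency: str
    purpose: str                  # what the system is used for
    data_types: list[str]         # categories of data processed
    affected_groups: list[str]    # who the decisions touch
    human_in_the_loop: bool       # whether a person reviews outputs
    appeal_channel: str           # how to contest a decision

# Hypothetical example entry
entry = AIUseCaseEntry(
    system_name="Benefit eligibility screener",
    owning_agency="Ministry of Social Services",
    purpose="Prioritize applications for manual review",
    data_types=["income records", "household size"],
    affected_groups=["benefit applicants"],
    human_in_the_loop=True,
    appeal_channel="written request to the ministry's review office",
)
print(entry.system_name, "->", entry.appeal_channel)
```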
Independent audits add another layer. Publish summaries so the public sees what was tested (fairness, accuracy, and potential harm) without revealing sensitive details.
Reference: UK Algorithmic Transparency Recording Standard
Institutional oversight without extra bureaucracy
Don't build new empires. Use existing regulators that already handle data, digital systems, and compliance. Expand their scope to cover algorithmic risks with clear mandates and escalation paths.
As a complement, set up a non-statutory, multi-stakeholder AI governance council within an existing digital lead (e.g., GovTech or a relevant ministry). Task it with issuing guidance, coordinating across agencies, and convening industry and civil society.
Whatever the model, capacity building is the bottleneck. Judges, lawyers, regulators, auditors, and frontline officials need training on data rights, automated decisions, and practical assessment methods.
What Sri Lanka can borrow (and what it shouldn't)
Models don't transplant well without matching capacity. India has leaned into guidance-driven governance, supported by data protection law, central coordination, and sector engagement. That mix fits environments where resources are tight and speed matters.
Singapore's data protection and digital regulators (the PDPC and IMDA) have produced a model AI governance framework, practical implementation guides, an open-source testing toolkit (AI Verify), and regulatory sandboxes. The lesson: strong institutions plus actionable tools beat sweeping promises.
Make it real: three near-term actions
- Publish the national AI strategy: move from draft to official policy so budgets, mandates, and timelines can flow.
- Stand up a dedicated coordination committee: small, skilled, and empowered, with clear accountability and time to execute.
- Issue concise ethical and governance principles: plain language standards for fairness, transparency, accountability, and safe deployment.
The bottom line
AI isn't future tense in Sri Lanka; it's present tense. The tools to govern it are already on the table. Now it's about implementation: apply existing laws, publish guidance, train the people doing the work, and iterate.
As Merl Chandana notes, execution, not another draft, is what will protect citizens, build trust, and help the digital economy hit its targets.