Latvia's AI Gap: Data, Skills, Funding, Trust - A Practical Plan for Government, IT, and Development
Latvia has momentum in digital transformation, but AI progress is slowed by four things: limited access to quality data, low digital skills, funding constraints, and public distrust. The good news: each has a clear fix. Here's a concise plan grounded in current studies and policy discussions.
Data Access: From Portals to High-Value, Usable Datasets
Training modern AI needs large, clean, and permissioned datasets. Most private-sector data remains closed, and public data is uneven in quality and format. Latvia's Open Data Portal has been standardizing access since 2017 and is a solid foundation.
Make it operational:
- Publish-by-default for high-value public datasets with APIs, SLAs, and versioning. Prioritize health, mobility, education, culture, and procurement datasets.
- Stand up secure public-private data-sharing agreements using privacy-preserving methods (anonymization, synthetic data, federated learning) and clear legal bases; a pseudonymization sketch follows this list.
- Fund data stewards in ministries and municipalities to enforce standards, metadata, and quality checks.
- Create a national catalog of AI-ready datasets with licensing, provenance, and benchmark splits for research and pilots; a minimal catalog entry is sketched below.
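To make the privacy-preserving option concrete, here is a minimal pseudonymization sketch: direct identifiers are replaced with salted hashes before data leaves the source system. The field names, salt handling, and record shape are illustrative assumptions, not a prescribed national standard.

```python
import hashlib
import os

# Illustrative only: salted SHA-256 pseudonymization of direct identifiers.
# In production the salt must be generated once, stored in a secret manager,
# and never published alongside the data.
SALT = os.environ.get("PSEUDONYM_SALT", "change-me").encode()

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"person_id": "120390-12345", "municipality": "Riga", "visits": 7}
safe_record = {**record, "person_id": pseudonymize(record["person_id"])}
print(safe_record)  # person_id is now a salted hash; other fields unchanged
```

Salted hashing is pseudonymization, not anonymization: under the GDPR the output is still personal data, so quasi-identifiers (age, location, rare attribute combinations) need separate treatment such as aggregation or synthetic data.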
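And a minimal sketch of one machine-readable entry in such a catalog, assuming a simple schema; the field names and example values are illustrative, not an existing data.gov.lv specification.

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry for an AI-ready dataset; this schema is an
# illustrative assumption, not an existing data.gov.lv specification.
@dataclass
class DatasetEntry:
    identifier: str       # stable ID, e.g. "lv-health-referrals-2024"
    title: str
    license: str          # SPDX identifier keeps licensing machine-checkable
    provenance: str       # who produced it, from which source systems
    api_endpoint: str     # versioned API, per the publish-by-default rule
    version: str
    benchmark_splits: dict = field(default_factory=dict)  # train/val/test

entry = DatasetEntry(
    identifier="lv-health-referrals-2024",   # hypothetical example
    title="E-health referrals (aggregated)",
    license="CC-BY-4.0",
    provenance="National Health Service export, pseudonymized as above",
    api_endpoint="https://data.gov.lv/api/v1/datasets/lv-health-referrals-2024",
    version="2024.2",
    benchmark_splits={"train": 0.8, "val": 0.1, "test": 0.1},
)
```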
Digital Skills: From Awareness to Hands-On Capability
Only 45.3% of residents had at least basic digital skills in 2023, below the EU average of 55.6% and down from the 2022 level. The national AI plan points to public awareness, Finland-style online courses, research programs, and education reform. That's useful, but it needs a delivery engine.
- Launch a civil service AI literacy track with role-based modules (policy, legal, data, engineering) and micro-credentials.
- Set up agency sandboxes with real datasets, guardrails, and evaluation checklists to build practical skills quickly; a checklist harness is sketched after this list.
- Require vendors to include on-the-job training in AI procurements; measure completion and competency, not attendance.
- Offer regional bootcamps for municipalities and cultural institutions focused on real workflows (content, research, analytics).
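One way to make the sandbox evaluation checklists concrete is a small harness that gates a candidate tool on named pass/fail checks before it touches real workflows. The checks, metric names, and thresholds below are illustrative assumptions, not an agreed standard.

```python
# Minimal sketch of a sandbox evaluation checklist, assuming each check is a
# named predicate over the candidate system's recorded behaviour.
from typing import Callable

CHECKS: dict[str, Callable[[dict], bool]] = {
    "answers_grounded_in_source": lambda r: r["grounded_rate"] >= 0.95,
    "no_personal_data_in_output": lambda r: r["pii_leaks"] == 0,
    "latvian_language_supported": lambda r: r["lv_accuracy"] >= 0.90,
    "refuses_out_of_scope_requests": lambda r: r["refusal_rate_ok"],
}

def evaluate(results: dict) -> dict[str, bool]:
    """Run every checklist item; a deployment decision needs all True."""
    return {name: check(results) for name, check in CHECKS.items()}

# Hypothetical results from one sandbox run:
report = evaluate({"grounded_rate": 0.97, "pii_leaks": 0,
                   "lv_accuracy": 0.88, "refusal_rate_ok": True})
print(report)  # latvian_language_supported fails -> tool stays in the sandbox
```

Expressing the checklist as code means every sandbox run leaves the same auditable report, which doubles as teaching material for the literacy track.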
Need structured upskilling paths by job role? See curated programs at Complete AI Training - Courses by Job.
Funding: Lower the Barrier for Pilots and Scale-Ups
In 2024, 69% of creative-sector representatives cited funding as the main obstacle, followed by IP protection and digitization. Yet adoption in the sector is already underway: 77.4% use AI, 20% use cloud, 19.3% use VR, and 18% use blockchain solutions. Funding support should meet that momentum with structure.
- Introduce AI vouchers and co-funding for SMEs and cultural institutions: data cleanup, model fine-tuning, and integration.
- Negotiate shared compute and cloud credits with providers for public interest and research use.
- Publish sector playbooks and replicable pilots (e.g., digitization for archives, recommendation systems for cultural content).
- Provide simple IP templates and guidance for generative workflows and dataset licensing to reduce legal friction.
Public Trust: Tackle Reliability, Fakes, and Security Head-On
Residents are concerned about misinformation, falsified content, and security risks. 68% rate their AI knowledge as low, and 73% feel under-skilled to use AI tools. Address risk with visible safeguards, not slogans.
- Adopt content provenance standards (e.g., C2PA) for public sector media and mandate vendor support in contracts.
- Publish a public AI risk register and incident log; show how issues are found, fixed, and prevented (a register entry is sketched after this list).
- Run a national "trust the source" campaign with practical checks for citizens and journalists.
- Set baselines for secure use: data retention, red-teaming, prompt injection defenses, and model update reviews.
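To show what the risk register could look like in practice, here is a minimal sketch of one machine-readable entry; the schema and the example incident are illustrative assumptions, not an official format.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative schema for a public AI risk register entry; not an official
# format, just one way to make "found, fixed, and prevented" auditable.
@dataclass
class RiskEntry:
    system: str       # which deployed AI system the entry concerns
    risk: str         # what can go wrong
    severity: str     # e.g. "low" / "medium" / "high"
    status: str       # "open", "mitigated", or "closed"
    mitigation: str   # the concrete safeguard now in place
    reviewed: date    # last review date, showing the register is live

entry = RiskEntry(
    system="citizen-chat-assistant",   # hypothetical system name
    risk="Prompt injection via pasted documents",
    severity="high",
    status="mitigated",
    mitigation="Input sanitization plus human review of outbound replies",
    reviewed=date(2025, 3, 1),
)
```

Publishing entries in structured form lets journalists and auditors query the register directly instead of trusting a narrative summary.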
Ethics and Regulation: Implement the EU AI Act with Teeth
Latvia plans to fully implement the EU AI Act, which imposes strict requirements for safety, transparency, and high-risk systems. This is the moment to build capability, not just compliance paperwork. Treat it as an operating system for AI in the public and creative sectors.
- Map high-risk use cases across ministries; assign owners, controls, and audit schedules.
- Stand up a central AI function to support conformity assessments, testing, and incident reporting.
- Embed model cards, data sheets, and evaluation reports into procurement and deployment checklists; a minimal model card is sketched after this list.
- Provide a regulatory sandbox for cultural and creative industries to de-risk use cases before scale.
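A minimal model card sketch, expressed as a dictionary that a procurement pipeline could validate automatically; the required keys and example values are illustrative assumptions, not a mandated EU AI Act format.

```python
# Illustrative minimal model card; the required keys are assumptions in the
# spirit of published model-card templates, not a mandated format.
MODEL_CARD = {
    "model": "document-classifier-v3",   # hypothetical system
    "intended_use": "Routing citizen submissions to the right ministry",
    "out_of_scope": ["Legal decisions", "Eligibility determinations"],
    "training_data": "See data sheet lv-gov-submissions-2023 (catalog entry)",
    "evaluation": {"accuracy_lv": 0.93, "accuracy_ru": 0.91},  # example metrics
    "known_limitations": ["Degrades on handwritten scans"],
    "risk_class": "high",                # drives AI Act conformity duties
    "last_reviewed": "2025-03-01",
}

REQUIRED_KEYS = {"model", "intended_use", "training_data",
                 "evaluation", "risk_class", "last_reviewed"}

def card_is_complete(card: dict) -> bool:
    """Deployment checklist gate: reject cards missing required fields."""
    return REQUIRED_KEYS.issubset(card)

assert card_is_complete(MODEL_CARD)
```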
EU AI Act overview (European Commission)
Six-Month Action Plan
- Month 1-2: Name data stewards; publish a prioritized dataset roadmap with APIs and licensing. Launch a civil service AI literacy pilot.
- Month 2-3: Open an AI voucher scheme and shared compute program. Start two public sector sandboxes (e.g., content and analytics).
- Month 3-4: Publish the public AI risk register and incident process. Issue procurement clauses covering provenance, testing, and training.
- Month 4-6: Run 5-7 pilots in cultural institutions; release playbooks and reusable code. Begin conformity assessments for identified high-risk systems.
About the Study and Project
The study was prepared within the EU "Erasmus+" program under "Cremel 2.0 - Creative Media Laboratory 2.0," with partners from Latvia, the Basque Country in Spain, Poland, Italy, Hungary, and Estonia. It analyzes how AI changes daily work for creative professionals, the related risks, and the skills needed to adapt. In Latvia, the focus is on cultural use cases, authorship, content reliability, and practical upskilling.
The goal of "Cremel 2.0" is clear: promote ethical and effective AI use in cultural and creative industries through skill development, shared resources, and collaboration. The path forward is execution: open the data, train people, fund pilots, codify safeguards, and ship real services.