Research for AI policy design: How the JRC turns science into EU action
The Joint Research Centre (JRC), the European Commission's science and knowledge service, is building the evidence base that makes AI policy work. The goal is clear: help Europe lead in AI while keeping people safe and markets fair.
AI policy is hard: it involves technical risk, ethical questions, and intense global competition. Science gives policymakers the footing they need - spotting risks early, testing assumptions, and stress-testing rules before they reach the real world.
From strategy to uptake: turning insight into action
With the Apply AI Strategy and the AI in Science Strategy, Europe is pushing practical AI adoption where it matters. JRC research fed the early phases of both, with data on how businesses and public administrations adopt AI, and on how AI is shifting skills demand and education offerings.
That evidence helps three groups immediately: companies (especially SMEs) that need clear pathways to invest, public servants who need actionable guidance, and educators who must align training with market needs. It also clarifies how AI supports the scientific process, including benefits and risks across disciplines.
Tools that move the needle
- Assessment of European Digital Innovation Hubs to improve services for entrepreneurs and help teams make informed AI investment choices.
- Mapping of AI education offerings against labour demand to guide policy and close skill gaps.
- Analyses that help scientists weigh accuracy, bias, safety, and reproducibility when adding AI to their workflows.
Making the AI Act workable
The AI Act needed precision. JRC experts contributed technical definitions - such as AI system versus AI model, general-purpose AI, and generative AI - and analyses on high-risk use cases and transparency. The aim: avoid legal loopholes and keep requirements clear and usable, without being too broad or too narrow.
For context on the policy framework, see the European approach to AI and the work of the AI Office.
Opening the black box: ECAT and algorithmic transparency
Policy only works if it's enforced. The European Centre for Algorithmic Transparency (ECAT), hosted by the JRC, supports implementation of the Digital Services Act by auditing and testing platform algorithms. The team studies recommender systems, interface design, and AI tools - including their effects on minors' mental and physical health.
This work is already driving concrete changes on major platforms across the EU, from added reporting options for illegal content to clearer controls over personalised feeds.
What's next: the JRC Scientific AI Hub
The upcoming JRC Scientific AI Hub will evaluate AI models and systems used in strategic research. It will work closely with policymakers, including the European AI Office, so evidence feeds directly into guidance, standards, and oversight.
The message is simple: keep science embedded in policy design, and policy stays clear, testable, and effective.
Practical takeaways for science and research leaders
- Audit where AI adds measurable value in your lab or department; define metrics before deployment.
- Align training plans with documented skills demand; prioritise data governance, evaluation, and safe model use.
- Engage your local Digital Innovation Hub for assessments, pilots, and vendor-neutral advice.
- Document model choice, data sources, and evaluation results to support compliance and reproducibility.
- Test for bias, safety, and failure modes under stress; set thresholds for intervention or rollback.
- Build user-facing transparency into systems from day one; avoid retrofitting explanations later.
- Track guidance from the AI Office and JRC so your practices stay aligned with upcoming standards.
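The bias-testing and rollback-threshold points above can be sketched in a few lines. This is a minimal, hypothetical example (the data, subgroup labels, and 10% threshold are illustrative assumptions, not JRC or AI Office guidance): compare a model's error rate across subgroups before deployment and flag when the gap exceeds a pre-agreed rollback threshold.

```python
# Hypothetical pre-deployment check: per-subgroup error rates and a
# rollback flag when the worst-best gap exceeds an agreed threshold.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: list of (subgroup, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def needs_rollback(rates, max_gap=0.10):
    """Flag deployment if the error-rate gap between subgroups
    exceeds max_gap (an illustrative 10% default)."""
    return max(rates.values()) - min(rates.values()) > max_gap

# Illustrative evaluation records: (subgroup, true label, prediction)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),  # 1/4 errors
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),  # 2/4 errors
]
rates = subgroup_error_rates(records)
print(rates)                  # {'A': 0.25, 'B': 0.5}
print(needs_rollback(rates))  # True: gap of 0.25 exceeds 0.10
```

Documenting the threshold and the evaluation records alongside the model, as the takeaways suggest, is what turns a check like this into evidence usable for compliance and reproducibility.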