The AI Centre for the Empowerment of Human Learning has opened
A new centre at SLATE (Centre for the Science of Learning & Technology) at the University of Bergen is now open, with a sharp focus: use AI to make learning more effective, more equitable, and more trusted. This is good news for educators, developers, and researchers who want evidence, not hype.
The centre sits at the intersection of pedagogy, data, and software. Expect practical research, tested tools, and clear guidelines that help real classrooms and learning platforms perform better.
Why this matters for education, IT, and research
- Human-centered learning tools: Building AI tutors, feedback systems, and assistants that keep teachers in control and students at the center.
- Learning analytics that serve instruction: Actionable dashboards and early signals that reduce teacher workload while protecting privacy.
- Assessment that is fair and useful: Feedback-first formative assessment, clear use of AI in grading support, and transparent policies for academic integrity.
- AI literacy for all: Practical training for educators, students, and administrators: what to use, where it fails, and how to evaluate results.
- Interdisciplinary labs: Educators, data scientists, designers, and policy experts working together on pilots that can be replicated.
- Open methods: Reproducible studies, shared protocols, and reference implementations that others can adopt.
Practical opportunities you can act on
- Schools and universities: Pilot AI-supported feedback in a single course. Define success metrics (time saved, engagement, attainment), then expand with a governance plan.
- Developers: Co-design with teachers and students. Adopt open standards (xAPI, LTI), publish model cards, and run A/B tests with ethical review.
- Researchers: Partner on mixed-methods studies, share datasets with consent, and prioritize replication to validate findings.
- Administrators and policymakers: Use procurement checklists, document risk and bias reviews, and require clear data retention policies.
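As a concrete illustration of the xAPI standard mentioned above, a learning-record statement is structured JSON built from three required parts: an actor, a verb, and an object. The sketch below is a minimal, hedged example; the email address, activity URL, and activity name are illustrative assumptions, not real centre data.

```python
import json


def make_xapi_statement(actor_email, verb_id, verb_name, activity_id, activity_name):
    """Build a minimal xAPI statement (actor, verb, and object are the required core)."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }


# Hypothetical example: a student completes a feedback exercise.
stmt = make_xapi_statement(
    "student@example.org",
    "http://adlnet.gov/expapi/verbs/completed",  # a standard ADL verb URI
    "completed",
    "https://example.org/activities/feedback-exercise-1",  # illustrative activity id
    "Feedback exercise 1",
)
print(json.dumps(stmt, indent=2))
```

Because statements like this are plain JSON, tools from different vendors can emit and consume the same records, which is what makes the standard useful for the cross-platform pilots described above.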
Guardrails from day one
Responsible AI in education starts with clear risk controls, privacy by design, and transparency about model limits. Two helpful reference points are the NIST AI Risk Management Framework and UNESCO's guidance on Generative AI in education and research.
What success looks like
- Time back to teaching: Fewer hours on grading and admin, more on feedback and student support.
- Better learning outcomes: Higher course completion, stronger formative assessments, and clearer evidence of progress.
- Equity and access: Tools that support diverse learners, multilingual feedback, and inclusive design.
- Trust and clarity: Transparent data practices, explainable features, and straightforward policies on AI use.
- Reusable assets: Open rubrics, prompt libraries, and playbooks others can adopt without starting from scratch.
How to engage
- Propose a small, high-impact pilot with clear outcomes and a short feedback cycle.
- Join working groups that pair teachers, engineers, and researchers on the same problem.
- Share results: what worked, what did not, and what to adjust next.
The AI Centre for the Empowerment of Human Learning is open. Expect practical experiments, strong evaluation, and tools that make teaching and learning work better in real settings.