Take control of AI on campus: build a secure gateway and appoint an AI Officer

Unmanaged AI use brings uneven results, murky data handling, and rising costs. Fontys ICT's gateway lets schools approve models, keep data routing within the EU, and control spend.

Categorized in: AI News, Education
Published on: Dec 13, 2025

Educational institutions must decide which AI they allow

Students have been using tools like ChatGPT and Claude on personal accounts. This created uneven access, inconsistent output, and uncertainty about where data ends up. Fontys ICT tested a different path: a secure AI gateway that gives the institution, not vendors, the final say.

The problem: fragmented AI use on campus

Unmanaged AI use leads to three issues: unclear data handling, inconsistent results between students, and rising costs driven by individual subscriptions. For education leaders, that also means compliance risk under the GDPR and the AI Act.

  • Data location often unknown when students use personal tools
  • Quality varies by model, plan, and hidden settings
  • Budgets drift as individual licenses pile up

Fontys ICT built an internal AI platform to fix this. It centralizes access while letting users choose models transparently: where they run, what they cost, and what limits apply.

The solution: a gateway with full control

The platform uses a gateway architecture. Staff and students select from approved models via a single interface and see key facts upfront: hosting region, approximate cost, and usage rules. Privacy, budgets, and teaching choices stay within institutional control.

  • Model transparency: hosting location, costs, and limits visible before use
  • Budget control: central caps and usage monitoring instead of scattered subscriptions
  • Data locality: requests routed through European data centers to support GDPR and AI Act compliance
  • Policy enforcement: only approved models and features, per user group
  • Auditability: logs to support assessment, integrity, and risk reviews
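The policy-enforcement and auditability points above can be sketched in a few lines. This is a minimal illustration, not Fontys ICT's actual implementation: the model names, regions, prices, and user groups are hypothetical, and a real gateway would back the catalog and log with persistent, access-controlled storage.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    region: str                 # hosting region, shown to users before use
    cost_per_1k_tokens: float   # approximate cost, shown upfront
    allowed_groups: frozenset   # user groups permitted to use this model

# Hypothetical catalog; real entries would come from the institution's review process.
CATALOG = [
    Model("gpt-class-eu", "eu-west", 0.002, frozenset({"staff", "students"})),
    Model("research-llm", "eu-central", 0.010, frozenset({"staff"})),
    Model("us-hosted-llm", "us-east", 0.001, frozenset()),  # approved for no one
]

AUDIT_LOG = []  # every decision is recorded to support integrity and risk reviews

def select_model(name, user_group, require_eu=True):
    """Return the model if policy allows it, logging the decision either way."""
    for model in CATALOG:
        if model.name != name:
            continue
        allowed = user_group in model.allowed_groups
        eu_ok = model.region.startswith("eu-") or not require_eu
        decision = "allow" if (allowed and eu_ok) else "deny"
        AUDIT_LOG.append({"model": name, "group": user_group, "decision": decision})
        return model if decision == "allow" else None
    AUDIT_LOG.append({"model": name, "group": user_group, "decision": "unknown-model"})
    return None
```

The key design point is that denial is the default: a model a student can use must be explicitly listed for their group and hosted in an approved region, and both allowed and denied requests leave an audit trail.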

"Educational institutions must retain their autonomy in a world where developments in the field of AI are accelerating," says Koen Suilen, lecturer and AI expert at Fontys ICT. "With our gateway, we can decide for ourselves which models we offer, under what conditions, and for which users. This is essential for the safe and fair application of AI in education."

Pilot results from Fontys ICT

Over six months, more than 300 users tested the platform. It ran stably without privacy incidents. Students and staff said the clear model information and simple interface helped them use AI more deliberately, and for the right tasks.

The gateway also proved cost-efficient. Central oversight replaced a patchwork of personal licenses. Most AI requests were processed through European data centers, supporting compliance with the GDPR and the AI Act.

"We have seen that technology alone is not enough; you need a governance structure that fits your values as an educational institution," says Ruud Huijts, lecturer and co-author of the white paper. "AI is no longer a support tool, like traditional ICT. AI is strategy. If you take that seriously, you have to take control of the systems you use."

Call for AI leadership: appoint an AI Officer

The white paper argues for a dedicated AI Officer to own governance end to end. Without a clear lead, policies fragment and efforts stall. With the right mandate, this role ensures quality, safety, and continuity.

  • Model curation: approve, retire, and document allowed models and features
  • Data governance: retention, routing, and privacy impact assessments
  • Risk and compliance: align with GDPR/AI Act, ethics policies, and security standards
  • Budget and procurement: centralize spend and monitor usage
  • Curriculum integration: update assignments, rubrics, and assessment methods
  • Faculty and student enablement: training, guidance, and support
  • Monitoring and incident response: logs, audits, and issue handling

As Huijts notes: "Without a clear person in charge, AI governance becomes fragmented. With the AI Officer, institutions can ensure continuity, quality, and safety. This role also combines expertise that is rarely found in one person: technical depth, knowledge of laws and regulations, and an understanding of education, combined with the authority to make strategic choices based on that mix."

How to implement an AI gateway in your institution

  • Map current AI use across programs, tools, and data flows
  • Set policy, risk thresholds, and approved use cases aligned with your values
  • Stand up a gateway: start with a limited model catalog and EU data routing
  • Run a focused pilot (e.g., two programs), measure usage and learning outcomes
  • Centralize budgets and enable cost alerts; replace individual subscriptions
  • Appoint an AI Officer and a cross-functional review board
  • Update assessment policies: allowed models, documentation of prompts, and logging
  • Provide short, practical training for faculty and students
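The budget-centralization step above can be sketched as a simple tracker. This is an illustrative assumption, not a described component of the Fontys ICT platform: it assumes the gateway reports a cost for every completed request, and the program names, caps, and alert threshold are made up.

```python
# Minimal sketch of central budget caps with cost alerts.
class BudgetTracker:
    def __init__(self, caps, alert_ratio=0.8):
        self.caps = caps                          # e.g. {"program-a": 500.0} per month
        self.spent = {k: 0.0 for k in caps}
        self.alert_ratio = alert_ratio            # warn when spend nears the cap
        self.alerts = []

    def record(self, program, cost):
        """Record a request's cost; return False if it would exceed the cap."""
        if self.spent[program] + cost > self.caps[program]:
            self.alerts.append((program, "cap-exceeded"))
            return False                          # gateway would refuse the request
        self.spent[program] += cost
        if self.spent[program] >= self.alert_ratio * self.caps[program]:
            self.alerts.append((program, "approaching-cap"))
        return True
```

Centralizing spend this way replaces scattered personal subscriptions with one place where caps are set, alerts fire early, and overruns are refused rather than discovered on an invoice.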

Implications for teaching and assessment

Model choice becomes explicit. Instructors can name allowed models and features, plus required documentation (prompts, outputs, reflection). Fairness improves because everyone sees the same options and limits.

Assessment shifts from "did you use AI?" to "how did you use it?". Logs and artifacts support integrity checks without adding friction. Courses gain consistency while keeping room for experimentation.

Bottom line

Letting every student choose their own AI stack leads to uneven learning and higher risk. A gateway puts your institution back in control of privacy, costs, and pedagogy, while giving students clear, safe access to the models they need.

If your staff needs structured upskilling to support this shift, explore role-based programs here: AI courses by job.

