EU AI Act Preparatory Obligations Are Live: What It Means for Risk, Governance and Leadership
Preparatory obligations under the EU AI Act have taken effect. The message is clear: AI is moving from ambition to enforcement. The question is no longer "can we build it?" but "can we manage it responsibly at scale?"
This shift hits executive priorities directly: enterprise risk, supply-chain accountability, explainability, governance, and leadership judgment. Treat it as infrastructure, not a side project.
Innovation Becomes Infrastructure
The new obligations push AI from experiments to regulated infrastructure. "It signals a move to a clear, risk-based framework, which forces organisations to understand not just what their AI systems do, but how and why they are used," says Ian Murrin, CEO of Digiterre.
The scope extends to suppliers and third-party models. "Organisations will be accountable not just for the AI they build, but for third-party models and tooling they rely on… With the EU AI Act's toughest requirements kicking in by August, firms will be forced to show exactly how their models work, where their data comes from, and who is accountable when things go wrong," he adds.
Compliance Is Now an Operational Discipline
The impact shows up in day-to-day operations before it shows up in strategy decks. AI Empowerment Specialist Fahed Bizzari is blunt: "Companies will need an inventory of AI use cases, a clear view of which ones are higher risk and named accountability for decisions that involve AI."
Procurement tightens. Buyers need documentation, traceability, and warranties from vendors, especially in hiring, performance, customer decisions, and safety-critical areas. "The organisations that win will treat compliance as infrastructure: map use cases, set simple internal rules, train teams, tighten supplier expectations and add lightweight monitoring so governance becomes routine rather than reactive."
What To Stand Up First (No Drama, Just Discipline)
- AI use-case inventory with risk tiering and business owners.
- RACI for AI decisions: who designs, approves, monitors, and overrides.
- Procurement guardrails: model cards, data lineage, eval results, incident history, and audit rights from vendors.
- Human-in-the-loop criteria for higher-risk decisions and clear escalation paths.
- Documentation basics: purpose, data sources, known limitations, monitoring plan, and sign-offs.
- Change management: versioning, approvals for updates, rollback plan.
- Telemetry: outcome quality, bias flags, drift signals, user overrides, and incident logs.
- Board-level reporting: risk posture, incidents, and remediation progress.
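For teams wondering what "inventory with risk tiering and named accountability" looks like in practice, here is a minimal sketch. The field names, tier labels, and example use cases are illustrative assumptions, not terms defined by the Act; a real register would map tiers to the Act's own risk categories.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AIUseCase:
    name: str
    business_owner: str                  # named accountability for decisions
    risk_tier: RiskTier
    purpose: str
    data_sources: list = field(default_factory=list)
    human_in_loop: bool = False          # required for higher-risk decisions

def needs_escalation(uc: AIUseCase) -> bool:
    """Flag high-risk use cases that lack human oversight, per the
    'freeze until controls are in place' discipline above."""
    return uc.risk_tier is RiskTier.HIGH and not uc.human_in_loop

inventory = [
    AIUseCase("CV screening", "Head of HR", RiskTier.HIGH,
              "shortlist applicants", ["ATS records"], human_in_loop=True),
    AIUseCase("Support chatbot", "CX Lead", RiskTier.LIMITED,
              "answer FAQs"),
    AIUseCase("Fraud scoring", "Risk Lead", RiskTier.HIGH,
              "score transactions", ["payment logs"]),
]

flagged = [uc.name for uc in inventory if needs_escalation(uc)]
```

Even a spreadsheet version of this structure gives the board report its raw material: every use case has an owner, a tier, and a yes/no answer on human oversight.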
AI Literacy Is Now a Leadership Requirement
"Regulation alone isn't the hard part. The real challenge is literacy," says Cien Solon, CEO of LaunchLemonade. Many teams can't yet evaluate model behaviour, document decisions, or build guardrails that hold up under scrutiny.
Solon's warning: don't ship what you don't understand. Training has to move in lockstep with deployment. "If we want AI to be trusted, we need to trust our team's understanding of the AI tools they're using." For executives, that means making AI literacy part of management expectations, not a side course.
For deeper executive guidance on scaling AI with governance, see AI for Executives & Strategy. For teams building the compliance muscle, the AI Learning Path for Regulatory Affairs Specialists can help operationalise risk monitoring, documentation, and auditability.
The Bar on Explainability and Trust Just Went Up
Standards are tightening, and they're measurable. "High-risk AI demands a higher bar. Every AI-driven outcome should be explainable in human terms, traceable across its lifecycle, and auditable by both internal teams and external stakeholders," says Seb Kirk, CEO of GaiaLens.
The fastest movers build governance into design, not as an afterthought. That means explainability methods are chosen early, data lineage is captured by default, and audit logs aren't optional. Do it right and you speed up approvals, reduce rework, and keep trust intact.
- Model cards and decision logs attached to each use case.
- Dataset lineage and consent posture tracked and reviewable.
- Bias and performance evaluations before and after deployment.
- Clear user disclosures and human override for higher-risk contexts.
- Independent review or audit mechanism for material use cases.
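The decision logs in the list above can start as something very simple: an append-only, structured record per AI-driven outcome. This is a hypothetical sketch, not a prescribed format; field names and the example use case are assumptions for illustration.

```python
import datetime
import json

def log_decision(log, use_case, model_version, outcome,
                 explanation, overridden=False):
    """Append an auditable record for one AI-driven outcome:
    explainable in human terms, traceable to a model version."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "use_case": use_case,
        "model_version": model_version,
        "outcome": outcome,
        "explanation": explanation,      # plain-language reason
        "human_override": overridden,    # was the user able to intervene?
    })

audit_log = []
log_decision(audit_log, "credit-limit-review", "v2.3.1",
             "declined", "income below threshold set in policy P-7")
print(json.dumps(audit_log[-1], indent=2))
```

The point is not the code but the habit: if every material outcome produces a record like this by default, internal teams and external auditors can both trace a decision back to a model version and a stated reason.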
For official guidance, review the European Commission's overview of the EU AI Act.
Leadership, Not Technology, Is the Deciding Factor
The Act raises a bigger question: not just can you deploy, but should you. Self-leadership expert Andrew Bryant notes the pressure on values-based judgment: leaders must weigh human impact, not just efficiency. That takes clarity on risk appetite, principles, and accountability.
Put guardrails in plain language: where AI is allowed, where it's restricted, and who signs off exceptions. Then align incentives with those rules. If speed-to-market gets rewarded and risk controls are optional, you'll get the wrong behaviour.
Your 90-Day Executive Plan
- Week 1-2: Publish an AI use-policy and accountability model. Name the executive owner and the cross-functional review group.
- Week 2-4: Complete a use-case inventory with risk tiers. Freeze new high-risk deployments until controls are in place.
- Week 3-6: Stand up supplier requirements: documentation pack, evaluation results, security posture, and audit rights. Update contracts.
- Week 4-8: Implement explainability, data lineage capture, and logging for high-risk and material use cases.
- Week 6-10: Train managers and product owners on risk cues, escalation paths, and human oversight. Track completion.
- Week 8-12: Run a live test: incident simulation, override exercise, and rollback drill. Report outcomes to the board.
By August, "Good" Looks Like This
- Complete inventory of AI systems with risk classification and owners.
- Documented purposes, data sources, known limitations, and monitoring plans.
- Supplier attestations and contract clauses covering transparency, audit, and incident response.
- Explainability and traceability built into design for higher-risk use cases.
- Human-in-the-loop controls where people's rights, access, or safety are affected.
- Operational monitoring: bias, drift, performance, overrides, and incident logs.
- Board reporting cadence on AI risk, incidents, and remediation.
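"Lightweight monitoring" for drift need not mean a full MLOps platform on day one. The check below is a deliberately simple heuristic, offered as an assumption-laden sketch: it flags drift when a recent metric (say, approval rate or model score) shifts more than a chosen number of baseline standard deviations.

```python
import statistics

def drift_signal(baseline, recent, threshold=2.0):
    """Return True when the recent mean shifts more than `threshold`
    baseline standard deviations away (a simple heuristic, not a
    statistical test suitable for every metric)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return bool(recent) and statistics.mean(recent) != mu
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > threshold

# Illustrative weekly approval rates: stable baseline, shifted recent window.
baseline_rates = [0.50, 0.52, 0.48, 0.51, 0.49]
print(drift_signal(baseline_rates, [0.70, 0.72, 0.71]))  # shifted window
```

A signal like this does nothing on its own; its value is that it routes into the escalation paths and incident logs listed above, so a drifting model triggers review rather than drifting silently.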
The Upside: Compliance as a Competitive Advantage
If you treat these obligations as a checkbox, they'll slow you down. If you treat them as operating discipline, they make scaling easier: fewer surprises, faster approvals, and stronger trust with customers and regulators.
The cost of delay is hidden risk and rework. The payoff for doing this now is speed with safety-and a brand that people believe in.