Portugal Designates Anacom as AI Act Watchdog to Balance Innovation and Rights
Portugal must move fast on AI while protecting rights under the EU AI Act. Anacom will enforce the rules; high-risk obligations apply from 2026, so use 2025 to make systems explainable.

Portugal's AI Crossroads: Build Fast, Protect Rights
Portugal's communications watchdog says the hard part isn't adopting AI; it's balancing the speed of development with fundamental rights. The European Union's AI Act sets that bar, and enforcement is coming fast.
Portugal is preparing to implement the rules, with Anacom named as the supervisory authority and national contact point. High-risk AI obligations start applying in 2026, so 2025 is your window to get production systems ready.
Key takeaways for IT and engineering teams
- The priority: enable AI innovation while protecting fundamental rights (users and companies alike).
- Explainability is central: AI becomes a problem the moment we can't understand it.
- Avoid over-regulation that stalls development, but expect clear lines on what's prohibited vs. allowed.
- Regulators will weigh and apply the rules in practice; implementation details matter.
- Anacom will guide and supervise; expect guidelines that clarify expectations and timelines.
- High-risk AI systems face the strictest requirements starting in 2026. Prepare now.
Explainability: make it real, not aspirational
Black-box models won't cut it for sensitive use cases. You'll need traceability, rationale, and user-facing clarity proportional to the risk of your system.
Plan for model choice plus supporting controls: interpretable features, decision logs, model cards, and testable policies that engineers and auditors can verify.
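As a concrete starting point, here is a minimal sketch of a decision-log entry as a plain Python dataclass. The AI Act doesn't prescribe a schema, so every field name here (model_id, rationale, top_features, and so on) is an illustrative assumption, not official terminology.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionLogEntry:
    """One logged automated decision; all field names are illustrative."""
    model_id: str
    model_version: str
    input_summary: dict       # summarized/redacted inputs, never raw personal data
    output: str
    top_features: list        # e.g. [("income_band", 0.42)], from your explainer
    rationale: str            # plain-language summary a reviewer or user can read
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = DecisionLogEntry(
    model_id="credit-scoring",
    model_version="2.3.1",
    input_summary={"income_band": "B", "tenure_years": 4},
    output="declined",
    top_features=[("income_band", 0.42), ("tenure_years", 0.31)],
    rationale="Declined: income band below threshold for the requested amount.",
)
print(json.dumps(asdict(entry), indent=2))  # in practice, append to an immutable audit store
```

The useful property: the rationale a user sees and the evidence an auditor checks come from the same record, so explanations stay verifiable rather than aspirational.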
Action plan for 2025-2026
- Map AI use across products and internal tools; classify by risk under the AI Act.
- Stand up an AI risk register with owners, data sources, and model lifecycle states (see the register sketch after this list).
- Implement data governance: provenance, consent basis, bias checks, and retention.
- Build explainability into the stack: feature importance, rationale summaries, and user notices where relevant.
- Add human oversight where outcomes materially affect people (review, override, appeals).
- Strengthen security: model/feature store access controls, secret rotation, prompt/input hardening, and supply-chain checks.
- Instrument continuous monitoring: drift, performance, bias, safety incidents, and automated alerts (see the drift sketch after this list).
- Document everything: training data summaries, model cards, evaluation protocols, and decision logs.
- Vendor management: contractually require compliance, transparency, uptime/SLA, and incident reporting.
- Run red-teaming and abuse testing; keep evidence and remediation notes.
- Prepare incident response playbooks for model failures, bias findings, and data leaks.
- Track Anacom guidance; align internal standards as they publish clarifications.
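For the risk-register item above: a register can start as plain data long before you buy tooling. The sketch below assumes a four-tier working taxonomy loosely mirroring the AI Act's risk categories; the tier labels are triage aids, not legal classifications, and the example system is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Working labels loosely mirroring the AI Act's tiers; triage only,
    # final classification still needs legal review.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"     # transparency obligations
    MINIMAL = "minimal"

class LifecycleState(Enum):
    DESIGN = "design"
    VALIDATION = "validation"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class RiskRegisterEntry:
    system_name: str
    owner: str              # an accountable person, not a team alias
    purpose: str
    data_sources: list
    tier: RiskTier
    state: LifecycleState
    notes: str = ""

register = [
    RiskRegisterEntry(
        system_name="cv-screening",
        owner="ana.silva",
        purpose="Rank incoming job applications",  # employment uses are typically high-risk
        data_sources=["ats_db", "cv_uploads"],
        tier=RiskTier.HIGH,
        state=LifecycleState.PRODUCTION,
    ),
]

# Triage: every high-risk system in production is on the clock for 2026.
for e in register:
    if e.tier is RiskTier.HIGH and e.state is LifecycleState.PRODUCTION:
        print(f"[ACTION] {e.system_name}: high-risk in production, owner {e.owner}")
```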
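For the monitoring item, one widely used drift signal is the Population Stability Index (PSI) over binned feature distributions. A minimal sketch follows; the thresholds are a common industry rule of thumb, not anything the AI Act mandates.

```python
import math

def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    """Population Stability Index over pre-binned proportions (each list sums to 1)."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Bin proportions of one model input: training baseline vs. last week in production.
training_bins = [0.25, 0.35, 0.25, 0.15]
production_bins = [0.10, 0.30, 0.30, 0.30]

score = psi(training_bins, production_bins)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
if score > 0.25:
    print(f"ALERT: input drift, PSI={score:.3f}; trigger a model review")  # wire to paging
elif score > 0.10:
    print(f"WARN: input drifting, PSI={score:.3f}")
```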
High-risk systems: what to expect
- Risk management system across the full lifecycle (design to post-deployment).
- Data quality controls: representativeness, relevance, and bias mitigation.
- Technical documentation and logs that auditors can review.
- Post-market monitoring and reporting for serious incidents.
- Clear human oversight procedures with defined checkpoints (see the review-gate sketch below).
- Accuracy, robustness, and cybersecurity benchmarks appropriate to context.
- Conformity assessment before placing systems on the market, with reassessment when models change materially.
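Human oversight is easiest to evidence when it's enforced in code rather than in a policy document. Below is a minimal sketch of a review gate that holds low-confidence, high-impact decisions for a human; the confidence threshold and all names are illustrative assumptions, since the Act requires effective oversight but doesn't prescribe a mechanism.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    materially_affects_person: bool

def with_human_oversight(decide: Callable[[dict], Decision],
                         review_queue: list,
                         confidence_floor: float = 0.9):
    """Wrap an automated decision so low-confidence, high-impact outcomes
    are held for human review (with override and an appeal trail) instead
    of being auto-applied."""
    def gated(features: dict) -> Optional[Decision]:
        d = decide(features)
        if d.materially_affects_person and d.confidence < confidence_floor:
            review_queue.append(d)  # a human reviews and can override
            return None             # nothing is applied automatically
        return d
    return gated

# Stand-in for the real model call.
def score(features: dict) -> Decision:
    return Decision(subject_id=features["id"], outcome="declined",
                    confidence=0.72, materially_affects_person=True)

queue = []
gated_score = with_human_oversight(score, queue)
print(gated_score({"id": "a-123"}), len(queue))  # None 1 -> held for human review
```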
What this means for teams in Portugal
Expect more clarity from Anacom on scope, templates, and enforcement priorities. The market wants legal certainty, and that's the direction set by policymakers.
Use this runway to upgrade your AI governance and documentation. It's less painful, and cheaper, than scrambling after audits begin.
Helpful resources
- European Commission: EU approach to AI (AI Act overview)
- EUR-Lex: EU law database (search "Artificial Intelligence Act")
- Complete AI Training: courses by job role
Bottom line
Build AI that teams can explain, monitor, and correct. That's how you move fast without breaking people's rights, and how you stay ready for 2026.