EU plan would delay high-risk AI rules to 2027, narrow personal data, and could end cookie banners

EU may push high-risk AI rules to 2027 and narrow the scope of privacy law, tilting toward opt-out over consent. Legal teams should prep LIAs, rethink cookie flows, and shore up re-identification controls.

Published on: Nov 17, 2025

EU signals delay on high-risk AI rules and a privacy reset: what legal teams should prepare for

The European Commission is teeing up reforms that could push parts of the AI Act's high-risk obligations out to 2027 and recast core privacy concepts. The intent is flexibility for AI development; the risk is thinner guardrails in the interim. Expect hard questions from lawmakers and a split response from industry and civil society.

Timeline and scope

The Commission plans a one-year pause on several high-risk AI requirements, with enforcement targeted for 2027. A formal proposal is expected on November 19, followed by the ordinary legislative process with Parliament and member states. Until adopted, nothing is final, but forward planning now will save remediation sprints later.

Narrowing "personal data" and leaning on legitimate interest

A narrower personal data definition is on the table. That could move certain pseudonymous identifiers, such as cookies and advertising IDs, outside personal data, enabling broader AI training and analytics without consent. Companies could lean more on GDPR's legitimate interest basis (Article 6(1)(f)), provided the balancing test shows business needs don't override individual rights.

This would ease model training and profiling use cases, but it heightens risk around fairness, transparency, and re-identification. Expect privacy advocates and DPAs to challenge weak balancing tests, especially where sensitive inferences are possible.
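One way to make that balancing test defensible is to capture each LIA as structured data rather than free text, so the weighing can be reviewed and re-run when facts change. A minimal sketch in Python; the field names and the deliberately conservative pass rule are illustrative assumptions, not a legal standard:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class LegitimateInterestAssessment:
    """Structured record of a legitimate-interest balancing test.

    Fields and the pass rule below are illustrative, not legal advice.
    """
    purpose: str                      # e.g. "aggregate analytics for service improvement"
    necessity_justification: str      # why less intrusive means will not work
    individual_risk: RiskLevel        # impact on data subjects if processing proceeds
    sensitive_inferences_possible: bool = False
    mitigations: list[str] = field(default_factory=list)

    def passes_balancing(self) -> bool:
        """Conservative rule: block sensitive inferences and high residual risk;
        medium risk passes only with documented mitigations."""
        if self.sensitive_inferences_possible or self.individual_risk is RiskLevel.HIGH:
            return False
        return self.individual_risk is RiskLevel.LOW or bool(self.mitigations)
```

Stored this way, assessments can be versioned and re-run when scope changes, which is exactly the audit trail a challenged balancing test needs.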

Cookie consent banners may go; opt-out takes the front seat

The Commission is weighing removal of mandatory cookie banners. Instead of opt-in, users could object after collection begins. Media and advertisers may keep consent for personalized ads via carve-outs, while others pivot to contextual ads or post-collection objection workflows.

If adopted, consent management platforms won't disappear; they'll evolve into objection and preference tools. Your legal posture will shift from "prove consent" to "prove necessity, proportionality, and effective objection handling."

Operational impacts you should model now

  • Data taxonomy and inventories: Re-label identifiers that could move out of "personal data"; keep a re-identification risk register (a sample entry is sketched after this list).
  • Legitimate interest assessments (LIAs): Build templates for AI training, analytics, and measurement. Include risk mitigations and rights impact scoring.
  • DPIAs: Update thresholds for high-risk AI pilots paused under the new timeline; document interim controls.
  • Transparency: Redraft notices to explain legitimate interest uses, objection routes, and profiling logic in plain language.
  • Record-keeping (ROPA): Reflect purpose shifts from consent to legitimate interest and attach LIAs and DPIAs.
  • Vendor contracts: Refresh data processing agreements and controller/processor allocations for AI training data, model fine-tuning, and telemetry.
  • Cookie/measurement stack: Prepare for an objection-first model; preserve consent flows for personalized ads where required.
  • Retention and minimization: Define strict retention for pseudonymous data, with periodic aggregation or deletion checkpoints.
  • Children and vulnerable groups: Apply higher thresholds; default away from legitimate interest where harm risk is non-trivial.
  • Enforcement risk: Anticipate DPA scrutiny on re-identification, sensitive inferences, and poor opt-out UX.
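
As a starting point for the risk register mentioned in the first item above, here is one possible row schema; the fields are assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ReIdRiskEntry:
    """One row in a re-identification risk register (illustrative schema)."""
    identifier_type: str          # e.g. "cookie ID", "advertising ID"
    dataset: str                  # system or table where the identifier lives
    linkable_sources: list[str] = field(default_factory=list)  # datasets that could re-link it
    mitigations: list[str] = field(default_factory=list)       # e.g. rotation, aggregation
    residual_risk: str = "medium"                              # "low" / "medium" / "high" after mitigations
    last_reviewed: date = field(default_factory=date.today)
```

Reviewing residual_risk against linkable_sources on a schedule keeps the register honest as new datasets come online.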

AI program considerations

  • Training data provenance: Track sources, licenses, and exclusion lists; separate personal, pseudonymous, and non-personal sets (see the sketch after this list).
  • Model governance: Define evaluation gates for bias, security, and rights impact before and after the 2027 go-live.
  • Human oversight: Document escalation paths for high-impact decisions even if certain rules are delayed.
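
The sketch below shows one way to keep those three data sets separable at training time; the TrainingSource type and the eligibility rule are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import Enum


class DataCategory(Enum):
    PERSONAL = "personal"
    PSEUDONYMOUS = "pseudonymous"
    NON_PERSONAL = "non_personal"


@dataclass(frozen=True)
class TrainingSource:
    """Provenance record for one training data source (illustrative)."""
    name: str
    license: str
    category: DataCategory
    excluded: bool = False        # e.g. on an exclusion list after an objection


def eligible_sources(sources: list[TrainingSource]) -> list[TrainingSource]:
    """Admit only non-excluded sources, and keep personal data out of the
    default training pool unless it has a separately documented legal basis."""
    return [s for s in sources
            if not s.excluded and s.category is not DataCategory.PERSONAL]
```

Keeping the category on the record, rather than in a folder name, means the governance and oversight items above can query it programmatically.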

What to do before the proposal lands

  • Run a gap analysis on consent-based processing that could transition to legitimate interest; identify high-risk edge cases to keep on consent.
  • Prototype an objection workflow: one-click UI, API endpoints, and suppression logic across ads, analytics, and model training (a minimal fan-out sketch follows this list).
  • Create a short-form LIA library with standard mitigations (purpose limitation, noise/aggregation, retention caps, user controls).
  • Stress-test your re-identification safeguards for cookies and advertising IDs; document residual risk.
  • Align with marketing and product on contextual targeting and measurement alternatives if personalized ads stay consent-gated.
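
A minimal sketch of the suppression fan-out from the second item above, assuming each downstream system registers a suppress(user_id) hook; the registry and function names are hypothetical:

```python
from typing import Callable

# Registry of downstream systems that must honor an objection.
SUPPRESSION_HOOKS: dict[str, Callable[[str], None]] = {}


def register_system(name: str, suppress: Callable[[str], None]) -> None:
    """Register a downstream system (ads, analytics, model-training pipeline)."""
    SUPPRESSION_HOOKS[name] = suppress


def handle_objection(user_id: str) -> dict[str, bool]:
    """Fan the objection out to every registered system and report per-system
    success, so failures can be retried instead of silently dropped."""
    results: dict[str, bool] = {}
    for name, suppress in SUPPRESSION_HOOKS.items():
        try:
            suppress(user_id)
            results[name] = True
        except Exception:         # a real implementation would log and retry
            results[name] = False
    return results
```

In practice the hooks would wrap API calls to each platform, and the result map feeds an audit log that evidences "effective objection handling."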

Open issues to watch

  • Exact scope of identifiers excluded from personal data, and how "pseudonymous" status is evidenced.
  • EDPB guidance on legitimate interest for AI training and profiling.
  • Interplay with the ePrivacy rules for electronic communications and tracking technologies.
  • Member-state variations and DPA enforcement priorities during the 2025-2027 window.

Bottom line: you may get time on high-risk AI, but scrutiny won't fade. Use the runway to harden assessments, clean your data foundations, and make objection handling effortless.

