EU DMA Review on AI: Designate Generative AI, Cloud Providers, or Both?

The EU is weighing whether to add generative AI to the DMA or police it through existing services. Expect tighter rules on bundling, data use, and cloud concentration.

Published on: Sep 27, 2025

Will the EU Designate AI Under the Digital Markets Act?

The Digital Markets Act (DMA) has been live since March 6, 2024. The European Commission has designated gatekeepers and 23 core platform services (CPS), including search, operating systems, browsers, messengers, social networks, cloud, ads, and more.

What's missing is an explicit category for generative AI. The Commission is weighing whether to add it, or to rely on existing categories to police how AI is embedded into services like search, messaging, and cloud.

Quick context: who and what the DMA covers

Gatekeepers include Alphabet, Amazon, Apple, ByteDance, Meta, Microsoft, and later Booking.com. Covered services include Google Search, Android, iOS, Amazon Marketplace, TikTok, Instagram, Facebook, WhatsApp, and others.

The law defines CPS widely: online intermediation, search engines, social networks, video platforms, messaging, operating systems, browsers, virtual assistants, cloud computing, and online advertising. The Commission can add new CPS categories if needed.

Is AI already covered?

Partly. The Commission has said the DMA applies in two ways: a CPS provider that adds AI features must comply across that CPS, and AI features embedded in a designated CPS are themselves covered by its obligations.

That means AI-in-search or AI-in-messaging falls under DMA. But standalone AI services that are not already a CPS (for example, a chatbot from a firm with no other CPS) sit outside the current perimeter.

Why the Commission is revisiting this now

In late 2024, the Commission launched work to understand how generative AI affects DMA enforcement and held compliance workshops with gatekeepers on AI integrations. Most attention centered on Article 5(2): the ban on cross-using personal data across CPS without explicit consent.

Gatekeepers argue they comply via opt-outs, exclusions of personal data from training, or de-identification. The workshops highlighted a larger issue: sheer data and distribution advantages embedded in big platforms aren't directly addressed if AI itself isn't a CPS.

The ecosystem advantage problem

Platforms with dominant reach can integrate AI into widely used services, tilting the field. Google Search holds roughly 90% global market share; integrating AI Overviews or "AI Mode" can funnel demand to the same firm's models and tools.

Article 5(8) prohibits forcing users to subscribe to another CPS to use one service. But it applies only across CPS categories. If generative AI is not a CPS, this lever doesn't fully bite against AI bundled into dominant properties.

The first DMA review: AI on the table

The DMA requires a review by May 3, 2026, and every three years thereafter. The Commission has already consulted stakeholders, including a dedicated AI questionnaire focused on bottlenecks such as compute, data, cloud access, foundational models, and distribution channels.

Several Member State voices have urged adding a CPS for generative AI and designating key cloud providers under qualitative criteria, given compute's central role in AI.

Option 1: Designate generative AI as a core platform service

Argument for: It brings standalone AI giants into scope and targets the chokepoints where data, distribution, and first-mover edges compound. It would also extend Article 5(8) to AI, forcing real user choice rather than default enrollment into AI features embedded across ecosystems.

Argument against: Most DMA obligations were built for intermediation and distribution services, not foundational models. Article 5(2) is hard to square with how models are trained and updated. By the time a provider is designated, pretraining is done and the data mixing has already happened. Providers might restrict EU features or outsource model development to avoid constraints.

Option 2: Treat AI as an embedded feature and enforce harder

This view says most generative AI is already inside designated CPS (search, assistants, browsers, cloud), so apply Articles 5-7 firmly there. Focus on consent for data use, self-preferencing, ranking favoritism, and fair terms for business users.

Upside: Faster enforcement with rules that exist today. Less legal friction over how to silo data inside model pipelines. Downside: Standalone AI providers with market heft but no CPS escape designation. Ecosystem bundling risks persist if AI is not its own CPS.

Option 3: Go upstream: designate cloud and address compute concentration

AI competition is constrained upstream by compute, data center capacity, networking, and specialized chips. Cloud is already a CPS, which makes designation feasible now. Targeting hyperscaler advantages could curb self-preferencing and data leverage against dependent AI businesses.

This route also supports structural remedies if needed, such as separating cloud from model development, limiting cross-ownership or exclusive partnerships, or restricting acquisitions that cement the stack from chips to apps.

So, will the EU add AI to the DMA?

Three signals matter. First, whether the Commission concludes that existing CPS categories plus tougher enforcement can deliver fair access to inputs, distribution, and data. Second, whether gatekeeper bundling of AI across OS, browsers, and search continues to lock in demand despite Article 5(8). Third, whether upstream cloud concentration keeps tipping the field for model development and deployment.

A practical outcome is an "all of the above" approach: designate key cloud services, enforce embedding rules strictly, and prepare a CPS for generative AI if bundling and market power persist. This preserves optionality while moving on the areas with immediate impact.

What this means for policy teams

  • Map risk: Identify where AI is embedded in designated CPS and where standalone AI sits outside DMA reach.
  • Watch Article 5(2) and 5(8): Consent flows, default enrollments, and cross-service tying will be core enforcement battlegrounds.
  • Prioritize cloud: Assess dependencies on specific hyperscalers and potential exposure to self-preferencing or discriminatory terms.

What this means for IT and developers

  • Data pathways: Separate personal and non-personal data inputs for model features; document consent and opt-out mechanics.
  • Modular integration: Build AI features so they can run as standalone or decoupled modules if required by user-choice rules.
  • Cloud portability: Invest in multi-cloud, open standards, and migration plans to reduce lock-in and switching costs.
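The data-pathway and modular-integration points above can be sketched in code. The following is a minimal illustration, not a real compliance API: all class and function names are hypothetical assumptions. It shows an AI feature gated on an explicit, auditable consent record, with a non-personalized fallback when consent is absent, which is the shape Article 5(2)-style obligations push toward.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # e.g. "cross-service-personalisation"
    granted: bool
    recorded_at: datetime

@dataclass
class ConsentLedger:
    """Append-only log of consent decisions, kept for audit."""
    records: list = field(default_factory=list)

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        self.records.append(
            ConsentRecord(user_id, purpose, granted, datetime.now(timezone.utc))
        )

    def has_consent(self, user_id: str, purpose: str) -> bool:
        # The latest recorded decision for this user/purpose wins.
        for rec in reversed(self.records):
            if rec.user_id == user_id and rec.purpose == purpose:
                return rec.granted
        return False  # no record means no consent: opt-in, not opt-out

def answer_query(user_id: str, query: str, ledger: ConsentLedger) -> str:
    """Serve the AI-personalised path only with explicit consent;
    otherwise fall back to the plain, non-personalised service."""
    if ledger.has_consent(user_id, "cross-service-personalisation"):
        return f"[personalised AI answer] {query}"
    return f"[standard answer] {query}"

ledger = ConsentLedger()
print(answer_query("u1", "best flight to Lisbon", ledger))  # standard path
ledger.record("u1", "cross-service-personalisation", granted=True)
print(answer_query("u1", "best flight to Lisbon", ledger))  # AI path
```

The design choice worth noting is the default: absence of a record is treated as no consent, and the append-only ledger preserves the history a regulator would ask for, rather than only the current flag.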

What this means for business leaders

  • Scenario plan: One path adds a CPS for AI; the other tightens cloud and CPS-level controls. Either way, bundle-by-default strategies will face scrutiny.
  • Contracts: Revisit cloud agreements for termination terms, data portability, and egress fees that could become enforcement targets.
  • Governance: Stand up an internal checkpoint for DMA compliance across product, data, and legal before shipping new AI integrations.

Actionable steps for the next 6-12 months

  • Inventory AI features embedded in any designated CPS; document consent, data sources, and user choice flows.
  • Prepare for designation of cloud services: track APIs, telemetry, and data use that could be viewed as self-preferencing.
  • Design for optionality: ensure users can access core services without being auto-enrolled in AI features.
  • Benchmark distribution: assess how search, OS, browser, and assistant integrations steer users toward in-house models.
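The inventory and optionality steps above can be expressed as a simple audit script. This is a hedged sketch under stated assumptions: the `AIFeature` schema, the enrollment labels, and the example inventory are illustrative inventions, not an official compliance taxonomy. It flags features that would likely draw scrutiny: personal-data use without opt-in, or features users cannot decline at all.

```python
from dataclasses import dataclass

@dataclass
class AIFeature:
    name: str
    host_cps: str            # the designated CPS the feature is embedded in
    uses_personal_data: bool
    enrollment: str          # "opt-in" | "opt-out" | "forced"

def flag_risks(features):
    """Return (feature, issue) pairs likely to draw scrutiny:
    personal-data use without opt-in, or no way to decline the feature."""
    flagged = []
    for f in features:
        if f.uses_personal_data and f.enrollment != "opt-in":
            flagged.append((f.name, "Art. 5(2) risk: personal data without opt-in"))
        if f.enrollment == "forced":
            flagged.append((f.name, "Art. 5(8)-style tying: no user choice"))
    return flagged

# Hypothetical inventory for illustration only.
inventory = [
    AIFeature("search-ai-overview", "search", uses_personal_data=True, enrollment="opt-out"),
    AIFeature("assistant-summaries", "virtual-assistant", uses_personal_data=False, enrollment="forced"),
    AIFeature("mail-autocomplete", "cloud", uses_personal_data=True, enrollment="opt-in"),
]

for name, issue in flag_risks(inventory):
    print(f"{name}: {issue}")
```

Even a toy table like this makes the compliance posture auditable: the first feature fails the consent test, the second fails the choice test, and the opt-in feature passes both.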


What's next

The first DMA review will set the direction: enforce AI as an embedded feature, add a CPS for generative AI, designate cloud-or all three. For governments, IT leaders, and developers, the safest bet is to build for consent, choice, portability, and fair access now.

If AI keeps getting bundled into dominant services without real user choice, expect the Commission to reach for stronger tools. If cloud remains the choke point, designation and structural remedies will move up the agenda.