48 Hours at DMEXCO 2025: AI Everywhere, and Nowhere
AI dominated DMEXCO 2025: loud branding, thin case studies. Focus on workable ops, policy, and eval loops; track cycle time, approval rate, and proven lift.

'AI was everywhere, and yet, nowhere': What marketers took from DMEXCO 2025
Cologne was loud about AI. Booths, stages, and badges said "AI." But when you asked for live case studies with real KPIs, the room got quieter.
That tension defined DMEXCO 2025: clear curiosity, scattered execution. Here's what actually mattered for marketing teams, and what to do next.
Where AI showed promise
- Creative ops moved from talk to tooling. Templated production, brand guardrails, and versioning at scale are finally usable. The winning demos showed human-in-the-loop workflows, not magic buttons.
- Retail media matured. Better measurement packages and more consistent taxonomy. Still patchy across networks, but buyers are pushing for comparability.
- CTV got performance-minded. More incrementality tests and attention data in the same deck as CPA/CAC. Less sizzle, more attribution.
- Agent copilots entered ad ops. Routine trafficking, QA, and pacing checks are ripe for semi-automation. The best cases combined rules, logs, and human sign-off.
Where AI fell short
- Thin case studies. Many logos, few lift charts. Marketers asked for baseline metrics, sample sizes, and holdouts. Most vendors weren't ready.
- Data provenance and rights. Teams want clarity on training sources, licensing, and how models handle brand IP. Answers varied widely.
- Evaluation. Everyone wants quality. Few showed repeatable eval frameworks for hallucination rate, bias, and latency under load.
- Team skills. Tools outpaced training. Ops, creative, and analytics need shared playbooks to avoid random pilots.
Privacy and policy stayed front and center
- Cookie deprecation pressure pushed first-party data and clean room talk. Keep an eye on Privacy Sandbox timelines and test plans.
- AI governance moved from slides to checklists. Expect more procurement questions on risk, audits, and model lineage under the EU AI Act.
The marketer's playbook: 90 days
1) Set policy before pilots
- Data rules: What data can feed models? What stays out (PII, client secrets)? Who approves?
- Model guardrails: Approved providers, logging, retention, and human review points.
- Evaluation: Define pass/fail for quality, tone, and factual accuracy per use case.
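One way to make "pass/fail per use case" concrete is to write the policy down as structured data that tools and reviewers can share. The sketch below is a minimal, hypothetical example in Python; the use-case names, scoring dimensions, and thresholds are assumptions to adapt, not a standard.

```python
# Minimal sketch of a pass/fail policy per use case, kept as plain structured data.
# Use-case names, dimensions, and thresholds are illustrative assumptions.
EVAL_POLICY = {
    "ad_copy_variants": {
        "min_quality_score": 4,            # reviewer rating on a 1-5 scale
        "required_tone": "brand_voice_v2",
        "factual_claims_need_source": True,
    },
    "webinar_repurposing": {
        "min_quality_score": 3,
        "required_tone": "brand_voice_v2",
        "factual_claims_need_source": True,
    },
}

def passes(use_case: str, review: dict) -> bool:
    """Return True only if a human-reviewed output meets its use case's thresholds."""
    rules = EVAL_POLICY[use_case]
    return (
        review["quality_score"] >= rules["min_quality_score"]
        and review["tone"] == rules["required_tone"]
        and (review["cites_source"] or not rules["factual_claims_need_source"])
    )

# Example: an ad copy draft rated 4/5, on tone, with a cited source, passes.
print(passes("ad_copy_variants", {"quality_score": 4, "tone": "brand_voice_v2", "cites_source": True}))
```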
2) Start with high-leverage, low-risk use cases
- Creative production: Resizing, transcreation, and CTA variants with brand style checks.
- Search and social ops: Keyword expansion, negative list upkeep, and bulk ad copy drafts.
- Content repurposing: Turn webinars into briefs, posts, and snippets with source citations.
- Support for analysts: Query drafting, QA checks, and anomaly summaries, not final decisions.
3) Build a simple eval loop
- Measure: Time-to-first-draft, approval rate, error rate, and uplift on CTR/CVR.
- Sample weekly: Human review of 30-50 outputs; flag issues; revise prompts or rules.
- Log everything: Inputs, outputs, decisions. You'll need this for audits and learning.
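To make the logging and weekly rollup concrete, here is a minimal Python sketch. The CSV schema and field names (brief_id, approved, edit_count, error_type) are illustrative assumptions; swap in whatever warehouse or BI tooling you already use.

```python
import csv
import os
import statistics
from datetime import datetime, timezone

# Minimal sketch of the "log everything" step: one CSV row per reviewed output.
LOG_FIELDS = ["timestamp", "use_case", "brief_id", "prompt", "output",
              "reviewer", "approved", "edit_count", "error_type"]

def log_output(path: str, record: dict) -> None:
    """Append one reviewed output to the audit log, writing a header on first use."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({**record, "timestamp": datetime.now(timezone.utc).isoformat()})

def weekly_summary(path: str) -> dict:
    """Roll the log up into approval rate, error rate, and median edit count."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return {"outputs_reviewed": 0}
    return {
        "outputs_reviewed": len(rows),
        "approval_rate": sum(r["approved"].lower() == "true" for r in rows) / len(rows),
        "errors_per_100": 100 * sum(bool(r["error_type"]) for r in rows) / len(rows),
        "median_edit_count": statistics.median(int(r["edit_count"]) for r in rows),
    }
```

Run the weekly summary against the same log your reviewers sample, so the numbers and the human spot checks come from one source.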
4) Tighten your data advantage
- First-party capture: Clean forms, value exchanges, and consent records.
- Content system: Centralized taxonomy for products, claims, and proof. Use retrieval-augmented generation (RAG) so models quote your approved sources, not guesses (see the sketch after this list).
- Retail and CTV: Standardize naming so performance rolls up across partners.
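As a rough illustration of the retrieval idea above, the sketch below pulls approved claims with naive keyword overlap and builds a prompt that restricts the model to those sources. The claim library, IDs, and scoring are toy assumptions; production setups typically use embeddings and a vector store, plus the human review described earlier.

```python
# Minimal sketch of "models quote your approved sources, not guesses":
# retrieve approved claims first, then build a prompt that cites only those claims.
APPROVED_CLAIMS = {
    "claim-001": "Product X reduces weekly reporting time (internal study, 2024).",
    "claim-002": "Product X integrates with the top five retail media networks.",
}

def retrieve(query: str, claims: dict, k: int = 2) -> list[tuple[str, str]]:
    """Rank approved claims by naive keyword overlap with the brief."""
    query_terms = set(query.lower().split())
    scored = sorted(
        claims.items(),
        key=lambda item: len(query_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(brief: str, claims: dict) -> str:
    """Assemble a generation prompt limited to retrieved, citable claims."""
    sources = "\n".join(f"[{cid}] {text}" for cid, text in retrieve(brief, claims))
    return (
        "Write ad copy for the brief below. Use ONLY the approved claims provided, "
        "and cite the claim ID after each factual statement.\n\n"
        f"Brief: {brief}\n\nApproved claims:\n{sources}"
    )

print(build_prompt("Short LinkedIn ad about faster reporting with Product X", APPROVED_CLAIMS))
```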
Buy vs. build: a practical split
- Buy for production use (brand guardrails, rights, support, SLAs). You're paying for reliability and risk reduction.
- Light build for internal knowledge tasks (RAG over your wiki, briefing tools). Keep it modular so you can swap models.
What to demand from vendors
- Clear training sources and IP stance
- Metrics: lift vs. control, not vanity stats
- Latency and cost at your expected scale
- Audit logs, SOC 2/ISO status, and retention policy
- Human override and version control
Metrics that actually matter
- Cycle time: Brief → first draft → approved asset
- Approval rate: Drafts accepted without major edits
- Content hit rate: % of variants beating control
- Error and rework: Brand or factual fixes per 100 outputs
- Unit economics: Cost per asset, cost per test, MER (marketing efficiency ratio) impact
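For teams that want to see how these roll up, here is a minimal Python sketch that computes them from a per-asset log. The field names and the two sample records are illustrative assumptions.

```python
import statistics
from datetime import datetime

# Minimal sketch of rolling the metrics above up from a per-asset log.
assets = [
    {"brief_at": "2025-09-01T09:00", "final_at": "2025-09-02T15:00",
     "approved": True, "beat_control": True, "cost": 120.0, "fixes": 1},
    {"brief_at": "2025-09-01T10:00", "final_at": "2025-09-03T11:00",
     "approved": True, "beat_control": False, "cost": 95.0, "fixes": 3},
]

def hours_between(start: str, end: str) -> float:
    """Cycle time in hours from brief to final asset."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

summary = {
    "median_cycle_hours": statistics.median(hours_between(a["brief_at"], a["final_at"]) for a in assets),
    "approval_rate": sum(a["approved"] for a in assets) / len(assets),
    "hit_rate_vs_control": sum(a["beat_control"] for a in assets) / len(assets),
    "fixes_per_100_outputs": 100 * sum(a["fixes"] for a in assets) / len(assets),
    "cost_per_asset": sum(a["cost"] for a in assets) / len(assets),
}
print(summary)
```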
Signals to watch through 2025
- Interoperability: Can your stack pass briefs, assets, and approvals between tools without manual copy-paste?
- Search changes: How AI answers affect branded and non-brand queries; shift budgets based on share-of-answers, not just share-of-voice.
- Synthetic data: Useful for training and QA, dangerous for targeting. Keep humans in the loop.
- Attention as a bridge metric: Use it to predict lift, not replace sales outcomes.
Bottom line
AI isn't a silver bullet. It's a set of levers. DMEXCO showed plenty of pitch decks, but the winning teams are shrinking cycle time, improving approval rates, and proving lift with clean experiments.
Pick three use cases. Set guardrails. Measure weekly. Ship.
If you're building skills across your team, explore practical programs for marketers here: AI Certification for Marketing Specialists.