AI in Marketing: The Growing Gap Between Ambition and Action
A recent survey reveals a striking divide in how marketers approach artificial intelligence (AI). While enthusiasm is high, clear policies and practical adoption lag behind. Only about a third of marketers report having official AI policies, with many facing restrictive or nonexistent guidelines.
The 2025 State of AI Marketing Report by HubSpot highlights that just 36% of UK marketers say their companies actively encourage AI use, whereas 31% feel it's only somewhat encouraged. Meanwhile, 20% face restrictive policies and 11% have no policy at all. The top barrier? Over half (53%) cite data privacy and compliance as key concerns.
Excitement vs. Hesitation
Despite these challenges, nearly half (49%) of UK marketers are eager to use AI. Looking ahead, 64% plan to increase AI investment in 2025, and 74% expect most employees will use some form of AI by 2030. Additionally, 69% believe full AI implementation could unlock significant growth.
However, the gap between ambition and action is widening. Chris Camacho, CEO of Cheil UK, points out that brands are aware of AI’s potential but are often paralyzed by uncertainty. Privacy issues, legal grey areas, and lack of internal policies create hesitation. Meanwhile, competitors who experiment and learn early gain advantages.
Sector Differences and Data Privacy
Progress with AI varies by sector. Simon Valcarcel, Marketing Director at Virgin Media O2, explains that their company’s tech background and established data privacy frameworks allow them to move forward cautiously but confidently. They maintain strict controls to ensure AI use respects customer privacy and data security.
Simon notes many marketers face fears about data misuse or exposure because their organizations lack a clear AI strategy. Virgin Media O2 addresses this with a company-wide AI strategy, supported by a centre of excellence and strong governance. This ensures AI exploration happens safely within defined boundaries.
The Role of Agencies and Training
Agencies often find themselves caught between client demand for AI-powered solutions and clients' internal restrictions. Nick Louisson, Director of Agency Services at ISBA, observes that legal and ethical uncertainties, especially around IP and copyright, further complicate adoption.
ISBA’s research shows that while some advertisers have policies on responsible AI use, many still lack comprehensive training and clear guidelines. Staff education is crucial to mitigate risks and enable responsible AI adoption. Understanding company strategy and individual roles in AI rollout is a critical first step.
Generative AI: Legal and Ethical Challenges
Generative AI for content creation remains a sensitive area due to unclear legal frameworks. Richard Glasson, global CEO at Hogarth, says clients are cautious, awaiting clearer regulations before fully embracing generative AI in their workflows.
Clients generally have internal guidelines to avoid unethical AI use and negative publicity. A measured approach is the norm for now, with anticipation that clearer rules will soon allow faster adoption.
Creative Boost Without Regulatory Risk
Will Lion, CSO at BBH London, highlights that AI is mostly used internally to enhance creativity at this stage. Public-facing AI-generated content raises regulatory concerns and potential lawsuits. The current environment resembles the early days of digital music, where legal clarity came only after initial disruption.
Production Companies Face Similar Challenges
Production and post-production companies also encounter hesitancy. Adrienn Major, founder of POD LDN, says clients often ask about AI capabilities but are limited by internal policies. POD addresses this through strict privacy and compliance measures, such as avoiding public server processing and ensuring full consent for talent use.
They seek GDPR-aligned setups across regions and emphasize secure environments for testing AI tools.
Agencies as Guides for Safe AI Adoption
Agencies can help clients navigate AI’s uncertainties. Michael Ruby, Chief Creative Officer at Park & Battery, notes that clients expect agencies to both use AI effectively and provide clear guidance on legal and ethical considerations.
He recommends identifying AI risks and obligations, deciding appropriate use cases, and focusing on impactful applications rather than chasing every new tool.
Chris Camacho stresses that agencies must move beyond selling tech solutions to becoming enablers of safe, smart AI adoption. This involves helping brands define ethical guardrails, launch purposeful pilot programs, and build confidence through clarity. Inaction carries its own risk.
Moving Forward: Permission to Act
Ultimately, waiting for perfect conditions isn’t an option. AI will advance regardless of internal delays. The most successful agencies will be those that empower clients to take bold, calculated steps—creating space for progress rather than demanding perfection.
For marketers seeking practical AI training and policy guidance, resources are available to help build skills and confidence. Check out Complete AI Training for courses tailored to marketing professionals.