Best practice guidance unveiled for responsible use of AI in advertising
New voluntary guidelines from the Online Advertising Taskforce set a clear bar for using generative AI in ads: effective, ethical, and trusted. Released on 5 February 2026, the guidance gives brands, agencies, and media owners practical examples and guardrails for real-world use.
AI is already part of day-to-day marketing. In the 2025 Language of Effectiveness survey, 57.5% of respondents said they use AI to generate content and creative, and adoption will only grow. The point of the guide is to keep trust high as the industry scales AI responsibly.
What's in the guide
The working group behind the guidance included government, industry leaders, and the Advertising Standards Authority (ASA). The guidance builds on the ISBA/IPA principles (2023) and complements UK law, including the UK GDPR, alongside the current advertising codes.
You can download the full guidance on the Advertising Association website: Advertising Association's best practice guidance. For codes and rulings, see the ASA.
The eight principles at a glance
- Transparency: Be clear when AI is used and when content is synthetic.
- Responsible use of data: Lawful basis, consent where required, secure handling.
- Preventing bias: Identify and reduce unfair outcomes across audiences.
- Driving oversight: Human review, accountability, and approval paths.
- Promoting societal wellbeing: Avoid harm, misrepresentation, and manipulative experiences.
- Ensuring brand safety: Protect placement, context, and reputation.
- Environmental stewardship: Track and reduce AI-related emissions and compute use.
- Continued monitoring: Audit, test, and improve over time.
Translate the principles into action
- Set a disclosure policy: Label synthetic images, voices, and personas, and add "This ad uses AI-generated content" wherever AI use is material to how the ad will be interpreted.
- Tighten data governance: Maintain a data inventory, confirm lawful basis, and block sensitive categories unless explicitly allowed. Add vendor DPAs and model-specific privacy notes.
- Reduce bias before launch: Use diverse prompts and test sets. Compare outputs across age, gender, ethnicity, and region (a minimal spot check is sketched after this list). Escalate issues to a review panel and document fixes.
- Keep a human in the loop: Define who approves prompts, reviews outputs, and signs off claims. Require legal checks for high-risk claims and regulated sectors.
- Protect people and society: Avoid deceptive deepfakes, risky health/financial claims, or unrealistic body standards. Add realism disclaimers where needed.
- Brand safety and suitability: Use inclusion and exclusion lists, contextual filters, and UGC screening. Verify training-data licensing for assets in your ads.
- Lower the carbon cost: Prefer efficient models, batch runs, and caching. Ask partners for emissions estimates and green compute options.
- Monitor and learn: Track complaints, false positives/negatives, and model drift. Run post-campaign audits and refresh guardrails quarterly.
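The bias check above doesn't need heavy tooling to start. Here is a minimal Python sketch of a pre-launch spot check, assuming you have already collected generated ad copy per audience segment; the `outputs_by_segment` data, the flagged-term list, and the 0.2 threshold are illustrative choices, not part of the guidance:

```python
from statistics import mean

# Illustrative only: generated ad copy grouped by audience segment.
# In practice, populate this from your own prompt/output logs.
outputs_by_segment = {
    "18-24": ["Fresh looks for your feed", "Level up your style"],
    "55+":   ["Simple fashion for the golden years", "Classic comfort, easy returns"],
}

# Hypothetical list of terms your review panel has flagged as stereotyping.
FLAGGED_TERMS = {"golden years", "elderly", "for her age"}

def flagged_rate(texts):
    """Share of outputs containing at least one flagged term."""
    hits = sum(any(term in text.lower() for term in FLAGGED_TERMS) for text in texts)
    return hits / len(texts) if texts else 0.0

rates = {seg: flagged_rate(texts) for seg, texts in outputs_by_segment.items()}
overall = mean(rates.values())

for seg, rate in rates.items():
    # Flag segments well above the cross-segment average; 0.2 is an
    # arbitrary starting threshold your review panel should tune.
    status = "REVIEW" if rate > overall + 0.2 else "ok"
    print(f"{seg}: flagged rate {rate:.0%} ({status})")
```

A lexical check like this is deliberately crude: it catches only known phrases, so treat it as a tripwire that routes outputs to the human review panel, not a substitute for it.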
What marketers should do this week
- Download the guide and share a one-page summary with creative, media, legal, and data teams.
- Appoint an AI lead with clear ownership of policy, tooling, and incident response.
- Run a controlled pilot: one campaign, two AI use cases (e.g., asset variations and media optimization), and a checklist against the eight principles.
- Update contracts and briefs: disclosure, IP and training data warranties, bias testing, safety controls, and environmental reporting.
- Train the team on prompt quality, disclosure standards, and bias checks. Log prompts and outputs for audit (see the logging sketch after this list).
- Measure trust: add ad accuracy, complaint rate, and disclosure visibility to your KPI stack alongside CPA and ROAS.
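To make "log prompts and outputs for audit" concrete, here is a minimal sketch of an append-only JSONL audit trail in Python. The field names, file path, and example values are illustrative assumptions, not a prescribed schema:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # illustrative location

def log_generation(campaign: str, model: str, prompt: str, output: str,
                   approved_by: str | None = None) -> None:
    """Append one prompt/output pair to a JSONL audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "campaign": campaign,
        "model": model,
        "prompt": prompt,
        "output": output,
        # Hash lets you spot later tampering or dedupe identical outputs.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "approved_by": approved_by,  # human-in-the-loop sign-off
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage during a pilot campaign (hypothetical values):
log_generation(
    campaign="spring-pilot-01",
    model="example-model-v1",
    prompt="Three headline variants for eco-friendly trainers",
    output="Step lighter. / Kind to the planet. / Run green.",
    approved_by="j.smith",
)
```

One record per line keeps the log greppable and each row self-describing, which makes post-campaign audits and complaint investigations straightforward.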
Trust is the focus this year for the industry, and rightly so. As the ASA puts it, advertising must remain "legal, decent, honest and truthful," even as AI does more of the work.
If your team needs hands-on upskilling to put this guidance into practice, explore the AI certification for marketing specialists.