AI ads in 2026: opportunity and risk on the same screen
Media leaders are leaning into AI with eyes wide open. New research from Integral Ad Science (IAS) shows 61% are excited about advertising next to AI-generated content, while 53% say unsuitable adjacencies will be one of the toughest challenges in 2026.
The message is clear: budgets will follow social and video, but quality controls will decide who actually wins.
What topped the priority list
- Formats: Digital video (88%), digital display (85%), digital audio (62%).
- Environments: Social media (84%), influencer marketing (61%), video livestreaming (56%).
- Innovation potential: Social (46%), digital video (34%), digital display (32%), influencer marketing (30%).
Brand safety: the growing red line
As AI content floods social video, brand protection moves from nice-to-have to non-negotiable. 87% say creator suitability matters when advertising next to digital video. 83% say rising AI-generated content on social requires monitoring.
Only 2% flat-out refuse to advertise within AI-generated content. Most want control, not avoidance.
Where experts draw the line on AI content
- Inaccurate or hallucinated information (59%).
- Spammy, cluttered experiences (56%).
- Unknown or newly registered domains with no verifiable editorial team (52%).
- Content likely to attract bot traffic (51%).
Attitudes that reveal the split
- 36% are cautious and will take extra precautions before buying AI-generated content adjacencies.
- 46% see unsuitable AI content as a serious threat to media quality.
- 45% will assess AI-generated content the same as any other content type.
- 28% are fine with AI content adjacencies as long as the content is safe and suitable.
Channel-specific takeaways you can act on
Social media
- Top priority, but also the most challenged: 52% say social will face serious issues in the next 12 months.
- Influencer marketing is rising fast: 78% say it's increasingly important; 82% will scrutinize influencer suitability.
- Measurement matters: 85% value viewability; 77% value attention; 70% say poor transparency will reduce spend.
- Use platform tools and third-party controls. Comment controls and pre-bid filters now exist and should be standard in every plan.
Digital video
- Programmatic video spend will grow with social video consumption (84% agree).
- Brand safety will be a bigger concern as volume rises (83%).
- Third-party tools are essential to avoid deepfake adjacencies (82%). Require content-level analysis, not just domain checks.
Connected TV (CTV)
- Ad fraud risk is rising with inventory expansion (83%).
- Brand safety can degrade as the number of sellers increases (83%).
- Viewability risk is real: 75% say CTV ads are vulnerable to low viewability.
- Push for app-level and content-level transparency. Adopt pre-bid fraud filters and post-bid verification.
Retail media
- Media quality is fundamental: ad fraud (83%) and viewability (83%) top the list of performance evaluation criteria.
- Brand suitability is core to performance, not a side check.
- AI content risk is creeping in: 81% say rising AI-generated content requires monitoring, including influencer content (82%).
The measurement reset: content-level or bust
The Media Rating Council tightened the rules in Oct 2025: if a verification service claims brand safety, it must analyze images, video, and audio at the content level. Property-level checks no longer qualify.
That aligns with how platforms and verification providers are building controls for Facebook, Instagram, TikTok, and more. For buyers, this shifts the RFP question from "Do you have brand safety?" to "Show me your content-level coverage by platform, format, and language."
IAB Tech Lab is also mapping AI use cases across planning, creative, buying, and measurement, which is useful context as you evaluate vendor claims.
Why this is happening
- AI production scaled across 2025, accelerating content volume and variability.
- ML classification now has to parse frames, audio, and text signals at bid-speed across billions of impressions.
- Agentic AI hit ad platforms, changing how campaigns are planned and optimized.
The 2026 media quality playbook
- Adopt a clear adjacency policy for AI content: Define allowed, restricted, and blocked AI content types. Treat hallucinations, deepfakes, and spammy UX as hard stops.
- Demand content-level controls: Pre-bid exclusions plus post-bid verification for social, video, and CTV. Ask for coverage by placement, language, and media type.
- Creator vetting: Require identity, editorial ownership, and brand suitability checks for influencers and creators. Reassess quarterly.
- MFA and fraud enforcement: Use curated PMPs and blocklists. Monitor invalid traffic and MFA indicators weekly; escalate to supply partners.
- Attention + viewability: Optimize beyond viewability alone. Track attention time, scroll depth, and completion rates. Tie attention to outcomes where possible.
- AI disclosure and labeling: Ask platforms and partners to flag AI-generated or AI-edited content. Apply different thresholds for news vs. entertainment vs. UGC.
- Creative suitability mirroring: Match your creative tone to the content grade you're buying. Reduce risk by aligning messaging and sensitivity settings.
- Retail media standards: Push for consistent fraud, viewability, and attribution reporting across networks. Avoid apples-to-oranges dashboards.
- CTV deal hygiene: Prefer app-direct deals, verified supply paths, and transparent sellers. Audit seller lists and exclude spoof-prone inventory.
- Contingency rules: If brand safety transparency drops, auto-pause or shift to trusted lists. Codify this in IOs and platform rules.
- Legal and comms sync: Pre-approve response protocols for deepfake or misinformation incidents.
- Quarterly reviews: Revisit thresholds, blocklists, and creator rosters as AI content patterns shift.
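The contingency rules above are easiest to enforce when they are codified rather than left to manual review. Here is a minimal Python sketch of an auto-pause rule keyed to a partner's brand safety transparency. Everything here is illustrative: the partner names, the idea of a single `transparency_score`, and the thresholds are assumptions you would replace with your own reporting feed and agreed floors.

```python
# Hypothetical contingency rule: pause or reroute spend when brand safety
# transparency drops below agreed floors. All names and thresholds are
# illustrative placeholders, not values from the IAS report.

from dataclasses import dataclass

@dataclass
class SupplyPartner:
    name: str
    transparency_score: float  # 0.0-1.0: share of impressions with content-level reporting
    on_trusted_list: bool

def contingency_action(partner: SupplyPartner,
                       pause_below: float = 0.5,
                       restrict_below: float = 0.8) -> str:
    """Return the action your platform rules should take for this partner."""
    if partner.transparency_score < pause_below:
        return "pause"  # hard stop: auto-pause spend
    if partner.transparency_score < restrict_below:
        # shift budget to curated, trusted supply only
        return "continue" if partner.on_trusted_list else "trusted_list_only"
    return "continue"

partners = [
    SupplyPartner("exchange_a", 0.92, True),
    SupplyPartner("exchange_b", 0.65, False),
    SupplyPartner("exchange_c", 0.30, True),
]
for p in partners:
    print(p.name, contingency_action(p))
```

The point of writing it this way is that the same rule can live in an IO as plain language and in your activation layer as code, so "auto-pause or shift to trusted lists" happens without a meeting.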
Budgets will still grow, but controls will decide performance
Global ad spend is projected to rise in 2026, with social media reaching an estimated $306.4B and growing 14.9%. Investment is not slowing. It's moving toward social video, creators, and AI-assisted media buying.
The teams that win will pair that growth with firm quality guardrails: content-level verification, creator suitability, stricter MFA filtering, and attention-based optimization.
Quick stats to brief your leadership
- 61% excited about AI-generated content opportunities; 53% say unsuitable adjacencies are a top 2026 challenge.
- 87% care about creator suitability in digital video; 83% say AI-generated social content needs monitoring.
- Only 2% won't consider AI content adjacencies at all.
- CTV risks: 83% fraud concern, 83% brand safety concern, 75% low viewability vulnerability.
- Retail media: 83% prioritize fraud and viewability; 81% worry about AI content growth on networks.
What to update this quarter
- RFP and partner criteria: Content-level analysis required. Share proof points and platform coverage.
- Platform settings: Turn on pre-bid controls for social video and CTV. Enforce comment controls where available.
- Supply strategy: Migrate spend to curated supply with transparent seller lists. Reduce unknown domains.
- Creator program: Implement suitability scoring, usage rights, and crisis clauses.
- Measurement stack: Combine viewability, attention, fraud, and suitability metrics. Use them to actually throttle spend.
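The last point, using combined quality metrics to throttle spend rather than just report on it, can be sketched as a simple gating function. This is a hedged illustration only: the metric names, the hard floors, and the weights are hypothetical and would need calibration against your own benchmarks.

```python
# Illustrative spend throttle combining media quality signals into one
# multiplier. Thresholds and weights are placeholders, not standards.

def spend_multiplier(viewability: float, attention_sec: float,
                     ivt_rate: float, suitability: float) -> float:
    """Return a 0.0-1.0 multiplier applied to a line item's budget."""
    if ivt_rate > 0.05 or suitability < 0.7:       # hard quality floors
        return 0.0                                  # stop spend entirely
    score = 0.0
    score += 0.4 if viewability >= 0.7 else 0.2     # viewability signal
    score += 0.4 if attention_sec >= 2.0 else 0.2   # attentive seconds per impression
    score += 0.2                                    # base allocation for passing floors
    return round(score, 2)

print(spend_multiplier(0.82, 2.5, 0.01, 0.9))  # healthy placement: full budget
print(spend_multiplier(0.55, 1.0, 0.01, 0.9))  # weak metrics: throttled
print(spend_multiplier(0.82, 2.5, 0.08, 0.9))  # high invalid traffic: stopped
```

The design choice worth copying is the shape, not the numbers: fraud and suitability act as binary gates, while viewability and attention scale spend continuously, which matches the playbook's split between hard stops and optimization levers.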
Timeline signals
- Oct 2025: IAS fielded the survey with YouGov; MRC tightened brand safety qualification to content-level analysis.
- Nov 2025: Industry studies flagged rising suitability concerns and a gap between AI adoption plans and confidence.
- Dec 8, 2025: IAS published the 2026 Industry Pulse Report.
- 2026: Social media ad spend projected to hit $306.4B.
Useful resources
- Media Rating Council - standards and guidance on brand safety and measurement.
- IAB Tech Lab - AI use cases and specs influencing platform and vendor roadmaps.
Upskill your team
If you're building AI guardrails for media and creative, structured training helps speed up the rollout and avoid common pitfalls.
The takeaway: lean into social video and creator-led formats, but tighten control at the content level. AI can scale distribution and outcomes, but only if your safety, suitability, and measurement rules scale with it.