AI in marketing and ethics: A look at business, corporate, and consumer perspectives
Enhanced imagery isn't new. Food stylists used tricks in the 70s (think dish soap for extra beer foam), Photoshop took it further, and now generative AI sits in the spotlight.
At Haymarket's AI Deciphered conference, Julia Walker joined Jason Klein of SeeMe Index and Tristen Norman of Getty Images to cut through the noise. The theme was simple: AI can help, but trust is the real asset, and it's easy to lose.
Labeling isn't optional
Tristen Norman made it plain: disclose AI use. People deserve to know what they're looking at, whether they're experts or casual scrollers. Clear labels reduce confusion, limit outrage, and protect your brand when creative gets ambitious.
If you want a practical system, look at content credentials and provenance standards from the C2PA (Coalition for Content Provenance and Authenticity). Labeling doesn't kill creativity; it keeps your audience on your side.
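To make the labeling idea concrete, here's a minimal Python sketch of a disclosure record attached to an asset. The `build_disclosure_record` helper and its field names are illustrative, not the actual C2PA manifest schema; in production you'd use the official C2PA tooling and spec.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_disclosure_record(asset_bytes: bytes, ai_usage: str, tool_name: str) -> dict:
    """Build a provenance stub for an asset: what was AI-touched, by what, and when.

    Hypothetical schema for illustration only; see c2pa.org for the real spec.
    """
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),  # ties the label to this exact file
        "ai_usage": ai_usage,              # e.g. "ai_generated" vs. "ai_assisted"
        "tool": tool_name,                 # which generator or editor was used
        "labeled_at": datetime.now(timezone.utc).isoformat(),
        "label_text": "Created or edited with AI",  # the consumer-facing disclosure
    }

# Demo with placeholder bytes standing in for an image file.
record = build_disclosure_record(b"<image bytes>", "ai_generated", "internal-genai-v2")
print(json.dumps(record, indent=2))
```

The point of the hash is that the label travels with one specific file: if the asset changes, the record no longer matches, which is the same discipline content credentials enforce.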
The business side: categories, rules, and context
Not every brand can use AI in the same way. Healthcare, financial services, beauty, and skincare face tighter guardrails. Rules for political advertising can be even stricter; some U.S. states limit or ban AI-generated content in campaign materials.
If you advertise in election cycles or regulated categories, check the rules early. The National Conference of State Legislatures tracks developments here: AI and elections policy.
Context matters, too. AI for entertainment is easier to accept: absurd videos and obvious fakes signal themselves. The trouble starts when AI blurs with reality, especially around people, health, or money.
Context and brand risk
Even "ethical" use can spark backlash if it clashes with audience expectations. Coca-Cola felt the heat for AI-made holiday ads, twice. The message: don't use AI as the headline; use it to make the experience better.
Jason Klein put it well: you can harm long-term brand equity by using AI just to say you used it. Focus it on clarity, education, or utility. Make your product easier to understand or your message easier to trust.
The corporate side: tie AI to your governance
Good AI practice shouldn't live in a separate silo. Klein's advice: link AI governance to the corporate governance you already have. Your principles don't change just because the tool did.
Example: SeeMe Index reviewed L'Oréal, which anchors decisions to four pillars: transparency, respect, integrity, and courage. Through that lens, the company chose not to use generative AI to create human faces. It's a clear line that protects models, aligns with values, and gives teams an easy rule to follow.
The consumer side: acceptance is up, authenticity is down
Getty Images has surveyed consumers for years. People are more comfortable seeing AI in feeds and ads now. Exposure helps.
But there's still a gap: 78% don't see AI-generated images as authentic. So yes, you might get reach and clicks, but trust is still at stake. Treat AI like a visual effect, not a substitute for reality, especially with people.
A practical playbook for marketers
- Set a disclosure standard: consistent labels on AI-assisted and AI-generated assets. Use content credentials (see C2PA).
- Draw clear red lines: e.g., no synthetic human faces, mandatory consent for likeness, no AI in testimonials, no AI in before/after claims.
- Tier risk by use case: low stakes (backgrounds, alt crops), medium (product renders), high (people, health, finance, politics). A tiering sketch follows this list.
- Run legal and regulatory checks early for healthcare, finance, beauty, and political markets.
- Use licensed inputs and tools with rights protection and provenance. Keep audit trails.
- Protect representation: review for bias and stereotyping; stress-test inclusivity. Tools like SeeMe Index can help evaluate identity and inclusion.
- Institute human review: creative, legal, and brand teams sign off on every AI asset in high-risk categories.
- Measure sentiment, not just CTR: track trust, clarity, and brand lift alongside performance.
- Have a crisis plan: if backlash hits, pause distribution, explain the choices, and show your policy in action.
- Vet vendors: seek indemnification, content credentials, watermarking options, and logs.
- Train your team: ethics, prompts, disclosure, and QA. Make it part of onboarding and refresh quarterly.
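To show how the red lines and risk tiers above might translate into an enforceable rule, here's a hypothetical Python sketch. The tier names, use-case labels, and the `review_requirements` helper are all illustrative; map them to your own categories and sign-off chain.

```python
# Red lines from the playbook: use cases that are never produced with AI.
RED_LINES = {"synthetic_human_face", "testimonial", "before_after_claim"}

# Risk tiers from the playbook; the groupings here are examples, not a standard.
RISK_TIERS = {
    "low":    {"background", "alt_crop"},
    "medium": {"product_render"},
    "high":   {"people", "health", "finance", "politics"},
}

def review_requirements(use_case: str) -> str:
    """Map a use case to the review an AI asset needs before it ships."""
    if use_case in RED_LINES:
        return "blocked: policy red line, do not produce"
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            if tier == "high":
                return "high risk: creative + legal + brand sign-off required"
            if tier == "medium":
                return "medium risk: creative review + disclosure label"
            return "low risk: disclosure label only"
    return "unclassified: escalate to governance team"

print(review_requirements("product_render"))  # medium risk: creative review + disclosure label
print(review_requirements("testimonial"))     # blocked: policy red line, do not produce
```

The design choice worth copying is the fallback: anything unclassified escalates rather than silently passing, so new use cases get a governance decision instead of a loophole.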
Pre-flight checklist for any AI asset
- Purpose: Why use AI here? What problem does it actually solve?
- Truth: Could this mislead a reasonable person?
- Disclosure: Is the label clear, visible, and consistent across channels?
- People: Do we have consent for any likenesses? Are portrayals fair and representative?
- Rights: Are training data, references, and outputs properly licensed?
- Review: Did creative, legal, and brand approve? Is there a provenance record?
- Plan B: If sentiment turns, what do we pause, and what do we say?
- Measurement: What will we learn, and how will we improve the next round?
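One way to operationalize this checklist is as a hard gate in your asset workflow. The sketch below is hypothetical: the `PreflightCheck` fields mirror the questions above, and an asset ships only when every answer passes.

```python
from dataclasses import dataclass, fields

@dataclass
class PreflightCheck:
    """Illustrative pre-flight gate; each flag is a reviewer's yes/no answer."""
    purpose_defined: bool     # Purpose: why use AI here, and what does it solve?
    not_misleading: bool      # Truth: True means it could NOT mislead a reasonable person
    label_clear: bool         # Disclosure: label clear, visible, consistent across channels
    likeness_consent: bool    # People: consent for likenesses, fair portrayals
    rights_cleared: bool      # Rights: training data, references, outputs licensed
    reviews_signed_off: bool  # Review: creative, legal, and brand approved; provenance logged
    rollback_plan: bool       # Plan B: know what to pause and what to say
    measurement_plan: bool    # Measurement: know what we'll learn for the next round

def failed_checks(check: PreflightCheck) -> list[str]:
    """Return the names of failed checks; an empty list means cleared to ship."""
    return [f.name for f in fields(check) if not getattr(check, f.name)]

failures = failed_checks(PreflightCheck(
    purpose_defined=True, not_misleading=True, label_clear=True,
    likeness_consent=True, rights_cleared=False, reviews_signed_off=True,
    rollback_plan=True, measurement_plan=True,
))
print(failures or "cleared to ship")  # ['rights_cleared']
```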
AI can speed production and expand creative range. But trust is the constraint, and it compounds. Lead with disclosure, align with your values, and use the tech to make the customer's experience clearer and more useful.
If your marketing team needs structured, ethics-first AI training and templates, explore our AI Certification for Marketing Specialists. You can also browse role-based options here: AI courses by job.