How creatives can lead a humanistic approach to AI
Every team is poking at generative AI, expecting magic on the first try. Mixed results aren't a failure - they're a sign the tool is still maturing. This is the time to experiment with intent. Keep the human at the center and use AI to stretch thinking, not replace it.
What actually works right now
The strongest AI brand moments start with a human insight. Think of campaigns that use AI to highlight brand meaning, rather than outsourcing the idea to a model. The point isn't "AI made this." It's "AI helped us show you why this matters."
Flip side: asking AI to generate finished creative usually takes heavy human cleanup - and often still feels off. That may improve, but for now, AI is great for exploration and weak at execution. It also stirs anxiety around job loss, especially when AI-generated models stand in for human talent. Sensitivity and judgment are part of the brief.
How to use AI now: test, learn, and build confidence
- Form a cross-functional pilot team. Designers, writers, strategists, producers, and devs should track tools, run structured tests, review legal/ethical issues, and collect case studies to guide the org.
- Set clear guidelines. Assign a legal/privacy lead. Map where AI is acceptable, where it's not, and how you'll disclose it. Survey clients and audiences so your use matches their comfort level.
- Approve tools and an experimentation process. Maintain a licensed tool list. Create a simple intake for testing new ones. Expect failures. The goal is skill-building and pattern recognition, not perfection.
- Be transparent with clients. Offer opt-outs. Even AI note-taking can clash with privacy policies. Make disclosure a habit, not a surprise.
- Protect data. Have IT review platform policies. Use enterprise accounts where content stays private. Train teams on what can and cannot be uploaded.
"Faster and cheaper" creative? Not yet
AI can speed up parts of the process, but only if you accept lower standards - which you shouldn't. We've seen models help with image tweaks, visual exploration, and generating dozens of ways to depict a concept. For final packaging art, production files, and nuanced systems work, humans still win by a mile.
For copy, AI is helpful for structure, outlines, headlines, keywords, and cleanup. But the tone of AI-written posts is starting to wear on people. Keep your writing muscles strong. Think of AI as a great cover band - useful, catchy, and never the original artist.
Coding is where AI shines most. It can draft scaffolding, refactor, and spot bugs. You still need the chops to write solid prompts, review outputs, and fix what's broken.
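One concrete way to "review outputs" is to wrap AI-drafted code in tests before it ships. A minimal sketch - the helper below stands in for hypothetical AI-drafted scaffolding, and the assertions are the human review:

```python
# Hypothetical example: AI drafts a small helper; a human reviews it
# by encoding the required behavior as tests before shipping.

def slugify(title: str) -> str:
    """Turn a headline into a URL slug (AI-drafted; human-reviewed)."""
    # Lowercase letters and digits survive; everything else becomes a space.
    cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in title)
    # Collapse runs of spaces and join with hyphens.
    return "-".join(cleaned.split())

# Human review: tests capture the behavior you actually need,
# including edge cases a model might gloss over.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Multiple   Spaces ") == "multiple-spaces"
assert slugify("") == ""
```

The tests, not the draft, are where the human judgment lives: they force you to decide what "correct" means before you accept the machine's version.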
Mind the human connection gap
Chatbots are polite and efficient - and still annoy people. When we know we're not dealing with a person, we respond differently. We see it in drive-through stunts that derail bots, self-driving cars stuck because humans don't yield, and headlines about people hassling robots.
As creatives, we can't ignore that. Don't replace human touchpoints where emotion, nuance, or trust matter. Use AI to support human ideas and make interactions clearer, kinder, and more useful.
A practical playbook for creative teams
- Define AI "jobs to be done." Mood boards, style mashups, alt concepts, visual variations, content outlines, headline lists, code scaffolding, QA checks.
- Create a prompt library. Save what works. Include brand voice, constraints, audience, and success criteria. Version prompts the way you version files.
- Build an AI-first ideation loop. 1) Human brief, 2) AI explore (wide), 3) Human curate, 4) AI refine (narrow), 5) Human craft, 6) Legal/ethics check, 7) QA in real-world context.
- Separate ideation from production. AI for breadth; humans for polish and feasibility. Never ship without human review.
- Label and log. Track where AI touched the work, what model, and settings. This helps with compliance and internal learning.
- Quality gates. Check for bias, accuracy, copyright risk, and consistency with brand standards. If it feels "almost right," it's not ready.
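The "prompt library" and "label and log" habits above can be as lightweight as two record types. A minimal sketch, assuming a hypothetical schema - field names, model name, and settings here are illustrative, not a standard:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """One versioned entry in a team prompt library (hypothetical schema)."""
    name: str
    version: int
    prompt: str
    brand_voice: str
    constraints: list
    success_criteria: list

@dataclass
class AIUsageLog:
    """One 'label and log' entry: where AI touched the work, with what."""
    asset: str
    model: str
    settings: dict
    prompt_name: str
    prompt_version: int
    human_reviewed: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_usage(log: list, entry: AIUsageLog) -> None:
    """Append a usage entry as plain data, ready to serialize for audits."""
    log.append(asdict(entry))

# Usage: version prompts the way you version files, and log every AI touch.
library = [PromptRecord(
    name="holiday-headlines",
    version=2,
    prompt="Write 10 headline options for the holiday launch...",
    brand_voice="warm, plainspoken",
    constraints=["no superlatives", "max 8 words"],
    success_criteria=["on-brand", "scannable"],
)]
log: list = []
log_usage(log, AIUsageLog(
    asset="q4-campaign/social-post-03",
    model="example-model-v1",          # illustrative model name
    settings={"temperature": 0.7},
    prompt_name="holiday-headlines",
    prompt_version=2,
    human_reviewed=True,
))
print(json.dumps(log[0], indent=2))
```

Even a structure this small answers the compliance questions that matter later: which prompt version produced the asset, which model and settings were used, and whether a human signed off.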
Legal and policy: stay informed
Regulation is moving. Keep a pulse on evolving standards and risk frameworks. A good starting point is the NIST AI Risk Management Framework and the EU's AI Act overview. Build your internal policy to match the strictest markets you serve.
Where to invest your time
- Strategy and insight: Use AI to surface patterns; rely on humans for meaning and choices.
- Concepting: Generate breadth fast; curate hard; push originality yourself.
- Execution: Keep human craftsmanship in the driver's seat. AI assists, you decide.
- Coding and ops: Let AI speed up repetitive work; review every line that ships.
Next steps
- Pick one project this month to run through an AI-assisted process and document the learnings.
- Draft a one-page AI policy, then refine it with legal and client feedback.
- Run a team workshop to build a shared prompt library and define quality gates.
- Level up skills with focused training on prompts, workflow design, and oversight. Look for AI courses curated by role, and for prompt engineering resources your whole team can share.
AI won't replace your taste, instincts, or point of view. It will amplify whatever you bring to it. Lead with humanity, test with rigor, and keep the craft where it belongs - in human hands.