'He comes from our side': Union CEO calls Solomon an AI 'cheerleader,' urges a slower rollout
A union leader just called Solomon an AI "cheerleader" and asked him to slow down the strategy. That's a signal. Workers want a seat at the table, not a post-hoc memo. Executives want momentum without a public fight or legal risk.
Here's how to cool the temperature, keep progress moving, and earn trust from the people who actually do the work.
What this moment means for executives and writers
- Speed vs. stability: Rapid AI rollout without policy invites revolt, rework, and PR blowback.
- IP and consent are non-negotiable: Datasets, training sources, and attribution must be clear.
- Jobs and credit: Writers need clarity on roles, authorship, and pay when AI is in the loop.
- Reputation risk: "AI cheerleader" sticks if leadership skips governance and dialogue.
A 90-day "slowdown without stalling" plan
Pause performative launches. Keep controlled pilots. Build guardrails in parallel.
- Weeks 1-2: Stabilize
- Freeze net-new AI deployments. Keep only in-flight pilots with risk sign-off.
- Publish 1-page AI principles: consent, attribution, safety, and accountability.
- Inventory all AI use (tools, prompts, data sources). Produce a risk heatmap (a minimal sketch follows this plan).
- Form a cross-functional council with writer reps, legal, security, and product.
- Weeks 3-4: Pilot with guardrails
- Pick 2-3 low-risk pilots (e.g., internal summarization, research assistance).
- Define success metrics: time saved, error rate, revision cycles, satisfaction.
- Set human-in-the-loop checkpoints. Require source citations for factual claims.
- Run basic red-teaming for bias, plagiarism, and hallucination.
- Weeks 5-8: Contracts and data policy
- Require dataset provenance from vendors. Block training on member work without consent.
- Update writer contracts: usage boundaries, credit rules, opt-out, and compensation models.
- Adopt disclosure standards: when AI assists, say so. Define acceptable byline practices.
- Add watermarking or logging to trace AI-assisted outputs across the workflow (see the logging sketch after this plan).
- Weeks 9-12: Training and communication
- Run training for editors and writers on prompt hygiene, source checks, and handoffs.
- Hold monthly town halls with the council. Publish pilot dashboards and lessons learned.
- Share an external statement with clear commitments and pilot scope - no hype.
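To make the Weeks 1-2 inventory and heatmap concrete, here is a minimal sketch in Python (referenced above). Everything in it is illustrative: the tool names, the record fields, and the 1-5 likelihood/impact scale are assumptions, not a prescribed schema.

```python
# Minimal AI-use inventory and risk heatmap sketch.
# Tool names, fields, and the 1-5 scoring scale are illustrative assumptions.

inventory = [
    {"tool": "LLM summarizer", "use": "internal summaries", "data": "company-owned docs",
     "likelihood": 2, "impact": 2},
    {"tool": "Drafting assistant", "use": "outline drafts", "data": "writer prompts",
     "likelihood": 3, "impact": 3},
    {"tool": "Vendor fine-tune", "use": "style matching", "data": "member archives",
     "likelihood": 4, "impact": 5},
]

def risk_band(score: int) -> str:
    """Map a likelihood x impact score (1-25) to a heatmap band."""
    if score >= 15:
        return "RED"
    if score >= 8:
        return "AMBER"
    return "GREEN"

# Sort highest-risk first so the council reviews RED items at the top.
for item in sorted(inventory, key=lambda i: -(i["likelihood"] * i["impact"])):
    score = item["likelihood"] * item["impact"]
    print(f'{risk_band(score):5} {score:>2}  {item["tool"]}: {item["use"]} ({item["data"]})')
```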
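And a similarly minimal sketch of the Weeks 5-8 logging idea: an append-only provenance log that records which tool touched which document. The JSON-lines format, field names, and log path are assumptions, not a standard; hashes stand in for raw text so the log itself doesn't leak drafts.

```python
# Append-only provenance log for AI-assisted outputs (sketch).
# The JSON-lines format, field names, and log path are assumptions.
import datetime
import hashlib
import json
import pathlib

LOG_PATH = pathlib.Path("ai_provenance.jsonl")  # hypothetical log location

def log_ai_assist(doc_id: str, tool: str, prompt: str, output: str, reviewer: str) -> None:
    """Record which tool touched which document, hashing content instead of storing it."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "doc_id": doc_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewer": reviewer,  # enforces the human-in-the-loop checkpoint
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_assist("brief-0142", "internal-summarizer", "Summarize Q3 notes", "draft text", "editor@example.com")
```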
Clear guardrails reduce conflict
- Allowed (with human review): Ideation prompts, style suggestions, grammar, outline drafts, internal summarization of company-owned material.
- Restricted: Factual claims without citations, sensitive topics, legal/financial advice, personalization using private user data.
- Prohibited: Training on member or third-party content without consent, auto-bylines, ghostwriting to replace contracted work.
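If you want these tiers to be enforceable rather than aspirational, they can live as a machine-checkable policy table. Here is a sketch with use-case labels that mirror the list above; the labels, function, and routing decisions are assumptions, not a vendor API.

```python
# Guardrail tiers as a machine-checkable policy table (illustrative sketch).
# Use-case labels mirror the tiers above; names and routing are assumptions.

POLICY = {
    "allowed_with_review": {"ideation", "style_suggestions", "grammar",
                            "outline_draft", "internal_summary"},
    "restricted": {"uncited_factual_claims", "sensitive_topics",
                   "legal_financial_advice", "private_data_personalization"},
    "prohibited": {"train_on_member_content_without_consent",
                   "auto_byline", "replacement_ghostwriting"},
}

def check(use_case: str, human_reviewed: bool) -> str:
    """Return a decision for a proposed AI use case."""
    if use_case in POLICY["prohibited"]:
        return "BLOCK"
    if use_case in POLICY["restricted"]:
        return "ESCALATE"  # route to the governance council
    if use_case in POLICY["allowed_with_review"]:
        return "ALLOW" if human_reviewed else "NEEDS_REVIEW"
    return "ESCALATE"  # unknown use cases default to review, not approval

print(check("outline_draft", human_reviewed=True))  # ALLOW
print(check("auto_byline", human_reviewed=True))    # BLOCK
```

Note the last line of the function: unknown use cases escalate rather than pass, which keeps the burden of proof on the new tool, not the reviewer.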
Metrics that prove you're responsible
- Time saved per deliverable and revision count per draft.
- Fact error rate and correction turnaround time.
- Plagiarism/IP incidents and remediation speed.
- Writer satisfaction and opt-in rates for AI-assisted tasks.
- Approval SLAs from legal/compliance on AI-touched work.
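All of these are simple enough to compute from a per-draft review log. A sketch, assuming editors record a few fields per deliverable; the field names and sample numbers are invented:

```python
# Pilot metrics computed from a per-draft review log (sketch).
# Record fields and values are assumptions about what editors would track.

drafts = [
    {"minutes_saved": 25, "revisions": 2, "fact_errors": 0, "ai_opt_in": True},
    {"minutes_saved": 40, "revisions": 4, "fact_errors": 1, "ai_opt_in": True},
    {"minutes_saved": 0,  "revisions": 3, "fact_errors": 0, "ai_opt_in": False},
]

n = len(drafts)
print(f"avg minutes saved per deliverable: {sum(d['minutes_saved'] for d in drafts) / n:.1f}")
print(f"avg revisions per draft:           {sum(d['revisions'] for d in drafts) / n:.1f}")
print(f"fact errors per draft:             {sum(d['fact_errors'] for d in drafts) / n:.2f}")
print(f"AI opt-in rate:                    {sum(d['ai_opt_in'] for d in drafts) / n:.0%}")
```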
Contract language starters (run through Legal)
- Dataset provenance: "Vendor warrants training data excludes member or third-party content without documented consent."
- Consent and compensation: "Use of a writer's work for model training requires prior written consent and negotiated compensation."
- Authorship and credit: "AI output is a tool; human creator retains authorship where material human contribution exists."
- Disclosure: "Publisher will disclose material AI assistance in published works per house style."
For union leaders and writers: make the ask concrete
- A seat on the governance council, with veto power over prohibited use cases.
- Public commitment to consent, credit, and compensation for training and outputs.
- Shared dashboard of pilots, metrics, and incident reports.
- Clear escalation path for disputed AI use and rapid takedown rights.
For AI champions like Solomon: trade cheerleading for strategy
- Lead with risk metrics and governance, not model features.
- Co-own pilots with editorial leads; let writers demo wins.
- Over-communicate limitations; publish what AI will not do here.
- Tie outcomes to business goals writers care about: fewer edits, faster approvals, better briefs.
Helpful references
- NIST AI Risk Management Framework - a practical structure for risk, controls, and evaluation.
- WGA AI provisions (2023 MBA summary) - a baseline for consent, credit, and compensation discussions.
Next steps
- Adopt the 90-day plan and name accountable owners this week.
- Publish principles, stand up the council, and announce pilot scope in one memo.
- Make metrics public internally. Let the data cool the debate.
Want playbooks and tools specific to your role? Explore AI for Writers and AI for Executives & Strategy for policies, templates, and training paths that support a responsible rollout.