Generative AI Won't Scale Until Companies Rebuild How They Work, Says Accenture CEO
Accenture CEO Julie Sweet says AI fails to scale unless companies rewire their mindsets, structures, and processes. Proofs of concept work; bolting AI onto legacy operations stalls impact.

Human and Structural Hurdles Are Stalling AI at Scale, Says Accenture's CEO
September 27, 2025 at 11:23 PM GMT+8
Accenture CEO Julie Sweet says the hard part of generative AI isn't the model; it's the organization. The real barrier is rewiring mindsets, structures, and processes that were built for a different era.
Proof-of-concepts work. Scaling fails when companies try to bolt AI onto legacy operations without changing how work is done.
Accenture is making the shift itself, reversing "five decades" of established work structures. The transformation has been tough: senior leaders have left, and long-standing models are being rebuilt.
Data from IndexBox suggests this kind of shift demands significant investment in change management, often beyond the pure technology spend. The core bottleneck is misalignment between strategy, structure, and performance measurement, which slows response to clients and stalls impact.
For Operations Leaders: What to Fix First
- Clarify the business outcomes AI must improve (cycle time, cost to serve, service levels) and align incentives to those outcomes.
- Redesign processes end-to-end with AI "in the loop" instead of adding tools on top of broken workflows.
- Realign org structure: move from siloed functions to cross-functional, product- or process-based squads with clear ownership.
- Rewrite metrics: track flow efficiency and touchless rates, not just departmental utilization.
- Reset governance for speed and safety: standard prompts, human-in-the-loop checkpoints, model change controls, and data guardrails.
- Prepare for leadership and talent shifts: new roles, re-skilling, and, when needed, hard calls on legacy roles.
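The human-in-the-loop checkpoints mentioned above can be made concrete in routing logic. The sketch below is a hypothetical illustration, not Accenture's implementation; the `Draft` fields, confidence threshold, and flag names are all assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Draft:
    """A hypothetical AI-generated work item awaiting review."""
    text: str
    confidence: float                       # model's self-reported score (assumed available)
    guardrail_flags: List[str] = field(default_factory=list)  # e.g. ["pii_detected"]

CONFIDENCE_FLOOR = 0.85  # illustrative threshold; tune per process

def route(draft: Draft) -> str:
    """Auto-approve only clean, high-confidence drafts.

    Everything else falls through to a human reviewer, the default
    path, so new failure modes are caught by people, not shipped.
    """
    if draft.guardrail_flags or draft.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto_approve"
```

The design choice worth noting: the safe path (`human_review`) is the fall-through default, so any unanticipated condition routes to a person rather than to production.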
A 90-Day Operating Model Playbook
- Weeks 1-2: Pick 2-3 high-volume, rules-heavy processes where AI can remove rework. Define the target state and the new owner.
- Weeks 3-4: Map the current workflow. Remove steps before adding AI. Decide where AI drafts, routes, or summarizes, and where humans decide.
- Weeks 5-8: Stand up a cross-functional squad (Ops, Data, Risk, IT). Build standard prompts, quality checks, and exception paths.
- Weeks 9-10: Pilot at real volume with guardrails. Track cycle time, touchless rate, error rate, and SLA adherence daily.
- Weeks 11-12: Decide to scale or stop. If scaling, codify playbooks, training, and metrics; move ownership to the line team.
Metrics That Matter
- Cycle time and queue time per process
- Touchless/automation rate and rework rate
- SLA adherence and defect rate
- Cost to serve per transaction
- Compliance incidents and audit exceptions
- User adoption and override rates (signal of trust and model quality)
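Most of these metrics fall out of ordinary transaction logs. A minimal Python sketch, assuming a hypothetical log record with open/close timestamps, a touch count, a rework flag, and an SLA target:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class Transaction:
    """Hypothetical per-transaction log record (field names are assumptions)."""
    opened: datetime
    closed: datetime
    human_touches: int   # manual interventions during processing
    reworked: bool       # sent back for correction at least once
    sla: timedelta       # target turnaround for this transaction type

def process_metrics(txns: List[Transaction]) -> Dict[str, float]:
    """Compute cycle time, touchless rate, rework rate, and SLA adherence."""
    n = len(txns)
    cycle_times = [t.closed - t.opened for t in txns]
    return {
        "avg_cycle_hours": sum(ct.total_seconds() for ct in cycle_times) / n / 3600,
        "touchless_rate": sum(1 for t in txns if t.human_touches == 0) / n,
        "rework_rate": sum(1 for t in txns if t.reworked) / n,
        "sla_adherence": sum(1 for t in txns if (t.closed - t.opened) <= t.sla) / n,
    }
```

Tracking these daily during the Weeks 9-10 pilot, rather than in a monthly report, is what lets the scale-or-stop decision in Weeks 11-12 rest on evidence.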
Budget and Risk: Plan for the Real Costs
- Expect change management, process redesign, and training to rival, and sometimes exceed, your tech spend.
- Bake in ongoing costs: model updates, prompt libraries, monitoring, and risk reviews.
- Institutionalize controls: data classification, red-teaming, incident response, and vendor oversight.
Leadership Principles That Make This Work
- Set a few non-negotiable outcomes and clear decision rights. Remove layers that slow them down.
- Reward speed to learning, not just perfect accuracy. Scale what works; retire what doesn't, fast.
- Model the behavior: courage to change roles and structures; humility to test, measure, and iterate.
Sweet's message is blunt: to capture AI's upside, you must be willing to rewire how the organization works. Tools help, but structure, incentives, and culture decide whether you scale or stall.
Source: IndexBox Market Intelligence Platform