PwC trains juniors to supervise AI and take on manager roles

AI is taking routine audit work, and PwC says first-years will act like managers, reviewing AI. Training now leans into judgment, evidence, and controls as pricing and KPIs shift.

Published on: Mar 09, 2026

AI is taking entry-level audit work. PwC is training first-years to operate like managers.

AI is changing the bottom of the org chart. PwC says within three years, junior accountants in its assurance division will act more like managers - reviewing and supervising AI that handles routine audit tasks.

"People are going to walk in the door almost instantaneously becoming reviewers and supervisors," said Jenn Kosar, PwC's AI assurance leader. Data gathering and processing move to machines; human attention shifts to judgment, client context, and edge cases.

What's actually changing at PwC (and why it matters to managers)

  • Role compression: First-years function like fourth-years. The learning curve gets steeper, faster.
  • Work mix flips: Less execution, more review. Humans focus on materiality, anomalies, and client-specific risks.
  • Manager span shifts: Fewer task check-ins, more outcome reviews and exception handling.
  • Career path accelerates: Soft skills and judgment move to year one instead of years three to five.

Training goes "back to basics" - with a modern twist

PwC is reweighting early training around audit fundamentals: what evidence is, how risk is assessed, and why procedures exist. The firm is pushing deeper critical thinking, negotiation, and professional skepticism earlier than before.

Translation for leaders: don't teach people to click buttons. Teach them to think like supervisors on day one - then give them AI to execute.

Implications for your operating model

  • RACI for AI-driven work: Define who prompts, who reviews, who signs off, and who owns exceptions. No gray areas.
  • Evidence standards: Require that AI outputs be traceable, with data lineage, prompt history, model version, and change logs.
  • Human-in-the-loop controls: Mandate review for high-risk assertions, unusual variances, and areas with weak source data.
  • Sampling and coverage: Use AI for full-population tests where feasible, then direct humans to investigate anomalies.
  • Documentation that stands up: Store rationale for overrides, thresholds chosen, and why the procedure provides sufficient appropriate evidence.
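To make the evidence standard concrete, here is a minimal sketch of what a traceable AI workpaper record could look like. The class and field names are illustrative assumptions, not PwC's or any firm's actual schema; the point is that each AI output carries enough context (model version, prompt, data lineage, sign-off) for a human to reperform and review the step.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIWorkpaperRecord:
    """One traceable AI output: enough context to reperform the step.
    All field names are illustrative, not any firm's real schema."""
    engagement_id: str
    model_version: str      # which model produced the output
    prompt: str             # exact prompt text used
    data_lineage: list      # source files/tables the model saw
    output_summary: str     # what the model concluded
    reviewer: str = ""      # human who signed off (empty until review)
    reviewed_at: str = ""

    def sign_off(self, reviewer: str) -> None:
        """Record the human reviewer and a UTC timestamp."""
        self.reviewer = reviewer
        self.reviewed_at = datetime.now(timezone.utc).isoformat()

    def to_log(self) -> str:
        """Serialize the record for an append-only change log."""
        return json.dumps(asdict(self))

rec = AIWorkpaperRecord(
    engagement_id="ENG-001",
    model_version="model-v2.3",
    prompt="Flag journal entries over materiality posted on weekends.",
    data_lineage=["gl_2025.csv"],
    output_summary="3 entries flagged for review",
)
rec.sign_off("j.doe")
print(rec.to_log())
```

An empty `reviewer` field doubles as a simple control: any record without a sign-off is, by construction, unreviewed work.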

KPIs that keep quality (and accountability) intact

  • Quality: Exception false-positive/false-negative rates, reperform rates, and review findings per engagement.
  • Speed: Cycle time from data request to reviewed workpaper; time-to-resolution for exceptions.
  • Coverage: Percentage of population tested; depth on high-risk accounts.
  • Client value: Number of actionable insights delivered (beyond compliance) and time saved for client teams.
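Two of the KPIs above can be sketched as simple computations. This is an assumed formulation, not a prescribed metric definition: the false-positive rate here is the share of AI-raised flags that reviewers dismissed, and cycle time is measured between two ISO timestamps.

```python
from datetime import datetime

def false_positive_rate(flags: list, confirmed: list) -> float:
    """Share of AI-raised flags that reviewers dismissed.
    flags[i]: AI flagged item i; confirmed[i]: reviewer upheld the flag."""
    raised = [c for f, c in zip(flags, confirmed) if f]
    return 0.0 if not raised else raised.count(False) / len(raised)

def cycle_time_days(requested: str, reviewed: str) -> float:
    """Days from data request to reviewed workpaper (ISO-8601 timestamps)."""
    t0 = datetime.fromisoformat(requested)
    t1 = datetime.fromisoformat(reviewed)
    return (t1 - t0).total_seconds() / 86400

flags     = [True, True, True, False]
confirmed = [True, False, True, False]
print(false_positive_rate(flags, confirmed))  # 1 of 3 raised flags was dismissed
print(cycle_time_days("2026-03-01T09:00", "2026-03-03T09:00"))
```

Tracking the dismissal rate per engagement is what tells a manager whether juniors are rubber-stamping AI output or genuinely reviewing it.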

Pricing and client expectations are shifting

Kosar notes clients are asking how AI can fully take over certain business tasks. That pressures the traditional billable-hours model and favors outcomes-based pricing tied to results and quality.

Prepare for buyers who expect speed, transparency, and explainability - and who compare you to a tool that answers instantly.

Risk, assurance, and governance you cannot skip

  • Model risk and bias: Validate outputs against gold-standard samples; track drift and edge cases.
  • Data privacy and access: Lock down source systems; log who accessed what and when.
  • Explainability: Require rationale for flags and conclusions that a client or regulator can follow.
  • Regulatory alignment: Map controls to recognized frameworks like the NIST AI Risk Management Framework.
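The gold-standard validation bullet can be reduced to a small check: score the model's flags against a human-validated sample and re-validate when agreement drifts below a threshold. The item IDs, labels, and threshold here are illustrative assumptions.

```python
def gold_sample_agreement(model_flags: dict, gold_labels: dict) -> float:
    """Fraction of gold-standard items where the model's flag matches
    the human-validated label. IDs and labels are illustrative."""
    shared = gold_labels.keys() & model_flags.keys()
    if not shared:
        return 0.0
    hits = sum(model_flags[i] == gold_labels[i] for i in shared)
    return hits / len(shared)

# Human-validated sample vs. current model output
gold  = {"je-101": True, "je-102": False, "je-103": True}
model = {"je-101": True, "je-102": True, "je-103": True}

score = gold_sample_agreement(model, gold)
print(f"agreement: {score:.2f}")
if score < 0.9:  # threshold is an assumption; set per risk appetite
    print("drift detected - trigger model re-validation")
```

Running this check on every engagement, and logging the score over time, is one concrete way to "track drift" rather than merely intend to.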

What managers should do next (90-day plan)

  • Redesign roles: Write junior job descriptions as "AI reviewers/supervisors." Specify decision rights and escalation paths.
  • Build the review stack: Standard prompts, test datasets, exception taxonomies, documentation templates, and approval workflows.
  • Stand up guardrails: Segregation of duties for data prep, AI execution, and sign-off. Independent checks for high-risk areas.
  • Train for judgment first: Risk assessment, materiality, skepticism, and negotiation - then system specifics.
  • Rebase KPIs: Shift from "hours logged" to "quality, coverage, and client outcomes."
  • Pilot and iterate: Run two engagements with AI-heavy workflows. Capture benchmarks and lessons learned; lock in playbooks.
  • Communicate up and out: Set expectations with partners and clients on timelines, quality controls, and documentation they can expect.

For finance and audit leaders building talent pipelines

If first-years are supervising machines, your hiring bar changes. Look for curiosity, systems thinking, and comfort with ambiguity over pure task execution.

Give them earlier exposure to client conversations and exception handling. Coach them to ask better questions, not to do more clicks.

The bigger shift

Kosar expects AI to reach senior roles too, changing the nature of client requests and how services are delivered. The point isn't faster checklists - it's better judgment at scale.

Managers who redesign roles, training, and metrics now will compound an advantage. Those who wait will inherit someone else's playbook - and their margins.
