GPT-4 as Teammate Helped Individuals Match Seasoned Teams in a 776-Person P&G Trial

Generative AI acted like a teammate, helping individuals and pairs produce balanced, integrated recommendations. In a P&G trial, it narrowed experience gaps and sped decisions.

Published on: Sep 21, 2025

The Cybernetic Teammate: Generative AI That Makes Teams More Cross-Functional

Cross-functional teams exist to balance perspectives. In practice, experts stick to their lanes. Commercial folks push market angles. R&D dives deep on technical solutions. Integration often happens only when people are paired across functions.

A recent working paper tested whether generative AI can close that gap. In a randomized controlled trial of 776 Procter & Gamble professionals, participants worked alone or in cross-functional pairs. Half received access to GPT-4/4o from OpenAI. The question: can AI act like a teammate that brings missing perspectives to the table?

What changed with AI

  • Without AI: Specialists proposed solutions that aligned with their own domain. Balance improved mainly when people worked in cross-functional pairs.
  • With AI: Individuals and teams produced integrated recommendations that combined commercial and technical factors. The functional boundary line blurred.
  • Experience gap narrowed: Less experienced employees, when assisted by AI, performed at levels comparable to more seasoned teams.

Why it worked

  • Broader context on demand: AI injected knowledge outside the user's silo, surfacing considerations they might have missed.
  • Faster iteration: Participants could test multiple angles quickly (market, technical, regulatory, customer) and converge on balanced options.
  • Integrated framing: Prompts nudged users to consider trade-offs across functions, not just within their specialty.

What this means for leaders and researchers

Generative AI can act as a context expander for individuals and teams. It improves the odds that a single contributor can think cross-functionally and that a pair can move faster to an integrated answer.

Beyond better teamwork, this signals a new path for capability building. With the right prompts and review practices, employees can grow practical business acumen while they work, reducing the time it takes to build broad judgment across functions.

Run a 30-day pilot

  • Pick 2-3 use cases where siloed thinking slows decisions (e.g., product launch planning, pricing with technical constraints, feasibility assessments).
  • Form test cells: individuals, same-function pairs, and cross-functional pairs. Randomize AI access across cells.
  • Provide a prompt kit that forces integration: problem framing, stakeholder map, commercial model, technical risks, regulatory checks, and customer impact.
  • Benchmark outcomes: rate proposals on balance (commercial + technical), quality, speed, and rework.
  • Debrief weekly: collect what AI added (facts, frameworks, scenarios) and where human oversight corrected issues.
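The cell design above can be sketched in a few lines. This is a minimal illustration, not the study's actual protocol: the cell names, the 50/50 AI split, and the round-robin balancing are all assumptions.

```python
import random

def assign_cells(participants, seed=7):
    """Evenly assign participants to test cells, then balance AI access
    within each cell. Cell names and the 50/50 split are illustrative."""
    rng = random.Random(seed)
    ids = list(participants)
    rng.shuffle(ids)  # randomize order before deterministic assignment
    cells = ["individual", "same_function_pair", "cross_functional_pair"]
    assignments = []
    for i, pid in enumerate(ids):
        assignments.append({
            "id": pid,
            "cell": cells[i % len(cells)],          # round-robin keeps cells equal-sized
            "ai_access": (i // len(cells)) % 2 == 0,  # alternate AI access per round
        })
    return assignments
```

Balancing cell sizes and AI access this way keeps the comparison groups comparable even in a small pilot, where pure coin-flip assignment can leave cells lopsided.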

Prompt patterns that drive integration

  • Dual-lens prompt: "Give two plans: one optimized for market impact, one for technical feasibility. Then synthesize a plan that balances both. List top 5 trade-offs."
  • Stakeholder cross-check: "From the viewpoints of Sales, R&D, Finance, and Legal, what would each praise or block? Resolve the conflicts."
  • Assumption audit: "List core assumptions by function. Mark those with the highest uncertainty and propose quick tests."
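If you distribute these patterns as a prompt kit, storing them as templates keeps wording consistent across teams. A minimal sketch; the placeholder names (`{problem}`, `{functions}`) and exact phrasing are illustrative, not prompts from the study.

```python
# Templates for the three integration patterns above.
PROMPTS = {
    "dual_lens": (
        "Give two plans for {problem}: one optimized for market impact, "
        "one for technical feasibility. Then synthesize a plan that "
        "balances both and list the top 5 trade-offs."
    ),
    "stakeholder_cross_check": (
        "For {problem}, from the viewpoints of {functions}, state what "
        "each would praise or block. Resolve the conflicts."
    ),
    "assumption_audit": (
        "List the core assumptions behind {problem} by function. Mark "
        "those with the highest uncertainty and propose quick tests."
    ),
}

def build_prompt(pattern, **kwargs):
    """Fill a template; raises KeyError if a placeholder is missing."""
    return PROMPTS[pattern].format(**kwargs)
```

A failed lookup surfaces immediately as a `KeyError`, which is preferable to silently sending a prompt with an unfilled placeholder.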

Guardrails

  • Human review: Require subject-matter validation for facts, figures, and regulatory claims.
  • Source checks: Ask the model to cite plausible sources, then verify independently.
  • Data sensitivity: Keep proprietary data out of unsecured prompts; use approved tools and privacy settings.

Metrics to track

  • Balance score: Independent raters assess how well outputs integrate commercial and technical factors.
  • Time-to-decision: Cycle time from brief to recommendation.
  • Rework rate: Number of revisions needed to reach stakeholder sign-off.
  • Experience equalization: Performance variance between junior and senior contributors with AI assistance.
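The experience-equalization metric can be computed from rated proposal scores. A sketch under assumptions: inputs are lists of rater scores (say, a 1-10 balance scale), and the field names are illustrative, not from the study.

```python
from statistics import mean, pvariance

def experience_equalization(junior_scores, senior_scores):
    """Compare junior vs. senior performance on rated proposals.

    A shrinking `gap` (and pooled variance) across pilot weeks suggests
    AI assistance is narrowing the experience difference.
    """
    return {
        "junior_mean": mean(junior_scores),
        "senior_mean": mean(senior_scores),
        "gap": mean(senior_scores) - mean(junior_scores),
        "pooled_variance": pvariance(junior_scores + senior_scores),
    }
```

Track the same figures for the with-AI and without-AI cells separately; the interesting signal is whether the gap shrinks more in the AI cells.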

Capability building, baked into work

  • AI as a tutor: Use it to explain concepts across functions (e.g., contribution margin, TRL, PMF), with short quizzes inside the workflow.
  • Mental model library: Create reusable prompts for unit economics, risk matrices, design-for-manufacture, and customer jobs-to-be-done.
  • Rotation-by-prompt: Simulate cross-functional rotations by asking AI to critique proposals from each function's perspective.

The takeaway

Generative AI acts like a teammate that supplies missing context and pushes integrated thinking. It helps individuals think beyond their specialty and helps teams converge faster on solutions that hold up across the business. With clear prompts, guardrails, and measurement, you can raise the floor on decision quality while you build broader capability across your workforce.

Next steps

  • Set up a small, measured pilot and publish the playbook internally once you've validated gains.
  • If you need structured upskilling paths for managers, product leaders, and researchers, explore courses by job at Complete AI Training.