Three Conditions That Make Teams Rely on AI, CSUF Study Finds

A CSUF study finds that teams rely on AI when signals of competence and trustworthiness are clear. Managers should make those signals visible, verified, and used in real decisions.


Human-AI Teaming: Practical Takeaways for Managers from New CSUF Research

Phoenix Van Wagoner, assistant professor of management at Cal State Fullerton's College of Business and Economics, co-authored a peer-reviewed study on how teams decide whether to rely on AI. The work, published in the Journal of Organizational Behavior, applies a signaling theory lens to human-AI collaboration.

With co-authors Andria Smith and Ksenia Keplinger of the Max Planck Institute and Can Celebi of the University of Vienna, the team reports that groups are more likely to lean on AI when three conditions are present: the signals of AI competence and trustworthiness are visible, verified, and used in real decisions. For managers, the message is clear: structure the environment so those signals stay that way.

Why this matters

Misjudging AI can cost you twice: overuse leads to blind spots, while underuse leaves savings and accuracy on the table. Signaling theory suggests people rely on credible, observable cues. Your job is to make those cues consistent, measurable, and easy to act on.

Put the research to work

  • Define decision rights: Spell out when AI recommends, when it decides, and when a human must approve. Keep it simple and visible.
  • Make competence transparent: Publish AI performance against a human baseline on key tasks; a minimal sketch of this comparison follows this list. Share known limits, failure modes, and confidence ranges.
  • Set verification loops: A/B test AI outputs regularly, sample for errors, and review exceptions. Close the loop with quick feedback to the team.
  • Standardize inputs: Use vetted data sources, prompt templates, and version control so results are reproducible and auditable.
  • Train for calibration: Teach teams to read uncertainty, avoid automation bias, and escalate ambiguity instead of guessing.
  • Assign ownership: Name an "AI owner" for each workflow who monitors metrics, updates playbooks, and signs off on changes.
  • Stage adoption: Start with low-risk, high-volume tasks. Expand only when the metrics support it.
  • Document human overrides: List the scenarios where humans must overrule AI and why. Review these cases monthly.
  • Measure what matters: Track accuracy, time saved, error severity, rework, and adoption rate. Share the dashboard with the whole team.
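
The baseline comparison and error sampling above don't need heavy tooling. Below is a minimal Python sketch, assuming a simple task log with hypothetical fields (task_id, ai_correct, human_correct), that computes AI accuracy against the human baseline and draws a reproducible random sample of outputs for manual QA review. It only illustrates the idea; it is not code from the CSUF study.

```python
import random

# Hypothetical task log: each record notes whether the AI and the human
# baseline handled the task correctly. Field names are illustrative.
task_log = [
    {"task_id": 1, "ai_correct": True,  "human_correct": True},
    {"task_id": 2, "ai_correct": True,  "human_correct": False},
    {"task_id": 3, "ai_correct": False, "human_correct": True},
    {"task_id": 4, "ai_correct": True,  "human_correct": True},
]

def accuracy(records, key):
    """Share of records where the given actor (AI or human) was correct."""
    return sum(r[key] for r in records) / len(records)

def qa_sample(records, rate=0.25, seed=7):
    """Draw a reproducible random sample of tasks for manual error review."""
    rng = random.Random(seed)
    k = max(1, round(len(records) * rate))
    return rng.sample(records, k)

print(f"AI accuracy:    {accuracy(task_log, 'ai_correct'):.0%}")
print(f"Human baseline: {accuracy(task_log, 'human_correct'):.0%}")
print("Queued for QA review:", [r["task_id"] for r in qa_sample(task_log)])
```

Publishing both accuracy numbers alongside the sampled QA queue is one concrete way to make the competence signal visible and verified for the team.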

Fast checklist for your next AI rollout

  • Clear decision policy posted
  • Baseline accuracy vs. human measured
  • Confidence and limits disclosed
  • Owner assigned with escalation path
  • QA sampling cadence set
  • Training completed, including automation-bias coaching
  • Quarterly review scheduled with KPIs

About the research team

The study on AI convergence in human-AI teams, using a signaling theory approach, was authored by Phoenix Van Wagoner (Cal State Fullerton), Andria Smith and Ksenia Keplinger (Max Planck Institute), and Can Celebi (University of Vienna). It appears in the Journal of Organizational Behavior.

Next steps

  • Set up a pilot with clear metrics and a single owner.
  • Run a 30-day review and decide: expand, fix, or stop.
  • Scale what works and keep the signals front and center.

Want structured training paths for management roles implementing AI? Explore curated options by job at Complete AI Training.