Social Norms Reduce Algorithm Aversion and Regret in AI Adoption, Study Finds

Social norms strongly influence AI acceptance; choosing AI against these norms increases regret. Normalizing AI use reduces reluctance and fosters broader adoption.

Categorized in: AI News, Science and Research
Published on: Sep 07, 2025

As artificial intelligence (AI) becomes more common in workplaces, understanding how people accept and use these technologies is crucial. Recent research reveals that social expectations significantly influence whether individuals choose AI over human expertise. When people go against prevailing social norms by selecting AI, they tend to experience higher regret. This psychological factor helps explain resistance to AI adoption and suggests that fostering a culture where AI use is normalized can reduce reluctance and encourage broader acceptance.

Regret Influences AI Decision-Making

People often feel more regret when choosing AI compared to human advice, especially if their choice conflicts with social norms. This regret stems from concerns about social disapproval and perceived responsibility for outcomes. The research highlights that when AI use is framed as the accepted standard, regret related to selecting AI decreases. This interaction between social norms and individual choice is a key driver in how AI is integrated into decision-making processes.

Justification Strategies and Feedback Sources

The study also examined how people justify their decisions after receiving negative feedback in work scenarios. Participants imagined themselves as new employees deciding between AI recommendations and human opinions under different normative conditions. The source of social norms—whether from colleagues or supervisors—was manipulated to observe its effect on decision justification.

Findings show that social expectations shape how individuals explain their choices, affecting learning and future behavior. Qualitative responses revealed that people are sensitive to the source of feedback and adjust their justifications accordingly. Including AI as a source of social norms is increasingly relevant as AI systems gain prominence in workplaces.

Social Norms Shape AI Adoption and Regret

An online experiment tested how social norms influenced willingness to use AI, comparing peer and supervisor influences. Participants faced scenarios choosing between AI advice and human expertise, with the normative context varied to present AI use as either accepted or counter to the norm.

Results confirmed that counter-normative choices produced higher regret: people felt worse about AI-related decisions that deviated from social expectations. Regret was initially higher when choosing AI, but framing AI use as the norm reduced this effect. Peer and supervisor influences affected decisions similarly, indicating that broad social acceptance matters more than authority hierarchy in shaping AI adoption.

Accountability and Blame Attribution

The study found that participants tend to assign less blame to AI than to humans when errors occur; paradoxically, however, they also report lower satisfaction with AI choices and greater regret. This tension between accountability and regret suggests that people expect more from AI systems, possibly because they perceive technology as less fallible or less responsible.

Moreover, algorithm aversion—distrust following AI errors—is a persistent barrier. However, strategies like allowing users to adjust algorithms or presenting AI as a learning system can reduce this aversion and build trust over time.

Imitation Driven by Regret Aversion

Regret aversion plays a crucial role in imitative behavior related to AI use. People tend to conform to prevailing social norms to avoid the discomfort of regret, which helps explain why individuals hesitate to be early adopters. When people choose options that run against the norm, the resulting regret is stronger, reinforcing conformity.

The source of the norm—whether peers or supervisors—does not significantly change regret levels, emphasizing the power of social context over authority in decision-making involving AI recommendations.

Implications for Organizations

These findings offer practical insights for organizations aiming to promote AI integration. Establishing social norms that favor AI use can lower psychological barriers and increase acceptance. Providing employees with opportunities to engage with and customize AI tools may further reduce aversion and build trust.

For professionals interested in expanding their AI knowledge and skills to better support such integration, Complete AI Training offers a range of courses designed for various expertise levels.