Honesty About Using AI Can Erode Trust, Study Finds
New research shows that disclosing AI use often lowers trust across education, business, and more. Softened disclosures fail to prevent the decline, and even AI-savvy audiences respond with less confidence.

Transparency typically builds trust in workplaces, classrooms, and homes. However, recent research from the University of Arizona’s Eller College of Management indicates that when it comes to disclosing the use of generative artificial intelligence (AI), honesty may actually reduce trust.
Martin Reimann, associate professor of marketing, and Oliver Schilke, professor of management and organizations, ran 13 experiments with over 5,000 participants. Their findings consistently showed a significant drop in trust whenever AI use was disclosed.
Trust Drops Across Different Contexts
The researchers tested various scenarios: an instructor using AI for grading, a job applicant admitting AI helped write their cover letter, and businesses revealing AI usage in advertisements. In every case, disclosure led to lower trust levels:
- Students trusted professors 16% less when AI was used in grading.
- Investors placed 18% less trust in firms disclosing AI use in ads.
- Clients trusted graphic designers 20% less after AI disclosure.
Notably, even participants who were familiar with AI or used it frequently showed decreased trust when AI use was revealed.
Transparency Has Its Limits
Conventional wisdom suggests transparency fosters credibility. But this research highlights a crucial nuance: what you disclose matters. If the disclosure reflects negatively—such as appearing to take shortcuts—any trust gained from honesty may be outweighed by the penalty of the revelation.
The team also tested softer framings, such as stating that AI was used only for proofreading or that a human had reviewed the AI's output. None prevented the trust decline. Worse yet, trust plunged even further when AI use was uncovered by a third party rather than self-disclosed.
Implications for Organizations
As AI tools become more common, organizations face tough choices about policies on disclosing AI use. This is especially critical in industries where trust is foundational, such as education, healthcare, and finance.
Trust erosion from AI disclosure doesn’t just affect individuals. It can damage team cohesion and tarnish brand reputation. Companies need to weigh whether disclosure should be mandatory or voluntary and prepare employees for the trust implications either way.
One practical step is fostering a culture where AI use is normalized and accepted, reducing the stigma that drives these trust gaps.
Looking Ahead
AI technology continues to evolve, and as familiarity grows, the trust penalty for disclosing AI use may lessen. Yet, new challenges could arise. For example, advanced AI tools can be expensive, creating an access gap between users who can afford premium platforms and those using free or limited versions.
Ultimately, the impact of AI is not just about capabilities but how its use affects human relationships and trust dynamics. Organizations and individuals will need to balance honesty with strategic communication to maintain credibility in an AI-augmented environment.