Building Trust in Artificial Intelligence Through Transdisciplinary Collaboration

AI’s rise challenges traditional notions of trust, requiring a transdisciplinary approach to address issues like misinformation, bias, and ethical risks. Collaboration across fields is key to building trustworthy AI.

Published on: Jul 19, 2025

A Call for Transdisciplinary Trust Research in the Artificial Intelligence Era

Abstract
Trust is fundamental to human interaction and societal progress. The swift integration of artificial intelligence (AI) into daily life introduces significant societal challenges that demand a fresh look at trust. This article argues for crossing traditional academic boundaries through a transdisciplinary research framework to better understand and strengthen trust in AI. Addressing issues like misinformation, discrimination, and AI in warfare requires collaboration between scientists and stakeholders beyond disciplinary silos.

Introduction

We are entering a new era where AI systems perform tasks that mimic human intelligence, such as learning, reasoning, and decision-making. Unlike earlier technologies that automated physical labor, AI tackles cognitive challenges by bridging the gap between mechanical automation and human reasoning.

However, traditional notions of interpersonal trust do not directly apply to AI. The integration of AI raises ethical concerns and grand societal challenges, including manipulation, misinformation, discrimination, job displacement, and potential misuse in warfare. Trust in AI is crucial for its acceptance and beneficial use; a lack of trust risks social unrest, financial losses, and stalled innovation.

These challenges cannot be solved within single disciplines. A transdisciplinary approach—bringing together academics and stakeholders—is essential. A bibliometric review of over 34,000 trust research articles reveals that while multi- and interdisciplinary studies exist, true transdisciplinary efforts involving institutional stakeholders are rare. Without their perspective, trust research risks missing practical relevance and impact.

The Trust Dilemma: Addressing Grand Challenges in the AI Era

Trust facilitates collaboration, economic growth, and social welfare by allowing individuals to accept vulnerability based on positive expectations of others. Trusting is therefore inherently risky: it requires evaluating the other party and accepting the possibility of being let down. Humans develop initial trust through socialization and refine it through experience. Importantly, trust extends beyond human-to-human interactions to include trust in technology.

Historical technological advances—such as the printing press and the Internet—reshaped societal trust by altering information flow and power structures. AI differs by challenging the core ideas of interpersonal trust, as it is not human but increasingly perceived as an agent in social contexts.

Trust in AI is complex because AI lacks intention, emotions, and moral judgment. Traditional trust dimensions like ability, benevolence, and integrity must be reinterpreted. Ability refers to technical performance; benevolence and integrity involve system design, ethical programming, and regulation. Trust in AI is thus a unique socio-technical construct shaped by human-technology and institutional interactions.

Examples of AI-Related Trust Challenges

  • Profiling: AI algorithms predict consumer behavior but can infringe on privacy and mental health when used without transparency, undermining trust.
  • Misinformation: Deepfakes created by AI threaten reputations and spread false information, damaging trust in digital content.
  • Discrimination: Large language models may perpetuate societal biases, compromising fairness and trust in AI applications.
  • Job Displacement: Autonomous AI systems can replace human roles, raising accountability concerns and eroding public trust.
  • Warfare: Opaque AI decision-making in military contexts risks unintended consequences and challenges ethical principles.
  • Singularity: The potential future emergence of superintelligent AI raises governance and alignment concerns threatening human-centric trust.

These challenges highlight the need for integrated knowledge and methods across disciplines and stakeholders to develop trustworthy AI that balances innovation with ethics.

Advancing Trust Research Through a Transdisciplinary Framework

Current trust research largely lacks the integration of diverse perspectives needed to address AI's societal challenges. Nearly 99% of studies omit institutional stakeholder input, limiting their practical impact.

A transdisciplinary framework connects societal challenges to scientific knowledge by integrating multiple perspectives and fostering collaboration between scientists and stakeholders. This framework unfolds in three phases:

  • Problem Transformation: Identify societal challenges and translate them into scientific research objectives.
  • Knowledge Production: Collaborate to generate new insights focusing on five key trust elements: trustworthiness, risk, user, sphere, and terrain.
  • Integration: Evaluate results and communicate outcomes to both scientific and societal communities.

At the core is the user, emphasizing human rights, justice, and dignity in human-AI interaction. This approach supports the design and deployment of AI technologies that align with societal values and scientific progress.

The framework is adaptable across different sectors such as healthcare, public administration, defense, and consumer technology. Trust varies by context, so evaluations must consider distinct ethical concerns and relationships. Importantly, the framework is preventive, aiming to identify and mitigate potential risks before they escalate.

Implementation Challenges and Opportunities

  • Overcoming disciplinary silos and biases requires fostering open collaboration and awareness of transdisciplinary value.
  • Bridging communication gaps demands shared frameworks and training in transdisciplinary skills.
  • Institutional funding and reward mechanisms need innovation to support cross-disciplinary research.
  • Integrating diverse methodologies and data requires strategic planning and standardized protocols.
  • Public skepticism can be addressed through transparent science communication and stakeholder engagement.

These steps are actionable for institutions, research teams, and policymakers aiming to foster effective collaboration in AI trust research. Success depends on collective commitment to overcoming these barriers and addressing the societal threats to trust posed by AI.

Looking Forward: Expanding Trust Concepts in AI

Future research must explore trust dynamics not only between humans and AI but also between AI systems themselves. Concepts like AI-to-AI trust and how AI assesses human reliability challenge traditional, human-centered trust definitions. Integrating these perspectives will enhance the adaptability and relevance of trust frameworks.

Ultimately, trust in AI shapes broader social trust and prompts reflection on human roles as creators and users of technology. This ongoing inquiry aligns technological innovation with societal values, ensuring AI systems contribute positively to human progress.

For those interested in advancing practical knowledge on AI and trust, exploring specialized courses and resources can be valuable. Resources such as Complete AI Training's latest AI courses offer structured learning paths for professionals engaged in AI research and implementation.

