How Social Influence and AI Anxiety Shape Science PhD Students’ Intentions to Use ChatGPT

PhD students’ intention to use ChatGPT is shaped by social influence, enjoyment, self-efficacy, ethical concerns, and awareness of AI limitations. These factors extend traditional tech acceptance models.

Categorized in: AI News, Science and Research
Published on: Aug 14, 2025

Artificial Intelligence, Social Influence, and AI Anxiety: PhD Students’ Intentions to Use ChatGPT

AI tools like ChatGPT are gaining traction in higher education, especially among doctoral students in science fields. Traditional frameworks such as the Technology Acceptance Model (TAM) often lack the nuance needed to explain how users embrace emerging AI technologies. This calls for an expanded approach that accounts for the social, psychological, and ethical factors influencing usage.

Extending the Technology Acceptance Model

This study extends the TAM by adding five key factors: Social Influence (SI), Perceived Enjoyment (PEN), AI Self-Efficacy (AI-SE), AI’s Sociotechnical Blindness (AI-STB), and Perceived Ethics (AI-PET). Drawing on a survey of 361 PhD students from 35 universities, analyzed with Partial Least Squares Structural Equation Modeling (PLS-SEM), the researchers found that each of these factors significantly affects behavioral intention to use ChatGPT.

Integrating these AI-specific constructs provides a clearer picture of doctoral students’ acceptance of AI tools in science education, highlighting important dimensions beyond usefulness and ease of use.
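To make the shape of the extended model concrete, the sketch below writes out the hypothesized measurement and structural relationships in plain Python. The construct abbreviations follow the study, but the indicator names (si1, pen1, and so on) and the three-items-per-construct layout are illustrative assumptions rather than the actual questionnaire; the dictionaries are simply a neutral format that could be translated into any PLS-SEM tool.

```python
# Illustrative specification of the extended TAM described in the article.
# Indicator names and counts are hypothetical placeholders, not the study's
# actual survey items; the paths reflect the described model only at a
# sketch level.

# Measurement model: each latent construct is measured by survey indicators.
measurement_model = {
    "PU":     ["pu1", "pu2", "pu3"],        # Perceived Usefulness (classic TAM)
    "PEOU":   ["peou1", "peou2", "peou3"],  # Perceived Ease of Use (classic TAM)
    "SI":     ["si1", "si2", "si3"],        # Social Influence
    "PEN":    ["pen1", "pen2", "pen3"],     # Perceived Enjoyment
    "AI_SE":  ["se1", "se2", "se3"],        # AI Self-Efficacy
    "AI_STB": ["stb1", "stb2", "stb3"],     # AI's Sociotechnical Blindness
    "AI_PET": ["pet1", "pet2", "pet3"],     # Perceived Ethics
    "BI":     ["bi1", "bi2", "bi3"],        # Behavioral Intention to use ChatGPT
}

# Structural model: hypothesized predictors of Behavioral Intention (BI).
structural_model = {
    "BI": ["PU", "PEOU", "SI", "PEN", "AI_SE", "AI_STB", "AI_PET"],
}

if __name__ == "__main__":
    # List the hypothesized paths in the structural model.
    for outcome, predictors in structural_model.items():
        for predictor in predictors:
            print(f"H: {predictor} -> {outcome}")
```

Laying the model out this way makes clear that the contribution sits in the added predictors; the outcome, behavioral intention, is the same one the classic TAM targets.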

ChatGPT’s Role in Science Education

ChatGPT’s ability to generate human-like text and provide instant responses makes it a valuable resource for students. Its use is particularly prevalent in engineering, mathematics, and natural sciences, where it assists with coding, complex problem analysis, and concept understanding.

For example, students in chemistry and civil engineering find ChatGPT’s explanations comprehensive and helpful for grasping difficult topics. In statistics education, it supports learning technical skills like R programming.

However, the tool is not without limitations. In areas like geometry, ChatGPT may struggle to demonstrate deep understanding or to correct misconceptions. Because its outputs often require iterative prompting and refinement, students must critically evaluate what it generates and adapt their cognitive strategies accordingly.

Ethical and Practical Concerns

While ChatGPT offers many benefits, it raises ethical questions about academic integrity, bias, and over-reliance. The risk of plagiarism increases when students submit AI-generated content without original thought or proper citation.

Excessive dependence on ChatGPT could diminish critical thinking, problem-solving, and independent research skills. Furthermore, biased or outdated training data might perpetuate misinformation or cultural biases, threatening trust in educational processes.

Institutions need to balance leveraging AI’s advantages with fostering ethical use and critical engagement to maintain academic standards.

Factors Influencing ChatGPT Adoption Among PhD Students

  • Social Influence (SI): Peer and academic community attitudes impact willingness to adopt ChatGPT.
  • Perceived Enjoyment (PEN): Positive experiences and engagement increase usage intention.
  • AI Self-Efficacy (AI-SE): Confidence in one’s ability to use AI tools affects acceptance.
  • AI’s Sociotechnical Blindness (AI-STB): Awareness of AI’s social and technical limitations shapes behavior.
  • Perceived Ethics (AI-PET): Ethical considerations influence trust and adoption decisions.

Traditional TAM factors like Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) remain important but are complemented by these AI-specific variables for a more accurate understanding.
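For readers who want a feel for what the structural part of such a model estimates, the toy sketch below regresses a simulated behavioral-intention score on simulated construct scores using ordinary least squares in NumPy. This is not the study's PLS-SEM analysis, the data are randomly generated, and the printed weights mean nothing beyond the simulation; the point is only to show the form of the relationship being tested, namely several construct scores jointly predicting intention to use ChatGPT.

```python
# Toy illustration only: random Likert-style construct scores and an OLS fit.
# The actual study used PLS-SEM on real survey data; nothing here reproduces
# its results.
import numpy as np

rng = np.random.default_rng(0)
n = 361  # sample size reported in the article

constructs = ["PU", "PEOU", "SI", "PEN", "AI_SE", "AI_STB", "AI_PET"]

# Simulated 1-5 Likert-style scores for each construct.
X = rng.integers(1, 6, size=(n, len(constructs))).astype(float)

# Simulated behavioral intention, loosely driven by the predictors plus noise.
true_weights = rng.uniform(0.05, 0.3, size=len(constructs))
bi = X @ true_weights + rng.normal(0, 0.5, size=n)

# Standardize, then fit OLS as a crude stand-in for the structural paths.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
biz = (bi - bi.mean()) / bi.std()
coef, *_ = np.linalg.lstsq(Xz, biz, rcond=None)

for name, beta in zip(constructs, coef):
    print(f"{name:7s} standardized weight ~ {beta:+.2f}")
```

In the published model, each of these paths was tested for significance; a PLS-SEM package would also estimate the measurement side (how well the survey items reflect each construct), which this simplified regression skips.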

Implications for Research and Practice

This model offers a practical framework for universities and educators to assess and encourage responsible AI tool usage. Understanding the multifaceted reasons behind students’ behavioral intentions can guide policy-making, curriculum design, and support services.

For doctoral students in science disciplines, integrating AI tools like ChatGPT effectively requires not only technical skill but also ethical awareness and social support. Institutions might consider training programs that boost AI self-efficacy and address ethical concerns to foster balanced adoption.

Those interested in AI applications in education can explore specialized courses at Complete AI Training to deepen their understanding and build practical skills.

Conclusion

Behavioral intention to use ChatGPT among science doctoral students is influenced by a complex mix of social, psychological, and ethical factors. Expanding traditional acceptance models to include these elements provides valuable insights for promoting effective and responsible AI integration in higher education.

As AI tools continue to evolve, ongoing research should focus on refining these models and supporting students in developing not only proficiency but also critical judgment and ethical awareness in AI usage.

