AI literacy and computational thinking in China's four new majors: modest links, disciplinary differences, and the role of daily AI tool use

AI literacy and computational thinking rise together, but only modestly. Time with AI tools boosts the basics, not higher-order thinking; discipline and home environment matter more than gender; structured course design does the heavy lifting.

Published on: Nov 29, 2025

AI literacy and computational thinking in Chinese universities: what's actually moving the needle

AI tools are in classrooms. The question is whether use translates into real capability. A large survey of undergraduates in China's "four new" majors (engineering, agriculture, medicine, liberal arts) mapped how AI literacy (AIL) relates to computational thinking (CT) and what factors matter.

The takeaway: AIL and CT move together, but the effects are small. Tool time helps some basics, not higher-order thinking. Discipline and environment matter more than gender. Treat the findings as directional signals for program design, not a silver bullet.

Study at a glance

  • Sample: 1,466 undergraduates from a provincial university in central China; broad coverage across the four new discipline clusters.
  • Measures: AI literacy (Smart Responsibility, Smart Knowledge & Skills, Intelligent Thinking, Human-Machine Collaboration) and CT (creativity, algorithmic thinking, collaboration, critical thinking, problem solving).
  • Methods: Questionnaire; t-tests, ANOVA, Pearson correlations, regressions; mediation planned via PROCESS macro. Assumptions checked; effect sizes interpreted cautiously.
  • Frameworks: Technology Acceptance Model (use and usefulness), Social Cognitive Theory (practice and self-efficacy), Constructivism (active, contextual learning).

What the data says

  • AIL-CT linkage: Statistically significant positive associations across dimensions, but uniformly small (|r| < .10). Detectable in a large sample, limited practical magnitude.
  • Strongest cross-link: Intelligent Thinking showed the comparatively strongest tie with critical thinking, still small and to be read with caution.
  • Discipline effects: New engineering students score higher on algorithmic thinking; new liberal arts show strengths in human-machine collaboration. New agriculture lags on these two; new medicine sits in the middle (small subgroup).
  • Gender: No meaningful differences across any AI literacy dimension.
  • Residence: Clear gradient (city > town > rural) for Smart Responsibility, Smart Knowledge & Skills, and Intelligent Thinking. The digital environment matters.
  • AI tool use: More daily use links to higher Smart Responsibility and Smart Knowledge & Skills; weaker or non-significant ties to Intelligent Thinking and Human-Machine Collaboration. Group differences exist across usage categories, but non-linear patterns were not tested.
  • Grade: Initial differences appeared in some dimensions, but they did not hold after conservative corrections, suggesting relative stability across years.
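The "significant but small" pattern is easy to see numerically: with n = 1,466, even a correlation of r = .08 (within the |r| < .10 range reported) clears the conventional significance threshold. A quick sketch; the r value here is illustrative, not taken from the paper's tables:

```python
import math

# Illustrative only: r = .08 is a stand-in within the reported |r| < .10
# range, not a specific value from the study.
n = 1466
r = 0.08

# t-statistic for testing a Pearson correlation against zero
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
print(f"t = {t:.2f}")       # ~3.07, past the 1.96 cutoff for p < .05

# Practical magnitude: shared variance is under 1%.
print(f"r^2 = {r**2:.4f}")  # 0.0064
```

Large samples make tiny effects detectable; the r² line is the reminder that detectability is not practical importance.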

How this maps to international frameworks

  • Smart Knowledge & Skills → foundational AI concepts, data and models; overlaps with DigComp "Information & Data Literacy" and "Digital Content Creation."
  • Intelligent Thinking → higher-order cognition (problem modeling, critique, reflection); overlaps with DigComp "Problem Solving."
  • Human-Machine Collaboration → co-creation and boundary management; overlaps with DigComp "Communication & Collaboration."

Useful references: UNESCO Guidance for Generative AI in Education and Research and DigComp 2.2.

Implications for curriculum and program design

Hours with tools help basics; they don't automatically build thinking. Treat "use → literacy → thinking" as a mediated pathway that needs structure.

  • Design for outcomes, not screen time: Tie AI use to authentic tasks (data critique, modeling, ethical decision-making) with clear rubrics.
  • Lean on SCT: Build self-efficacy with quick wins, worked examples, and peer demonstrations before open-ended projects.
  • Use constructivist tasks: Project- and problem-based learning that requires students to frame problems, test prompts/algorithms, and reflect on trade-offs.
  • Make ethics operational: "Smart Responsibility" should include model limits, bias identification, and audit trails in deliverables.
  • Differentiate by discipline:
    • New engineering: Emphasize algorithmic decomposition, verification, and code-prompt hybrids.
    • New liberal arts: Lean into human-AI co-authoring, argument quality, and source evaluation.
    • New agriculture: Prioritize data collection quality, simple automation, and decision support with constraints.
    • New medicine: Case-based reasoning, documentation quality, and safety boundaries (non-diagnostic use).
  • Close the environment gap: Provide access points, curated toolkits, and structured practice especially for rural-background students.
  • Assess the right things: Separate foundational knowledge, ethical judgment, higher-order thinking, and collaboration in scoring.
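To make the "assess the right things" point concrete, here is a minimal sketch of a rubric record that keeps dimensions apart rather than collapsing them into one grade. The field names are mine, loosely adapted from the constructs above; this is not an instrument from the study:

```python
from dataclasses import dataclass

# Hypothetical rubric sketch: each dimension is scored on its own 0-4
# scale and reported separately, so a weak dimension cannot hide
# behind a strong composite.
@dataclass
class RubricScore:
    knowledge: int      # foundational AI concepts (0-4)
    ethics: int         # bias checks, model limits documented (0-4)
    higher_order: int   # problem framing, critique, reflection (0-4)
    collaboration: int  # human-AI role management (0-4)

    def profile(self) -> dict:
        # Per-dimension profile; deliberately no single composite score.
        return vars(self)

s = RubricScore(knowledge=3, ethics=2, higher_order=1, collaboration=3)
print(s.profile())
# {'knowledge': 3, 'ethics': 2, 'higher_order': 1, 'collaboration': 3}
```

The design choice is the absence of an aggregate: a student strong on knowledge but weak on higher-order thinking shows up as exactly that.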

Practical course moves (you can implement this term)

  • Weekly "AI lab": 60 minutes of guided tasks (data cleaning, prompt-response critique, simple automation) plus a 10-minute reflection.
  • Two-stage projects: Stage 1 (scaffolded) to build confidence; Stage 2 (open) to test transfer.
  • Collaboration protocol: Human-AI role sheets (who drafts, who verifies, who documents model limits) and peer review checklists.
  • Ethics in context: Require a model card snippet in every submission: data sources used, risks, and what was checked by humans.
  • Rubrics aligned to AIL and CT: Score decomposition, abstraction, evidence use, error handling, and collaboration separately.
  • Instrumentation: Track task completion and reflection quality rather than raw tool hours. If you measure time, model potential non-linear effects.
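The last bullet's caution about non-linear effects can be handled by dummy-coding ordinal usage categories instead of treating them as a linear scale. A sketch with synthetic data; the plateau shape is hypothetical, chosen only to illustrate the comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ordinal usage categories (0 = none ... 3 = heavy) and a
# synthetic literacy score that plateaus after moderate use.
usage = rng.integers(0, 4, size=400)
score = np.minimum(usage, 2) + rng.normal(scale=0.5, size=400)

# Linear coding assumes equal steps between adjacent categories.
X_lin = np.column_stack([np.ones(usage.size), usage.astype(float)])
rss_lin = np.linalg.lstsq(X_lin, score, rcond=None)[1][0]

# Dummy coding gives each category its own mean, capturing the plateau.
X_dum = np.eye(4)[usage]
rss_dum = np.linalg.lstsq(X_dum, score, rcond=None)[1][0]

# The dummy model nests the linear one, so its residual sum of squares
# can only be lower; a large gap flags a non-linear pattern.
print(rss_dum < rss_lin)  # True
```

In practice you would compare the two fits formally (e.g. an F-test on the residual difference), but even this rough check reveals whether "more hours" keeps paying off or levels out.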

Measurement notes and caveats

  • Effects are small. Statistical significance does not equal practical significance; treat gains as incremental.
  • Cross-sectional, self-report design raises common-method concerns; results are best used for program direction, not causal claims.
  • AI-use time was an ordinal measure; linear trends were tested, but non-linear effects were not modeled.
  • Single non-elite institution; small N in new medicine. Generalizability is limited.
  • "Intelligent Thinking" and CT are closely related by design; factor analysis separated them, but conceptual overlap remains.

A simple implementation checklist

  • Define capability targets per discipline (knowledge, thinking, collaboration, responsibility).
  • Bundle tool access with guided tasks, not just tutorials.
  • Build self-efficacy early; add open-ended challenges later.
  • Score reflections. Make students state what the AI missed or got wrong.
  • Instrument your course: capture artifacts and rubric scores, then iterate.
  • Watch for equity gaps (urban-rural). Provide extra scaffolds where needed.

Bottom line

Treat AI literacy and computational thinking as co-developing competencies. Create structured practice, assess thinking explicitly, and adapt by discipline. Tool time helps, but curriculum design does the heavy lifting.

