Decoding Student Cognitive Abilities: A Comparative Study of Explainable AI Algorithms in Educational Data Mining
Exploring students’ cognitive abilities remains a vital focus in education. This study applies data-driven artificial intelligence (AI) models combined with explainability techniques and causal inference to identify factors influencing cognitive skills. It also compares how different explainable AI algorithms interpret educational data mining models, shedding light on their unique perspectives.
Unpacking the Study
Five AI models were built to analyze educational data. Four interpretability algorithms were used to provide global insights into the results: feature importance, Morris Sensitivity, SHAP (SHapley Additive exPlanations), and LIME (Local Interpretable Model-agnostic Explanations). Additionally, Propensity Score Matching (PSM) causal tests helped verify which factors genuinely impact students' cognitive abilities.
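To make the PSM step concrete, here is a minimal sketch of one common variant: 1-nearest-neighbor matching on estimated propensity scores. The logistic propensity model, the function name, and the input names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_via_psm(X_cov, treated, outcome):
    """Estimate the average treatment effect on the treated (ATT)
    via 1-nearest-neighbor propensity score matching.

    X_cov: covariate matrix; treated: 0/1 array (e.g. high parental
    expectations); outcome: cognitive ability score. All hypothetical names.
    """
    # Step 1: model the propensity P(treated = 1 | covariates).
    ps = LogisticRegression(max_iter=1000).fit(X_cov, treated).predict_proba(X_cov)[:, 1]

    treated_idx = np.flatnonzero(treated == 1)
    control_idx = np.flatnonzero(treated == 0)

    # Step 2: match each treated student to the control with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
    matched_controls = control_idx[match.ravel()]

    # Step 3: ATT = mean outcome gap across matched pairs.
    return (outcome[treated_idx] - outcome[matched_controls]).mean()
```

Comparing each treated student with a matched control of similar background is what lets PSM separate a factor's effect on cognition from confounding variables.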
Findings consistently pointed to self-perception and parental expectations as significant influences. However, each explainability algorithm ranked its top features differently, revealing distinct inclinations in how the models are interpreted. Morris Sensitivity spread importance most evenly across features, SHAP and feature importance emphasized overlapping but not identical sets of top factors, and LIME, which builds global insight by aggregating local explanations, diverged most from the other methods.
Why Focus on Cognitive Abilities?
Cognition is the foundation for how students process and apply information, directly affecting learning outcomes. Developing higher-order skills like critical thinking and creativity helps students become independent learners. Improving cognitive abilities has long been a research priority, traditionally approached through classroom methods and emotional education.
While conventional studies relied heavily on statistics to understand cognitive development and its influencers, AI introduces a fresh, data-driven approach. Yet many AI models remain "black boxes" whose decisions are hard to interpret. This is especially problematic in education, where understanding individual learning processes is crucial.
Explainability in AI: A Closer Look
AI models fall into two categories: white-box and black-box. White-box models (like decision trees and linear regression) are transparent and easier to interpret. Black-box models (such as neural networks and support vector machines) are more complex and opaque.
Interpretability can be global (understanding model behavior across the entire dataset) or local (explaining individual predictions). This study focuses on global interpretability to understand the broader factors impacting cognitive abilities.
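The distinction is easy to see with SHAP, which supports both views. This is a minimal sketch, assuming a fitted tree-based model `model` and a feature DataFrame `X` (both hypothetical stand-ins for the study's artifacts):

```python
import numpy as np
import shap  # pip install shap

# Assumed: `model` is a fitted tree ensemble (e.g. Random Forest) and
# `X` is a pandas DataFrame of student features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local view: per-feature contributions to ONE student's prediction.
local_explanation = dict(zip(X.columns, shap_values[0]))

# Global view: average magnitude of each feature's contribution across
# the whole dataset, used to rank factors study-wide.
global_importance = np.abs(shap_values).mean(axis=0)
ranking = sorted(zip(X.columns, global_importance), key=lambda t: -t[1])
```

Averaging absolute SHAP values over all students is one standard way to turn a local method into a global ranking, which is the level of analysis this study works at.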
Key Factors Affecting Cognitive Abilities
- Personal Characteristics: Demographics and students' spontaneous psychological states significantly influence cognitive development.
- Family Background: Family environment and expectations often have complex pathways affecting cognition.
- Growth Experience: Educational exposure and social interactions shape psychological and intellectual growth.
- Teacher-Student Relationship: Constructive feedback and personalized guidance help develop self-regulation and meta-cognitive strategies.
Applying Explainable AI in Educational Data Mining
While some studies have used explainable AI in education, few have conducted large-scale evaluations comparing multiple models and interpretability methods. This research fills that gap, providing a comprehensive assessment of AI’s role in understanding student cognition.
Methodology Overview
The study analyzed data from freshmen at two higher education institutions, drawing on the China Education Panel Survey (CEPS) to gather relevant student information.
Five machine learning algorithms—Lasso regression, Random Forest, XGBoost, Neural Networks, and Support Vector Machines—were employed. Hyperparameter optimization ensured each model was fine-tuned for the best performance.
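As an illustration of the tuning step, here is a minimal sketch using scikit-learn's GridSearchCV on one of the five models (Random Forest). The grid values, the R² scoring choice, and the `X_train`/`y_train` names are assumptions for illustration, not the paper's actual search space.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical search space; the paper's exact grid is not specified here.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5, 10],
}

search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid,
    cv=5,          # 5-fold cross-validation
    scoring="r2",  # treating cognitive ability as a continuous target
    n_jobs=-1,
)
search.fit(X_train, y_train)  # X_train, y_train assumed prepared upstream
best_model = search.best_estimator_
```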
After model selection, the four interpretability algorithms (feature importance, SHAP, LIME, and Morris Sensitivity) were applied to explain how each feature influenced predictions.
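SHAP was sketched earlier; as a second example, a Morris Sensitivity analysis can be run with the SALib library by sampling trajectories through the feature space and scoring each feature's mean absolute elementary effect (mu*). The problem specification below is a hypothetical construction from a feature DataFrame `X`, reusing `best_model` from the tuning sketch; the original study's setup may differ.

```python
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# Hypothetical problem spec built from the observed feature ranges.
# (Morris assumes continuous inputs, so treating categorical survey
# items this way is a simplification.)
problem = {
    "num_vars": X.shape[1],
    "names": list(X.columns),
    "bounds": [[float(X[c].min()), float(X[c].max())] for c in X.columns],
}

# Sample Morris trajectories and evaluate the fitted model on them.
samples = morris_sample.sample(problem, N=100)
predictions = best_model.predict(samples)

# mu_star = mean absolute elementary effect: a global influence score.
Si = morris_analyze.analyze(problem, samples, predictions)
for name, mu_star in sorted(zip(problem["names"], Si["mu_star"]), key=lambda t: -t[1]):
    print(f"{name}: {mu_star:.3f}")
```

Because Morris perturbs one feature at a time along each trajectory, its scores tend to be spread more evenly across features, which matches the "more balanced view" reported for it in the findings.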
Results and Insights
The study confirmed that data preprocessing and model choice significantly impact performance. Importantly, self-perception and parental expectations consistently ranked among the top factors influencing cognitive abilities across all explainability methods.
However, differences emerged in how each algorithm prioritized features, emphasizing the importance of using multiple interpretability tools to gain a well-rounded understanding.
Conclusion
Explainable AI algorithms play a crucial role in educational data mining by clarifying the factors that influence students’ cognitive abilities. This study demonstrates that combining several interpretability approaches provides richer insights than relying on a single method.
Educators and researchers can leverage these findings to better design interventions targeting self-perception and parental involvement, ultimately supporting cognitive development more effectively.
For those interested in expanding their knowledge of AI applications in education and beyond, exploring courses on explainable AI and machine learning can be a valuable next step.