University of Kansas CIDDL Develops Framework for Human-Centered AI Integration in Education
Researchers at the University of Kansas, led by James Basham, Director of the Center for Innovation, Design & Digital Learning (CIDDL) and professor of special education, have created a framework to guide responsible artificial intelligence (AI) integration across education levels from pre-kindergarten to higher education. Developed in partnership with the U.S. Department of Education, this framework outlines four key recommendations to ensure AI supports human agency and well-being.
The four recommendations are: establishing a human-centered foundation, engaging in future-focused strategic planning, ensuring equitable access to AI educational opportunities, and pairing ongoing evaluation with professional learning and community development. The framework serves as a practical resource for schools setting up AI task forces, conducting audits, or assessing risks while preparing for ongoing technological and pedagogical change. It also responds to federal directives, including a presidential executive order encouraging AI adoption in education.
Guidance for AI in Education
The framework, titled "Framework for Responsible AI Integration in PreK-20 Education: Empowering All Learners and Educators with AI-Ready Solutions," addresses the need for thoughtful AI adoption in education. Rather than focusing solely on technology, it prioritizes human agency and well-being, stressing the importance of supporting students and educators through the transition.
Key elements include:
- Human-centered foundation: AI should augment human capabilities and foster skills like critical thinking and collaboration, not replace human roles.
- Future-focused strategic planning: Institutions should anticipate AI’s evolving role and align integration with educational goals, considering concerns like data privacy and algorithmic bias.
- Equitable access: All students, regardless of background or learning needs, must have access to AI resources and the skills needed to use them effectively.
- Ongoing evaluation and professional learning: Continuous assessment and educator training ensure AI tools enhance education without unintended negative effects.
This framework offers a foundational approach for schools to responsibly incorporate AI, emphasizing that successful integration requires more than deploying technology; it also demands shifts in pedagogy, institutional structures, and community involvement.
Core Principles of the Framework
The framework elaborates on each of its four pillars:
- Human-Centered Foundation: Focus on preserving human agency and well-being. AI should support the holistic development of learners by enhancing creativity, critical thinking, and collaboration, while considering socio-emotional impacts.
- Future-Focused Strategic Planning: Proactively prepare for AI’s changing capabilities. Strategic plans should align AI use with curriculum design, instructional methods, and assessment, while addressing privacy and bias concerns.
- Equitable Access: Aim to close gaps in technology access and digital literacy. This includes ensuring affordability, infrastructure, and appropriate support for diverse learners and educators.
- Ongoing Evaluation and Professional Learning: Establish continuous processes for assessing AI’s impact and providing educators with the skills to use AI effectively and ethically. Community involvement is vital for sustained success.
These principles position AI as a tool to empower educators and learners, not replace them. Emphasizing equity, transparency, and adaptability, they encourage institutions to approach AI integration thoughtfully and inclusively.
Implementation and Ongoing Development
The framework’s rollout is planned as a phased process, starting with institutional self-assessment and advancing toward continuous refinement based on real-world feedback. The CIDDL team, including experts in educational technology, learning analytics, and AI ethics, is piloting the framework in school districts across Kansas and Missouri.
Data from these pilots includes student performance metrics, classroom observations, and teacher interviews. This mixed-methods approach helps validate the framework and highlights areas needing adjustment. Findings are shared at national conferences such as the American Educational Research Association (AERA) and the International Society for Technology in Education (ISTE) to encourage broader adoption.
The team also integrates principles of Universal Design for Learning (UDL) to ensure AI tools accommodate diverse learners, including students with disabilities and English language learners. Additionally, the use of explainable AI (XAI) techniques aims to make AI decision-making transparent and trustworthy for educators and students alike.
Future efforts include expanding support through funding from organizations like the National Science Foundation (NSF) and the Institute of Education Sciences (IES). The goal is to scale the framework nationally and internationally, ensuring AI empowers all learners and educators without increasing existing inequalities.