
Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

Project Overview

The project examines explainable AI in online and blended learning environments, aiming to build trust and transparency around the predictive models increasingly used in course design. These models analyze clickstream data to forecast student success, and the study assesses explainability techniques such as LIME and SHAP to determine how well their explanations hold up across different course contexts. Interviews with educators revealed substantial variability in trust across explanation methods, with no clear consensus on which techniques were most reliable. Notably, over 85% of educators reported being able to extract actionable insights from the AI-driven analyses, leading to improvements in course design. Overall, the findings underscore the potential of explainable AI to inform educational practice while pointing to the need for further development in explainability to foster educator confidence and optimize student outcomes.
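To make the setup concrete, here is a minimal sketch (not the authors' code) of explaining a clickstream-based success predictor with LIME. The feature names, model choice, and synthetic data are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical weekly clickstream aggregates per student (illustrative only).
feature_names = ["video_views", "forum_posts", "quiz_attempts", "session_count"]
X = rng.poisson(lam=[20, 3, 5, 10], size=(500, 4)).astype(float)
# Synthetic pass/fail labels loosely tied to overall engagement.
y = (X.sum(axis=1) + rng.normal(0, 5, 500) > 38).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["fail", "pass"],
    mode="classification",
)
# Explain one student's prediction as (feature condition, weight) pairs.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for condition, weight in exp.as_list():
    print(f"{condition}: {weight:+.3f}")
```

Each weight indicates how strongly a feature condition pushed this student's prediction toward pass or fail; SHAP produces analogous per-feature attributions through a different estimation strategy.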

Key Applications

Explainable AI methods for student success prediction (LIME, SHAP)

Context: Online and blended learning environments, targeting university-level educators

Implementation: Implemented predictive models based on clickstream data from MOOCs and flipped classrooms; conducted expert interviews to validate explanations.

Outcomes: Increased educator engagement with AI insights, identification of actionable changes in course design, and improved understanding of student success factors.

Challenges: Disagreement among educators on the trustworthiness of explanations; varying preferences for different explainability methods (see the sketch after this list for one way to quantify disagreement between explainers).
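Because educators disagreed on which explainer to trust, one natural diagnostic is to measure how much the methods themselves disagree. Below is a self-contained sketch (an illustration under assumed synthetic data, not the paper's evaluation protocol) that rank-correlates LIME and SHAP attributions for the same prediction.

```python
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
# Hypothetical clickstream features and synthetic labels (illustrative only).
feature_names = ["video_views", "forum_posts", "quiz_attempts", "session_count"]
X = rng.poisson(lam=[20, 3, 5, 10], size=(500, 4)).astype(float)
y = (X.sum(axis=1) + rng.normal(0, 5, 500) > 38).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP attributions for the "pass" class on one student; KernelExplainer
# works with any model exposing predict_proba, and a small background
# sample keeps it fast.
background = shap.sample(X, 50, random_state=0)
sv = shap.KernelExplainer(model.predict_proba, background).shap_values(X[0])
# Depending on the shap version this is a list per class or a 2-D array.
shap_vals = sv[1] if isinstance(sv, list) else np.asarray(sv)[:, 1]

# LIME attributions for the same prediction, mapped back to feature indices.
lime_exp = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["fail", "pass"],
    mode="classification",
).explain_instance(X[0], model.predict_proba, num_features=4)
lime_vals = np.zeros(len(feature_names))
for idx, weight in lime_exp.as_map()[1]:  # label 1 = "pass"
    lime_vals[idx] = weight

# Low rank correlation means the two explainers order feature importance
# differently, mirroring the trust problem the educators reported.
rho, _ = spearmanr(shap_vals, lime_vals)
print(f"LIME vs. SHAP rank agreement: rho = {rho:.2f}")
```

A rank correlation near 1 would mean the two methods largely agree on which features matter; lower values flag the kind of cross-method inconsistency that undermined educator trust in the interviews.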

Implementation Barriers

Trust and Transparency

Lack of trust in AI predictions due to their black-box nature and the inconsistency of explanations across methods.

Proposed Solutions: Implement explainability methods such as LIME and SHAP to improve understanding; conduct expert interviews to validate insights.

Consistency

Inconsistency in preferences for explanation methods among educators, leading to confusion.

Proposed Solutions: Provide more concrete and granular insights, along with background information about student demographics and past knowledge.

Project Team

Vinitra Swamy

Researcher

Sijia Du

Researcher

Mirko Marras

Researcher

Tanja Käser

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Vinitra Swamy, Sijia Du, Mirko Marras, Tanja Käser

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
