
Towards a Learner-Centered Explainable AI: Lessons from the learning sciences

Project Overview

This project explores the integration of generative AI in education, focusing on the design and evaluation of explainable AI (XAI) systems through a learner-centered approach that aligns AI explanations with human learning objectives. It presents a framework, grounded in the learning sciences, for systematically developing XAI that supports learners' understanding and decision-making. A case study in AI-augmented social work illustrates the framework in practice, underscoring the value of collaborative design with stakeholders to refine learning objectives and improve the effectiveness of AI tools in authentic learning environments. The findings suggest that AI systems designed around human-centered learning can foster deeper comprehension and better decision-making, and that aligning AI technologies with learners' needs and goals leads to more effective and engaging learning experiences.

Key Applications

Allegheny Family Screening Tool (AFST)

Context: AI-augmented social work, targeting social workers handling child maltreatment referrals.

Implementation: Collaborative design processes involving social workers to define learning objectives and create training materials.

Outcomes: Enhanced ability for social workers to understand and effectively integrate AI predictions into their decision-making processes.

Challenges: Initial lack of understanding of the AI tool among social workers, and a mismatch between the objectives of the AI predictions and workers' immediate decision-making needs.

Implementation Barriers

Organizational

Limited integration of AI tools into the daily workflows of social workers, leading to confusion and inefficiency.

Proposed Solutions: Co-design training materials with social workers to align AI outputs with their decision-making contexts.

Technical

Challenges in ensuring AI models align with the real-time needs and objectives of social workers.

Proposed Solutions: Iterative design and feedback processes that involve stakeholders in defining success metrics and learning objectives.

Project Team

Anna Kawakami, Researcher

Luke Guerdan, Researcher

Yang Cheng, Researcher

Anita Sun, Researcher

Alison Hu, Researcher

Kate Glazko, Researcher

Nikos Arechiga, Researcher

Matthew Lee, Researcher

Scott Carter, Researcher

Haiyi Zhu, Researcher

Kenneth Holstein, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Anna Kawakami, Luke Guerdan, Yang Cheng, Anita Sun, Alison Hu, Kate Glazko, Nikos Arechiga, Matthew Lee, Scott Carter, Haiyi Zhu, Kenneth Holstein

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
