
Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

Project Overview

The document examines Explainable Active Learning (XAL), in which explainable AI (XAI) techniques are incorporated into active learning so that human annotators can better understand, and more effectively teach, machine learning models. By explaining the model's predictions during labeling, XAL aims to foster better-calibrated trust in and understanding of those predictions and to elicit more effective teaching feedback. The findings suggest that explaining model outputs can benefit the teaching process, promoting a more collaborative relationship between educators and AI systems. However, the study also identifies notable challenges, including anchoring effects that can bias annotators' judgments and increased cognitive demands on users, particularly those with less knowledge or experience in the task domain. Overall, the document emphasizes the potential of explanation-augmented AI in educational settings while cautioning about the complexities of implementation and the need for careful attention to user experience.

Key Applications

Explainable Active Learning (XAL)

Context: An educational setting in which human annotators teach machine learning models and are shown explanations of the model's predictions. The target audience includes both novice and experienced users interacting with ML models.

Implementation: Explanations of the model's predictions are integrated into an active learning loop, so that annotators label the queried instances and give feedback informed by, and on, those explanations.
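
As an illustration of this setup, the following is a minimal sketch of an explainable active learning loop, not the authors' implementation: it pairs uncertainty sampling with a simple local explanation (per-feature contributions of a linear model). The synthetic data and the helper name query_with_explanation are assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for the annotation task (assumption for illustration).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
labeled = list(range(20))                        # small seed set of labeled indices
unlabeled = [i for i in range(len(X)) if i not in labeled]

def query_with_explanation(model, X, unlabeled):
    """Pick the most uncertain unlabeled instance and explain its prediction."""
    proba = model.predict_proba(X[unlabeled])
    margins = np.abs(proba[:, 1] - 0.5)          # smaller margin = more uncertain
    idx = unlabeled[int(np.argmin(margins))]
    contributions = model.coef_[0] * X[idx]      # per-feature contribution to the logit
    return idx, int(model.predict(X[idx:idx + 1])[0]), contributions

for _ in range(5):                               # a few teaching rounds
    model = LogisticRegression().fit(X[labeled], y[labeled])
    idx, prediction, contributions = query_with_explanation(model, X, unlabeled)
    top = np.argsort(-np.abs(contributions))[:3]
    print(f"instance {idx}: predicted {prediction}, top features {top.tolist()}")
    # The annotator would review the explanation and supply a label;
    # here the ground-truth label is used to simulate that step.
    labeled.append(idx)
    unlabeled.remove(idx)
```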

Outcomes: The study found that XAL could improve trust calibration and facilitate richer teaching feedback. However, it also revealed potential drawbacks such as cognitive overload and anchoring effects on judgment.

Challenges: Increased cognitive workload for users, potential anchoring effects on judgment, and effectiveness that varies with individual knowledge and experience.

Implementation Barriers

Cognitive Load

The introduction of explanations increases cognitive demands on users, which can hinder their ability to provide accurate feedback.

Proposed Solutions: Implementing simplified explanations and progressively disclosing information to reduce cognitive load.
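
As a rough illustration of progressive disclosure, the sketch below assumes per-feature attribution scores are already available and reveals only the strongest evidence first, letting the annotator request more detail on demand; the function name disclose and the example features are hypothetical.

```python
def disclose(feature_names, contributions, level=1, step=2):
    """Return the `level * step` features with the largest absolute contribution."""
    order = sorted(range(len(contributions)),
                   key=lambda i: abs(contributions[i]), reverse=True)
    return [(feature_names[i], round(contributions[i], 3)) for i in order[:level * step]]

# Level 1 shows only the two strongest features; higher levels disclose more.
print(disclose(["age", "income", "tenure", "visits"], [0.8, -0.1, 0.45, 0.02], level=1))
```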

Anchoring Effect

Providing model predictions alongside explanations may lead to users becoming overly reliant on the model's reasoning, especially if they lack sufficient knowledge.

Proposed Solutions: Using partial explanations that do not reveal the model's judgment or prompting users to provide their judgment before seeing the model's prediction.
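
Below is a hedged sketch of the second mitigation, collecting the annotator's own judgment before revealing the model's prediction and explanation; the get_annotator_label callback is hypothetical and stands in for whatever labeling interface is used.

```python
def annotate_judgment_first(instance, model_prediction, explanation, get_annotator_label):
    # Step 1: ask for the annotator's independent judgment before showing anything.
    initial_label = get_annotator_label(instance, prior=None)
    # Step 2: only then reveal the model's prediction and its explanation.
    print(f"Model predicted {model_prediction}; rationale: {explanation}")
    # Step 3: let the annotator keep or revise their label after seeing the model's view.
    final_label = get_annotator_label(instance, prior=initial_label)
    return initial_label, final_label

# Example with a trivial callback that always answers "spam".
labels = annotate_judgment_first("example email text", "not spam",
                                 "few suspicious tokens",
                                 lambda instance, prior=None: "spam")
```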

Project Team

Bhavya Ghai

Researcher

Q. Vera Liao

Researcher

Yunfeng Zhang

Researcher

Rachel Bellamy

Researcher

Klaus Mueller

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
