Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI
Project Overview
The document explores the role of generative AI, particularly Large Language Models (LLMs), in advancing explainable AI (XAI) and its use in educational practice. It argues for human-centered approaches that deliver clear, interpretable explanations to both experts and non-experts, especially in critical domains such as healthcare. The proposed framework uses LLMs to generate context-aware explanations, tackling core challenges of AI transparency and explainability. This use of generative AI aims both to foster user engagement and to build trust in AI systems, ultimately supporting better decision-making in educational settings. The findings suggest that integrating these technologies can improve educational outcomes by helping learners understand and interact with AI tools across diverse learning environments.
Key Applications
Human-Centered Explainable AI (HCXAI) Framework using LLMs
Context: Healthcare, specifically in well-being monitoring for doctors and patients
Implementation: The framework uses in-context learning with LLMs to provide technical explanations for experts alongside human-friendly explanations for non-experts; a concrete sketch follows this list.
Outcomes: Increased interpretability for non-experts, improved trust and engagement in AI systems, and high-quality explanations aligned with foundational XAI methods.
Challenges: the black-box nature of AI models, which limits transparency; the differing explainability needs of experts and non-experts; and the time-consuming post-hoc interpretation process.
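To make the in-context learning step concrete, the following is a minimal sketch, assuming the OpenAI chat completions API (this analysis itself was produced with gpt-4o-mini-2024-07-18). The few-shot example, prompt wording, feature names, and the explain() helper are illustrative assumptions, not the authors' actual prompts or code.

```python
# Minimal sketch of in-context learning for dual-audience explanations.
# Assumptions: OpenAI chat completions API; the few-shot example, the
# feature names, and the attribution values are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One worked example shown to the model (the "in-context learning"):
# raw feature attributions paired with the two explanation styles.
FEW_SHOT_EXAMPLE = """\
Attributions: sleep_hours=-0.42, steps=-0.18, screen_time=+0.31
Expert explanation: Reduced sleep duration is the dominant negative
contributor (-0.42), followed by elevated screen time (+0.31).
Non-expert explanation: The system flagged your well-being mainly
because you slept less than usual and spent more time on screens."""

def explain(attributions: dict[str, float]) -> str:
    """Ask the LLM for paired expert and non-expert explanations."""
    attribution_text = ", ".join(
        f"{name}={value:+.2f}" for name, value in attributions.items()
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini-2024-07-18",
        messages=[
            {"role": "system",
             "content": ("You translate feature attributions from a "
                         "well-being model into explanations. Follow "
                         "the format of the example exactly.")},
            {"role": "user", "content": FEW_SHOT_EXAMPLE},
            {"role": "user", "content": f"Attributions: {attribution_text}"},
        ],
    )
    return response.choices[0].message.content

print(explain({"sleep_hours": -0.55, "steps": 0.12, "screen_time": 0.08}))
```

Keeping both explanation styles in a single prompt is one way to serve experts and non-experts from the same pipeline; the paper's exact prompt design may differ.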
Implementation Barriers
Technical and Human-Centered Barrier
The 'black-box' nature of most AI models restricts transparency and understanding. XAI explanations often do not make sense to non-experts, limiting their engagement.
Proposed Solutions: integrating human-centered design principles into XAI frameworks, developing explanations that are interpretable and accessible to non-experts, and using LLMs to generate context-aware explanations.
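To illustrate the gap these solutions target, the sketch below shows the kind of raw post-hoc attribution a foundational XAI method produces, here SHAP over a synthetic well-being classifier. The model, features, and data are illustrative assumptions rather than the paper's setup; the point is that this numeric output is what experts can read directly and what an LLM layer must translate for everyone else.

```python
# Sketch: producing the raw post-hoc attributions that the LLM layer
# later verbalizes. Model, features, and data are synthetic
# illustrations; the paper does not prescribe this exact stack.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["sleep_hours", "steps", "screen_time", "heart_rate"]
X = rng.normal(size=(500, len(features)))
# Toy label: "low well-being" driven mostly by sleep and screen time.
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500) < 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)  # the "black box"
explainer = shap.TreeExplainer(model)           # foundational XAI method
shap_values = explainer.shap_values(X[:1])      # attributions for one user

# This numeric vector is readable for experts but opaque to
# non-experts, which is the interpretability gap described above.
attributions = dict(zip(features, np.ravel(shap_values)))
print(attributions)
```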
Resource Barrier
Generating explanations requires additional post-processing and human involvement, which makes the process time-consuming.
Proposed Solutions: using LLMs to streamline explanation generation, reducing the need for extensive manual post-hoc interpretation.
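As a sketch of what such streamlining could look like, the snippet below chains the earlier steps into one automated pass, so no analyst has to interpret each attribution vector by hand. It reuses the hypothetical explain() helper and attribution format from the first sketch; the batching itself is an assumption, not a method described in the paper.

```python
# Sketch of a streamlined explanation pass: attributions go straight
# to the LLM, removing the manual post-hoc interpretation step.
# Reuses the illustrative explain() helper from the first sketch.
def explain_batch(attribution_batch: list[dict[str, float]]) -> list[str]:
    """Generate one dual-audience explanation per attribution vector."""
    return [explain(attributions) for attributions in attribution_batch]

# Each dict would come from the SHAP step, one per monitored user.
reports = explain_batch([
    {"sleep_hours": -0.55, "steps": 0.12, "screen_time": 0.08},
    {"sleep_hours": 0.10, "steps": -0.40, "screen_time": 0.22},
])
for report in reports:
    print(report)
```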
Project Team
Eva Paraschou, Researcher
Ioannis Arapakis, Researcher
Sofia Yfantidou, Researcher
Sebastian Macaluso, Researcher
Athena Vakali, Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Eva Paraschou, Ioannis Arapakis, Sofia Yfantidou, Sebastian Macaluso, Athena Vakali
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI