
Making Sense of the Unsensible: Reflection, Survey, and Challenges for XAI in Large Language Models Toward Human-Centered AI

Project Overview

The document examines the role of Explainable AI (XAI) in applying large language models (LLMs) to education, arguing that XAI is essential for ethical governance, accountability, and user trust. Effective explainability must serve diverse stakeholders (developers, end-users, and regulators) along four critical dimensions: faithfulness, truthfulness, plausibility, and contrastivity. These dimensions underpin trust and make AI models auditable.

The opacity inherent in LLMs demands adaptive, role-sensitive explanations aligned with legal and ethical standards. The text further advocates a human-centered approach to AI, stressing the moral imperative for technology to augment human reasoning rather than replace it, and identifies interdisciplinary collaboration as a key strategy for building transparent, trustworthy, and socially responsible AI systems. Overall, explainability is fundamental not only to technical functionality but also to ensuring that AI in education supports human understanding and promotes ethical use in sensitive contexts.
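The four dimensions named above can be made concrete as an audit checklist. The following is a minimal sketch; the class name, field semantics, and pass/fail threshold are illustrative assumptions, not definitions from the paper:

```python
from dataclasses import dataclass, asdict

@dataclass
class ExplanationAudit:
    """Illustrative scorecard for the four XAI dimensions (each in 0.0-1.0)."""
    faithfulness: float   # does the explanation reflect the model's actual computation?
    truthfulness: float   # are the facts stated in the explanation correct?
    plausibility: float   # is the explanation understandable and convincing to a human?
    contrastivity: float  # does it say why this output rather than an alternative?

    def passes(self, threshold: float = 0.5) -> bool:
        # An explanation must clear the threshold on every dimension,
        # since a high score on one cannot compensate for failure on another.
        return all(v >= threshold for v in asdict(self).values())

audit = ExplanationAudit(faithfulness=0.9, truthfulness=0.8,
                         plausibility=0.7, contrastivity=0.6)
print(audit.passes())  # → True
```

Treating the dimensions as independent minimum requirements, rather than averaging them, matches the text's point that each is separately necessary for trust and auditing.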

Key Applications

AI-Driven Feedback and Decision-Making Assistant

Context: Educational platforms including university learning environments where AI assists in assessing student work (e.g., essays) and enhancing understanding of AI systems through explainability.

Implementation: Utilizing large language models (LLMs) to analyze student submissions, assign scores, and generate structured feedback, while integrating explainable AI frameworks to provide contextually meaningful and accessible explanations to users.

Outcomes:

- Increased student trust in AI systems
- Enhanced metacognitive learning
- Transparency in feedback
- Improved decision-making capabilities
- Greater understanding of AI functionalities

Challenges:

- Potential opacity concerns in AI-generated feedback
- Need for instructors to validate and annotate explanations
- Complexity in ensuring explanations are contextually meaningful and cognitively accessible
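The implementation described above (score, structured feedback, and an explanation returned together) can be sketched as a single pipeline step. This is a runnable illustration only: the grading heuristic is a trivial stand-in for the LLM call, and every name in it is a hypothetical assumption rather than the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    score: float          # e.g. on a 0-100 scale
    comments: list[str]   # structured feedback items
    rationale: str        # explanation surfaced alongside the result

def assess_essay(essay: str) -> Feedback:
    """Hypothetical stand-in for an LLM grading call.

    A real implementation would send the essay to an LLM with a rubric
    prompt and parse a structured (e.g. JSON) response; a word-count
    heuristic is used here only so the sketch runs without a model.
    """
    score = min(100.0, len(essay.split()) * 5.0)
    comments = (["Thesis is stated early."] if "argue" in essay.lower()
                else ["Consider stating a clear thesis."])
    rationale = (f"Score {score:.0f}/100; each comment is tied to a rubric "
                 "criterion so the student can see why it was given.")
    return Feedback(score=score, comments=comments, rationale=rationale)

fb = assess_essay("I argue that explainability builds trust in AI tutors.")
print(fb.score, fb.rationale)
```

Returning the rationale in the same object as the score is the design point: the explanation travels with the decision, which is what makes instructor validation and annotation of the feedback possible.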

Implementation Barriers

Technical Barrier

The opacity of LLM architectures makes it difficult to audit decisions, ensure accountability, and embed explainability into their complex internals.

Proposed Solutions: Develop layered, audience-specific XAI systems whose role-sensitive explanations prioritize user understanding and enable oversight.

Regulatory Barrier

Compliance with legal frameworks (e.g., GDPR) requires clear, auditable explanations.

Proposed Solutions: XAI systems must integrate formal documentation and regulatory compliance features.

Ethical Barrier

The risk of miscommunication and breakdowns in trust due to opaque AI decision-making processes.

Proposed Solutions: Fostering a social contract between AI developers and communities to ensure accountability and transparency.

Project Team

Francisco Herrera

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Francisco Herrera

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
