
Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming

Project Overview

The document explores the integration of generative AI in education through the lens of Progressive Explanation Generation (PEG), an approach that delivers explanations as a coherent, ordered sequence aligned with human cognitive processing. Ordering information in this way matters because explanations that are easy to follow reduce cognitive load, which in turn improves task performance and comprehension in educational settings. The effectiveness of PEG is supported by experimental validation in scavenger hunt and escape room scenarios, demonstrating its potential to improve human-robot collaboration in educational contexts. Overall, the document highlights the role of generative AI in creating interactive and personalized learning experiences, arguing that structured, progressively delivered explanations can bridge the gap between complex AI behavior and learners' understanding, leading to more effective educational outcomes.

Key Applications

Progressive Explanation Generation (PEG)

Context: Human-robot teaming, specifically in tasks requiring explanations of plans and decisions made by robots to human partners in complex environments.

Implementation: The approach uses a goal-based Markov Decision Process (MDP) formulation together with inverse reinforcement learning to model human preferences over the order in which information is presented in an explanation.
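
As a rough illustration of this kind of formulation (a sketch under stated assumptions, not the authors' implementation), the snippet below treats the ordering of explanation units as a goal-based MDP whose state is the set of units already conveyed, and fits a linear ordering reward with a maximum-entropy-IRL-style feature-matching update. The unit names, features, and demonstration orderings are all hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code): explanation ordering as a
# goal-based MDP, with an order-preference reward fitted by a max-ent-IRL-style
# feature-matching update. Unit names, features, and demos are hypothetical.
import itertools
from typing import FrozenSet, Sequence

import numpy as np

UNITS = ["goal", "constraint", "action_choice"]   # explanation units to convey
State = FrozenSet[str]                            # units conveyed so far; goal state = all units

def phi(state: State, unit: str) -> np.ndarray:
    """Features of conveying `unit` when `state` has already been conveyed."""
    return np.array([
        1.0 if unit == "goal" and not state else 0.0,             # goal stated first
        1.0 if unit == "constraint" and "goal" in state else 0.0, # constraint after goal
        float(len(state)),                                        # position in the sequence
    ])

def order_features(order: Sequence[str]) -> np.ndarray:
    """Accumulated features along one complete ordering of the units."""
    total, state = np.zeros(3), frozenset()
    for unit in order:
        total += phi(state, unit)
        state = state | {unit}
    return total

def expected_features(w: np.ndarray) -> np.ndarray:
    """Expected feature counts under a Boltzmann distribution over all orderings."""
    orders = list(itertools.permutations(UNITS))
    scores = np.array([w @ order_features(o) for o in orders])
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return sum(pi * order_features(o) for pi, o in zip(p, orders))

# Hypothetical human demonstrations: orderings people found easiest to follow.
demos = [("goal", "constraint", "action_choice"),
         ("goal", "constraint", "action_choice"),
         ("goal", "action_choice", "constraint")]
demo_mean = np.mean([order_features(d) for d in demos], axis=0)

w = np.zeros(3)
for _ in range(200):                     # gradient ascent on the max-ent objective
    w += 0.1 * (demo_mean - expected_features(w))

best = max(itertools.permutations(UNITS), key=lambda o: w @ order_features(o))
print("learned preferred order:", best)
```

The learned weights score any candidate ordering, so a planner can pick the ordering that best matches the preferences expressed in the demonstrations.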

Outcomes: Demonstrated improved task performance and reduced cognitive load when using PEG compared to traditional explanation methods.

Challenges: Complexity in modeling human cognitive preferences and ensuring explanations are dynamically adapted to the human's understanding.

Implementation Barriers

Cognitive Barrier

Differences in cognitive capabilities between the robot (explainer) and the human (explainee) can lead to misunderstandings and ineffective explanations.

Proposed Solutions: Implementing progressive explanations that consider human cognitive processes and preferences for information order.
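
As a hedged sketch of how such a progressive delivery loop might look (a hypothetical API, not the authors' system), the snippet below conveys one explanation unit at a time in a learned order and reacts to a stand-in comprehension signal before moving on.

```python
# Minimal sketch (hypothetical API, not the paper's system): deliver an explanation
# one unit at a time, in a learned order, adapting to the listener's understanding.
from typing import Callable, Sequence

def explain_progressively(order: Sequence[str],
                          render: Callable[[str], str],
                          understood: Callable[[str], bool]) -> None:
    """Present explanation units in `order`, flagging any unit that is not understood."""
    for unit in order:
        print(render(unit))
        if not understood(unit):
            # A real system would rephrase or elaborate; here we only flag it.
            print(f"(re-stating '{unit}' before moving on)")

# Example usage with stub callbacks standing in for the human in the loop.
explain_progressively(
    order=["goal", "constraint", "action_choice"],
    render=lambda u: f"Robot: here is the {u} behind my plan.",
    understood=lambda u: True,
)
```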

Project Team

Mehrdad Zakershahrak

Researcher

Shashank Rao Marpally

Researcher

Akshay Sharma

Researcher

Ze Gong

Researcher

Yu Zhang

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Mehrdad Zakershahrak, Shashank Rao Marpally, Akshay Sharma, Ze Gong, Yu Zhang

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
