
Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks

Project Overview

This document examines the role of AI, particularly explainable artificial intelligence (XAI), in education, focusing on its influence on learning outcomes during decision-making tasks. Through a simulated nuclear power plant management scenario, it evaluates the effectiveness of various explanation strategies—comparing classical and adaptive approaches—when students interact with AI agents. The study reveals that adaptive explanations speed up decision-making and increase user engagement; however, they do not significantly improve learning outcomes when participants interact with humanoid robots. Notably, individuals who learned autonomously outperformed those who depended on AI assistance, indicating potential drawbacks of over-reliance on AI in educational settings. Overall, while AI-generated explanations can facilitate certain aspects of learning, the findings underscore the importance of balancing AI assistance with autonomous learning strategies to optimize educational outcomes.

Key Applications

Explainable AI (XAI) in decision-making tasks

Context: Educational context involving learning-by-doing tasks for non-expert users.

Implementation: Participants interacted with either a computer or a humanoid robot, using either classical or adaptive explanation strategies.

Outcomes: Adaptive explanations facilitated faster decision-making and increased engagement but did not improve overall learning outcomes compared to self-taught participants.

Challenges: Participants tended to over-rely on AI suggestions, leading to reduced exploration and generalization of learned tasks.

Implementation Barriers

Cognitive Bias

Over-reliance on AI suggestions can lead to a lack of exploration and reduced learning efficacy.

Proposed Solutions: Incorporating cognitive forcing strategies to encourage critical thinking when interacting with AI.

Project Team

Marco Matarese

Researcher

Francesco Rea

Researcher

Katharina J. Rohlfing

Researcher

Alessandra Sciutti

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Marco Matarese, Francesco Rea, Katharina J. Rohlfing, Alessandra Sciutti

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
