
Accessible and Pedagogically-Grounded Explainability for Human-Robot Interaction: A Framework Based on UDL and Symbolic Interfaces

Project Overview

This document explores the use of generative AI in education by presenting a framework designed to enhance human-robot interaction, particularly for users with varying cognitive and communicative abilities. It emphasizes integrating Universal Design for Learning (UDL) principles so that robotic explanations are clear and comprehensible. Key applications include symbolic multimodal interfaces, such as AsTeRICS Grid and ARASAAC pictograms, combined with real-time communication systems to promote explainability and mutual understanding between humans and robots. The findings underscore the crucial role of human mediators, such as teachers, in facilitating shared understanding in educational settings. Overall, the document illustrates how generative AI can improve accessibility and pedagogical effectiveness in education, fostering an inclusive learning environment that accommodates diverse learners' needs.

Key Applications

A framework for accessible and explainable robot interaction using AsTeRICS Grid and ARASAAC pictograms.

Context: Educational and assistive contexts, particularly for users with cognitive or communicative support needs.

Implementation: The framework integrates symbolic multimodal interfaces with a lightweight HTTP-to-ROS 2 bridge, allowing real-time interaction and explanation triggering.

Outcomes: Enhanced mutual understanding and trust between users and robots, improved communication through accessible explanations, and support for users with special needs.

Challenges: The need for explanations to be tailored to each user's cognitive profile, potential technical limitations in real-time interaction, and the challenge of ensuring user engagement.
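To make the implementation concrete, here is a minimal sketch of how such an HTTP-to-ROS 2 bridge might route an incoming selection from a symbolic grid to a robot topic. The payload field `symbol_id`, the symbol identifiers, and the topic names are assumptions for illustration, not the authors' actual API; in a real bridge, the returned message would be published with `rclpy` rather than returned as a dict.

```python
import json

# Hypothetical mapping from grid symbol identifiers (e.g. ARASAAC-based
# cells in AsTeRICS Grid) to robot commands and explanation triggers.
# Real identifiers and topics would come from the grid configuration.
SYMBOL_ACTIONS = {
    "arasaac_go":   {"topic": "/cmd/move",        "data": "forward"},
    "arasaac_stop": {"topic": "/cmd/move",        "data": "stop"},
    "arasaac_why":  {"topic": "/explain/request", "data": "last_action"},
}

def route_payload(raw_body: bytes) -> dict:
    """Translate an incoming HTTP POST body (JSON) into a ROS 2-style
    topic/message pair. Raises ValueError for unknown symbols."""
    payload = json.loads(raw_body)
    symbol = payload.get("symbol_id")
    if symbol not in SYMBOL_ACTIONS:
        raise ValueError(f"unknown symbol: {symbol}")
    action = SYMBOL_ACTIONS[symbol]
    # In the actual bridge this would be published via rclpy, e.g.
    # publisher.publish(String(data=action["data"])) on action["topic"].
    return {"topic": action["topic"], "message": {"data": action["data"]}}
```

Keeping the routing logic separate from both the HTTP server and the ROS 2 publisher is one way to realize the modularity the framework describes: the same mapping can be exercised without a running robot.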

Implementation Barriers

Technical Barrier

Challenges in ensuring real-time communication and interaction between users and robots.

Proposed Solutions: Implementing a modular framework that integrates existing assistive technologies with standard robotic middleware.

Cognitive Barrier

Users may misinterpret the robot's capabilities and intentions due to the complexity of robotic behaviors.

Proposed Solutions: Designing explanations that are clear, simple, and adaptable to different cognitive levels, and incorporating feedback mechanisms to assess user understanding.
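One way to sketch such adaptive explanations is to layer modalities and verbosity by user profile. The profile fields and the three-tier layering below are assumptions in the spirit of UDL's multiple means of representation, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Hypothetical cognitive profile; the fields are illustrative only."""
    reading_level: int        # 0 = pictograms only, 1 = simple text, 2 = full text
    prefers_pictograms: bool  # always include a symbolic rendering

def adapt_explanation(event: str, profile: UserProfile) -> dict:
    """Select explanation modalities and verbosity for a robot event,
    so the same behavior can be explained at different cognitive levels."""
    explanation = {"event": event, "modalities": []}
    if profile.prefers_pictograms or profile.reading_level == 0:
        explanation["modalities"].append("pictogram")  # e.g. an ARASAAC symbol
    if profile.reading_level >= 1:
        explanation["modalities"].append("text")
        explanation["text"] = (
            f"I did this: {event}."
            if profile.reading_level == 1
            else f"I performed '{event}' because it was the next planned step."
        )
    return explanation
```

A feedback mechanism, as proposed above, could then log which modality the user responded to and adjust the profile over time.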

Project Team

Francisco J. Rodríguez Lera

Researcher

Raquel Fernández Hernández

Researcher

Sonia Lopez González

Researcher

Miguel Angel González-Santamarta

Researcher

Francisco Jesús Rodríguez Sedano

Researcher

Camino Fernandez Llamas

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Francisco J. Rodríguez Lera, Raquel Fernández Hernández, Sonia Lopez González, Miguel Angel González-Santamarta, Francisco Jesús Rodríguez Sedano, Camino Fernandez Llamas

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
