
Instructive artificial intelligence (AI) for human training, assistance, and explainability

Project Overview

This project explores the role of generative AI in education, focusing on its potential to enhance human training and understanding through interactive collaboration. It describes Instructive AI, an approach in which an AI agent analyzes human strategies in games such as Hanabi and delivers personalized instructions that improve decision-making and teamwork among learners. A key element is explainable AI (XAI), which is essential for effective human-AI interaction, especially in complex educational tasks where conventional methods may fall short. The findings suggest that applying generative AI in this way can significantly improve educational outcomes, producing better engagement and learning experiences for students. Overall, the project underscores the transformative potential of generative AI in educational settings and advocates for its integration to support tailored learning paths and strengthen learners' collaborative skills.

Key Applications

Instructive AI for the Hanabi card game

Context: Collaborative card game for human players and AI agents, targeting both AI researchers and educators.

Implementation: An AI agent observes human play and calculates recommended strategy changes to improve human performance.

Outcomes: Improved human decision-making and higher scores in collaborative gameplay.

Challenges: Humans often misjudge the value of their own strategies, which complicates effective learning from AI.

Implementation Barriers

Understanding & Explainability

Humans may exhibit significant discrepancies between their perceived strategies and their actual play, and traditional XAI methods may not communicate AI strategies to humans effectively.

Proposed Solutions: Use AI instruction to observe play and issue corrections to human strategies, and develop AI systems that provide human-readable explanations of the recommended strategy adjustments.
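The perceived-versus-actual discrepancy above can be made measurable. A minimal sketch, assuming we have a player's self-reported action preferences and their observed action frequencies as probability distributions: total variation distance gives a single number in [0, 1] summarizing how far self-perception is from actual play. The distributions and action labels here are made-up illustrative data.

```python
def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Half the L1 distance between two action distributions."""
    actions = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in actions)

# Self-reported preferences vs. frequencies measured from game logs
perceived = {"hint": 0.50, "play": 0.30, "discard": 0.20}
observed  = {"hint": 0.25, "play": 0.35, "discard": 0.40}

gap = total_variation(perceived, observed)
```

A large gap would flag exactly the failure mode the project identifies, where instruction grounded in observed behavior is likely to help more than explanations aimed at the player's self-model.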

Project Team

Nicholas Kantack

Researcher

Nina Cohen

Researcher

Nathan Bos

Researcher

Corey Lowman

Researcher

James Everett

Researcher

Timothy Endres

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Nicholas Kantack, Nina Cohen, Nathan Bos, Corey Lowman, James Everett, Timothy Endres

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
