
AI2T: Building Trustable AI Tutors by Interactively Teaching a Self-Aware Learning Agent

Project Overview

The document discusses the use of generative AI in education through the AI2T system, which is designed to streamline the creation of Intelligent Tutoring Systems (ITSs). AI2T lets non-programmers train the AI by providing demonstrations and feedback, democratizing the authoring process and making it more accessible. The system employs a self-aware learning algorithm called STAND, which learns from limited data while generating certainty scores that let authors gauge its learning progress. By addressing the challenges of traditional ITS authoring methods, AI2T substantially reduces the time and expertise needed to develop effective tutoring systems, simplifying the creation of tailored educational tools while improving their reliability and adaptability in learning environments.
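The "self-aware" idea can be made concrete with a small sketch. STAND's actual certainty estimate is not reproduced here; this toy uses a common stand-in, agreement across a bootstrap ensemble of simple 1-nearest-neighbour classifiers, to show how a learner can attach a certainty score to each prediction. All names and data below are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter

def nn_predict(train, x):
    """Predict with 1-nearest-neighbour over integer feature vectors."""
    best = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return best[1]

def certainty(train, x, n_models=25, seed=0):
    """Return (label, certainty): the majority vote over bootstrap-resampled
    models, and the fraction of models that agreed with it."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        votes[nn_predict(sample, x)] += 1
    label, count = votes.most_common(1)[0]
    return label, count / n_models

# Tiny toy dataset: two well-separated clusters.
train = [((0, 0), "A"), ((0, 1), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(certainty(train, (0, 0)))  # inside the "A" cluster: high agreement
print(certainty(train, (3, 3)))  # between clusters: typically lower agreement
```

A score near 1.0 means the ensemble is stable under resampling, which is the kind of signal an authoring tool can surface to tell a user how much more training is needed.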

Key Applications

AI2T (Interactively Teachable AI for Authoring Intelligent Tutoring Systems)

Context: Educational technology development for instructional designers, teachers, and researchers, particularly non-programmers.

Implementation: Authors demonstrate solutions to problems and provide feedback on AI2T's attempts to solve these problems, training it to induce correct tutoring behaviors.
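The demonstrate-and-correct workflow above can be sketched as a loop: the agent proposes a next step, the author confirms or rejects it, and demonstrates the right step when the agent has none. `ToyAgent` and `author_session` are hypothetical stand-ins for illustration, not AI2T's actual API.

```python
class ToyAgent:
    """Remembers (state, action) pairs that the author labelled correct."""
    def __init__(self):
        self.known = {}  # state -> approved action

    def propose(self, state):
        return self.known.get(state)  # None means "I don't know yet"

    def learn(self, state, action, correct):
        if correct:
            self.known[state] = action
        else:
            self.known.pop(state, None)

def author_session(agent, problem_steps):
    """problem_steps: list of (state, correct_action) from a worked problem.
    Returns how many steps the author had to demonstrate."""
    demonstrations = 0
    for state, correct_action in problem_steps:
        proposal = agent.propose(state)
        if proposal == correct_action:
            agent.learn(state, proposal, correct=True)   # author confirms
        else:
            if proposal is not None:
                agent.learn(state, proposal, correct=False)  # author rejects
            agent.learn(state, correct_action, correct=True) # author demonstrates
            demonstrations += 1
    return demonstrations

steps = [("2x=6", "x=3"), ("x+1=4", "x=3")]
agent = ToyAgent()
first = author_session(agent, steps)   # every step needs a demonstration
second = author_session(agent, steps)  # agent now proposes correctly
print(first, second)  # → 2 0
```

The point of the sketch is the declining demonstration count: as training proceeds, the author shifts from demonstrating steps to merely confirming the agent's proposals.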

Outcomes: AI2T significantly reduces the time required to create ITSs from hundreds of hours to approximately 20-30 minutes of training and enables both non-programmers and programmers to produce effective tutoring systems.

Challenges: Users may fail to provide feedback on all proposed actions, and they need a clear understanding of the certainty scores to gauge when sufficient training has been completed.

Implementation Barriers

User Experience

Authors may not provide feedback on all proposed actions, leading to incomplete training of the AI.

Proposed Solutions: Implement clearer visual indicators in the interface to prompt users to provide feedback on all actions.

Training Completeness

Participants may stop training the AI too early due to misunderstanding certainty scores.

Proposed Solutions: Provide better training and support materials to help users interpret certainty scores and understand the completion criteria.
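One way to make the completion criteria concrete for users is a simple heuristic over the certainty scores. The threshold and window below are assumed values for illustration, not AI2T's actual stopping rule.

```python
def training_complete(certainty_history, threshold=0.95, window=5):
    """True once the last `window` certainty scores all meet `threshold`,
    i.e. the agent has stayed confident over several consecutive problems."""
    recent = certainty_history[-window:]
    return len(recent) == window and all(c >= threshold for c in recent)

print(training_complete([0.6, 0.8, 0.9]))                      # False: too few scores
print(training_complete([0.7, 0.96, 0.97, 0.99, 0.96, 0.98]))  # True: stable and high
```

Surfacing a rule like this in the interface gives authors an objective signal for "done", rather than leaving them to interpret raw certainty numbers.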

Project Team

Daniel Weitekamp

Researcher

Erik Harpstead

Researcher

Kenneth Koedinger

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Daniel Weitekamp, Erik Harpstead, Kenneth Koedinger

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
