
Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models

Project Overview

The document summarizes a study on generative AI in education, specifically a metacognitive prompting framework that aims to make AI tutors built on Large Language Models (LLMs) more effective. The framework applies Violation of Expectation (VoE), a concept from developmental psychology, to reduce Theory of Mind (ToM) prediction error: the model predicts a user's next input, compares the prediction with what the user actually says, and learns from violated expectations, allowing it to better anticipate user needs and behavior. The application the authors developed, Bloom, showed statistically significant improvements in predicting user inputs, pointing toward a more personalized learning experience. The document also notes open challenges, including latency and the need for better VoE data retrieval methods, indicating that while generative AI holds promise in educational settings, further refinement is required to maximize its effectiveness and efficiency. Overall, the findings suggest that techniques like VoE-based metacognitive prompting can meaningfully improve personalized learning and user engagement in educational contexts.

Key Applications

Bloom, a free AI tutor available on the web and Discord

Context: Educational context aimed at users interacting with an AI tutor for learning purposes

Implementation: Utilized a metacognitive prompting framework to reduce ToM prediction errors through VoE

Outcomes: Statistically significant reductions in error when predicting user inputs

Challenges: Latency issues and the complexity of managing user input data

Implementation Barriers

Technical Barrier

Latency in processing user interactions, which reduces the number of conversation turns and affects user experience.

Proposed Solutions: Optimizing the VoE data retrieval schemes and improving the prompting framework.
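One way to optimize VoE data retrieval, sketched below under assumptions not taken from the paper, is to inject only the top-k most relevant stored facts into each prompt instead of all of them. The token-overlap scoring here is a deliberately simple placeholder; a production system would more likely rank facts by embedding similarity.

```python
# Minimal sketch of per-turn retrieval over stored VoE facts: rank facts
# against the current user message and keep only the top k. Scoring is
# naive token overlap, used here only to keep the example self-contained.

def score(query: str, fact: str) -> int:
    """Count shared lowercase tokens between the query and a fact."""
    return len(set(query.lower().split()) & set(fact.lower().split()))

def top_k_facts(query: str, facts: list[str], k: int = 3) -> list[str]:
    """Return the k stored facts most relevant to the current message."""
    ranked = sorted(facts, key=lambda f: score(query, f), reverse=True)
    return ranked[:k]

facts = [
    "Prefers short explanations",
    "Is studying French vocabulary",
    "Often asks for practice problems",
    "Dislikes long code examples",
]
relevant = top_k_facts("give me a french practice problem", facts, k=2)
```

Trimming the retrieved context this way shortens prompts, which also addresses the latency barrier noted above.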

Data Management Barrier

Challenges in managing and utilizing psychological data derived from user interactions while ensuring user privacy and data security.

Proposed Solutions: Implementing encryption and confidential computing techniques to protect user data.
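As one concrete illustration of protecting user-derived data, the sketch below pseudonymizes user identifiers with a keyed hash before storing facts. This is a complement to, not a substitute for, the encryption and confidential computing the project proposes; the secret key and storage layout are invented for the example.

```python
# Sketch: store violation-derived facts under a keyed-hash pseudonym so the
# store never contains raw user identifiers. The key would live in a secure
# secret store in practice; the value below is a placeholder.

import hmac
import hashlib

SECRET_KEY = b"example-key-kept-in-a-secure-store"  # hypothetical managed secret

def pseudonym(user_id: str) -> str:
    """Deterministic keyed hash: the same user always maps to the same
    pseudonym, but the mapping cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

store: dict[str, list[str]] = {}

def save_fact(user_id: str, fact: str) -> None:
    """File a fact under the user's pseudonym rather than the raw ID."""
    store.setdefault(pseudonym(user_id), []).append(fact)

save_fact("alice@example.com", "Prefers visual explanations")
```

Because the pseudonym is deterministic, facts for a returning user still accumulate under one key, while the raw identifier never appears in the stored data.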

Project Team

Courtland Leer

Researcher

Vincent Trost

Researcher

Vineeth Voruganti

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Courtland Leer, Vincent Trost, Vineeth Voruganti

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
