
Mutual Theory of Mind for Human-AI Communication

Project Overview

The paper examines the use of generative AI in education through the Mutual Theory of Mind (MToM) framework for human-AI communication, with a focus on online learning environments. It highlights the potential for AI systems to cultivate a theory of mind: a working model of how users perceive them, which enables the AI to deliver tailored feedback that supports learning. Through iterative communication and the incorporation of user feedback, the MToM framework refines the AI's understanding of human intentions and educational needs. Key applications include personalized learning experiences, adaptive feedback mechanisms, and improved engagement. The findings suggest that when AI effectively interprets and responds to user input, it enhances the learning experience and fosters a supportive educational atmosphere that encourages student participation and achievement, underscoring the role of generative AI in personalizing education and improving learner-AI interaction.
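The iterative loop at the heart of this framework, in which the AI maintains a model of the learner and revises it from their feedback, can be illustrated with a minimal sketch. The following Python example is purely for exposition: the UserModel fields, thresholds, and update rules are assumptions made for this illustration, not details taken from the paper.

```python
# Minimal sketch of an MToM-style feedback loop (illustrative assumptions only).
from dataclasses import dataclass


@dataclass
class UserModel:
    """The agent's working estimate of the learner's state."""
    perceived_confusion: float = 0.5   # 0 = confident, 1 = confused


def generate_response(user_model: UserModel, question: str) -> str:
    # Tailor the depth of the answer to the estimated confusion level.
    if user_model.perceived_confusion > 0.7:
        return f"Let's break this down step by step: {question}"
    return f"Short answer to: {question}"


def update_from_feedback(user_model: UserModel, feedback: str) -> None:
    # Revise the model of the learner from their reply (the "mutual" part of MToM).
    text = feedback.lower()
    if "don't understand" in text:
        user_model.perceived_confusion = min(1.0, user_model.perceived_confusion + 0.2)
    elif "thanks" in text:
        user_model.perceived_confusion = max(0.0, user_model.perceived_confusion - 0.2)


# One turn of the iterative loop: respond, observe feedback, revise, respond again.
model = UserModel()
print(generate_response(model, "What is recursion?"))
update_from_feedback(model, "I still don't understand the base case.")
update_from_feedback(model, "Sorry, I really don't understand this.")
print(generate_response(model, "What is recursion?"))
```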

Key Applications

AI agent for user interaction and feedback

Context: This application has been used both in online learning environments, such as class discussion forums, and in settings where college students interact with an AI system for personality assessments. The AI engages with students to answer questions and provide personality insights, fostering a supportive learning atmosphere.

Implementation: An AI agent was deployed in several educational contexts, including online class discussion forums and personality-assessment interactions. The studies used semi-structured interviews and survey experiments to gather feedback and assess user reactions. The agent was designed to interpret student interactions and adjust its responses based on linguistic cues, thereby improving user engagement and understanding.
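As a rough illustration of this kind of cue-based adjustment, and not a reproduction of the study's actual system, the sketch below maps simple linguistic cues in a student post to a coarse state and adapts the reply's register accordingly. The cue lists, states, and wording are invented for this example.

```python
# Illustrative sketch: adjust a reply based on linguistic cues in a student post.
import re

CONFUSION_CUES = re.compile(r"\b(confused|lost|unclear|not sure|don't get)\b", re.I)
FRUSTRATION_CUES = re.compile(r"\b(frustrat|annoy|waste of time)\w*\b", re.I)


def classify_post(post: str) -> str:
    """Map surface linguistic cues to a coarse student state."""
    if FRUSTRATION_CUES.search(post):
        return "frustrated"
    if CONFUSION_CUES.search(post):
        return "confused"
    return "neutral"


def adjust_reply(post: str, base_answer: str) -> str:
    """Wrap the base answer in a register suited to the detected state."""
    state = classify_post(post)
    if state == "frustrated":
        return "Sorry this has been difficult. " + base_answer + " Does that help?"
    if state == "confused":
        return "Let's take it one step at a time. " + base_answer
    return base_answer


print(adjust_reply("I'm confused about assignment 2.", "The deadline is Friday."))
```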

Outcomes: The AI successfully interpreted student feedback, adjusted responses accordingly, and improved perceptions of its anthropomorphism and intelligence. User reactions varied from trust to skepticism, influenced by their prior knowledge of AI. Overall, the AI's responsiveness contributed positively to user satisfaction, while also revealing the complexities of managing user expectations.

Challenges: User expectations often exceeded the AI's actual capabilities, leading to frustration and communication breakdowns. In addition, misrepresentations by the AI could confuse users and erode their trust, calling for careful design strategies to mitigate these issues.

Implementation Barriers

Expectations vs. Reality

Users often hold unrealistically high expectations of AI systems and become frustrated when the AI fails to meet them. Misrepresentations by the AI can further damage user trust and lead to abandonment of the system.

Proposed Solutions: Enhancing user education about AI capabilities, developing personalized repair strategies and explanations that adapt to user knowledge and perceptions, and implementing adaptive feedback mechanisms to align expectations.
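One of these solutions, personalized repair strategies that adapt to the user's knowledge of AI, might look roughly like the following sketch. The knowledge levels and repair messages are illustrative assumptions made here, not recommendations taken from the paper.

```python
# Hedged sketch: choose a repair message matched to the user's prior knowledge of AI.
REPAIR_MESSAGES = {
    "novice": (
        "I may have gotten that wrong. I'm a program that predicts likely "
        "answers from patterns in text, so I can make mistakes."
    ),
    "intermediate": (
        "That answer may be inaccurate. My responses are generated from "
        "training data and aren't verified facts; please double-check."
    ),
    "expert": (
        "Possible hallucination: low confidence on that claim. Consider "
        "consulting the primary source."
    ),
}


def repair_message(user_ai_knowledge: str) -> str:
    """Pick an explanation aligned with the user's understanding of AI."""
    return REPAIR_MESSAGES.get(user_ai_knowledge, REPAIR_MESSAGES["novice"])


print(repair_message("novice"))
```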

Project Team

Qiaosi Wang

Researcher

Ashok K. Goel

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Qiaosi Wang, Ashok K. Goel

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
