Self-Explanation in Social AI Agents
Project Overview
This document explores the integration of generative AI in education through a self-explanation method used by SAMI, a social AI assistant designed to enhance student interaction in online learning environments. By incorporating ChatGPT within a knowledge-based framework, the approach promotes transparency and trust in AI systems, giving students clearer insight into how SAMI operates. Initial findings from deployment show that students appreciated the explanations and found them helpful for understanding SAMI's functionality. The document also notes challenges, including the complexity of the AI's self-model and the need for more rigorous evaluation of user interactions. Overall, the work illustrates the potential of generative AI to improve educational experiences by fostering better communication and understanding between students and AI systems.
Key Applications
SAMI (Social Agent Mediated Interaction)
Context: Online learning environment, targeting students in large classes such as the OMSCS program at Georgia Institute of Technology.
Implementation: SAMI connects students based on shared interests and characteristics. It uses a Task-Method-Knowledge (TMK) self-model to introspect on its own design and generates self-explanations from that model using ChatGPT.
Outcomes: Improved social interaction among students and enhanced understanding of the AI's functionality through self-explanations.
Challenges: Complexity in creating and maintaining the TMK self-model, as well as ensuring the completeness and correctness of self-explanations.
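The pipeline described above (introspecting on a TMK self-model, then asking an LLM to explain the agent's behavior) can be sketched as follows. This is a minimal illustration, not SAMI's actual implementation: the model contents, structure, and function names are hypothetical, and a real system would send the prompt to an LLM API rather than print it.

```python
# Hypothetical sketch of TMK-grounded self-explanation.
# The TMK content below is illustrative, not SAMI's real self-model.
TMK_MODEL = {
    "task": "Match students with shared interests",
    "methods": [
        "Extract interests from forum introductions",
        "Compare interest profiles across students",
        "Notify matched students of the connection",
    ],
    "knowledge": "Profiles of student interests, locations, and hobbies",
}

def build_explanation_prompt(model: dict, question: str) -> str:
    """Compose an LLM prompt that grounds the answer in the TMK self-model,
    so the generated self-explanation reflects the agent's actual design."""
    methods = "\n".join(f"- {m}" for m in model["methods"])
    return (
        "You are an AI agent. Answer the student's question using only "
        "the self-model below.\n"
        f"Task: {model['task']}\n"
        f"Methods:\n{methods}\n"
        f"Knowledge: {model['knowledge']}\n"
        f"Question: {question}"
    )

prompt = build_explanation_prompt(TMK_MODEL, "How do you match students?")
print(prompt)
```

The design point is that the LLM is constrained to the introspected self-model rather than free-associating, which is what makes the explanation transparent and checkable against the agent's actual knowledge.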
Implementation Barriers
Technical
Creation and maintenance of the TMK model is a manual and time-consuming process, requiring updates with code changes.
Proposed Solutions: Automate the updating process of the TMK model and explore ways to integrate episodic knowledge into the self-explanation framework.
Evaluation
Initial evaluation of the self-explanation method was conducted by system developers, which may not fully represent user experiences.
Proposed Solutions: Conduct studies with actual students to gather diverse feedback and evaluate the effectiveness of the self-explanation method.
Project Team
Rhea Basappa
Researcher
Mustafa Tekman
Researcher
Hong Lu
Researcher
Benjamin Faught
Researcher
Sandeep Kakar
Researcher
Ashok K. Goel
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Rhea Basappa, Mustafa Tekman, Hong Lu, Benjamin Faught, Sandeep Kakar, Ashok K. Goel
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI