
Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations

Project Overview

The document explores the role of generative AI in higher education, focusing on its application to personality assessment for matching students into effective teams. It centers on SAMI, an AI-driven teammate recommendation system that analyzes students' self-introductions to facilitate collaboration. Through surveys, the study assesses students' perceptions of SAMI's accuracy and their broader attitudes toward AI, yielding insights into the relationship between AI literacy and trust in AI systems. Key findings indicate that students are affected by AI misrepresentations of their personality traits, underscoring the need for responsible AI design that aligns with users' understanding and perceptions of AI. Overall, the document highlights both the benefits and the challenges of integrating AI technologies like SAMI into educational environments, and emphasizes the importance of fostering a positive relationship between students and AI to support collaborative learning.

Key Applications

SAMI (Social Agent Mediated Interactions): AI-Based Teammate Recommendation

Context: Higher-education settings in which students work on team-based assignments, such as class projects and final-year projects. The system aims to improve collaboration by matching students based on their self-introductions and personality assessments.

Implementation: Studied through a Wizard-of-Oz approach in which human researchers manually generated the "AI" inferences from students' self-introductions and personality assessments. SAMI uses these inputs to recommend potential teammates, facilitating better teamwork.
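To make the recommendation step concrete, the following is a minimal sketch of how a SAMI-style matcher could score candidate teammates by keyword overlap between self-introductions. The tokenizer, the Jaccard scoring, and all names and data are illustrative assumptions; the source does not describe the system at this level of detail.

```python
# Hypothetical sketch of SAMI-style teammate matching (not the authors'
# implementation): score pairs of students by keyword overlap between
# their self-introductions and recommend the highest-scoring pairs.
from itertools import combinations

def extract_interests(self_intro: str) -> set[str]:
    """Naively tokenize a self-introduction into lowercase keywords."""
    stopwords = {"i", "a", "the", "and", "am", "my", "in", "to", "of", "on"}
    words = {w.strip(".,!?") for w in self_intro.lower().split()}
    return words - stopwords

def match_score(intro_a: str, intro_b: str) -> float:
    """Jaccard similarity between two students' extracted interest sets."""
    a, b = extract_interests(intro_a), extract_interests(intro_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def recommend_teammates(intros: dict[str, str], top_k: int = 3):
    """Score every pair of students and return the top_k matches."""
    scored = [(match_score(intros[x], intros[y]), x, y)
              for x, y in combinations(intros, 2)]
    return sorted(scored, reverse=True)[:top_k]

# Toy self-introductions for demonstration only.
students = {
    "Ada":   "I love hiking and machine learning projects.",
    "Grace": "My interests are machine learning and board games.",
    "Alan":  "I enjoy painting and hiking on weekends.",
}
for score, a, b in recommend_teammates(students):
    print(f"{a} <-> {b}: {score:.2f}")
```

A production matcher would presumably use richer signals (personality scores, schedules, topic models) than raw keyword overlap, but the pairwise scoring and ranking structure would be similar.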

Outcomes: Facilitates teamwork by matching students on personality and interests; enhances collaboration; provides insights into how AI literacy moderates trust; and identifies three user reactions to AI misrepresentations: over-trusting, rationalizing, and forgiving.
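A moderation effect of this kind is typically tested with an interaction term in a regression. The sketch below is illustrative only: the variable names, simulated data, effect directions, and model are assumptions for demonstration, not the paper's actual analysis.

```python
# Hypothetical moderation test: does AI literacy change how much an
# encountered misrepresentation affects trust? (Simulated toy data.)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
ai_literacy = rng.uniform(1, 5, n)        # self-reported AI literacy score
misrep = rng.integers(0, 2, n)            # encountered a misrepresentation?
# Assumed data-generating process: misrepresentation lowers trust,
# with the drop attenuated at higher literacy (direction is arbitrary).
trust = 4 - 1.0 * misrep + 0.3 * misrep * ai_literacy + rng.normal(0, 0.5, n)

df = pd.DataFrame({"trust": trust, "misrep": misrep,
                   "ai_literacy": ai_literacy})
# The misrep:ai_literacy coefficient is the moderation effect.
model = smf.ols("trust ~ misrep * ai_literacy", data=df).fit()
print(model.summary().tables[1])
```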

Challenges: AI misrepresentations can cause confusion and erode trust, particularly when users do not understand how the AI operates. SAMI's inferences may also be inaccurate, and students may have privacy concerns about its recommendations.

Implementation Barriers

Trust and Perception

Users may continue to over-trust AI systems even after encountering misrepresentations, often because they do not understand how the AI works. Students may also have concerns about the accuracy and reliability of AI recommendations.

Proposed Solutions: Design responsible AI systems that incorporate user understanding and provide explanations tailored to each user's level of AI knowledge. Use surveys to gather feedback on AI performance and improve the algorithms based on that input.

Privacy

Students might be apprehensive about sharing personal information for AI analysis.

Proposed Solutions: Implement strict data privacy measures and clearly communicate how data will be used and protected.

Project Team

Qiaosi Wang

Researcher

Chidimma L. Anyi

Researcher

Vedant Das Swain

Researcher

Ashok K. Goel

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Qiaosi Wang, Chidimma L. Anyi, Vedant Das Swain, Ashok K. Goel

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
