
Understanding Human-AI Trust in Education

Project Overview

This project explores the integration of generative AI, particularly AI chatbots, in education and examines how students develop trust in these systems. It distinguishes human-like trust, rooted in interpersonal relationships, from system-like trust, which rests on the perceived reliability of the technology. The research shows that both types of trust shape student perceptions, influencing engagement, enjoyment, and the perceived usefulness of AI chatbots. Human-like trust plays the more critical role in fostering trusting intentions, while system-like trust matters more for behavioral intentions and perceived usefulness. These findings underscore the need for new frameworks to understand the dynamics of human-AI trust in educational settings and highlight the potential for generative AI to enhance learning when it is effectively integrated and trusted by students.

Key Applications

AI chatbots as virtual teaching assistants for coding education

Context: Used in educational settings to provide personalized tutoring, feedback, and code generation. These chatbots explain programming concepts, suggest improvements to code quality, and act as virtual pair programmers.

Implementation: Implemented through large language models that engage in human-like conversations, providing real-time feedback and assistance during coding tasks.

Outcomes: Increased student engagement, improved coding skills, and enhanced understanding of programming concepts.

Challenges: Concerns regarding students' trust calibration, which can lead to over-reliance on AI-generated content or distrust of its accuracy.

Implementation Barriers

Trust calibration

Students may over-trust or under-trust AI systems based on the systems' anthropomorphic characteristics.

Proposed Solutions: Develop educational resources about AI limitations and promote critical evaluation of AI outputs.

Technology reliability

Concerns about the reliability and integrity of AI systems can hinder student engagement.

Proposed Solutions: Improve system consistency and provide clearer information about AI capabilities and limitations.

Project Team

Griffin Pitts

Researcher

Sanaz Motamedi

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Griffin Pitts, Sanaz Motamedi

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
