
Modeling Human-AI Team Decision Making

Project Overview

This project explores the integration of generative AI in education, focusing on how AI can enhance decision making by educators and students. It emphasizes the collaborative potential of human-AI teams, using models derived from established decision-making theories, notably Prospect Theory, to understand the dynamics of this interaction. Key applications include personalized learning, where AI tailors educational content to individual student needs, and data-driven insights that inform teaching strategies. The findings indicate that educational teams working with AI assistance often achieve better outcomes than those relying on human judgment alone, particularly when members trust the AI systems. That trust is crucial: it shapes how effectively AI can help teams navigate the complexity and uncertainty inherent in educational decision making, and it underpins the transformative potential of generative AI for improving collaboration and informed decision making in educational practice.
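To make the Prospect Theory framing concrete, the sketch below shows the standard value function from that theory, in which outcomes are judged relative to a reference point and losses weigh more heavily than equivalent gains. This is illustrative only: the parameter values are the commonly cited Tversky-Kahneman estimates, not values taken from this project, and the project's own model may differ.

```python
def prospect_value(outcome, reference=0.0, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Standard Prospect Theory value function: outcomes are evaluated
    relative to a reference point, and losses loom larger than
    equivalent gains (loss aversion). Parameter values are the usual
    textbook estimates, assumed here for illustration."""
    x = outcome - reference
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** beta)

# A loss of 10 is felt roughly twice as strongly as a gain of 10.
print(prospect_value(10))   # ~7.6
print(prospect_value(-10))  # ~-17.1
```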

Key Applications

Human-AI team decision-making models

Context: An educational setting in which teams answer intellective questions across various categories with the assistance of AI agents.

Implementation: Conducted experiments with teams of four humans and four AI agents, using a sequence of intellective questions and a structured decision-making process (a schematic sketch of one such decision round appears after these items).

Outcomes: Improved decision accuracy through collaborative human-AI interaction, with Prospect Theory-based models showing superior predictive accuracy.

Challenges: Ensuring trust in AI agents and managing variability in their performance.
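As a rough illustration of how a single round with a four-human, four-AI team might be modeled, the sketch below combines human answers and AI suggestions by weighted plurality vote. The aggregation rule and the trust weights are assumptions made for illustration; they are not the decision procedure used in the paper.

```python
from collections import Counter

def team_decision(human_answers, ai_answers, ai_weights):
    """Combine human answers and AI suggestions into a single team
    answer by weighted plurality vote. Humans carry unit weight; each
    AI agent's weight reflects how much the team trusts it. This rule
    is illustrative, not the paper's model."""
    votes = Counter()
    for answer in human_answers:
        votes[answer] += 1.0
    for answer, weight in zip(ai_answers, ai_weights):
        votes[answer] += weight
    return votes.most_common(1)[0][0]

# Example round with a four-human, four-AI team (hypothetical data).
humans = ["B", "B", "C", "A"]
agents = ["C", "C", "C", "B"]
trust = [0.8, 0.8, 0.6, 0.4]
print(team_decision(humans, agents, trust))  # "B" gets 2.4, "C" gets 3.2 -> "C"
```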

Implementation Barriers

Trust Barrier

Teams may struggle to trust AI agents, particularly after an agent provides incorrect answers, which breeds skepticism about future AI input. One mitigation is to give teams mechanisms for evaluating and rating AI agent performance so that trust can be built over time.
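One simple way such a rating mechanism could work is to track a trust score per AI agent and nudge it up or down after each answer. The exponential-smoothing rule below is a minimal sketch of this idea; the rule and its learning rate are assumptions, not a mechanism described in the paper.

```python
def update_trust(trust, ai_was_correct, learning_rate=0.2):
    """Exponential-smoothing trust update: trust moves toward 1 after a
    correct AI answer and toward 0 after an incorrect one. The rule and
    learning rate are illustrative assumptions."""
    target = 1.0 if ai_was_correct else 0.0
    return (1 - learning_rate) * trust + learning_rate * target

trust = 0.5
for correct in [True, True, False, True]:
    trust = update_trust(trust, correct)
print(round(trust, 3))  # trust ends above 0.5 after mostly correct answers
```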

Complexity Barrier

Integrating AI into decision-making processes adds complexity that can overwhelm team members. Simplifying the decision-making process and providing clear frameworks for how AI input should be utilized can help address this issue.

Project Team

Wei Ye

Researcher

Francesco Bullo

Researcher

Noah Friedkin

Researcher

Ambuj K Singh

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Wei Ye, Francesco Bullo, Noah Friedkin, Ambuj K Singh

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
