Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge
Project Overview
The document examines AI-based decision support (ADS) tools that inform high-stakes human decisions, and the training frontline professionals need to use them effectively. The empirical setting is child maltreatment screening with the Allegheny Family Screening Tool, where AI predictions can mismatch the actual needs of the families being assessed; the findings also carry lessons for other sectors, including education. The concept of 'critical use' is introduced, emphasizing that human decision-makers should be able to situate AI-generated predictions within their own expertise and contextual knowledge. Research findings indicate that training significantly improves users' ability to critically evaluate AI predictions, producing decisions that align more closely with the judgments of experienced professionals. This suggests that, with appropriate training, ADS tools can serve as a valuable resource while keeping decisions anchored in professional judgment, ultimately supporting better outcomes for the people those decisions affect.
Key Applications
AI-assisted decision-making in child maltreatment screening using the Allegheny Family Screening Tool (AFST)
Context: Training sessions with social work graduate students and professionals in child welfare.
Implementation: Participants were engaged in practice activities that simulated real-world decision-making with AI predictions, including randomized controlled experiments with feedback mechanisms.
Outcomes: Participants improved their ability to appropriately disagree with AI predictions, made decisions aligning more closely with those of experienced workers, and became better at anticipating what the AI would predict for a given case.
Challenges: Participants initially relied heavily on AI predictions, and the effectiveness of explicit feedback on learning outcomes was limited compared to implicit feedback from qualitative narratives.
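The study design above pairs random assignment to feedback conditions with an agreement metric against experienced workers. A minimal sketch of that setup is below; the function names, condition labels, and data are hypothetical illustrations, not code from the paper.

```python
import random

def assign_conditions(participants, conditions, seed=0):
    """Randomly assign each participant to one feedback condition
    (a hypothetical sketch of the randomized-experiment setup)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {p: rng.choice(conditions) for p in participants}

def agreement_rate(decisions, reference):
    """Fraction of cases where a participant's screen-in/screen-out
    decision matches a reference (e.g., experienced workers' decisions)."""
    matches = sum(d == r for d, r in zip(decisions, reference))
    return matches / len(decisions)

# Illustrative use: compare a trainee's decisions against both the AI's
# predictions and experienced workers' decisions on the same cases.
assignment = assign_conditions(["p1", "p2", "p3", "p4"],
                               ["explicit_feedback", "implicit_feedback"])
trainee = [1, 0, 1, 1]          # 1 = screen in, 0 = screen out
experienced = [1, 1, 1, 1]
print(agreement_rate(trainee, experienced))  # prints 0.75
```

Measuring agreement against experienced workers rather than against the AI is what lets the analysis detect appropriate disagreement with the model.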
Implementation Barriers
Training Gap
Frontline professionals are often introduced to AI tools without adequate training, leading to potential misuse and over-reliance on AI predictions.
Proposed Solutions: Implement structured training programs that emphasize critical use and situating AI predictions in the context of human knowledge.
Target-Construct Mismatch
AI models often predict proxies that do not perfectly align with the actual decision-making goals of human users.
Proposed Solutions: Train decision-makers to recognize the limitations of AI predictions and enhance their ability to make decisions based on complementary information.
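Target-construct mismatch can be made concrete with a toy example: the model is trained on an observable proxy label, while decision-makers care about an underlying construct the data never records directly. The cases and labels below are invented for illustration only.

```python
# Hypothetical cases: "proxy" is the observable label the AI was trained on
# (e.g., a recorded follow-up event), while "construct" is the underlying
# judgment the decision-maker actually needs to make. Neither column comes
# from real data; both are illustrative.
cases = [
    {"proxy": 1, "construct": 1},  # proxy and construct agree
    {"proxy": 1, "construct": 0},  # proxy fires, but the construct is absent
    {"proxy": 0, "construct": 1},  # construct present, but the proxy misses it
    {"proxy": 0, "construct": 0},
]

def mismatch_rate(cases):
    """Fraction of cases where the proxy label differs from the construct."""
    return sum(c["proxy"] != c["construct"] for c in cases) / len(cases)

print(mismatch_rate(cases))  # prints 0.5
```

A model can be perfectly accurate on the proxy and still be wrong on half of these cases with respect to the construct, which is why training decision-makers to recognize this gap matters.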
Information Asymmetry
Human decision-makers often have access to qualitative data that AI models do not, leading to challenges in decision-making.
Proposed Solutions: Encourage the integration of qualitative case narratives in training to support decision-making.
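One way to picture information asymmetry is a decision rule in which qualitative narrative information the model never saw can route a case away from the raw AI score. This is a minimal, hypothetical sketch; the function, flag name, and threshold are assumptions, not a described mechanism from the paper.

```python
def critical_use_decision(ai_score, narrative_flags, threshold=0.5):
    """Hypothetical decision rule: defer to human judgment when the
    qualitative case narrative contains information that contradicts
    the AI score (information the model did not have access to)."""
    if "context_contradicts_score" in narrative_flags:
        # Qualitative evidence the model never saw; escalate for review
        # rather than acting on the score alone.
        return "human_review"
    return "screen_in" if ai_score >= threshold else "screen_out"

print(critical_use_decision(0.8, []))                            # prints screen_in
print(critical_use_decision(0.8, ["context_contradicts_score"]))  # prints human_review
```

The point of the sketch is that the narrative does not adjust the score itself; it changes how much weight the score is given, which mirrors the training goal of situating predictions within human knowledge.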
Project Team
Anna Kawakami, Researcher
Luke Guerdan, Researcher
Yanghuidi Cheng, Researcher
Matthew Lee, Researcher
Scott Carter, Researcher
Nikos Arechiga, Researcher
Kate Glazko, Researcher
Haiyi Zhu, Researcher
Kenneth Holstein, Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Matthew Lee, Scott Carter, Nikos Arechiga, Kate Glazko, Haiyi Zhu, Kenneth Holstein
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI