Can LLM-Simulated Practice and Feedback Upskill Human Counselors? A Randomized Study with 90+ Novice Counselors

Project Overview

This project explores the application of generative AI in education through the development and assessment of CARE, a large language model (LLM)-based training system for novice counselors. It emphasizes the critical role of simulated practice paired with structured feedback in building essential counseling skills, particularly empathetic listening and client-centered approaches. The findings show that counselors who received AI-generated feedback during their practice sessions improved their skills significantly compared to peers who practiced without such feedback. However, the study also identifies challenges in self-efficacy calibration, with lower-performing participants often overestimating their abilities. This suggests that generative AI can deliver personalized feedback and support, but that learners also need additional strategies to assess their own competencies accurately. Overall, the project illustrates how generative AI can contribute to skill development in educational settings, particularly in fields requiring interpersonal skills.

Key Applications

CARE, an LLM-simulated practice and feedback system

Context: Training novice counselors, particularly clinical students and peer supporters

Implementation: Novice counselors were randomized into two groups: one practicing with AI patients alone and the other receiving AI feedback during practice. The study measured changes in skill use and self-efficacy pre- and post-intervention.
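The pre/post comparison described above can be sketched as a simple difference-of-changes calculation. This is an illustrative outline only, not the study's actual analysis or data: the scores below are made-up placeholder values, and `mean_change` is a hypothetical helper.

```python
from statistics import mean

# Hypothetical (pre, post) skill scores for each randomized arm,
# e.g., rated empathy on some scale. Illustrative values only --
# these are NOT the study's data.
practice_only = [(3.1, 2.8), (2.9, 2.9), (3.4, 3.0)]
practice_feedback = [(3.0, 3.6), (2.7, 3.3), (3.2, 3.9)]

def mean_change(scores):
    """Average post-minus-pre change across participants in one arm."""
    return mean(post - pre for pre, post in scores)

# A positive gap favors the practice-plus-feedback (P+F) arm.
print(f"Practice-only change:     {mean_change(practice_only):+.2f}")
print(f"Practice+feedback change: {mean_change(practice_feedback):+.2f}")
```

A real analysis would also include a significance test and the self-efficacy measures; this sketch shows only the shape of the pre/post contrast between arms.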

Outcomes: Participants who received AI feedback (P+F group) showed significant improvements in core counseling skills like empathy, reflections, and questions. The P+F group adopted a more client-centered approach compared to the practice-only group.

Challenges: The practice-only group did not show improvements in skills and even exhibited decreased empathy. Self-efficacy assessments were poorly calibrated, with many participants overestimating their abilities.

Implementation Barriers

Technical

Challenges in accurately assessing and calibrating self-efficacy in counseling skills among novice counselors.

Proposed Solutions: Integrating objective performance measures and structured self-reflection into the training process.

Implementation

Traditional counseling training methods are resource-intensive, which limits scalability; poorly calibrated self-efficacy among trainees can further hinder implementation.

Proposed Solutions: Using AI systems to simulate patient interactions, making training scalable. Integrating objective performance measures and structured self-reflection can further enhance the training process.
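A practice-with-feedback loop of the kind described above might be structured as follows. This is a minimal sketch, not the CARE system's implementation: `call_llm`, the prompts, and the turn structure are all hypothetical stand-ins (the page lists gpt-4o-mini-2024-07-18 as the underlying model, but no real API call is made here).

```python
# Sketch of a simulated-patient practice loop with structured feedback:
# a simulated patient turn, a trainee reply, then AI feedback on that reply.

def call_llm(system_prompt: str, transcript: list[str]) -> str:
    """Hypothetical stub for a chat-model call; returns a canned response.
    A real system would send system_prompt plus the transcript to an LLM."""
    return "[model output for: " + system_prompt.split(".")[0] + "]"

def patient_turn(transcript: list[str]) -> str:
    # The LLM role-plays the client, conditioned on the conversation so far.
    return call_llm("Role-play a counseling client. Stay in character.", transcript)

def feedback_turn(transcript: list[str]) -> str:
    # The LLM critiques the trainee's last reply on core counseling skills.
    return call_llm(
        "Rate the trainee's last reply for empathy, reflections, and "
        "questions, and suggest one concrete improvement.",
        transcript,
    )

transcript: list[str] = []
transcript.append("Patient: " + patient_turn(transcript))
transcript.append("Trainee: It sounds like this has been weighing on you.")
transcript.append("Feedback: " + feedback_turn(transcript))
for line in transcript:
    print(line)
```

The design point is the third turn: the practice-only condition would omit `feedback_turn`, which is the difference between the study's two arms.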

Project Team

Ryan Louie

Researcher

Ifdita Hasan Orney

Researcher

Juan Pablo Pacheco

Researcher

Raj Sanjay Shah

Researcher

Emma Brunskill

Researcher

Diyi Yang

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Ryan Louie, Ifdita Hasan Orney, Juan Pablo Pacheco, Raj Sanjay Shah, Emma Brunskill, Diyi Yang

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
