
Modeling Challenging Patient Interactions: LLMs for Medical Communication Training

Project Overview

This project examines the use of Large Language Models (LLMs) in education, focusing on medical communication training with virtual patients (VPs). LLMs can emulate a variety of patient communication styles, notably through 'accuser' and 'rationalizer' personas, which make the learning experience more immersive. The findings suggest that these models enhance medical training by offering realistic, adaptable patient interactions that prepare trainees for the complexities of clinical practice, and that the nuanced emotional and conversational traits LLMs can replicate help improve empathy and diagnostic skills among medical professionals. At the same time, ensuring the authenticity and accuracy of the simulated interactions remains a challenge, pointing to a need for ongoing refinement of LLM use in educational settings. Overall, the integration of generative AI into medical training shows promising potential for improving learning outcomes while presenting challenges that require careful consideration.

Key Applications

Large Language Models (LLMs) for Medical Communication Training

Context: Medical education for healthcare professionals, particularly in communication training.

Implementation: LLMs were used to create virtual patients that simulate distinct communication styles based on the Satir model, employing advanced prompt engineering techniques.

Outcomes: Participants rated the authenticity of the VPs highly (accuser: 3.8±1.0; rationalizer: 3.7±0.8), indicating effective simulation of the target communication styles and improved training for handling complex patient interactions.

Challenges: Challenges included ensuring realistic emotional responses and maintaining the VPs' distinct personalities throughout interactions.
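The persona-based setup described above can be sketched as a prompt-assembly step: a system message encoding the Satir communication style is prepended to the running dialogue before each call to the chat model. This is a minimal illustration, not the study's actual prompt; the persona wording and the `build_messages` helper are assumptions, while the model version (gpt-4o-mini-2024-07-18) is the one named in this document.

```python
# Minimal sketch of configuring an LLM-backed virtual patient with a
# Satir-style persona. The persona text below is illustrative only.

MODEL = "gpt-4o-mini-2024-07-18"  # model version named in this project

ACCUSER_PERSONA = (
    "You are a virtual patient in a medical communication training session. "
    "Adopt the 'accuser' style from the Satir model: blame the clinician, "
    "interrupt, and voice frustration, while still conveying your symptoms."
)

def build_messages(persona: str, history: list[dict]) -> list[dict]:
    """Prepend the persona instruction to the dialogue history so the
    model stays in character for the next turn."""
    return [{"role": "system", "content": persona}] + history

# Example dialogue so far (trainee speaks as "user", VP as "assistant"):
history = [
    {"role": "user", "content": "Good morning, how are you feeling today?"},
    {"role": "assistant", "content": "How do you think? Nobody here listens!"},
]
messages = build_messages(ACCUSER_PERSONA, history)
```

In a live system, `messages` would be sent to a chat-completion endpoint with the model above; only the prompt-construction logic is shown here so the sketch stays self-contained.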

Implementation Barriers

Technical

Maintaining the realism of virtual patient interactions over extended conversations was challenging, as VPs tended to revert to neutral tones after several prompts.

Proposed Solutions: Implementing detailed behavioral instructions and prompt engineering techniques to reinforce the VPs' communication styles.
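One concrete way to apply such reinforcement, sketched below under assumptions of my own (the reminder wording, the `every_n_turns` interval, and the `with_reinforcement` helper are all illustrative, not the study's method): re-inject the behavioral instruction as a system message at regular intervals so the VP does not drift back to a neutral tone over long conversations.

```python
# Illustrative persona-drift mitigation: periodically re-insert the
# behavioral instruction into the message history. Interval and wording
# are assumptions for the sake of the example.

REINFORCEMENT = (
    "Reminder: stay fully in character as the 'accuser' patient. "
    "Do not soften your tone or become neutral."
)

def with_reinforcement(messages: list[dict], every_n_turns: int = 4) -> list[dict]:
    """Insert the reinforcement instruction after every N user turns."""
    out: list[dict] = []
    user_turns = 0
    for msg in messages:
        out.append(msg)
        if msg["role"] == "user":
            user_turns += 1
            if user_turns % every_n_turns == 0:
                out.append({"role": "system", "content": REINFORCEMENT})
    return out
```

The augmented list would then replace the plain history on each model call, so the persona instruction is always recent in the context window rather than only at the start.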

Participant Diversity

The participant pool was primarily composed of psychotherapy professionals, which may limit the generalizability of findings to other user groups.

Proposed Solutions: Future research could involve a more diverse participant pool to capture a broader range of perspectives.

Project Team

Anna Bodonhelyi

Researcher

Christian Stegemann-Philipps

Researcher

Alessandra Sonanini

Researcher

Lea Herschbach

Researcher

Marton Szep

Researcher

Anne Herrmann-Werner

Researcher

Teresa Festl-Wietek

Researcher

Enkelejda Kasneci

Researcher

Friederike Holderried

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Anna Bodonhelyi, Christian Stegemann-Philipps, Alessandra Sonanini, Lea Herschbach, Marton Szep, Anne Herrmann-Werner, Teresa Festl-Wietek, Enkelejda Kasneci, Friederike Holderried

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
