
Assessing the Usability of GutGPT: A Simulation Study of an AI Clinical Decision Support System for Gastrointestinal Bleeding Risk

Project Overview

This project examines the use of generative AI in medical education through GutGPT, a clinical decision support tool built on a large language model and designed to aid assessment of gastrointestinal bleeding risk. It highlights the role of conversational interfaces in AI-driven clinical decision support systems (AI-CDSS), which may improve user engagement and accessibility. The study evaluates clinician trust, usability, and acceptance of GutGPT compared with a conventional interactive dashboard. Preliminary findings show mixed acceptance among clinicians alongside notable gains in mastery of content related to gastrointestinal bleeding. Key challenges persist, including effective human-algorithm interaction and the need for clinicians to understand how the AI system reasons. Overall, the project underscores the potential of generative AI to transform medical training while emphasizing that acceptance and comprehension challenges must be addressed for effective implementation.

Key Applications

GutGPT - AI Clinical Decision Support System for Gastrointestinal Bleeding

Context: Clinical education and decision support for emergency medicine and internal medicine physicians, as well as medical students.

Implementation: GutGPT was integrated into clinical simulation scenarios, allowing users to interact with both GutGPT and an interactive dashboard for decision-making.

Outcomes: Preliminary results showed improved content mastery and perceived ease of use, particularly for GutGPT users, while trust and acceptance levels varied.

Challenges: Clinician trust in AI systems, understanding of AI reasoning, and integration into clinical workflows.
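The pattern described above, in which clinicians interact with both a risk model and a conversational layer, can be sketched in outline: a separate risk model produces the numeric estimate, and the language-model layer is responsible only for explaining it. Everything in this sketch (the function names, the toy score, and its thresholds) is a hypothetical illustration, not the actual GutGPT implementation.

```python
# Hypothetical AI-CDSS sketch: a risk model supplies the number; the
# conversational layer only explains it. The score and thresholds below
# are illustrative placeholders, NOT a validated clinical risk score.

def toy_bleeding_risk(heart_rate: int, systolic_bp: int, hemoglobin: float) -> float:
    """Toy risk estimate in [0, 1] from three vitals (illustrative only)."""
    score = 0.0
    if heart_rate > 100:     # tachycardia (placeholder cutoff)
        score += 0.3
    if systolic_bp < 90:     # hypotension (placeholder cutoff)
        score += 0.4
    if hemoglobin < 10.0:    # anemia (placeholder cutoff)
        score += 0.3
    return score

def build_explanation_prompt(risk: float) -> str:
    """Wrap the risk model's numeric output in a prompt for a conversational layer."""
    return (
        f"A risk model estimates a GI bleeding risk of {risk:.0%}. "
        "Explain this estimate to the clinician and list the contributing factors."
    )

risk = toy_bleeding_risk(heart_rate=110, systolic_bp=85, hemoglobin=9.2)
print(build_explanation_prompt(risk))
```

The design point this separation illustrates, consistent with the transparency concerns noted below, is that the conversational interface never invents the risk figure; it grounds its explanation in the output of a dedicated model, which keeps the numeric reasoning auditable.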

Implementation Barriers

Technical

Limited understanding of human-algorithm interaction, insufficient clinician trust in AI systems, and difficulty capturing meaningful data.

Proposed Solutions: Making the reasoning processes of AI systems more transparent and training clinicians on data interpretation and AI model usage.

Structural

Suboptimal implementation could disrupt clinical workflows and lead to inefficient use of clinician time.

Proposed Solutions: Careful integration of AI tools into existing workflows and conducting pilot studies to identify potential disruptions.

Data-related

Lack of adequate statistical expertise in AI implementation.

Proposed Solutions: Providing training for clinicians on data interpretation and AI model usage.

Project Team

Colleen Chan, Researcher

Kisung You, Researcher

Sunny Chung, Researcher

Mauro Giuffrè, Researcher

Theo Saarinen, Researcher

Niroop Rajashekar, Researcher

Yuan Pu, Researcher

Yeo Eun Shin, Researcher

Loren Laine, Researcher

Ambrose Wong, Researcher

René Kizilcec, Researcher

Jasjeet Sekhon, Researcher

Dennis Shung, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Colleen Chan, Kisung You, Sunny Chung, Mauro Giuffrè, Theo Saarinen, Niroop Rajashekar, Yuan Pu, Yeo Eun Shin, Loren Laine, Ambrose Wong, René Kizilcec, Jasjeet Sekhon, Dennis Shung

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
