
EHR Interaction Between Patients and AI: NoteAid EHR Interaction

Project Overview

This project explores the application of generative AI, particularly large language models (LLMs), to patient education about Electronic Health Records (EHRs) through the NoteAid EHR Interaction Pipeline. The pipeline serves two primary functions: explaining EHR content in plain language and answering patient questions about their records. The findings indicate that LLMs can improve patients' comprehension and engagement by simplifying complex medical information and making it more accessible, empowering patients to take a more active role in their healthcare. More broadly, the results suggest that integrating such AI technologies into educational settings can foster informed decision-making and improve the overall quality of care for both patients and healthcare providers.

Key Applications

NoteAid EHR Interaction Pipeline

Context: Patient education in understanding Electronic Health Records (EHRs), targeting patients who have difficulty comprehending medical jargon.

Implementation: The pipeline uses LLMs to simulate a conversation between a mock patient agent and an assistant agent, facilitating Q&A and text explanation tasks based on EHR notes.
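The two-agent loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the agent prompts and the `call_llm` helper are hypothetical, with `call_llm` acting as a stub for a real chat-completion API call (e.g. to gpt-4o-mini).

```python
# Hypothetical sketch of a two-agent EHR interaction loop: a mock patient
# agent asks questions about an EHR note, and an assistant agent answers.
# `call_llm` is a stub standing in for a real chat-completion API call.

PATIENT_SYSTEM = (
    "You are a mock patient reading your own EHR note. "
    "Ask one short question about anything you do not understand."
)
ASSISTANT_SYSTEM = (
    "You are a patient-education assistant. Explain the EHR note "
    "in plain language and answer the patient's question simply."
)

def call_llm(system_prompt, messages):
    """Stub for an LLM call; returns canned replies for illustration."""
    if "mock patient" in system_prompt:
        return "What does 'hypertension' mean in my note?"
    return "Hypertension means high blood pressure."

def run_interaction(ehr_note, turns=1):
    """Alternate mock-patient questions and assistant explanations."""
    transcript = []
    history = [{"role": "user", "content": f"EHR note:\n{ehr_note}"}]
    for _ in range(turns):
        question = call_llm(PATIENT_SYSTEM, history)
        history.append({"role": "user", "content": question})
        answer = call_llm(ASSISTANT_SYSTEM, history)
        history.append({"role": "assistant", "content": answer})
        transcript.append((question, answer))
    return transcript

transcript = run_interaction("Dx: hypertension. Plan: lisinopril 10 mg daily.")
for q, a in transcript:
    print("Patient:", q)
    print("Assistant:", a)
```

In a real deployment each `call_llm` would be a model API call with the agent's system prompt plus the accumulated conversation history, so the assistant's explanations stay grounded in the original EHR note.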

Outcomes: Improved patient understanding and engagement with EHR content, leading to better adherence to medical recommendations and enhanced patient empowerment.

Challenges: Limitations include the potential for LLMs to generate inaccurate information (hallucination), the need for careful deployment to ensure patients correctly understand the generated explanations, and a relatively small sample size for human evaluation.

Implementation Barriers

Technical

The LLMs may produce hallucinations or incorrect information, which can mislead users. In addition, the limited scale of human evaluation may not accurately reflect how effective the AI-generated responses actually are.

Proposed Solutions: Implementing rigorous evaluation methods and human oversight in the deployment of the AI system, along with conducting more extensive human evaluations to assess the quality and effectiveness of the AI-generated explanations.

User Interaction

Patients may not engage effectively with AI-generated explanations, limiting the educational impact.

Project Team

Xiaocheng Zhang

Researcher

Zonghai Yao

Researcher

Hong Yu

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Xiaocheng Zhang, Zonghai Yao, Hong Yu

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
