
LLM Assistance for Pediatric Depression

Project Overview

This document explores the use of Large Language Models (LLMs) in education, particularly for enhancing mental health screening among pediatric patients through the analysis of electronic health records (EHRs). It addresses the limitations of traditional depression screening for young people and introduces a zero-shot LLM approach for extracting depressive symptoms from free-text clinical notes. The study shows that models such as FLAN-T5 can identify depressive symptoms with high precision, supporting more consistent and efficient mental health assessments. While acknowledging limitations, the findings position LLMs as supportive tools in mental health diagnostics whose integration could help educational institutions address students' mental health needs, underscoring the broader potential of generative AI to provide timely, accurate insight into student welfare.

Key Applications

LLM Assistance for Pediatric Depression

Context: Pediatric primary care settings focusing on young individuals aged 6-24 years experiencing depressive symptoms.

Implementation: LLMs were applied to analyze free text from electronic health records (EHRs) to extract symptoms related to depression, building on traditional screening tools like PHQ-9.
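The zero-shot setup described above can be sketched as one yes/no prompt per PHQ-9 symptom domain, so that each extraction decision stays auditable. This is a minimal illustration only: the prompt wording, the abbreviated symptom list, and the suggested FLAN-T5 wiring are assumptions, not the authors' exact setup.

```python
# Sketch of zero-shot, per-symptom prompting over a clinical note.
# Symptom phrasings follow PHQ-9 domains (abbreviated here); the prompt
# template and model wiring are illustrative assumptions.

PHQ9_SYMPTOMS = [
    "little interest or pleasure in doing things",
    "feeling down, depressed, or hopeless",
    "trouble sleeping or sleeping too much",
    "feeling tired or having little energy",
    "poor appetite or overeating",
]

def build_prompt(note: str, symptom: str) -> str:
    """One yes/no question per symptom keeps each decision auditable."""
    return (
        "Clinical note:\n"
        f"{note}\n\n"
        f"Question: Does the note mention {symptom}? Answer yes or no."
    )

def parse_answer(generated: str) -> bool:
    """Map free-text model output to a binary symptom flag."""
    return generated.strip().lower().startswith("yes")

def screen_note(note: str, generate) -> dict:
    """Run every PHQ-9 prompt through a text-generation callable.

    `generate` would wrap a seq2seq model such as FLAN-T5, e.g. via
    transformers' pipeline("text2text-generation", model="google/flan-t5-large");
    here it is left as a plain callable so the sketch stays self-contained.
    """
    return {s: parse_answer(generate(build_prompt(note, s))) for s in PHQ9_SYMPTOMS}
```

Because the model answers one narrow question at a time, no annotated pediatric training data is required; the screening decision itself remains with the clinician reviewing the per-symptom flags.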

Outcomes: The study found that LLMs, particularly FLAN-T5, achieved high precision (up to 80%) in identifying depressive symptoms, improving the reliability of mental health screening.

Challenges: Challenges included the complexity of clinical notes, the need for human oversight, and the risk of overgeneralization in symptom classification.

Implementation Barriers

Technical / Ethical Barrier

Complexity of clinical notes, potential misinterpretation of PHQ-9 scores leading to diagnostic errors, and concerns about hallucinated content and biased outputs from LLMs affecting reliability.

Proposed Solutions: Employing LLMs for evidence extraction rather than direct classification, which keeps a human in the diagnostic loop, increases interpretability, and supports clinical decision-making.
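One way to realize evidence extraction with a guard against hallucinated content is to ask the model to quote supporting text and then verify that the quote actually occurs in the source note before surfacing it to a clinician. The following is a hedged sketch of that idea; the prompt wording and helper names are illustrative assumptions, not the paper's implementation.

```python
from typing import Optional

# Sketch of evidence extraction with a hallucination check: instead of
# asking the model for a diagnosis label, ask it to quote supporting text,
# then verify the quote verbatim against the note. Prompt wording and
# helper names are illustrative assumptions.

def evidence_prompt(note: str, symptom: str) -> str:
    return (
        f"Clinical note:\n{note}\n\n"
        f"Quote the exact sentence, if any, that mentions {symptom}. "
        "If there is none, answer: NONE."
    )

def verify_evidence(note: str, quoted: str) -> Optional[str]:
    """Accept a quote only if it appears verbatim in the source note,
    so hallucinated 'evidence' is discarded rather than shown to a clinician."""
    quoted = quoted.strip().strip('"')
    if quoted == "" or quoted.upper() == "NONE":
        return None
    return quoted if quoted in note else None
```

Only verified quotes reach the reviewing clinician, so the model's role is limited to pointing at text that a human can check, rather than issuing a classification.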

Data Barrier

Scarcity of annotated datasets for training models and the variability in symptom expression among pediatric patients.

Proposed Solutions: Using zero-shot approaches with LLMs to leverage unannotated data for symptom extraction.

Project Team

Mariia Ignashina

Researcher

Paulina Bondaronek

Researcher

Dan Santel

Researcher

John Pestian

Researcher

Julia Ive

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Mariia Ignashina, Paulina Bondaronek, Dan Santel, John Pestian, Julia Ive

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
