
Dr. GPT in Campus Counseling: Understanding Higher Education Students' Opinions on LLM-assisted Mental Health Services

Project Overview

This document summarizes a study of generative AI, specifically Large Language Models (LLMs), in higher education, focusing on college students' perspectives on using LLMs for mental health support. It addresses the rising mental health challenges students face and identifies potential advantages of LLM-assisted services, such as initial screenings and follow-up care. Key findings show that students see value in LLMs for these tasks but raise concerns about the adequacy of emotional support and the reliability of the information provided. The analysis underscores that LLMs should complement, not replace, traditional mental health services, and that empathy and personalization are critical for effective support. Overall, LLMs show promise for mental health applications in educational settings, but students' concerns must be addressed to maximize their effectiveness and acceptance.

Key Applications

LLM-assisted Mental Health Services

Context: Higher education students seeking mental health support

Implementation: Pilot interviews exploring students' opinions on LLM use across five mental health service scenarios

Outcomes: Students appreciated personalized interactions and efficient initial screenings, enhancing accessibility to mental health services.

Challenges: Concerns about the depth of emotional support, reliability of data, and potential for misinformation.

Implementation Barriers

Service Availability

Shortage of mental health professionals on campuses leads to long waiting times for appointments.

Proposed Solutions: Integrating LLMs for initial screenings and follow-up care to enhance accessibility.
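As a concrete illustration of an initial-screening step, the sketch below scores a standard intake questionnaire (the PHQ-9) and maps the total to a severity band. The severity cutoffs follow the published PHQ-9 scoring convention; the function name and the idea of embedding this in an LLM-assisted intake flow are illustrative assumptions, not part of the study.

```python
# Hypothetical sketch: an LLM-assisted intake flow could score a standard
# questionnaire such as the PHQ-9 before routing students to services.
# Severity bands follow the published PHQ-9 cutoffs (5/10/15/20); the
# surrounding workflow is an illustrative assumption.

def phq9_severity(item_scores: list[int]) -> str:
    """Map nine PHQ-9 item scores (each 0-3) to a severity band."""
    if len(item_scores) != 9 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores in the range 0-3")
    total = sum(item_scores)
    if total >= 20:
        return "severe"
    if total >= 15:
        return "moderately severe"
    if total >= 10:
        return "moderate"
    if total >= 5:
        return "mild"
    return "minimal"
```

A screening layer like this could let an LLM collect responses conversationally while the scoring itself stays deterministic and auditable.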

Emotional Limitations

LLMs may struggle to provide the same level of emotional support as human interactions.

Proposed Solutions: Design LLMs to complement human care, ensuring they can escalate cases to human professionals as needed.
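The escalation idea above can be sketched minimally: a triage step that routes a message to a human counselor when risk signals appear. Here simple keyword matching stands in for a real risk classifier, and the phrase list, function names, and routing labels are all illustrative assumptions.

```python
# Hypothetical sketch: an LLM-based triage layer that escalates to a
# human counselor when risk signals appear. Keyword matching is a
# stand-in for a real risk classifier; phrases and labels are
# illustrative, not from the study.

RISK_PHRASES = {"hurt myself", "end my life", "no reason to live", "suicide"}

def needs_escalation(message: str) -> bool:
    """Return True when the message should be routed to a human professional."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def triage(message: str) -> str:
    if needs_escalation(message):
        return "escalate_to_counselor"
    return "continue_llm_session"
```

The key design point is that the LLM handles routine support while a conservative, transparent rule decides when a human must take over.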

Data Reliability

LLMs may propagate misinformation due to reliance on online data.

Proposed Solutions: Incorporate mechanisms for fact-checking and ensure high standards of information accuracy.
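One simple form such a mechanism could take is a post-generation check that flags responses citing sources outside a vetted allowlist. The allowlist entries, the bare-domain citation format, and the review flag below are all illustrative assumptions rather than the study's proposal.

```python
# Hypothetical sketch: flag LLM responses that cite sources outside a
# vetted allowlist, so they can be held for human review. Domains and
# the citation format are illustrative assumptions.

VETTED_DOMAINS = {"nimh.nih.gov", "who.int", "samhsa.gov"}

def unvetted_citations(cited_domains: list[str]) -> list[str]:
    """Return cited domains that are not on the vetted allowlist."""
    return [d for d in cited_domains if d not in VETTED_DOMAINS]

# A response citing an unvetted source would be routed to human review.
flagged = unvetted_citations(["who.int", "random-wellness-blog.example"])
needs_review = bool(flagged)
```

This does not verify factual claims directly, but it bounds the information sources an LLM-assisted service can present as authoritative.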

Project Team

Owen Xingjian Zhang

Researcher

Shuyao Zhou

Researcher

Jiayi Geng

Researcher

Yuhan Liu

Researcher

Sunny Xun Liu

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Owen Xingjian Zhang, Shuyao Zhou, Jiayi Geng, Yuhan Liu, Sunny Xun Liu

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
