
The Role of AI in Peer Support for Young People: A Study of Preferences for Human- and AI-Generated Responses

Project Overview

The document explores the integration of generative AI in education, emphasizing its role in enhancing peer support and mental health interventions for young people. A study examined preferences for AI-generated versus human responses to help-seeking messages on sensitive issues, finding that while students favor AI responses for less sensitive topics, they prefer human interaction for serious matters such as suicidal thoughts. This underscores AI's potential to facilitate online peer support, enabling greater self-expression and emotional assistance for students facing challenges such as anxiety, stress, and mental health crises. The document also addresses the limitations and ethical considerations of AI use in these contexts, highlighting the need to balance AI's benefits with human empathy and understanding. Overall, it illustrates how generative AI can be a valuable tool in educational settings, particularly for providing guidance and encouragement to students, while recognizing the critical role of human support in sensitive situations.

Key Applications

AI as a peer support tool for mental health and self-expression

Context: Online peer support for young people aged 18-24 in educational settings, particularly for those facing mental health challenges and struggling with self-expression. This includes interactions through conversational prompts to offer support and strategies for emotional well-being.

Implementation: AI interacts with students and young people through conversational prompts, generating responses to help-seeking messages and providing strategies for self-expression. These interactions are evaluated through surveys and direct feedback to assess the effectiveness of AI-generated versus human responses.

Outcomes: Improved self-awareness and emotional support for students, fostering a sense of community and understanding. Young people preferred AI responses for less sensitive topics while favoring human responses for more sensitive discussions.

Challenges: AI responses may lack the empathy and understanding required for sensitive topics, drawing criticism from users. AI is also limited in its ability to interpret complex human emotions, and there is a risk of users becoming dependent on AI for emotional support.

Implementation Barriers

Ethical Challenge

Concerns about AI's inability to understand human nuances, particularly in sensitive contexts like mental health and suicide.

Proposed Solutions: AI should be employed as a supplementary tool rather than a replacement for human interaction, ensuring human oversight in sensitive situations. Additionally, AI systems should be designed with strict ethical guidelines and involve mental health professionals in their development.

Trust Issues

Participants may distrust AI-generated responses if they know the responses were generated by an AI.

Proposed Solutions: Enhancing AI literacy among users to address over- or under-trust in AI systems.

Technical

AI's limitations in accurately interpreting emotional nuances and providing appropriate responses.

Proposed Solutions: Continuous training of AI models with diverse datasets and regular updates to improve understanding of emotional contexts.

Project Team

Jordyn Young, Researcher

Laala M Jawara, Researcher

Diep N Nguyen, Researcher

Brian Daly, Researcher

Jina Huh-Yoo, Researcher

Afsaneh Razi, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Jordyn Young, Laala M Jawara, Diep N Nguyen, Brian Daly, Jina Huh-Yoo, Afsaneh Razi

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
