AI-Enhanced Cognitive Behavioral Therapy: Deep Learning and Large Language Models for Extracting Cognitive Pathways from Social Media Texts
Project Overview
The document explores the role of generative AI, particularly deep learning and large language models (LLMs), in enhancing educational and therapeutic practice, here through an application grounded in Cognitive Behavioral Therapy (CBT). It describes how AI technologies can analyze social media texts to extract the cognitive pathways that help psychotherapists identify cognitive distortions in patients discussing mental health issues. The study assesses several AI models on two tasks, hierarchical text classification and text summarization, and finds that deep learning models perform well at extracting cognitive pathways, while LLMs show superior summarization ability. The use of LLMs, however, carries the risk of hallucination, in which a model produces fluent but inaccurate content. Overall, the findings underscore the potential of generative AI to support educational and therapeutic contexts by providing insight into mental health discussions and improving the efficiency of data analysis, although its limitations must be addressed with caution.
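As an illustration of the summarization task mentioned above, the following is a minimal sketch of asking an LLM to condense a social-media post into a short cognitive-pathway description (triggering event, belief, emotional or behavioural consequence). It assumes the OpenAI Python client and the model version listed in the project metadata below; the prompt wording, function name, and example post are illustrative assumptions, not the authors' actual setup.

# Minimal sketch: LLM-based summarization of a social-media post into a
# cognitive-pathway description. Prompt wording and example text are
# illustrative; this is not the prompt used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_cognitive_pathway(post_text: str) -> str:
    """Return a short summary of the triggering event, the writer's belief,
    and the emotional or behavioural consequence expressed in the post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini-2024-07-18",
        temperature=0,  # keep outputs stable for downstream review
        messages=[
            {"role": "system",
             "content": ("You summarise social-media posts for psychotherapists. "
                         "Describe the triggering event, the writer's belief about it, "
                         "and the emotional or behavioural consequence.")},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_cognitive_pathway(
    "I missed one deadline at work, so I'm obviously useless and "
    "everyone must think I should be fired."))

Because the model can hallucinate, such summaries would still need human checking before clinical use, as discussed under Implementation Barriers below.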
Key Applications
Extracting cognitive pathways from social media texts using deep learning and LLMs.
Context: Mental health care, specifically for psychotherapists and counselors working with patients expressing negative emotions on social media.
Implementation: Models were trained on annotated social media data to categorize cognitive distortions according to a cognitive theoretical framework (a minimal training sketch follows this list).
Outcomes: Improved identification of cognitive pathways, aiding psychotherapists in conducting effective interventions online.
Challenges: Deep learning models performed better on the classification task, while LLMs were prone to hallucination and could introduce inaccuracies into their summaries.
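The classification step in the Implementation item above can be pictured with the following sketch, which fine-tunes a pre-trained transformer to assign posts to cognitive-distortion categories. The Hugging Face transformers library, the bert-base-uncased checkpoint, the toy label set, and the tiny inline training examples are all assumptions for illustration; the paper's actual labels, data, and hierarchical architecture are not reproduced here.

# Minimal sketch: fine-tuning a pre-trained transformer for cognitive-
# distortion classification. Checkpoint, labels, and data are illustrative.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["no_distortion", "overgeneralization", "catastrophizing"]  # toy label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Tiny illustrative training set of (post text, label index) pairs.
train_pairs = [
    ("I failed once, so I will always fail.", 1),
    ("One bad review means my whole career is over.", 2),
    ("Today was hard, but tomorrow may be better.", 0),
]

def collate(batch):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(train_pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # cross-entropy over the label set
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()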
Implementation Barriers
Technical
Obtaining annotated data of sufficient quality and quantity to train models effectively.
Proposed Solutions: Utilizing pre-trained models and transfer learning to alleviate data scarcity issues.
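One way to picture the transfer-learning idea in the proposed solution above: keep the pre-trained encoder frozen and train only the small classification head, so a limited amount of annotated data is not spread across millions of parameters. The checkpoint name and label count are illustrative assumptions, not details from the paper.

# Minimal sketch: freezing the pre-trained encoder so only the classification
# head is learned from the scarce annotated data. Checkpoint is illustrative.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

for param in model.bert.parameters():  # freeze the pre-trained encoder
    param.requires_grad = False

# Only model.classifier (a small linear layer) now receives gradient updates;
# the same training loop as in the classification sketch above can be reused.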
Ethical
The potential for LLMs to generate hallucinated content, leading to inaccuracies in therapeutic contexts.
Proposed Solutions: Implementing rigorous validation processes and ensuring human oversight in interpreting AI-generated outputs.
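One simple automated screen that could sit in front of human oversight is sketched below: flag any LLM summary whose content words barely overlap with the source post, since low overlap can signal hallucinated content. This crude lexical heuristic, including its function names and threshold, is an illustrative assumption and not the validation procedure used in the paper.

# Minimal sketch: flag LLM summaries that are poorly grounded in the source
# post so a clinician reviews them before use. Heuristic is illustrative only.
import re

def content_words(text: str) -> set[str]:
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def needs_human_review(source_post: str, summary: str, threshold: float = 0.4) -> bool:
    """Return True when too few of the summary's content words appear in the
    source post, so the summary should be checked manually."""
    summary_words = content_words(summary)
    if not summary_words:
        return True
    grounded = summary_words & content_words(source_post)
    return len(grounded) / len(summary_words) < threshold

# Example: a summary that invents a diagnosis gets flagged for oversight.
post = "I missed one deadline at work, so I'm obviously useless."
print(needs_human_review(post, "The writer missed a work deadline and concludes they are useless."))  # False
print(needs_human_review(post, "The writer was recently diagnosed with bipolar disorder."))  # True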
Project Team
Meng Jiang, Researcher
Yi Jing Yu, Researcher
Qing Zhao, Researcher
Jianqiang Li, Researcher
Changwei Song, Researcher
Hongzhi Qi, Researcher
Wei Zhai, Researcher
Dan Luo, Researcher
Xiaoqin Wang, Researcher
Guanghui Fu, Researcher
Bing Xiang Yang, Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Meng Jiang, Yi Jing Yu, Qing Zhao, Jianqiang Li, Changwei Song, Hongzhi Qi, Wei Zhai, Dan Luo, Xiaoqin Wang, Guanghui Fu, Bing Xiang Yang
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI