Chatting Up Attachment: Using LLMs to Predict Adult Bonds
Project Overview
This document explores the application of generative AI, specifically large language models (LLMs), to mental health research and education. It highlights the use of synthetic data generated by LLMs to train predictive models of adult attachment styles. By mimicking real human responses in Adult Attachment Interviews (AAIs), the authors show how this approach can advance the understanding and prediction of attachment styles while circumventing the ethical concerns associated with collecting real human data. The findings suggest that generative AI can strengthen mental health research and support the personalization of treatment, and the document also considers the ethical implications of relying on synthetic data.
Key Applications
Using LLMs to simulate Adult Attachment Interviews (AAIs) for predictive modeling of attachment styles.
Context: Mental health research, targeting clinicians and researchers in psychology.
Implementation: Created synthetic agents using LLMs to simulate human interviewees and their responses in AAIs, generating transcripts for model training (an illustrative sketch follows this list).
Outcomes: Achieved predictive performance comparable to models trained on real human data; enhanced understanding of attachment styles and their impact on mental health.
Challenges: Difficulty in ensuring the synthetic data accurately reflects the complexity of human behavior and attachment issues.
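As a rough illustration of the implementation described above, the sketch below role-plays a synthetic interviewee with the OpenAI chat API, using the model version listed under Contact Information. The question list, persona wording, and function name are illustrative assumptions rather than the authors' actual protocol, which is described in the original paper.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A few AAI-style prompts for illustration; the full protocol contains many more questions.
AAI_QUESTIONS = [
    "Could you describe your relationship with your parents as a young child?",
    "Choose five adjectives that describe your childhood relationship with your mother.",
    "When you were upset as a child, what would you do?",
]

def simulate_aai_transcript(persona: str, model: str = "gpt-4o-mini-2024-07-18") -> str:
    """Role-play one synthetic interviewee answering AAI-style questions and return a transcript."""
    system_prompt = (
        "You are role-playing an adult participant in an Adult Attachment Interview. "
        f"Stay in character throughout. Persona: {persona}"
    )
    messages = [{"role": "system", "content": system_prompt}]
    transcript_lines = []
    for question in AAI_QUESTIONS:
        messages.append({"role": "user", "content": question})
        response = client.chat.completions.create(model=model, messages=messages)
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        transcript_lines.append(f"Interviewer: {question}\nParticipant: {answer}")
    return "\n\n".join(transcript_lines)

# Example: one transcript for a hypothetical persona; in practice many personas
# spanning different attachment styles would be generated and labeled for training.
transcript = simulate_aai_transcript(
    "A 38-year-old adult who tends to minimize the importance of early relationships."
)
print(transcript)
```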
Implementation Barriers
Ethical
Challenges related to patient privacy and the ethical use of synthetic data in mental health research.
Proposed Solutions: Using synthetic data to preserve patient confidentiality while still allowing for robust model training and testing.
Technical
Creating convincing and accurate synthetic data that adequately mimics real human responses.
Proposed Solutions: Implementing standardization techniques to align synthetic data embeddings with human data.
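One simple way to realize such an alignment, assuming the synthetic and human transcripts are embedded with the same encoder, is to standardize the synthetic embeddings to the per-dimension statistics of the human embeddings. The sketch below is a minimal, hypothetical version of this idea; the function name and the z-score approach are assumptions, and the paper's exact technique may differ.

```python
import numpy as np

def standardize_to_reference(synthetic: np.ndarray, human: np.ndarray) -> np.ndarray:
    """Align synthetic embeddings with human embeddings by matching
    per-dimension mean and standard deviation (a simple z-score transfer)."""
    syn_mean, syn_std = synthetic.mean(axis=0), synthetic.std(axis=0) + 1e-8
    hum_mean, hum_std = human.mean(axis=0), human.std(axis=0)
    z = (synthetic - syn_mean) / syn_std   # z-score within the synthetic distribution
    return z * hum_std + hum_mean          # rescale into the human distribution

# Example with random stand-ins for transcript embeddings (e.g., 768-dimensional vectors).
rng = np.random.default_rng(0)
synthetic_emb = rng.normal(loc=2.0, scale=3.0, size=(200, 768))
human_emb = rng.normal(loc=0.0, scale=1.0, size=(50, 768))
aligned = standardize_to_reference(synthetic_emb, human_emb)
print(aligned.mean(), aligned.std())  # roughly matches the human statistics
```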
Project Team
Paulo Soares
Researcher
Sean McCurdy
Researcher
Andrew J. Gerber
Researcher
Peter Fonagy
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Paulo Soares, Sean McCurdy, Andrew J. Gerber, Peter Fonagy
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI