
Exploiting ChatGPT for Diagnosing Autism-Associated Language Disorders and Identifying Distinct Features

Project Overview

This project explores the role of generative AI, specifically ChatGPT, in diagnosing language disorders linked to autism spectrum disorder (ASD). It critiques traditional diagnostic methods for their subjectivity and resource demands, presenting ChatGPT as an alternative that improves the sensitivity and precision of diagnoses. The research finds that ChatGPT surpasses conventional supervised models, such as BERT, at recognizing language patterns associated with autism, offering insights that can inform personalized treatment plans. The study highlights the potential of integrating advanced AI into clinical settings to refine the assessment and diagnosis of developmental disorders, advocating a shift toward more efficient and reliable diagnostic practices, and underscores the significance of generative AI for improving educational outcomes for individuals with language disorders.

Key Applications

ChatGPT for diagnosing language disorders in autism

Context: Clinical setting for diagnosing autism spectrum disorder (ASD)

Implementation: ChatGPT processes examiner-patient dialogues to identify language deficits. The model analyzes features such as echolalia and pronoun reversal.
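The two features named above can be illustrated with a minimal rule-based sketch. The helper functions and example utterances below are hypothetical and for illustration only; the paper's approach relies on ChatGPT's analysis of full dialogues, not hand-written rules like these.

```python
# Illustrative sketch (NOT the paper's method): crude rules for two
# ASD-linked language features mentioned in the text.

def is_echolalia(examiner_utterance: str, child_utterance: str) -> bool:
    """Flag immediate echolalia: the child's reply repeats the examiner's utterance."""
    return child_utterance.strip().lower() == examiner_utterance.strip().lower()

def has_pronoun_reversal(child_utterance: str) -> bool:
    """Crude heuristic: a child saying 'you want'/'you like' about their own
    desires can indicate pronoun reversal ('you' used in place of 'I')."""
    lowered = child_utterance.lower()
    return any(phrase in lowered for phrase in ("you want", "you like"))

# Hypothetical examiner-child exchanges
dialogue = [
    ("Do you want a cookie?", "Do you want a cookie?"),  # echolalia
    ("What would you like?", "You want the red ball."),  # pronoun reversal
]

for examiner, child in dialogue:
    print(is_echolalia(examiner, child), has_pronoun_reversal(child))
```

Real dialogues are far messier (partial repetition, delayed echolalia, context-dependent pronouns), which is why an LLM that reads the whole exchange in context is better suited to this task than fixed string rules.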

Outcomes: ChatGPT significantly improved diagnostic sensitivity and positive predictive value (PPV), outperforming traditional supervised learning models.
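As a reminder of what these two metrics measure, the snippet below computes them from confusion-matrix counts. The counts are illustrative placeholders, not results from the paper.

```python
# Sensitivity (recall) and positive predictive value (precision)
# computed from confusion-matrix counts. Counts below are illustrative,
# NOT the paper's reported results.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of genuine deficits the model detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Fraction of flagged cases that are genuine: TP / (TP + FP)."""
    return tp / (tp + fp)

tp, fp, fn = 45, 5, 10  # hypothetical counts
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 45 / 55 = 0.82
print(f"PPV         = {ppv(tp, fp):.2f}")          # 45 / 50 = 0.90
```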

Challenges: Limited dataset size and the need for extensive labeled data for training AI models.

Implementation Barriers

Data-related barrier

High data requirements for training effective machine learning models, particularly for autism diagnosis.

Proposed Solutions: Utilizing zero-shot and few-shot learning capabilities of large language models (LLMs) like ChatGPT to reduce reliance on large labeled datasets.
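A zero-shot setup needs no labeled training examples: the task description alone steers the model. The sketch below builds a prompt in the OpenAI chat-message format; the prompt wording is an assumption for illustration, not the paper's actual prompt.

```python
# Sketch of a zero-shot prompt in the OpenAI chat-message format.
# The instruction text is hypothetical; no labeled training data is
# required, since the task is specified entirely in the prompt.

def build_zero_shot_messages(transcript: str) -> list[dict]:
    """Build chat messages asking the model to flag ASD-linked language features."""
    system = (
        "You are a clinical language analyst. Given an examiner-patient "
        "dialogue, identify language features such as echolalia and "
        "pronoun reversal, and explain each finding."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Transcript:\n{transcript}"},
    ]

messages = build_zero_shot_messages(
    "Examiner: Do you want juice?\nChild: Do you want juice?"
)
# These messages could then be sent to a chat-completions endpoint;
# a few-shot variant would append a handful of labeled example turns.
```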

Explainability barrier

Many machine learning models function as 'black boxes', making it difficult to interpret how predictions are made.

Proposed Solutions: ChatGPT offers human-like explanations for its diagnostic outputs, enhancing transparency and trust in clinical applications.

Project Team

Chuanbo Hu, Researcher

Wenqi Li, Researcher

Mindi Ruan, Researcher

Xiangxu Yu, Researcher

Shalaka Deshpande, Researcher

Lynn K. Paul, Researcher

Shuo Wang, Researcher

Xin Li, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Chuanbo Hu, Wenqi Li, Mindi Ruan, Xiangxu Yu, Shalaka Deshpande, Lynn K. Paul, Shuo Wang, Xin Li

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
