
Large Language Models for Cancer Communication: Evaluating Linguistic Quality, Safety, and Accessibility in Generative AI

Project Overview

This project examines the application of generative AI, specifically Large Language Models (LLMs), to health communication about breast and cervical cancers. It compares the performance of general-purpose and medical LLMs, finding that general-purpose models produced outputs of higher linguistic quality and safety, while medical LLMs produced outputs that were more accessible to users. These results underscore the importance of tailoring models to the needs of diverse patient populations, so that generated information is not only accurate but also comprehensible and effective for communication. The findings point to the potential of LLMs to strengthen educational resources in healthcare by bridging gaps in understanding and improving patient engagement through accessible information, and they call for careful design and fine-tuning of these models before they are used to convey critical health information.

Key Applications

Large Language Models for Cancer Communication

Context: Public health communication for breast and cervical cancer patients and their families.

Implementation: Evaluation of five general-purpose and three medical LLMs using a mixed-methods framework assessing linguistic quality, safety, and communication accessibility.

Outcomes: General-purpose LLMs showed higher linguistic quality and safety, while medical LLMs demonstrated better accessibility for patients with lower health literacy.

Challenges: Medical LLMs exhibited higher levels of potential harm, toxicity, and bias, impacting their safety and trustworthiness.

Implementation Barriers

Technical

Medical LLMs often produce outputs with higher toxicity and bias compared to general-purpose models, affecting trust and safety.

Proposed Solutions: Intentional model design improvements to mitigate harm and bias, ensuring safer outputs for patient communication.

Accessibility

Complexity and readability of outputs from general-purpose LLMs may hinder understanding for patients with lower health literacy.

Proposed Solutions: Implementing readability assessments and adjustments to ensure content is accessible to a broader audience.
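The paper summary does not specify which readability metric the proposed assessments would use; a common choice for patient-facing material, sketched here purely as an assumption, is the Flesch Reading Ease score. The snippet below implements that formula with a crude vowel-group syllable heuristic (a simplification; production tools use dictionary-based syllable counts) to compare a plain-language sentence against a jargon-heavy one.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, minimum 1.
    # Real readability tools use dictionary lookups for accuracy.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.
    Scores around 60-70 roughly match plain-language guidance
    often cited for patient-facing health material."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

plain = "Get checked for cancer. It can save your life."
jargon = ("Periodic mammographic screening facilitates early detection "
          "of malignancies, substantially improving prognostic outcomes.")

# The plain-language sentence should score markedly higher (easier).
print(flesch_reading_ease(plain) > flesch_reading_ease(jargon))  # True
```

A readability gate like this could flag LLM outputs that exceed a target grade level and trigger a rewrite prompt, which is one way the proposed "readability assessments and adjustments" might be operationalized.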

Project Team

Agnik Saha, Researcher

Victoria Churchill, Researcher

Anny D. Rodriguez, Researcher

Ugur Kursuncu, Researcher

Muhammed Y. Idris, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Agnik Saha, Victoria Churchill, Anny D. Rodriguez, Ugur Kursuncu, Muhammed Y. Idris

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
