
RAD-PHI2: Instruction Tuning PHI-2 for Radiology

Project Overview

This project examines the application of generative AI, specifically Small Language Models (SLMs), in education, focusing on the Rad-Phi2 model for radiology. Fine-tuned on high-quality educational content from Radiopaedia, Rad-Phi2 can answer general radiology questions and assist with specific tasks in radiology workflows, such as writing impressions and extracting findings. The study finds that SLMs perform competitively with much larger models while consuming far fewer resources, suggesting that domain-specific training and instruction tuning are effective strategies for improving both the quality and efficiency of educational practice in specialized fields such as radiology. Overall, the findings underscore the potential of generative AI to transform educational methodologies by improving access to expert knowledge and streamlining professional workflows.

Key Applications

Rad-Phi2

Context: AI-driven radiology workflows, targeting radiologists and clinicians.

Implementation: Fine-tuned Phi-2 using high-quality educational content from Radiopaedia for general radiology knowledge and specific tasks related to radiology reports.

Outcomes: Rad-Phi2 can accurately answer queries about symptoms, radiological findings, and treatments across various organ systems, outperforming larger models like GPT-4 in certain tasks.

Challenges: Because the base Phi-2 model is not instruction tuned, it initially produced verbose outputs; general-domain instruction tuning was required before fine-tuning on specialized radiology tasks.
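The two-stage recipe above (general-domain instruction tuning, then radiology-specific fine-tuning) starts from instruction-formatted training records. A minimal sketch of that formatting step, assuming a simple `Instruct:`/`Input:`/`Output:` template and hypothetical example data (the paper's exact template and dataset schema are not reproduced here):

```python
def format_example(instruction: str, output: str, context: str = "") -> str:
    """Render one training record in a simple instruction-tuning template.

    The template below is an illustrative assumption, not the paper's
    verbatim format.
    """
    parts = [f"Instruct: {instruction}"]
    if context:  # e.g. a radiology report the model should work from
        parts.append(f"Input: {context}")
    parts.append(f"Output: {output}")
    return "\n".join(parts)


def build_stages(general_pairs, radiology_triples):
    """Return the two sequential training sets: general first, then domain."""
    stage1 = [format_example(i, o) for i, o in general_pairs]
    stage2 = [format_example(i, o, c) for i, o, c in radiology_triples]
    return stage1, stage2


# Hypothetical examples for each stage
stage1, stage2 = build_stages(
    [("Summarize why the sky is blue.",
      "Sunlight scatters off air molecules, and blue light scatters most.")],
    [("Write the impression for this chest X-ray report.",
      "No acute cardiopulmonary abnormality.",
      "Findings: Lungs are clear. Heart size normal.")],
)
```

Keeping the two stages as separate datasets mirrors the sequential training the project describes: the model first learns to follow instructions in general, then learns radiology-specific tasks in the same format.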

Implementation Barriers

Technical

The size and computational cost of existing large language models can be prohibitive for practical deployment.

Proposed Solutions: Use smaller language models such as Rad-Phi2, which require far fewer resources while maintaining accuracy.
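To make the resource argument concrete: Phi-2 has about 2.7 billion parameters, so its weights alone occupy roughly 5.4 GB in 16-bit precision, versus roughly 140 GB for a 70-billion-parameter model. A back-of-the-envelope sketch (weight memory only; real serving memory also includes activations and the KV cache):

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight-only memory in GB (fp16/bf16 = 2 bytes per parameter)."""
    return n_params * bytes_per_param / 1e9


phi2_gb = weight_memory_gb(2.7e9)   # Phi-2: ~5.4 GB of weights
large_gb = weight_memory_gb(70e9)   # a 70B model, for comparison: ~140 GB
```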

Domain Specificity

Existing models are primarily trained on general domain texts, which may not suit specialized medical terminologies and tasks.

Proposed Solutions: Fine-tuning models on domain-specific datasets, such as Radiopaedia, to enhance understanding of specialized vocabulary and concepts.
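Domain fine-tuning of this kind requires reshaping encyclopedia-style content into instruction/response pairs. A minimal sketch, assuming a hypothetical article structure with `title`, `findings`, and `treatment` fields (Radiopaedia's actual schema and the paper's task mix will differ):

```python
def article_to_pairs(article: dict) -> list:
    """Turn one structured article into (instruction, response) pairs.

    Field names here are hypothetical placeholders for illustration.
    """
    title = article["title"]
    pairs = []
    if article.get("findings"):
        pairs.append((f"What are the radiological findings of {title}?",
                      article["findings"]))
    if article.get("treatment"):
        pairs.append((f"How is {title} treated?",
                      article["treatment"]))
    return pairs


# Hypothetical example article
pairs = article_to_pairs({
    "title": "pneumothorax",
    "findings": "Visible visceral pleural line with absent lung markings peripherally.",
    "treatment": "Small pneumothoraces may be observed; larger ones may need chest drain insertion.",
})
```

Generating several question types per article is one way such a dataset can expose the model to specialized vocabulary in the instruction-following format it will see at inference time.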

Project Team

Mercy Ranjit

Researcher

Gopinath Ganapathy

Researcher

Shaury Srivastav

Researcher

Tanuja Ganu

Researcher

Srujana Oruganti

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Mercy Ranjit, Gopinath Ganapathy, Shaury Srivastav, Tanuja Ganu, Srujana Oruganti

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
