MAARTA: Multi-Agentic Adaptive Radiology Teaching Assistant

Project Overview

This document introduces MAARTA (Multi-Agentic Adaptive Radiology Teaching Assistant), a multi-agent framework for improving radiology education through personalized feedback on perceptual errors in diagnostic interpretation. Building on Large Language Models (LLMs) and Large Multimodal Models (LMMs), MAARTA analyzes eye-gaze patterns and radiology reports to help students compensate for the shortage of expert mentorship. The system dynamically recruits specialized agents according to the complexity of the errors it encounters, tailoring the learning experience and strengthening students' diagnostic proficiency. Implementation challenges remain, chiefly in processing multimodal data and coordinating the agents. Overall, the findings suggest that generative AI can substantially transform educational practice in specialized fields such as radiology by providing tailored, effective learning support.
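The adaptive recruitment described above could be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual architecture: the class names, agent roles, and complexity tiers are all assumptions made for the example.

```python
# Hypothetical sketch of MAARTA-style adaptive agent recruitment.
# All names (Orchestrator, agent roles, complexity tiers) are illustrative
# assumptions, not the paper's actual API.

from dataclasses import dataclass
from enum import Enum


class ErrorComplexity(Enum):
    SIMPLE = 1    # e.g. a single missed finding
    MODERATE = 2  # e.g. misinterpretation of a detected finding
    COMPLEX = 3   # e.g. multiple interacting perceptual errors


@dataclass
class PerceptualError:
    region: str       # image region the error concerns
    description: str  # short description of the error
    complexity: ErrorComplexity


class Orchestrator:
    """Recruits specialist agents based on the complexity of each error."""

    # Agents consulted at each complexity tier (illustrative roles).
    ROSTER = {
        ErrorComplexity.SIMPLE: ["gaze_analyst"],
        ErrorComplexity.MODERATE: ["gaze_analyst", "report_reviewer"],
        ErrorComplexity.COMPLEX: ["gaze_analyst", "report_reviewer",
                                  "teaching_agent"],
    }

    def recruit(self, error: PerceptualError) -> list[str]:
        return self.ROSTER[error.complexity]


error = PerceptualError("left lower lobe", "missed opacity",
                        ErrorComplexity.MODERATE)
print(Orchestrator().recruit(error))  # ['gaze_analyst', 'report_reviewer']
```

The key design point is that simple errors trigger only a lightweight analysis, while complex ones pull in additional agents, keeping per-case cost proportional to difficulty.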

Key Applications

MAARTA (Multi-Agentic Adaptive Radiology Teaching Assistant)

Context: Radiology education for students learning to interpret diagnostic images.

Implementation: Employs a multi-agent system that analyzes eye gaze data and radiology reports to provide personalized feedback on perceptual errors.

Outcomes: Improved diagnostic accuracy and individualized mentorship, enhanced understanding of perceptual errors.

Challenges: Complexity in processing multimodal data (eye gaze patterns and radiology reports) and ensuring efficient communication between agents.
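To make the gaze-based feedback concrete, here is a minimal sketch of how a missed finding might be categorized from dwell time, following the classic search/recognition/decision error taxonomy from the perception literature. The threshold and function name are illustrative assumptions, not MAARTA's actual rule.

```python
# Hedged sketch: classifying why a lesion was missed, given the total
# fixation dwell time on the lesion region. The 1000 ms threshold and the
# function name are illustrative assumptions.

def classify_miss(dwell_ms: float, reported: bool,
                  recognition_threshold_ms: float = 1000.0) -> str:
    """Classify a perceptual error using the search/recognition/decision taxonomy."""
    if reported:
        return "no_error"           # lesion was found and reported
    if dwell_ms == 0:
        return "search_error"       # lesion was never fixated
    if dwell_ms < recognition_threshold_ms:
        return "recognition_error"  # fixated too briefly to recognize
    return "decision_error"         # fixated long enough, but dismissed


print(classify_miss(0, False))     # search_error
print(classify_miss(1500, False))  # decision_error
```

A labeled error of this kind, paired with the discrepancy between the student's and the reference report, is the sort of signal the agents could turn into targeted feedback.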

Implementation Barriers

Technical barrier

Existing AI systems do not adequately explain how and why perceptual errors occur in radiology.

Proposed Solutions: Utilizing multi-agent systems that adaptively analyze gaze patterns and provide detailed feedback on diagnostic errors.

Data barrier

Limited availability of datasets capturing student radiologist perceptual errors.

Proposed Solutions: Simulating perceptual errors using existing datasets to create a balanced dataset for evaluation.
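One simple way such a simulation could work is to perturb an expert gaze recording so that it exhibits a known error, e.g. deleting all fixations inside a lesion's bounding box to fabricate a search error. This is a minimal sketch under assumed data shapes, not the paper's actual simulation method.

```python
# Hedged sketch of simulating a search error from an expert gaze recording:
# drop every fixation that falls inside a lesion bounding box, yielding a
# synthetic "student" scanpath that never inspected the lesion.
# Data shapes (tuples) and the function name are assumptions.

def simulate_search_error(fixations, lesion_box):
    """fixations: list of (x, y, duration_ms); lesion_box: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = lesion_box
    return [(x, y, d) for (x, y, d) in fixations
            if not (x0 <= x <= x1 and y0 <= y <= y1)]


expert = [(10, 10, 200), (55, 60, 800), (90, 20, 150)]
student = simulate_search_error(expert, (50, 50, 70, 70))
print(student)  # [(10, 10, 200), (90, 20, 150)]
```

Because the ground-truth error type is known by construction, such synthetic cases give a balanced, labeled evaluation set without needing to collect real student mistakes.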

Project Team

Akash Awasthi

Researcher

Brandon V. Chang

Researcher

Anh M. Vu

Researcher

Ngan Le

Researcher

Rishi Agrawal

Researcher

Zhigang Deng

Researcher

Carol Wu

Researcher

Hien Van Nguyen

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Akash Awasthi, Brandon V. Chang, Anh M. Vu, Ngan Le, Rishi Agrawal, Zhigang Deng, Carol Wu, Hien Van Nguyen

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI