
Uni-Retrieval: A Multi-Style Retrieval Framework for STEM's Education

Project Overview

The document presents Uni-Retrieval, a multi-style retrieval framework tailored for STEM education. It addresses a shortcoming of conventional retrieval systems, which are optimized for natural text-image matching and overlook the complexity of educational materials. The framework supports a diverse range of query styles and introduces the STEM Education Retrieval Dataset (SER) of over 24,000 samples, improving the retrieval of educational resources across modalities (text, audio, and images) and styles (natural language, sketches, art, and low-resolution formats). Uni-Retrieval shows notable performance gains over existing models while remaining scalable and adaptable to varied educational contexts, marking a step toward more efficient and effective resource retrieval for students and educators in STEM fields.

Key Applications

Uni-Retrieval

Context: STEM education, targeting teachers and students who require diverse educational resources.

Implementation: The implementation involves a multi-style retrieval task using a dataset (SER) containing various query styles and a Prompt Bank for efficient retrieval.

Outcomes: Significant performance improvements in retrieval tasks, enabling teachers to quickly access relevant educational resources.

Challenges: Current models often overlook the variety of query types and styles prevalent in educational contexts, resulting in inefficient retrieval.
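To make the Prompt Bank idea concrete, here is a minimal sketch of style-conditioned retrieval. It is an illustration only, not the paper's method: Uni-Retrieval uses learned prompt tokens and neural encoders, whereas this toy version stands in a bag-of-words encoder, a hand-written prompt prefix per style, and cosine-similarity ranking. All names (`PROMPT_BANK`, `encode`, `retrieve`) and the prompt strings are assumptions.

```python
import math
from collections import Counter

# Hypothetical prompt bank: maps a detected query style to a prompt prefix.
# (Illustrative strings only; the real framework learns prompt tokens.)
PROMPT_BANK = {
    "natural": "a photo of",
    "sketch": "a rough sketch of",
    "art": "an artistic rendering of",
    "low_res": "a low resolution image of",
}

def encode(text: str) -> Counter:
    """Toy encoder: bag-of-words counts stand in for a neural embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, style: str, corpus: list[str]) -> str:
    """Prepend the style's prompt before encoding, then rank the corpus."""
    q = encode(f"{PROMPT_BANK[style]} {query}")
    return max(corpus, key=lambda doc: cosine(q, encode(doc)))

corpus = [
    "a photo of a titration experiment in a chemistry lab",
    "an artistic rendering of the solar system",
]
print(retrieve("solar system", "art", corpus))
# → "an artistic rendering of the solar system"
```

The design point this sketch captures is that a single retrieval function can serve many query styles: the style-specific prompt shifts the query representation toward resources of the matching style, so no per-style model is needed.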

Implementation Barriers

Operational

Current retrieval models are primarily optimized for natural text-image matching and fail to capture the complexities of educational content.

Proposed Solutions: The document proposes the development of a multi-style retrieval framework that accommodates various types of queries, including text, audio, and images.

Project Team

Yanhao Jia

Researcher

Xinyi Wu

Researcher

Hao Li

Researcher

Qinglin Zhang

Researcher

Yuxiao Hu

Researcher

Shuai Zhao

Researcher

Wenqi Fan

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Yanhao Jia, Xinyi Wu, Hao Li, Qinglin Zhang, Yuxiao Hu, Shuai Zhao, Wenqi Fan

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
