
Investigating the performance of Retrieval-Augmented Generation and fine-tuning for the development of AI-driven knowledge-based systems

Project Overview

Generative AI, particularly generative large language models (G-LLMs), has significantly impacted education by enabling the creation of AI-driven knowledge-based systems. Fine-tuning (FN) and retrieval-augmented generation (RAG) are the key techniques for adapting G-LLMs to specific educational contexts. In this work, RAG demonstrated higher efficiency and fewer hallucinations than fine-tuning alone, leading to improved outcomes in educational AI applications. These advances support more effective personalized learning, broader access to resources, and improved student engagement. Overall, the integration of generative AI in education is reshaping pedagogical approaches and offering innovative solutions to long-standing challenges, highlighting its potential to transform learning environments and educational practice.

Key Applications

AI-driven knowledge-based systems using G-LLMs

Context: Educational contexts where knowledge retrieval and generation are needed, targeting educators and students.

Implementation: Utilizing techniques like fine-tuning (FN) and retrieval-augmented generation (RAG) to develop G-LLM-based systems.

Outcomes: RAG-based systems demonstrated 16% better ROUGE scores and reduced hallucination rates compared to FN models.

Challenges: Integrating FN and RAG can be complex, and FN models may exhibit higher hallucination rates.
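To make the RAG approach concrete, the sketch below shows the two steps a RAG-based system performs before generation: retrieving relevant documents and prepending them to the prompt so the model answers from sources rather than from memorized weights (the mechanism behind the reduced hallucination rates reported above). This is a minimal illustration, not the authors' implementation: the toy word-overlap scorer and the example corpus are assumptions, and a real system would use dense embeddings and an actual G-LLM call.

```python
import re

def tokenize(text):
    # Lowercase word set; a toy stand-in for a real embedding model.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy scorer)."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2):
    """Prepend retrieved context to the question before calling the G-LLM."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical three-document corpus for illustration only.
corpus = [
    "RAG retrieves documents and adds them to the prompt.",
    "Fine-tuning updates model weights on domain data.",
    "ROUGE measures overlap between generated and reference text.",
]

prompt = build_prompt("How does RAG use retrieved documents?", corpus, k=1)
print(prompt)
```

The resulting prompt would then be sent to the G-LLM; because the answer is grounded in the retrieved context, factual errors can be traced back to (and corrected in) the document store without retraining the model, which is what makes RAG cheaper to maintain than FN.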

Implementation Barriers

Technical Barrier

The integration of FN and RAG is non-trivial and may lead to performance degradation.

Proposed Solutions: Further research into best practices for combining FN and RAG to optimize performance.

Project Team

Robert Lakatos

Researcher

Peter Pollner

Researcher

Andras Hajdu

Researcher

Tamas Joo

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Robert Lakatos, Peter Pollner, Andras Hajdu, Tamas Joo

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
