
Knowledge Graph-extended Retrieval Augmented Generation for Question Answering

Project Overview

This project explores the application of generative AI in education, particularly through the integration of Large Language Models (LLMs) with Knowledge Graphs (KGs) to improve Question Answering (QA) systems. It presents a novel approach called Knowledge Graph-extended Retrieval Augmented Generation (KG-RAG), which enhances the robustness, explainability, and adaptability of QA systems without requiring additional training. The integration addresses the inherent limitations of both LLMs and KGs by improving knowledge retrieval and reasoning, enabling more effective multi-hop information retrieval. The system is designed to produce explicit reasoning chains, making its answers easier to verify. Overall, the findings indicate that the KG-RAG approach significantly strengthens the educational effectiveness of AI-driven QA systems, making them more reliable tools for learners and educators alike.

Key Applications

Knowledge Graph-extended Retrieval Augmented Generation (KG-RAG)

Context: Educational settings requiring enhanced question answering capabilities.

Implementation: The KG-RAG system combines LLMs and KGs without requiring training, using question decomposition to improve retrieval and answer explainability.

Outcomes: Improved accuracy in answering multi-hop questions and enhanced transparency in reasoning processes.

Challenges: Potential for hallucinations in LLM outputs, limitations in knowledge recall, and the need for effective integration of structured and unstructured knowledge.
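The implementation described above — decomposing a question and following relations through the KG hop by hop while recording the supporting triples — can be sketched as a minimal retrieval loop. Everything here is illustrative: the toy triples, the `retrieve` helper, and the pre-decomposed hop list stand in for what the paper's LLM-driven decomposition and answer generation would produce.

```python
# Hypothetical sketch of KG-RAG-style multi-hop retrieval over a knowledge graph.
# In the actual system an LLM would decompose the question into hops and
# verbalize the final answer; here the hops are given directly.

# Toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
]

def retrieve(entity, kg=KG):
    """Return all triples whose subject matches the given entity."""
    return [t for t in kg if t[0] == entity]

def kg_rag_answer(hops, start_entity):
    """Follow one relation per hop, recording the reasoning chain of triples."""
    entity, chain = start_entity, []
    for relation in hops:
        match = next((t for t in retrieve(entity) if t[1] == relation), None)
        if match is None:
            return None, chain  # knowledge gap: no supporting triple found
        chain.append(match)
        entity = match[2]  # the object becomes the next hop's subject
    return entity, chain

# "What did the person Ada Lovelace collaborated with design?"
answer, chain = kg_rag_answer(["collaborated_with", "designed"], "Ada Lovelace")
print(answer)       # Analytical Engine
print(len(chain))   # 2 supporting triples form the verifiable reasoning chain
```

Returning the chain alongside the answer is what makes the reasoning inspectable: a learner or educator can check each triple rather than trusting an opaque LLM output.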

Implementation Barriers

Technical Barrier

LLMs can hallucinate, generating untruthful or incoherent outputs, and may suffer from knowledge gaps.

Proposed Solutions: Integrating KGs to provide structured knowledge and improve answer accuracy.

Resource Barrier

High computational costs associated with training and deploying LLMs.

Proposed Solutions: Utilizing smaller models or models that require minimal fine-tuning.

Domain-Specific Barrier

Knowledge Graphs are often domain-specific and require significant effort to build and maintain.

Proposed Solutions: Developing adaptable models that can operate across various KGs without extensive retraining.

Project Team

Jasper Linders

Researcher

Jakub M. Tomczak

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Jasper Linders, Jakub M. Tomczak

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
