Cutting Through the Clutter: The Potential of LLMs for Efficient Filtration in Systematic Literature Reviews
Project Overview
This document examines how generative AI, particularly Large Language Models (LLMs), can improve the efficiency of systematic literature reviews (SLRs) in academic research. It discusses the limitations of conventional keyword-based filtering and introduces LLMSurver, an open-source tool that uses LLMs to streamline literature filtration. By reducing paper-selection time from weeks to minutes while maintaining high recall, LLMSurver demonstrates the potential of AI to augment human capabilities in research. The findings underscore the value of human-AI collaboration and provide a structured methodology for integrating LLMs into the literature review process, ultimately contributing to more efficient and better-informed research practice.
Key Applications
LLMSurver - an open-source tool for literature filtration
Context: Academic research, particularly for systematic literature reviews (SLRs) in computer science and related fields.
Implementation: Utilizes LLMs to automate the filtration of a large corpus of academic papers based on title and abstract, incorporating a consensus voting scheme for accuracy.
Outcomes: Reduced filtering time from weeks to minutes, high recall rates (>98.8%) in paper selection, and improved overall efficiency in conducting literature reviews.
Challenges: Potential biases in model outputs, reliance on initial corpus quality, and the need for human oversight to avoid erroneous classifications.
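The consensus voting scheme mentioned above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact design: the function names, the majority-vote threshold, and the data layout are assumptions for demonstration.

```python
# Illustrative sketch of LLM-based paper filtration with consensus voting.
# Each "model" is a callable that votes "include" or "exclude" for a paper
# based on its title and abstract; a strict majority of votes decides.
from collections import Counter

def consensus_vote(decisions):
    """Combine include/exclude votes from several models into one decision.

    decisions: list of "include" / "exclude" strings, one vote per model.
    Returns "include" only if a strict majority voted to include.
    """
    counts = Counter(decisions)
    return "include" if counts["include"] > len(decisions) / 2 else "exclude"

def filter_corpus(papers, models):
    """Filter a corpus of papers by majority vote across models.

    papers: list of dicts with "title" and "abstract" keys.
    models: list of callables (title, abstract) -> "include" / "exclude".
    Returns the papers the consensus voted to include.
    """
    kept = []
    for paper in papers:
        votes = [m(paper["title"], paper["abstract"]) for m in models]
        if consensus_vote(votes) == "include":
            kept.append(paper)
    return kept
```

In practice, each callable would wrap a prompt to a different LLM; combining several models' votes is what lets such a pipeline trade off individual model errors against overall recall.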
Implementation Barriers
Technical Barrier
LLMs can produce misleading outputs and carry inherent biases from their training data.
Proposed Solutions: Implementing interactive feedback loops and visual analytics to refine outputs and address biases.
Operational Barrier
Over-reliance on automation may undermine researchers' critical thinking and analytical skills.
Proposed Solutions: Balancing automation with human oversight in the literature review process.
Access Barrier
State-of-the-art commercial models present access limitations due to cost and availability.
Proposed Solutions: Developing smaller, open-source models that can run locally to ensure accessibility for all researchers.
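A locally run open-source model can be driven with the same kind of inclusion/exclusion prompt a commercial API would receive. The sketch below only assembles such a prompt; the wording and criteria are illustrative assumptions, not taken from the paper, and the string would be passed to whatever local inference backend is available.

```python
# Illustrative sketch: build an inclusion/exclusion screening prompt for a
# locally hosted open-source model. The prompt text and criteria format are
# assumptions for demonstration purposes.

def build_filter_prompt(title, abstract, criteria):
    """Assemble a single-paper screening prompt that asks the model to
    answer with exactly one word: 'include' or 'exclude'."""
    lines = [
        "You are screening papers for a systematic literature review.",
        "Inclusion criteria:",
    ]
    lines += [f"- {criterion}" for criterion in criteria]
    lines += [
        f"Title: {title}",
        f"Abstract: {abstract}",
        "Answer with exactly one word: include or exclude.",
    ]
    return "\n".join(lines)
```

Because the prompt is plain text, the same function works regardless of which local model or inference server ultimately consumes it, which is what makes smaller open-source models a drop-in substitute for commercial APIs in this workflow.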
Project Team
Lucas Joos
Researcher
Daniel A. Keim
Researcher
Maximilian T. Fischer
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Lucas Joos, Daniel A. Keim, Maximilian T. Fischer
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI