Intent Classification in Question-Answering Using LSTM Architectures
Project Overview
This document examines the use of generative AI, specifically Long Short-Term Memory (LSTM) networks, for Question-Answering (QA) systems in Natural Language Processing (NLP), with a focus on education. It argues that classifying the intent of a user query simplifies the QA task and improves the accuracy of the generated responses. Using an LSTM architecture, the authors report improvements in intent classification that enable a prototype responder capable of giving more relevant and accurate answers to student questions. The results suggest that applying generative AI in educational settings can produce learning tools tailored to individual student needs, improving the overall educational experience.
Key Applications
Intent classification for Question-Answering using LSTM
Context: Natural Language Processing, applicable in educational settings like online learning platforms or virtual tutoring systems.
Implementation: The system uses LSTM networks to classify the intent of each question and then generate an appropriate response.
Outcomes: Achieved high accuracy in classifying intents and demonstrated capability in generating contextually relevant responses.
Challenges: The complexity of the full QA problem and the limited ability of traditional neural networks to process long sequential inputs.
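As a rough illustration of the approach described above, the forward pass of an LSTM-based intent classifier might look as follows. This is a minimal NumPy sketch: the weights are randomly initialized stand-ins for trained parameters, and the vocabulary size, layer dimensions, and intent labels are invented for illustration; the paper's actual architecture and training details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and labels; the paper's actual values are not given here.
VOCAB, EMBED, HIDDEN = 50, 8, 16
INTENTS = ["definition", "example", "deadline"]

# Randomly initialized parameters stand in for trained weights.
E = rng.normal(0, 0.1, (VOCAB, EMBED))                # embedding table
W = rng.normal(0, 0.1, (4 * HIDDEN, EMBED + HIDDEN))  # gate weights (i, f, g, o)
b = np.zeros(4 * HIDDEN)
W_out = rng.normal(0, 0.1, (len(INTENTS), HIDDEN))    # softmax classifier

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify(token_ids):
    """Run a token-id sequence through one LSTM layer, then softmax over intents."""
    h = np.zeros(HIDDEN)  # hidden state
    c = np.zeros(HIDDEN)  # cell state
    for t in token_ids:
        z = W @ np.concatenate([E[t], h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated cell update
        h = sigmoid(o) * np.tanh(c)
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()  # probability distribution over intents

probs = classify([3, 17, 42, 5])  # a toy token-id sequence
print(INTENTS[int(np.argmax(probs))])
```

In a trained system, the predicted intent would then select or condition the response generator, as the paper's modular design suggests.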
Implementation Barriers
Technical Barrier
The QA problem is complex and involves multiple phases, making a single generic, end-to-end solution difficult to achieve.
Proposed Solutions: Adopting a modular approach to break down the problem into simpler components, such as intent classification.
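The modular decomposition above can be sketched as a two-stage pipeline: classify the intent first, then dispatch to a per-intent responder. In this toy sketch the keyword classifier is merely a placeholder for the trained LSTM model, and the intent names and response templates are invented for illustration.

```python
def classify_intent(question: str) -> str:
    """Stand-in for the trained LSTM classifier: route by keyword."""
    q = question.lower()
    if "when" in q or "deadline" in q:
        return "deadline"
    if "example" in q:
        return "example"
    return "definition"

# One responder per intent; each module can be improved or replaced
# independently, which is the point of the modular approach.
RESPONDERS = {
    "deadline": lambda q: "Please check the course calendar for dates.",
    "example": lambda q: "Here is a worked example from the lecture notes.",
    "definition": lambda q: "Here is the definition from the textbook.",
}

def answer(question: str) -> str:
    return RESPONDERS[classify_intent(question)](question)

print(answer("When is the assignment deadline?"))
```

Because intent classification is isolated as its own component, the hard generative step is reduced to a set of simpler, intent-specific response tasks.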
Algorithmic Limitation
Traditional recurrent neural networks (RNNs) suffer from the vanishing gradient problem, which limits their memory to short-term context.
Proposed Solutions: Using LSTM architectures, whose gated cell states are designed to retain information over longer spans.
Project Team
Giovanni Di Gennaro
Researcher
Amedeo Buonanno
Researcher
Antonio Di Girolamo
Researcher
Armando Ospedale
Researcher
Francesco A. N. Palmieri
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Giovanni Di Gennaro, Amedeo Buonanno, Antonio Di Girolamo, Armando Ospedale, Francesco A. N. Palmieri
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI