Decomposed Prompting to Answer Questions on a Course Discussion Board
Project Overview
This project applies generative AI, specifically a question-answering system built on a large language model (LLM) such as GPT-3, to course discussion boards. The system classifies student questions into four types and applies a tailored prompting strategy to each, achieving 81% accuracy in question classification. This approach reduces instructor workload and highlights the importance of careful prompt design and contextual understanding. Overall, the findings suggest that LLM-based question answering can make course discussion boards more responsive and improve student support.
Key Applications
LLM-based question-answering system using decomposed prompting
Context: Course discussion board for an introductory machine learning course
Implementation: The system classifies questions into conceptual, homework, logistics, or not answerable types and uses appropriate prompting strategies for each type.
Outcomes: Achieved 81% classification accuracy; improved management of student questions on discussion boards.
Challenges: Incorrect answers can be detrimental; issues with contextual understanding can lead to poor responses.
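The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt wording, function names, and the conservative fallback to "not answerable" are all assumptions for the sketch.

```python
# Stage 1 of the decomposed-prompting pipeline: classify a discussion-board
# question into one of four types before choosing an answering strategy.
# Prompt text and helper names here are illustrative assumptions.

CATEGORIES = ["conceptual", "homework", "logistics", "not answerable"]

def build_classification_prompt(question: str) -> str:
    """Build the first-stage prompt asking an LLM to label the question."""
    options = ", ".join(CATEGORIES)
    return (
        "You are a teaching assistant for an introductory machine learning "
        "course. Classify the following discussion-board question as one of: "
        f"{options}.\n\nQuestion: {question}\nCategory:"
    )

def parse_category(llm_output: str) -> str:
    """Map a raw model completion back to a known category."""
    text = llm_output.strip().lower()
    for category in CATEGORIES:
        if category in text:
            return category
    # Conservative fallback: an unparseable label is treated as not answerable,
    # so it can be escalated to a human rather than answered incorrectly.
    return "not answerable"
```

The classification output would then be fed to a second, type-specific answering prompt; for example, `parse_category(" Conceptual")` returns `"conceptual"`.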
Implementation Barriers
Technical Barrier and Implementation Challenge
Incorrect answers to student questions can harm learning and increase, rather than reduce, instructor workload. In addition, the model's effectiveness depends heavily on the quality of the prompts and the contextual cues provided.
Proposed Solutions: Implementing a classification system to differentiate question types allows tailored answering strategies to mitigate risks. Future work could focus on fine-tuning the model on specific course content and improving prompt design.
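The proposed mitigation, tailored answering strategies per question type with risky cases escalated to a human, can be sketched as a simple dispatcher. The strategy templates and routing rules below are assumptions for illustration, not the paper's actual prompts.

```python
# Stage 2 sketch: dispatch a classified question to a type-specific prompting
# strategy, and route "not answerable" questions to an instructor instead of
# answering automatically. Template wording is an illustrative assumption.

def answer_prompt_for(category: str, question: str, context: str = "") -> dict:
    """Pick an answering strategy for an already-classified question."""
    if category == "not answerable":
        # Do not guess: flag the question for a human instructor to limit
        # the risk of incorrect answers reaching students.
        return {"route": "instructor", "prompt": None}
    templates = {
        "conceptual": "Explain the following concept to a student:\n{q}",
        "homework": ("Give a hint, not a full solution, using this "
                     "assignment context:\n{c}\nQuestion: {q}"),
        "logistics": ("Answer using only the course syllabus below:\n{c}\n"
                      "Question: {q}"),
    }
    prompt = templates[category].format(q=question, c=context)
    return {"route": "llm", "prompt": prompt}
```

Grounding the homework and logistics strategies in course-specific context, as in the templates above, is one way to act on the proposal to fine-tune or condition the model on specific course content.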
Project Team
Brandon Jaipersaud
Researcher
Paul Zhang
Researcher
Jimmy Ba
Researcher
Andrew Petersen
Researcher
Lisa Zhang
Researcher
Michael R. Zhang
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Brandon Jaipersaud, Paul Zhang, Jimmy Ba, Andrew Petersen, Lisa Zhang, Michael R. Zhang
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI