A novel interface for adversarial trivia question-writing
Project Overview
The document describes a human-in-the-loop interface for writing adversarial trivia questions, aimed at improving question-answering AI in Quiz Bowl competitions. The interface helps users write questions that challenge both human players and AI systems, combining real-time feedback from machine learning models, a modular design that adapts to different users, and gamification elements that encourage engagement and participation. The findings suggest that such a system improves the quality of trivia questions while fostering a collaborative environment in which users interact with AI in meaningful ways. The outcomes point to a promising avenue for advancing AI question answering through creative, competitive educational activities.
Key Applications
Human-in-the-loop interface for adversarial trivia question-writing
Context: Quiz Bowl competitions, targeting educators, students, and trivia enthusiasts
Implementation: Developed as a modular interface using Flask and Vue.js, integrating machine learning models for feedback and question difficulty assessment
Outcomes: Facilitates the creation of challenging questions, promotes user engagement through gamification, and enhances the diversity of trivia questions
Challenges: Issues with the buzzer position regressing during question input, inaccuracies in identifying underrepresented countries, and the difficulty classifier miscategorizing question difficulty
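The interface's real-time feedback hinges on showing writers where the machine would buzz. The paper does not give the exact computation, but the idea can be sketched as scanning prefixes of the question and reporting the first word at which the model's confidence crosses a threshold. The function names and the toy confidence model below are illustrative assumptions, not the authors' implementation.

```python
def buzzer_position(question, confidence_fn, threshold=0.8):
    """Return the 1-based word index at which the model's confidence
    in its top guess first crosses the threshold, or None if it never does.
    `confidence_fn` stands in for the real QA model's confidence score."""
    words = question.split()
    for i in range(1, len(words) + 1):
        prefix = " ".join(words[:i])
        if confidence_fn(prefix) >= threshold:
            return i
    return None

# Toy confidence model (an assumption for illustration):
# confidence grows linearly with how much of the question has been seen.
conf = lambda prefix: min(1.0, len(prefix.split()) / 10)

question = "This author wrote a novel about a whale named Moby Dick"
print(buzzer_position(question, conf))  # buzzes after 8 of 11 words under this toy model
```

Under this framing, the "regressing buzzer" problem corresponds to the reported position moving earlier as more text arrives, which adjusting the confidence threshold is meant to stabilize.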
Implementation Barriers
Technical
The buzzer position sometimes regressed (moved earlier) as users typed, leaving writers unsure when the machine would actually buzz. The underrepresentation module also struggled to accurately identify and highlight underrepresented countries because it relied on character-based substring matching.
Proposed Solutions: Adjust the confidence threshold for buzzing and implement a more sophisticated entity-linking algorithm to better match user answers with machine guesses. Shift to a word-based search method for identifying key terms in questions.
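The difference between the failing character-based search and the proposed word-based search can be shown with a small sketch. The country list and helper names here are illustrative assumptions, not the module's actual code.

```python
import re

# Illustrative subset of countries the module might track.
UNDERREPRESENTED = {"chad", "oman", "mali", "togo"}

def find_countries_charbased(text):
    """Naive substring search: flags 'oman' inside 'woman' (false positive)."""
    low = text.lower()
    return {c for c in UNDERREPRESENTED if c in low}

def find_countries_wordbased(text):
    """Word-level search: only whole tokens match, avoiding partial-word hits."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return UNDERREPRESENTED & tokens

text = "A woman from Mali traveled through Chad."
print(find_countries_charbased(text))  # includes the spurious 'oman'
print(find_countries_wordbased(text))  # only 'mali' and 'chad'
```

This illustrates why shifting from character-based to word-based matching would improve the accuracy of the highlighting.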
Technical
The question difficulty classifier labeled all sample questions as high-school level even when they contained college-level clues.
Proposed Solutions: Train the classifier to evaluate the difficulty of individual sentences or clues rather than the entire question.
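The proposed fix can be sketched as classifying each sentence (clue) separately and labeling the whole question with the hardest clue's level. The marker-word classifier below is a stand-in assumption; the real system would use a trained ML model.

```python
DIFFICULTY = {"high_school": 0, "college": 1}

def classify_sentence(sentence):
    """Stand-in for a trained per-sentence difficulty classifier (assumption)."""
    college_markers = {"monograph", "dissertation", "lesser-known"}
    words = set(sentence.lower().split())
    return "college" if words & college_markers else "high_school"

def question_difficulty(question):
    """Label the question with its hardest clue's difficulty, instead of
    classifying the full question text at once."""
    sentences = [s.strip() for s in question.split(".") if s.strip()]
    levels = [classify_sentence(s) for s in sentences]
    return max(levels, key=lambda lv: DIFFICULTY[lv])

q = "His lesser-known monograph discussed whales. He wrote Moby Dick."
print(question_difficulty(q))  # the first clue pushes the label to college
```

Classifying per sentence keeps a single easy giveaway clue from dragging the whole question's label down to high-school level.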
Project Team
Jason Liu
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Jason Liu
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI