
Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification

Project Overview

The document explores the integration of generative AI in educational settings, emphasizing its role in enhancing intelligent tutoring systems through Active Learning (AL) methods for Dialogue Act (DA) classification. It underlines the importance of sample informativeness in classifier training, showing that statistical AL techniques can select more informative samples for annotation. This approach reduces the cost of manual annotation while improving classifier performance, leading to more effective and efficient educational tools. The findings suggest that combining these techniques with generative AI can advance the capabilities of intelligent tutoring systems, supporting personalized learning experiences and improving overall educational outcomes.

Key Applications

Active Learning for Dialogue Act Classification

Context: Educational dialogue analysis for intelligent tutoring systems, targeting educators and researchers in AI and education.

Implementation: Utilizing statistical active learning methods to select informative training samples for the classification of dialogue acts in tutoring sessions (see the sketch at the end of this section).

Outcomes: Improved classifier performance with fewer training samples and reduced costs for manual annotation.

Challenges: Obtaining high-quality annotated samples and ensuring the classifier learns effectively from them.
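The summary does not name the specific statistical AL methods the paper uses, so the sketch below assumes least-confidence uncertainty sampling as a representative strategy, with a TF-IDF plus logistic regression classifier and a toy pool of tutoring utterances; it is a minimal illustration of the query loop, not the authors' implementation.

```python
# Minimal active-learning loop for dialogue act (DA) classification.
# Assumptions (not from the paper): least-confidence uncertainty sampling
# as the informativeness measure, a TF-IDF + logistic regression classifier,
# and a toy utterance pool with hypothetical labels.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy tutoring utterances with hypothetical dialogue-act labels.
utterances = [
    "Can you explain why the answer is 12?",        # question
    "Great job, that is exactly right!",            # feedback
    "First, move the variable to the left side.",   # instruction
    "I don't understand this step.",                # question
    "Well done, you solved it correctly.",          # feedback
    "Now substitute x into the second equation.",   # instruction
    "What does this symbol mean?",                  # question
    "Nice work on that problem.",                   # feedback
]
labels = ["question", "feedback", "instruction", "question",
          "feedback", "instruction", "question", "feedback"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(utterances)
y = np.array(labels)

# Start with a small labelled seed; the rest form the unlabelled pool.
labelled = [0, 1, 2]
pool = [i for i in range(len(utterances)) if i not in labelled]

for round_id in range(3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[labelled], y[labelled])

    # Least-confidence informativeness: 1 - max predicted probability.
    probs = clf.predict_proba(X[pool])
    informativeness = 1.0 - probs.max(axis=1)

    # Query the single most informative pool utterance for annotation.
    query = pool[int(np.argmax(informativeness))]
    print(f"Round {round_id}: annotate -> {utterances[query]!r}")

    # Simulate the human annotation step by revealing the stored label.
    labelled.append(query)
    pool.remove(query)
```

Each round retrains the classifier on the growing labelled set and sends only the most uncertain utterance to the annotator, which is how AL reduces the number of manual annotations needed.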

Implementation Barriers

Operational

Manual annotation of dialogue acts is time-consuming and costly.

Proposed Solutions: Employing statistical active learning methods to reduce the need for extensive manual annotations by selecting the most informative samples.

Technical

Low informativeness in most annotated samples can hinder the classifier's learning.

Proposed Solutions: Applying active learning methods that prioritize high-informativeness samples for annotation.
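One common way to quantify informativeness is the entropy of the classifier's predicted class distribution; the summary does not state which criterion the paper adopts, so the following is a hedged illustration of that general idea.

```python
# Entropy-based informativeness score for candidate samples.
# Assumption: predictive entropy stands in for "informativeness"; the
# summary does not specify the paper's actual statistical criterion.
import numpy as np

def informativeness(probabilities: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of predicted class probabilities.

    Higher entropy means the classifier is less certain about the sample,
    so annotating it is expected to be more informative.
    """
    eps = 1e-12  # avoid log(0) for very confident predictions
    return -np.sum(probabilities * np.log(probabilities + eps), axis=1)

# Example: three pool samples, three dialogue-act classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> low informativeness
    [0.40, 0.35, 0.25],   # uncertain -> high informativeness
    [0.70, 0.20, 0.10],
])
print(informativeness(probs))  # the second sample scores highest
```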

Project Team

Wei Tan

Researcher

Jionghao Lin

Researcher

David Lang

Researcher

Guanliang Chen

Researcher

Dragan Gasevic

Researcher

Lan Du

Researcher

Wray Buntine

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Wei Tan, Jionghao Lin, David Lang, Guanliang Chen, Dragan Gasevic, Lan Du, Wray Buntine

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
