
Evaluation of mathematical questioning strategies using data collected through weak supervision

Project Overview

This project explores the use of generative AI in education through a high-fidelity AI-based classroom simulator designed to improve pre-service teachers' questioning strategies in mathematics. The simulator employs a human-in-the-loop methodology to gather high-quality training data for mathematical questioning scenarios, using weak supervision for efficient labeling. By interacting with a conversational agent that simulates student behavior, pre-service teachers can actively practice and refine their questioning techniques. This approach both strengthens their pedagogical skills and yields insight into what makes questioning effective, with the ultimate aim of fostering better classroom engagement and student learning outcomes. The findings suggest that such AI-driven tools can meaningfully support teacher training by creating realistic teaching environments in which educators develop and evaluate their skills in a controlled yet dynamic setting.

Key Applications

AI-based classroom simulator for rehearsing questioning strategies

Context: Pre-service teacher training in mathematics

Implementation: A conversational agent simulates student interactions while providing a platform for teachers to practice and refine their questioning techniques.

Outcomes: Improvement in questioning strategies, faster labeling of training data, and enhanced teacher training experiences.

Challenges: The subjective nature of evaluating open-ended questions and the difficulty of generating high-quality training data.
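The practice loop described above can be sketched in a few lines: a pre-service teacher poses questions, a simulated student replies, and each exchange is logged for later labeling. This is a minimal illustration only; the keyword-based student stub is an assumption and stands in for the paper's conversational agent.

```python
# Sketch of a human-in-the-loop practice session. The simulated_student
# policy below is a hypothetical placeholder, not the paper's dialogue model.

def simulated_student(question):
    # Placeholder response policy; a real simulator would use a trained
    # conversational agent to produce student-like behavior.
    if "why" in question.lower():
        return "I think it's because the pattern repeats."
    return "I'm not sure. Can you ask it another way?"

def practice_session(questions):
    """Run a scripted session and return (question, response) pairs
    that can later be labeled for training data."""
    return [(q, simulated_student(q)) for q in questions]

transcript = practice_session([
    "What is 1/2 plus 1/3?",
    "Why is 1/2 bigger than 1/3?",
])
for question, response in transcript:
    print(f"Teacher: {question}\nStudent: {response}")
```

Logging every (question, response) pair is what makes the simulator double as a data-collection tool: the same transcripts that give teachers practice also feed the labeling pipeline.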

Implementation Barriers

Data collection

The lack of high-quality labeled datasets specific to educational contexts, particularly for teacher questioning scenarios.

Proposed Solutions: Utilization of weak supervision and human-in-the-loop approaches to gather and label data effectively.
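A common way to operationalize weak supervision is to write several noisy heuristic labeling functions and aggregate their votes into a single weak label. The sketch below illustrates the idea with hypothetical cue-word heuristics and majority voting; the categories and rules are assumptions for illustration, not the labeling functions used in the paper.

```python
# Weak supervision sketch: noisy labeling functions vote on a question's
# category; conflicting or absent votes yield an abstention. All cues and
# category names here are illustrative assumptions.
from collections import Counter

ABSTAIN, PROBING, FACTUAL = -1, 0, 1

def lf_why_how(question):
    # "Why"/"how" questions often probe student reasoning.
    q = question.lower()
    return PROBING if ("why" in q or "how" in q) else ABSTAIN

def lf_what_is(question):
    # "What is ..." questions often request a fact or procedure.
    return FACTUAL if question.lower().startswith("what is") else ABSTAIN

def lf_explain(question):
    return PROBING if "explain" in question.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_why_how, lf_what_is, lf_explain]

def weak_label(question):
    """Aggregate labeling-function votes by majority; abstain on ties or no votes."""
    votes = [v for lf in LABELING_FUNCTIONS if (v := lf(question)) != ABSTAIN]
    if not votes:
        return ABSTAIN
    (top, n), *rest = Counter(votes).most_common()
    if rest and rest[0][1] == n:  # tie between categories
        return ABSTAIN
    return top
```

Because each heuristic is cheap to write and abstains when unsure, many imperfect rules can together label far more data than manual annotation alone, which is the efficiency gain weak supervision offers here.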

Evaluation challenges

Evaluating the effectiveness of questioning strategies is subjective and context-dependent.

Proposed Solutions: Developing structured rubrics like the Instructional Quality Assessment (IQA) to categorize and assess questioning types.
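A structured rubric can be applied programmatically by mapping each question to the first rubric level whose cues it matches. The sketch below shows the general pattern only; the level names and cue phrases are assumptions inspired by IQA-style question categories, not the actual IQA instrument.

```python
# Rubric-based categorization sketch. The levels and cue phrases below are
# hypothetical stand-ins for an IQA-style rubric, ordered from most to
# least cognitively demanding.
RUBRIC = [
    ("probing",    ["why", "how do you know", "explain your reasoning"]),
    ("exploring",  ["what if", "another way", "another example"]),
    ("procedural", ["what is", "calculate", "solve"]),
]

def categorize(question):
    """Return the first matching rubric level, or 'uncategorized'."""
    q = question.lower()
    for level, cues in RUBRIC:
        if any(cue in q for cue in cues):
            return level
    return "uncategorized"
```

Ordering the rubric from most to least demanding means a question matching several levels is credited at the highest one, which mirrors how structured rubrics are typically scored.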

Project Team

Debajyoti Datta

Researcher

Maria Phillips

Researcher

James P Bywater

Researcher

Jennifer Chiu

Researcher

Ginger S. Watson

Researcher

Laura E. Barnes

Researcher

Donald E Brown

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Debajyoti Datta, Maria Phillips, James P Bywater, Jennifer Chiu, Ginger S. Watson, Laura E. Barnes, Donald E Brown

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
