
Utilizing Natural Language Processing for Automated Assessment of Classroom Discussion

Project Overview

This project explores the application of Natural Language Processing (NLP) techniques, including generative AI, to education, focusing on automated assessment of classroom discussion quality. It addresses the challenges of evaluating discussions at scale and presents a study that used several NLP models to generate rubric scores from discussion transcripts. The findings indicate that these models can effectively automate the assessment process, though their performance varies across specific instructional quality rubrics. Overall, the study demonstrates the potential of NLP to provide scalable, objective evaluations of classroom interactions, supporting educators in assessing student engagement and discussion quality more efficiently.

Key Applications

Automated assessment of classroom discussions using NLP techniques

Context: Classroom settings for fourth and fifth grade English Language Arts in a Texas district, focusing on low-income students.

Implementation: Applied pre-trained language models and a BiLSTM sequence labeler to predict rubric scores from discussion transcripts.

Outcomes: Improved scoring accuracy and interpretability of classroom discussion quality assessments, enabling real-time feedback for teachers.

Challenges: Limited dataset size, imbalanced data for certain ATM codes, and sensitivity to misclassification affecting IQA scores.
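The pipeline described above, in which per-utterance talk-move predictions are aggregated into rubric-level scores, can be sketched in plain Python. The label names and count thresholds below are hypothetical illustrations, not values from the paper; the sketch only shows the general shape of mapping sequence-labeling output to a 1-4 rubric score.

```python
from collections import Counter

def iqa_rubric_score(utterance_labels, target_move, thresholds=(1, 3, 5)):
    """Map per-utterance talk-move predictions to a 1-4 rubric score.

    utterance_labels: predicted code for each utterance in a transcript
                      (e.g. output of a BiLSTM sequence labeler)
    target_move: the talk move this rubric dimension counts (hypothetical name)
    thresholds: hypothetical counts separating scores 1|2, 2|3, and 3|4
    """
    count = Counter(utterance_labels)[target_move]
    score = 1
    for t in thresholds:
        if count >= t:
            score += 1
    return score

# A short transcript labeled by a (hypothetical) sequence model:
labels = ["other", "press_for_reasoning", "other",
          "press_for_reasoning", "link_contributions"]
print(iqa_rubric_score(labels, "press_for_reasoning"))  # 2 occurrences -> score 2
```

This threshold structure also illustrates the sensitivity noted above: a single misclassified utterance near a threshold boundary is enough to flip the derived rubric score.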

Implementation Barriers

Data Limitations and Model Performance

The dataset is limited in size and diversity, which may affect the reliability of the models. Different NLP approaches yield varying performance across specific IQA rubrics, indicating that no single model is universally effective.

Proposed Solutions: Explore transfer learning to leverage data from other classroom discussions; combine multiple models to cover different rubrics; improve model interpretability.

Project Team

Nhat Tran

Researcher

Benjamin Pierce

Researcher

Diane Litman

Researcher

Richard Correnti

Researcher

Lindsay Clare Matsumura

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Nhat Tran, Benjamin Pierce, Diane Litman, Richard Correnti, Lindsay Clare Matsumura

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
