
Inconsistencies in the Definition and Annotation of Student Engagement in Virtual Learning Datasets: A Critical Review

Project Overview

The paper critically reviews the datasets and AI models used to measure student engagement (SE) in virtual learning environments. It identifies significant inconsistencies in how engagement is defined and annotated across current datasets, inconsistencies that impede the creation of generalizable AI models for accurately assessing SE. The authors stress the need for standardized definitions and robust annotation methodologies to improve the validity and comparability of engagement data across studies. Once these challenges are addressed, the authors suggest, AI can play a transformative role in understanding and improving student engagement through better data analysis and insights. Ultimately, the findings underscore the potential of AI applications in education while highlighting the groundwork needed to ensure effective implementation and measurement practices.

Key Applications

AI models for automatic measurement of student engagement

Context: Virtual learning environments, primarily students in online or computer-based learning sessions.

Implementation: Models trained with supervised machine learning on annotated datasets spanning multiple modalities (video, audio, etc.); a minimal sketch follows this list.

Outcomes: Improved ability to assess student engagement objectively and in real time, potentially leading to enhanced learning outcomes and reduced dropout rates.

Challenges: Inconsistent definitions and annotation protocols across datasets, which complicate model training and cross-study comparison.
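
As an illustration of the supervised approach above, the following Python sketch trains a classifier on per-clip feature vectors with ordinal engagement labels. Everything here is a placeholder: the synthetic features stand in for pooled video/audio descriptors, and the 4-level label scale mirrors common engagement datasets; this is not the pipeline of any specific study the paper reviews.

```python
# Minimal sketch of the supervised approach: train a classifier on
# per-clip feature vectors with ordinal engagement labels. All data
# and names here are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in for features extracted from video/audio clips (e.g., gaze,
# head pose, or facial action-unit statistics pooled over a short window).
X = rng.normal(size=(500, 32))
# Stand-in for 4-level ordinal engagement labels
# (0 = disengaged ... 3 = highly engaged).
y = rng.integers(0, 4, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Because the labels are ordinal, some studies instead frame this as regression; that choice affects which evaluation metrics are meaningful and how results compare across datasets.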

Implementation Barriers

Data Quality

Inconsistencies in definitions and annotation protocols for student engagement across datasets.

Proposed Solutions: Establishing standardized definitions and annotation protocols for engagement measurement across studies to ensure comparability and reliability (see the agreement sketch below).
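
One concrete way to operationalize comparability and reliability is to report inter-annotator agreement alongside any engagement dataset. The sketch below uses Cohen's kappa from scikit-learn on illustrative labels; the paper does not prescribe this particular statistic, so treat it as one reasonable choice.

```python
# Minimal sketch of quantifying annotation reliability with Cohen's
# kappa. The labels are illustrative; the statistic, not the data,
# is the point.
from sklearn.metrics import cohen_kappa_score

# Engagement labels (0-3) assigned by two annotators to the same clips.
annotator_a = [3, 2, 2, 0, 1, 3, 2, 1, 0, 2]
annotator_b = [3, 2, 1, 0, 1, 3, 3, 1, 0, 2]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.74 on these labels
```

Reporting such a statistic with each dataset release would make the reliability of its labels directly comparable across studies.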

Implementation Feasibility

Challenges in collecting high-quality annotated data due to reliance on expert judgment or self-reports, which can be labor-intensive and subject to bias.

Proposed Solutions: Utilizing automated data collection methods and incorporating both observer-based and self-reported annotations for better accuracy (a reconciliation sketch follows).
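
The sketch below illustrates one way to combine observer-based and self-reported labels for the same sessions: accept labels when the two sources roughly agree, and flag larger disagreements for review. The `Session` structure, the `max_gap` threshold, and the fusion rule are all hypothetical choices, not a method from the paper.

```python
# Minimal sketch of reconciling observer-based and self-reported
# engagement labels for the same sessions. The fusion rule is one
# illustrative choice among many.
from dataclasses import dataclass

@dataclass
class Session:
    session_id: str
    observer_label: int   # 0-3 scale assigned by a trained observer
    self_report: int      # 0-3 scale from the student's own rating

def reconcile(sessions, max_gap=1):
    """Keep labels where the two sources differ by at most `max_gap`
    (averaged, rounded down); flag larger disagreements for review."""
    accepted, flagged = [], []
    for s in sessions:
        if abs(s.observer_label - s.self_report) <= max_gap:
            accepted.append((s.session_id, (s.observer_label + s.self_report) // 2))
        else:
            flagged.append(s.session_id)
    return accepted, flagged

sessions = [Session("s1", 3, 3), Session("s2", 2, 0), Session("s3", 1, 2)]
accepted, flagged = reconcile(sessions)
print(accepted)  # [('s1', 3), ('s3', 1)]
print(flagged)   # ['s2']
```

Flagged sessions could then be re-annotated or excluded, trading dataset size for label quality.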

Project Team

Shehroz S. Khan

Researcher

Ali Abedi

Researcher

Tracey Colella

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Shehroz S. Khan, Ali Abedi, Tracey Colella

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
