
A First Step in Using Machine Learning Methods to Enhance Interaction Analysis for Embodied Learning Environments

Project Overview

This project examines the application of Machine Learning (ML) and Multimodal Learning Analytics (MMLA), including generative AI, to enhance Interaction Analysis (IA) in embodied learning environments. Through a case study of fourth-grade students learning about photosynthesis, it shows how ML-supported analysis can yield insights into students' engagement in scientific practices. By integrating diverse data modalities, such as motion tracking, gaze patterns, and emotional responses, into a cohesive visual timeline, the study aims to give researchers a deeper understanding of students' learning behaviors. The findings suggest that this approach not only streamlines analysis but also enables a more nuanced account of how students interact with and come to understand complex scientific ideas, illustrating the potential of AI technologies to support learning experiences tailored to individual engagement and understanding.

Key Applications

Visual timeline for Interaction Analysis

Context: Mixed-reality learning environments for fourth-grade students learning about photosynthesis

Implementation: Developed a visual timeline that integrates multimodal data (motion tracking, gaze, affect) to analyze student behavior during embodied learning activities (a minimal data-alignment sketch follows the Challenges item below).

Outcomes: Enhanced understanding of students' scientific engagement and interactions; allowed researchers to identify critical learning moments and emotional responses.

Challenges: Complexity of managing diverse multimodal data; need for accurate emotion detection among children; ensuring scalability of technology.
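
The multimodal integration described above can be pictured with a minimal sketch. The snippet below aligns hypothetical motion, gaze, and affect streams onto a shared clock and merges them into a single table suitable for plotting as a timeline; all column names, sampling rates, and values are illustrative assumptions, not the project's actual data pipeline.

```python
import pandas as pd

# Hypothetical per-modality streams; columns, labels, and rates are illustrative.
motion = pd.DataFrame({"t": [0.00, 0.10, 0.20], "speed": [0.1, 0.4, 0.2]})
gaze   = pd.DataFrame({"t": [0.00, 0.25],       "target": ["plant", "sun"]})
affect = pd.DataFrame({"t": [0.00, 0.30],       "emotion": ["neutral", "joy"]})

def to_timeline(streams, freq="100ms"):
    """Resample each stream onto a shared clock and merge into one table."""
    merged = None
    for name, df in streams.items():
        df = df.copy()
        df["t"] = pd.to_timedelta(df["t"], unit="s")
        # Take the latest observation in each time bin, then forward-fill gaps.
        df = df.set_index("t").resample(freq).last().ffill()
        df.columns = [f"{name}_{c}" for c in df.columns]
        merged = df if merged is None else merged.join(df, how="outer")
    return merged.ffill().reset_index()

timeline = to_timeline({"motion": motion, "gaze": gaze, "affect": affect})
print(timeline.head())
```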

Implementation Barriers

Technical

Challenges in accurately detecting and interpreting multimodal data, especially in dynamic environments with multiple students.

Proposed Solutions: Utilization of advanced machine learning algorithms for improved data analysis; ongoing research for better emotion recognition models tailored for children.
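
As one illustration of post-processing that could make emotion detection more robust in a dynamic classroom, the sketch below smooths noisy per-frame emotion predictions for a single child with a sliding-window majority vote. This is a generic technique shown under stated assumptions, not the emotion recognition model used in the project.

```python
from collections import Counter

def smooth_emotions(frame_labels, window=15):
    """Stabilize jittery per-frame emotion predictions with a
    sliding-window majority vote."""
    smoothed = []
    for i in range(len(frame_labels)):
        lo = max(0, i - window // 2)
        hi = min(len(frame_labels), i + window // 2 + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# Example: a noisy detector output for one child (illustrative labels)
raw = ["neutral"] * 5 + ["joy"] + ["neutral"] * 4 + ["joy"] * 10
print(smooth_emotions(raw, window=7))
```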

Resource

The need for substantial human resources to interpret complex data from IA and AI methods.

Proposed Solutions: Implement AI-in-the-loop methods to support researchers without replacing their critical interpretative role.
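
A minimal sketch of what an AI-in-the-loop triage step could look like is shown below: model-detected segments carry a confidence score, and low-confidence segments are queued for researcher review rather than being auto-labeled, keeping interpretation in human hands. The segment fields, labels, and threshold are illustrative assumptions, not the project's workflow.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float          # seconds into the session
    end: float
    predicted_label: str  # model-detected engagement state (illustrative)
    confidence: float     # model confidence in [0, 1]

def triage(segments, threshold=0.8):
    """Split model output into auto-accepted labels and items needing
    human review, preserving the researcher's interpretive role."""
    auto, review = [], []
    for seg in segments:
        (auto if seg.confidence >= threshold else review).append(seg)
    return auto, review

segments = [
    Segment(12.0, 18.5, "observing", 0.93),
    Segment(18.5, 25.0, "modeling photosynthesis", 0.55),
]
auto, review = triage(segments)
print(f"{len(auto)} auto-labeled, {len(review)} queued for researcher review")
```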

Project Team

Joyce Fonteles

Researcher

Eduardo Davalos

Researcher

Ashwin T. S.

Researcher

Yike Zhang

Researcher

Mengxi Zhou

Researcher

Efrat Ayalon

Researcher

Alicia Lane

Researcher

Selena Steinberg

Researcher

Gabriella Anton

Researcher

Joshua Danish

Researcher

Noel Enyedy

Researcher

Gautam Biswas

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Joyce Fonteles, Eduardo Davalos, Ashwin T. S., Yike Zhang, Mengxi Zhou, Efrat Ayalon, Alicia Lane, Selena Steinberg, Gabriella Anton, Joshua Danish, Noel Enyedy, Gautam Biswas

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
