
ClickSight: Interpreting Student Clickstreams to Reveal Insights on Learning Strategies via LLMs

Project Overview

This project overview introduces ClickSight, a Large Language Model (LLM)-based pipeline that interprets student clickstream data from digital learning environments. By evaluating different prompting strategies, ClickSight generates interpretations of student behavior during interactions with educational content. The findings underscore the potential of LLMs to provide deeper insight into student engagement and learning patterns, which can inform instructional design and personalized learning. Remaining challenges include ensuring the quality of interpretations and the generalizability of results across diverse educational contexts. Overall, ClickSight illustrates how generative AI can harness interaction data to improve the understanding of student behavior in digital learning settings.

Key Applications

ClickSight

Context: Educational environments including PharmaSim (pharmacy assistant training) and Beer’s Law Lab (virtual chemistry lab for secondary students).

Implementation: ClickSight takes student clickstream data as input, applies predefined learning strategies, and uses several prompting methods to generate interpretations. A self-refinement step improves output quality.
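The pipeline described above can be sketched in Python. This is a minimal, hypothetical sketch, not the authors' implementation: the function names (`build_prompt`, `interpret_clicks`), the event format, the strategy list, and the refinement wording are all illustrative assumptions. The LLM is passed in as a plain `str -> str` callable so any backend can be plugged in.

```python
from typing import Callable, Dict, List

# Hypothetical learning strategies the prompt asks the model to consider.
STRATEGIES = ["systematic exploration", "trial and error", "hypothesis testing"]

def build_prompt(events: List[Dict], strategies: List[str]) -> str:
    """Render a clickstream into a prompt asking for a strategy interpretation."""
    log = "\n".join(f"{e['t']}s: {e['action']} on {e['target']}" for e in events)
    return (
        "Given the following student clickstream, describe which learning "
        f"strategies ({', '.join(strategies)}) the student appears to use.\n"
        f"Clickstream:\n{log}"
    )

def interpret_clicks(
    events: List[Dict], llm: Callable[[str], str], refine_rounds: int = 1
) -> str:
    """Generate an interpretation, then self-refine it a fixed number of times."""
    interpretation = llm(build_prompt(events, STRATEGIES))
    for _ in range(refine_rounds):
        # Self-refinement: ask the model to revise its own previous output.
        critique_prompt = (
            "Revise the interpretation below so it is grounded only in the "
            f"listed events and the given strategies.\n{interpretation}"
        )
        interpretation = llm(critique_prompt)
    return interpretation
```

Keeping `llm` as a callable also makes the pipeline easy to unit-test with a stub before wiring in a real chat-completion client.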

Outcomes: High-quality interpretations of student clickstreams with insights into learning strategies, aiding in understanding student behaviors and providing timely interventions.

Challenges: Varying interpretation quality based on prompting strategies and the complexity of clickstream data, which can lead to errors and inconsistent outputs.

Implementation Barriers

Technical Barrier

High dimensionality and granularity of clickstream data make interpretation challenging and require significant manual effort for expert labeling. Interpretation quality also varies based on prompting strategies, and self-refinement can introduce errors.

Proposed Solutions: Utilizing LLMs to automate interpretation and reduce the reliance on manual feature engineering. Evaluating multiple prompting strategies and refining outputs based on expert feedback to ensure quality.
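One way to realize "evaluating multiple prompting strategies" is to keep each strategy as a named template and render the same clickstream through all of them, so outputs can be compared side by side. A minimal sketch; the template names and wording are illustrative assumptions, not taken from the paper.

```python
# Hypothetical prompt templates for three common prompting strategies;
# the wording is illustrative, not the authors' prompts.
PROMPT_TEMPLATES = {
    "zero-shot": "Interpret the learning strategy in this clickstream:\n{log}",
    "few-shot": (
        "Example: <clickstream> -> <interpretation>\n"
        "Now interpret the learning strategy in this clickstream:\n{log}"
    ),
    "chain-of-thought": (
        "Think step by step about what the student does, then summarize "
        "the learning strategy in this clickstream:\n{log}"
    ),
}

def render_all(log: str) -> dict:
    """Render one clickstream log through every prompting strategy."""
    return {name: tpl.format(log=log) for name, tpl in PROMPT_TEMPLATES.items()}
```

Expert ratings of the resulting interpretations would then be collected per template, which is how quality differences across strategies become measurable.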

Project Team

Bahar Radmehr

Researcher

Ekaterina Shved

Researcher

Fatma Betül Güreş

Researcher

Adish Singla

Researcher

Tanja Käser

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Bahar Radmehr, Ekaterina Shved, Fatma Betül Güreş, Adish Singla, Tanja Käser

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
