
Personalized Parsons Puzzles as Scaffolding Enhance Practice Engagement Over Just Showing LLM-Powered Solutions

Project Overview

This project examines the application of generative AI in programming education, focusing on personalized Parsons puzzles as a scaffolding tool. It compares two groups of students: one receiving complete AI-generated code solutions and another working with personalized Parsons puzzles aligned with their current coding tasks. The findings show that students working with Parsons puzzles spent significantly more time practicing and interacting with the material than those who received full solutions. This suggests that structured, tailored support built on generative AI can enhance student engagement and may improve learning outcomes in programming education. The study highlights the potential of AI not only to assist in content delivery but also to foster deeper understanding of programming concepts by promoting active problem-solving and sustained practice.

Key Applications

Personalized Parsons Puzzles as scaffolding for programming practice

Context: Undergraduate programming course focusing on Python basics

Implementation: Conducted a randomized between-subjects study with two conditions: one providing personalized Parsons puzzles (PC) and the other providing complete AI-generated solutions (CC).
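The paper's own puzzle-generation pipeline is not reproduced here; the sketch below only illustrates the general idea behind a Parsons puzzle (the function names and structure are assumptions, not the authors' code): a correct solution, such as one produced by an LLM, is split into lines and shuffled, and the learner must reassemble the original order.

```python
import random


def make_parsons_puzzle(solution_code, seed=None):
    """Turn a solution into a Parsons puzzle by shuffling its lines.

    Returns (shuffled_lines, correct_order). Indentation is kept on each
    line, so the learner only has to recover the ordering.
    """
    correct = [line for line in solution_code.strip("\n").splitlines()
               if line.strip()]
    shuffled = correct[:]
    rng = random.Random(seed)
    # Reshuffle until the puzzle differs from the solution (when possible).
    while len(correct) > 1 and shuffled == correct:
        rng.shuffle(shuffled)
    return shuffled, correct


def check_solution(attempt, correct):
    """An attempt is correct when it matches the solution ordering exactly."""
    return attempt == correct
```

For example, `make_parsons_puzzle("a = 1\nb = 2\nc = a + b")` yields the three statements in scrambled order, and `check_solution` verifies a learner's reordering against the original.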

Outcomes: Students in the PC condition engaged in practice significantly longer and showed deeper cognitive engagement compared to those in the CC condition.

Challenges: Some PC students felt the puzzles offered too much support, while some CC students reported that receiving full solutions discouraged independent thinking.

Implementation Barriers

Technical Limitations

Some students encountered technical difficulties that detracted from their learning experience.

Proposed Solutions: Future investigations will include optimizing support triggers and analyzing behavior log data for better engagement metrics.

Time Constraints

The study was limited to an 80-minute class period, which did not allow for a post-test to measure long-term learning outcomes.

Proposed Solutions: Future work aims to explore other metrics and possibly adjust the study design to allow for more comprehensive assessment.

Project Team

Xinying Hou

Researcher

Zihan Wu

Researcher

Xu Wang

Researcher

Barbara J. Ericson

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Xinying Hou, Zihan Wu, Xu Wang, Barbara J. Ericson

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
