Not Just Training, Also Testing: High School Youths' Perspective-Taking through Peer Testing Machine Learning-Powered Applications
Project Overview
This document summarizes a study on integrating generative AI into K-12 education, focusing on the role of peer testing in machine learning (ML) curricula. It argues that ML education must address both training and testing models, and advocates engaging students directly in testing to deepen their understanding of the factors that shape model performance, including data diversity and context. Through peer testing, students reflect critically on how well ML applications work, building skills such as perspective-taking and analytical thinking. The findings indicate that this participatory approach not only deepens students' understanding of ML concepts but also improves their ability to apply these technologies, demonstrating the potential of generative AI to enhance educational outcomes by promoting active learning and critical engagement with technology.
Key Applications
Peer testing of ML-powered physical computing projects
Context: K-12 education, specifically high school youths (ages 13-15)
Implementation: A two-week workshop in which students created ML classifiers for electronic textile projects and peer-tested each other's models.
Outcomes: Participants reflected on model performance, identified issues related to data diversity and dataset size, and made recommendations for improvements.
Challenges: Some participants struggled with understanding the iterative nature of testing and may have needed additional scaffolding to fully appreciate the testing process.
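The train-then-peer-test cycle described above can be illustrated with a small sketch. This is not code from the paper: it uses a toy nearest-centroid classifier and synthetic "sensor readings" to show the core lesson of peer testing, namely that a model scoring well on its creator's own held-out data can degrade on a peer's data collected under different conditions.

```python
import random

def nearest_centroid_fit(samples):
    """Compute one centroid per class from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify by the closest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(features, centroids[c])))

def accuracy(centroids, samples):
    return sum(predict(centroids, f) == y for f, y in samples) / len(samples)

def make_data(n, offset, rng):
    """Synthetic two-class gesture readings; `offset` models a different
    user or context (e.g. a peer wearing the e-textile differently)."""
    data = []
    for _ in range(n):
        label = rng.choice(["wave", "tap"])
        base = 1.0 if label == "wave" else -1.0
        data.append(([base + offset + rng.gauss(0, 0.3),
                      base + offset + rng.gauss(0, 0.3)], label))
    return data

rng = random.Random(0)
own_data = make_data(40, offset=0.0, rng=rng)   # creator's own recordings
peer_data = make_data(40, offset=1.2, rng=rng)  # peer's conditions differ

model = nearest_centroid_fit(own_data[:30])
print("own held-out accuracy:", accuracy(model, own_data[30:]))
print("peer-test accuracy:   ", accuracy(model, peer_data))
```

In this sketch the peer-test accuracy drops because the training data covered only one person's conditions, which is the kind of data-diversity issue the workshop participants identified.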
Implementation Barriers
Educational Practice Barrier
Current ML education focuses predominantly on training models, with little emphasis on the importance of testing.
Proposed Solutions: Integrating structured testing sessions into the curriculum and providing students with prompts and scaffolding to facilitate testing.
Cognitive Barrier
Youths may find it challenging to identify certain issues in model performance, such as biases in data or class design.
Proposed Solutions: Researching and implementing scaffolds that help students better understand and recognize these complex issues.
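One concrete form such a scaffold could take (a hypothetical illustration, not a tool from the paper) is an automated dataset "report card" that surfaces issues youths find hard to spot on their own, such as tiny classes and class imbalance:

```python
from collections import Counter

def dataset_report(labels, min_per_class=10, max_ratio=3.0):
    """Flag potential data issues before training: classes with too few
    samples, and imbalance between the largest and smallest class.
    Thresholds are illustrative defaults, not values from the study."""
    counts = Counter(labels)
    issues = []
    smallest = min(counts.values())
    largest = max(counts.values())
    for label, n in sorted(counts.items()):
        if n < min_per_class:
            issues.append(f"class '{label}' has only {n} samples "
                          f"(recommend at least {min_per_class})")
    if largest / smallest > max_ratio:
        issues.append(f"imbalance: largest class has {largest} samples, "
                      f"smallest has {smallest}")
    return counts, issues

# Example: a student recorded many "wave" gestures but few "tap" gestures.
labels = ["wave"] * 25 + ["tap"] * 4
counts, issues = dataset_report(labels)
print(dict(counts))
for issue in issues:
    print("WARNING:", issue)
```

A report like this turns abstract concerns about data bias into concrete, checkable prompts students can act on before peer testing.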
Project Team
L. Morales-Navarro
Researcher
M. Shah
Researcher
Y. B. Kafai
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: L. Morales-Navarro, M. Shah, Y. B. Kafai
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI