
Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications

Project Overview

This project explores the integration of generative AI in education through a workshop in which teenagers served as auditors of machine learning (ML) applications, deepening their understanding of algorithmic systems. Participants designed and audited each other's projects, which enabled them to identify algorithmic biases and deliberate on algorithmic justice. The initiative highlights the value of empowering youth in AI/ML, promoting computational literacy among students, and establishing a framework for algorithm auditing in educational settings. The findings suggest that such hands-on experiences not only build students' technical skills but also cultivate critical thinking about the ethical implications of AI technologies, preparing them to navigate and influence an increasingly algorithm-driven world. Overall, the paper advocates educational approaches that actively involve students in the scrutiny of AI applications, equipping them with the knowledge and tools to engage responsibly with emerging technologies.

Key Applications

Peer auditing of ML-powered applications

Context: Workshop with youth aged 14 to 15, focusing on algorithm auditing and understanding ML applications.

Implementation: A two-week workshop where participants designed and audited each other's ML applications, rotating roles as auditors.

Outcomes: Participants identified algorithmic biases, discussed improvements for models, and gained new perspectives on AI/ML functionality.

Challenges: Participants lacked prior exposure to AI/ML concepts, and their inexperience made systematic auditing difficult.

Implementation Barriers

Knowledge barrier

Lack of transparency in ML models and insufficient understanding of AI/ML concepts among youth.

Proposed Solutions: Educational workshops and hands-on activities to enhance familiarity with AI/ML concepts.

Cognitive barrier

Difficulty in systematically auditing algorithms due to inexperience and preconceived notions about auditing.

Proposed Solutions: Guided activities and structured frameworks for conducting audits.

Project Team

Luis Morales-Navarro

Researcher

Yasmin B. Kafai

Researcher

Vedya Konda

Researcher

Danaë Metaxa

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Luis Morales-Navarro, Yasmin B. Kafai, Vedya Konda, Danaë Metaxa

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
