
Learning about Data, Algorithms, and Algorithmic Justice on TikTok in Personally Meaningful Ways

Project Overview

This project explores the role of generative AI in education, particularly on platforms like TikTok, with a focus on strengthening educational practice and fostering critical data literacy among students. It shows how AI-driven tools enable learners to investigate topics such as algorithmic justice, cultivating essential critical-thinking skills. Drawing on several studies, the work highlights the potential of social media as a learning platform while addressing the ethical implications and societal impacts of AI in educational contexts. The overarching goal is to promote critical AI literacy in K-12 education and to raise awareness of the responsibilities and challenges posed by AI technologies.

Key Applications

Generative AI tools for critical data literacy and algorithmic understanding

Context: Middle and high school students engaging with generative AI tools like TikTok and ChatGPT for creative expression, critical inquiry, and ethical understanding within their curriculum.

Implementation: Workshops and guided sessions where students utilize AI tools for research, writing, and informal audits of algorithms, fostering discussions about algorithmic justice and ethical implications of AI systems.

Outcomes:

- Students develop critical data literacy by understanding the biases in AI systems.
- Enhanced understanding of data and algorithms.
- Improved critical thinking skills.
- Students advocate for systemic change regarding algorithmic justice.

Challenges:

- Students may struggle to articulate diverse perspectives due to personal and political sensitivities around social issues.
- Navigating the ethical implications and biases of AI tools, ensuring students become critical consumers of information.
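To make the idea of an "informal audit" concrete: one common classroom exercise is for students to log the topic of each video their recommendation feed serves them, then tally the topics to see what the algorithm appears to favor. The sketch below illustrates this with invented sample data; the topic labels and feed log are hypothetical, not drawn from the project itself.

```python
from collections import Counter

# Hypothetical feed log: a student records the topic of each video shown.
# This is invented sample data for illustration only.
feed_log = [
    "dance", "comedy", "dance", "news", "dance",
    "comedy", "dance", "beauty", "dance", "comedy",
]

counts = Counter(feed_log)          # how often each topic appeared
total = len(feed_log)
shares = {topic: n / total for topic, n in counts.items()}

# Print topics from most- to least-recommended.
for topic, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {share:.0%}")
```

A tally like this gives students a simple, evidence-based starting point for discussing why certain content dominates their feeds.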

Implementation Barriers

Ethical and Technical Concerns

AI systems can perpetuate biases and inequalities, disproportionately affecting marginalized groups. Additionally, AI tools may not work effectively for all users, particularly those from diverse backgrounds.

Proposed Solutions: Promote discussions on algorithmic justice, involve communities in the design and application of AI systems, implement diverse datasets, and seek input from affected communities to improve AI system performance.
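One simple way to ground the claim that "AI tools may not work effectively for all users" is to compare a system's accuracy across user groups. The sketch below is a minimal, hypothetical illustration with invented predictions; the group names and labels are assumptions, not data from the project.

```python
# Hypothetical sketch: compare prediction accuracy across two user groups.
# A large gap between groups can flag potential bias worth investigating.
records = [
    # (group, true_label, predicted_label) -- invented sample data
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

def accuracy_by_group(records):
    """Return the fraction of correct predictions for each group."""
    totals, correct = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

print(accuracy_by_group(records))
# group_a is right 3 of 4 times, group_b only 1 of 4 -- a gap this large
# suggests the system serves one group far better than the other.
```

Disaggregating performance this way is a standard first step before involving affected communities in diagnosing and fixing the disparity.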

Project Team

Luis Morales-Navarro

Researcher

Yasmin B. Kafai

Researcher

Ha Nguyen

Researcher

Kayla DesPortes

Researcher

Ralph Vacca

Researcher

Camillia Matuk

Researcher

Megan Silander

Researcher

Anna Amato

Researcher

Peter Woods

Researcher

Francisco Castro

Researcher

Mia Shaw

Researcher

Selin Akgun

Researcher

Christine Greenhow

Researcher

Antero Garcia

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Luis Morales-Navarro, Yasmin B. Kafai, Ha Nguyen, Kayla DesPortes, Ralph Vacca, Camillia Matuk, Megan Silander, Anna Amato, Peter Woods, Francisco Castro, Mia Shaw, Selin Akgun, Christine Greenhow, Antero Garcia

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
