
Words of Wisdom: Representational Harms in Learning From AI Communication

Project Overview

This project examines the role of generative AI in education, highlighting its potential benefits and challenges, with a particular focus on representational harms in AI communication. AI-generated language can inadvertently convey identity information that misrepresents or perpetuates stereotypes about marginalized groups. Through a case study on Visual Question Generation (VQG), the work underscores the necessity of considering identity in AI applications to foster equality, diversity, and inclusion (EDI) in educational settings. The findings suggest that while generative AI can enhance and personalize learning, it requires careful implementation to avoid reinforcing biases. Ultimately, the project advocates a mindful integration of generative AI in education that prioritizes fair representation and promotes an inclusive learning environment.

Key Applications

Visual Question Generation (VQG)

Context: Educational settings, particularly for young learners and underserved communities.

Implementation: Crowdsourcing demographic data to create a dataset for generating questions based on images, involving participants from various demographics, especially those historically underserved.

Outcomes: Creation of a dataset that includes demographic information, allowing for analysis of how identity influences perceptions of AI-generated questions.

Challenges: Potential for representational harms if the generated language reflects dominant cultural identities, and difficulties in gathering a diverse participant pool for data collection.
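The analysis described above — comparing how participants from different demographic groups perceive AI-generated questions — can be sketched as a simple aggregation over rated examples. The record fields and sample data below are illustrative assumptions, not the paper's actual dataset schema:

```python
from collections import defaultdict

# Hypothetical records: each crowdworker rates an AI-generated question
# about an image and self-reports a demographic group. Field names and
# values are illustrative only.
records = [
    {"question": "What game are the children playing?", "group": "A", "rating": 4},
    {"question": "What game are the children playing?", "group": "B", "rating": 2},
    {"question": "Who is in the picture?", "group": "A", "rating": 5},
    {"question": "Who is in the picture?", "group": "B", "rating": 5},
]

def mean_rating_by_group(records):
    """Average rating each demographic group gives to generated questions.

    Large gaps between groups suggest the generated language resonates
    with some identities more than others.
    """
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r["rating"])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

print(mean_rating_by_group(records))  # {'A': 4.5, 'B': 3.5}
```

A gap like the one above (group B rating the same questions lower than group A) is the kind of signal that would prompt a closer look at whether the generated language reflects one dominant cultural identity.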

Implementation Barriers

Data Collection Barrier

Challenges in gathering sufficient and diverse data from underrepresented demographic groups due to limitations of crowdsourcing platforms.

Proposed Solutions: Consider alternative recruitment methods or enhance existing platforms to better capture a wider range of identities.

Cultural Barrier

Difficulty in creating AI language that authentically represents diverse identities without reinforcing stereotypes.

Proposed Solutions: Consult with impacted communities and iterate on design to ensure inclusivity in AI communication.

Project Team

Amanda Buddemeyer

Researcher

Erin Walker

Researcher

Malihe Alikhani

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Amanda Buddemeyer, Erin Walker, Malihe Alikhani

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
