
Risks of AI Foundation Models in Education

Project Overview

This document explores the application of generative AI, particularly foundation models, in education, highlighting both its potential benefits and its significant challenges. Key applications include personalized learning experiences, automated content generation, and enhanced engagement through interactive tools. The authors caution, however, that while generative AI can offer scalability and efficiency, it also risks homogenizing educational experiences, perpetuating existing inequities, and reducing stakeholder participation in the design of educational content. Historical precedents illustrate how educational technology has often fallen short of its promises, reinforcing existing disparities rather than alleviating them. The findings emphasize the need to weigh these risks carefully as educational institutions increasingly adopt generative AI, and they advocate an inclusive, equitable approach to integrating such technologies so that they enhance, rather than undermine, educational equity and diversity.

Key Applications

AI-driven educational feedback and monitoring systems

Context: Educational institutions, classrooms, and remote learning environments where students engage in writing tasks and are monitored for well-being and academic integrity.

Implementation: Utilizing pre-trained foundation models and AI-powered tools to provide automated feedback on student writing and to monitor student activities.

Outcomes: Potential for personalized feedback on student writing, improved student well-being, and enhanced academic integrity.

Challenges: Risks of homogenization, perpetuation of existing inequities, reproduction of dominant cultural ideologies, exclusion of minoritized perspectives, invasion of privacy, and potential harm to students.

Implementation Barriers

Equity

Foundation models may reproduce existing inequities in education, limiting access and opportunities for marginalized groups.

Proposed Solutions: Critical evaluation of data and model design to ensure inclusivity and representation.

Participation

Limited involvement of educational stakeholders in the design of AI systems may disempower teachers and students.

Proposed Solutions: Implementing participatory design processes that include teachers and learners in decision-making.

Surveillance

Educational surveillance technologies may harm students and invade their privacy.

Proposed Solutions: Establishing clear ethical guidelines for the use of surveillance technologies in education.

Project Team

Su Lin Blodgett

Researcher

Michael Madaio

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Su Lin Blodgett, Michael Madaio

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
