
Toxicity in ChatGPT: Analyzing Persona-assigned Language Models

Project Overview

The document examines the role of generative AI, particularly large language models (LLMs) such as ChatGPT, in education, highlighting both potential applications and associated risks. It reveals that when LLMs are assigned specific personas, they can generate toxic and harmful content that reinforces stereotypes and biases, posing significant challenges in sensitive contexts such as education and healthcare. The findings underscore the necessity for educators and institutions to understand the capabilities and limitations of these AI tools to ensure responsible usage. The document stresses the importance of mitigating the risk of harmful outputs through informed implementation and highlights the need for ongoing research into AI behavior to safeguard against misuse in educational settings. Overall, while generative AI can enhance educational experiences, its deployment must be approached with caution to prevent negative consequences.

Key Applications

ChatGPT

Context: Used in educational settings where students interact with AI for learning purposes.

Implementation: ChatGPT was deployed with persona assignments to generate responses in a controlled environment.
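The persona-assignment setup described above can be sketched as a system-prompt template. This is a minimal illustration only: the `build_persona_messages` helper, the template wording, and the example persona are assumptions for demonstration, not the paper's exact prompts or API.

```python
def build_persona_messages(persona: str, user_query: str) -> list:
    """Build a chat-style message list that assigns a persona via the
    system role (illustrative template, not the paper's exact prompt)."""
    return [
        {"role": "system", "content": f"Speak exactly like {persona}."},
        {"role": "user", "content": user_query},
    ]

# The resulting payload could then be passed to a chat-completion API.
messages = build_persona_messages(
    "a famous historical figure", "Explain photosynthesis."
)
print(messages[0]["content"])
```

Because the persona is injected through the system role, every subsequent response is conditioned on it, which is what allows toxicity to vary with the assigned persona.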

Outcomes: Increased efficiency in generating educational content; however, the level of toxicity observed varied with the assigned persona.

Challenges: Potential for generating toxic or harmful content, especially when assigned negative personas.

Implementation Barriers

Safety and Toxicity of AI Outputs

Generative AI can produce harmful, toxic, or inappropriate responses, particularly when assigned certain personas. This can negatively impact learning environments and student well-being.

Proposed Solutions: Implement stricter safety mechanisms and guidelines for persona assignments; publish public-facing specification sheets detailing toxicity risks; and establish robust content moderation and ethical guidelines for AI usage in educational contexts.
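A content-moderation gate like the one proposed could, in a minimal sketch, screen generated text before it reaches students. The word list, threshold logic, and helper names below are placeholder assumptions; a production system would use a trained toxicity classifier or a dedicated moderation service rather than a keyword filter.

```python
def is_safe(text: str, blocklist=frozenset({"hate", "stupid", "idiot"})) -> bool:
    """Toy keyword filter: return False if the text contains any
    blocklisted term (placeholder for a real toxicity classifier)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(blocklist)

def moderated_reply(generate, prompt: str) -> str:
    """Wrap a text generator so unsafe outputs are withheld."""
    reply = generate(prompt)
    return reply if is_safe(reply) else "[response withheld by content filter]"

# Example with a stubbed generator standing in for the model.
print(moderated_reply(lambda p: "Photosynthesis converts light to energy.", "explain"))
```

The design point is that moderation sits between the model and the learner, so persona-induced toxicity is caught at the output boundary regardless of which persona produced it.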

Misunderstanding of AI Behavior

Educators and students may lack understanding of how generative AI works, leading to misuse and reliance on inaccurate or harmful information.

Proposed Solutions: Provide training and resources to educate users about AI's capabilities and limitations.

Project Team

Ameet Deshpande

Researcher

Vishvak Murahari

Researcher

Tanmay Rajpurohit

Researcher

Ashwin Kalyan

Researcher

Karthik Narasimhan

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
