
The moral authority of ChatGPT

Project Overview

The document examines the role of generative AI, specifically ChatGPT, in education, focusing on its significant yet inconsistent influence on users' moral judgment and decision-making. An experiment showed that although ChatGPT gives inconsistent moral advice, that advice still sways users' choices, and that individuals tend to underestimate the chatbot's persuasive power. This raises ethical concerns about relying on ChatGPT to guide moral decisions. The findings point to the need to foster digital literacy so that users can navigate AI interactions effectively. By improving users' understanding of AI's capabilities and limitations, educators can better prepare students to evaluate AI-generated content critically, leading to more informed and responsible decision-making in an increasingly AI-integrated educational landscape.

Key Applications

ChatGPT as a moral advisor

Context: An educational setting in which users consult ChatGPT about moral dilemmas.

Implementation: An online experiment in which participants faced moral dilemmas and received advice from ChatGPT.

Outcomes: ChatGPT's inconsistent advice influenced users' moral judgments, even when users knew they were interacting with a chatbot.

Challenges: ChatGPT lacks a consistent moral stance, so its advice can corrupt rather than improve users' judgment.

Implementation Barriers

Ethical barrier

Users may follow ChatGPT's inconsistent moral advice, leading to misguided judgments. Transparency about the fact that the advice comes from a chatbot does not reduce its influence on users' judgments.

Proposed Solutions: Improving users' digital literacy to understand the limitations of AI and developing better educational approaches to convey the impact and limitations of AI advice.

Project Team

Sebastian Krügel

Researcher

Andreas Ostermaier

Researcher

Matthias Uhl

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Sebastian Krügel, Andreas Ostermaier, Matthias Uhl

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
