Designing AI Systems that Augment Human Critical Thinking: Performed vs. Demonstrated
Project Overview
The document examines the role of generative AI, particularly large language models (LLMs), in enhancing critical thinking within educational contexts. It distinguishes between performed critical thinking, which a person carries out independently of AI assistance, and demonstrated critical thinking, which is exhibited with AI support. A key focus is ensuring that AI systems are designed to strengthen independent critical thinking rather than foster reliance on AI tools. While generative AI can significantly boost productivity in areas such as writing and coding, there are concerns that it may weaken cognitive skills by encouraging overdependence. The paper advocates research methodologies that assess both types of critical thinking and outlines design strategies for AI tools that effectively support learners in developing independent critical thinking skills.
Key Applications
ChatGPT and similar LLMs
Context: Educational settings focusing on writing and coding skills for students or professionals
Implementation: Used as a tool to assist with writing and coding tasks
Outcomes: Increased speed and fluency in writing, improved coding quality and efficiency
Challenges: AI assistance does not inherently improve individuals' critical thinking or understanding; it also carries the potential for biased outputs and shallow content.
Implementation Barriers
Cognitive
Overreliance on AI systems for everyday tasks may undermine human cognitive capabilities.
Proposed Solutions: Design AI systems that support independent critical thinking rather than just output generation.
Project Team
Katelyn Xiaoying Mei
Researcher
Nic Weber
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Katelyn Xiaoying Mei, Nic Weber
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI