Training Novices: The Role of Human-AI Collaboration and Knowledge Transfer

Project Overview

This paper explores the transformative impact of generative AI in education, particularly through Human-AI Collaboration (HAIC) aimed at training novices by transferring task-specific expert knowledge (TSEK) from subject matter experts (SMEs). It addresses training challenges posed by demographic shifts and workforce reductions, proposing a framework in which AI systems impart both explicit and tacit knowledge. The paper details a preliminary experimental design for evaluating the effectiveness and feasibility of AI systems as trainers for individuals without prior TSEK. Key applications include enhancing knowledge transfer, improving training efficiency, and supporting learners in acquiring complex skills. Findings suggest that such collaboration could yield significant benefits, although challenges inherent to knowledge transfer, particularly of tacit knowledge, must be navigated. Overall, the paper underscores the potential of generative AI to reshape educational practice by bridging gaps in expertise and fostering a more effective learning environment.

Key Applications

AI systems as trainers in Human-AI collaboration

Context: Training novices in organizational tasks without prior task-specific expert knowledge

Implementation: AI systems trained by subject matter experts to perform tasks and teach novices

Outcomes: Enhanced training efficiency and knowledge transfer from AI systems to novices

Challenges: Difficulty in transferring tacit knowledge and ensuring AI predictions are accurate and trustworthy

Implementation Barriers

Knowledge Transfer Barrier

Tacit knowledge is difficult to articulate and transfer compared to explicit knowledge.

Proposed Solutions: Utilizing AI systems to formalize and transfer tacit knowledge through explanations and guidance.

Trust and Dependency Barrier

Overreliance on AI systems can lead to reduced performance if the AI's predictions are inaccurate.

Proposed Solutions: Incorporating explainable AI (XAI) to improve understanding and trust in AI predictions.

Project Team

Philipp Spitzer

Researcher

Niklas Kühl

Researcher

Marc Goutier

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Philipp Spitzer, Niklas Kühl, Marc Goutier

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI