
VIVID: Human-AI Collaborative Authoring of Vicarious Dialogues from Lecture Videos

Project Overview

This project explores the application of generative AI in education through VIVID, a system that transforms traditional monologue-style online lectures into interactive, dialogue-based formats designed to foster greater learner engagement and cognitive activity. Leveraging large language models (LLMs), VIVID supports the collaborative creation of educational dialogues tailored to vicarious learners, who learn by observing interactions rather than participating directly. Workshops and studies showed that dialogues authored with VIVID were significantly more dynamic and academically productive, promoting critical thinking through spontaneous questioning. The findings highlight the potential of generative AI to reshape learning experiences by encouraging deeper engagement and closer collaboration between humans and AI in educational content creation. Ultimately, such systems aim to create immersive learning environments that serve diverse learner needs and improve educational outcomes.

Key Applications

VIVID: Human-AI Collaborative Authoring of Vicarious Dialogues from Lecture Videos

Context: Online education, where dialogue-based learning materials are created from lecture videos in computer science and physics, targeting instructors and students in these fields.

Implementation: Instructors collaborate with LLMs through a three-stage process: Initial Generation, Comparison and Selection, and Refinement. The process uses lecture transcripts to generate dialogue data and supports bilingual scripts (e.g., Korean and English).
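
As a rough sketch of how such a three-stage flow could be wired up with an LLM API, the example below uses the OpenAI chat completions endpoint and the model version noted on this page. The prompts, the ask helper, and the overall structure are illustrative assumptions, not VIVID's actual implementation or prompt designs.

```python
# Illustrative sketch of a three-stage dialogue-authoring flow.
# Prompts and helper names are hypothetical; VIVID's actual pipeline
# and prompt designs are described in the original paper.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini-2024-07-18"  # model version noted on this page


def ask(system: str, user: str) -> str:
    """Send a single system+user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content


def initial_generation(transcript: str, n_candidates: int = 3) -> list[str]:
    """Stage 1: draft several candidate teacher-student dialogues from a lecture transcript."""
    system = "Rewrite the lecture excerpt as a short teacher-student dialogue for vicarious learners."
    return [ask(system, transcript) for _ in range(n_candidates)]


def compare_and_select(candidates: list[str]) -> str:
    """Stage 2: compare candidates and pick the strongest one (instructor- or model-assisted)."""
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    choice = ask(
        "Compare the candidate dialogues for pedagogical quality and return only the index of the best one.",
        numbered,
    )
    # Assumes the model returns a bare index; real use would validate this.
    return candidates[int(choice.strip())]


def refinement(dialogue: str, instructor_feedback: str) -> str:
    """Stage 3: revise the selected dialogue according to the instructor's feedback."""
    return ask(
        "Revise the dialogue according to the instructor's feedback, keeping the teaching goal intact.",
        f"Dialogue:\n{dialogue}\n\nFeedback:\n{instructor_feedback}",
    )


if __name__ == "__main__":
    transcript = "..."  # a lecture transcript segment goes here
    drafts = initial_generation(transcript)
    selected = compare_and_select(drafts)
    final = refinement(selected, "Make the student's questions more spontaneous and concise.")
    print(final)
```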

Outcomes: Instructors reported authoring high-quality dialogues more efficiently, and the generated dialogues clarified difficult concepts, improving learner engagement and understanding.

Challenges: Ensuring the pedagogical quality of generated dialogues, the need for instructor involvement to refine the content, and the difficulty vicarious learners may have in understanding complex topics.

Implementation Barriers

Usability Barrier

Instructors found that VIVID's features lacked sufficient explainability, which led to underutilization. They also felt they had limited control over the generated dialogues, resulting in dissatisfaction with the outcomes.

Proposed Solutions: Improving the explainability of features and offering better guidance on their use could increase instructor engagement, while fine-grained control options for customizing the dialogue generation process could enhance usability.
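
As an illustration of what fine-grained control might look like, instructor-chosen options could be surfaced as explicit generation parameters. The parameter names and prompt wording below are hypothetical, not VIVID's actual controls.

```python
# Hypothetical illustration of fine-grained control options an instructor
# might set; these parameters are assumptions, not VIVID's actual controls.
from dataclasses import dataclass


@dataclass
class DialogueControls:
    num_turns: int = 6              # how many teacher-student exchanges to generate
    student_persona: str = "curious novice"
    tone: str = "conversational"
    language: str = "English"       # e.g., "Korean" for bilingual scripts


def build_prompt(transcript: str, controls: DialogueControls) -> str:
    """Compose a generation prompt that exposes instructor-chosen controls."""
    return (
        f"Rewrite the lecture excerpt as a {controls.tone} teacher-student dialogue "
        f"in {controls.language} with about {controls.num_turns} turns. "
        f"The student is a {controls.student_persona} who asks spontaneous questions.\n\n"
        f"Lecture excerpt:\n{transcript}"
    )


print(build_prompt("...", DialogueControls(num_turns=4, language="Korean")))
```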

Content Quality Barrier

Generated dialogues were sometimes verbose and not adequately tailored to specific learning contexts.

Proposed Solutions: Implementing stricter guidelines on dialogue length and relevance based on instructor feedback could help mitigate verbosity.
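
As a minimal sketch of one way such a guideline could be enforced, a post-generation check could flag overly long turns before they reach learners. The one-turn-per-line format and the word-count threshold are assumptions, not part of the VIVID system.

```python
# Hypothetical post-generation verbosity check; the one-turn-per-line
# format and the word-count threshold are assumptions, not part of VIVID.
MAX_WORDS_PER_TURN = 40


def flag_verbose_turns(dialogue: str) -> list[str]:
    """Return the dialogue turns whose word count exceeds the limit."""
    turns = [line for line in dialogue.splitlines() if line.strip()]
    return [turn for turn in turns if len(turn.split()) > MAX_WORDS_PER_TURN]


dialogue = "Teacher: What does this loop do?\nStudent: It iterates over the list."
verbose = flag_verbose_turns(dialogue)
if verbose:
    print(f"{len(verbose)} turn(s) exceed {MAX_WORDS_PER_TURN} words; consider regenerating them.")
```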

Cognitive Barrier

Learners may struggle to understand complex concepts presented in dialogues generated by AI.

Proposed Solutions: Provide additional support and clarification for challenging topics in the generated dialogues.

Project Team

Seulgi Choi

Researcher

Hyewon Lee

Researcher

Yoonjoo Lee

Researcher

Juho Kim

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Seulgi Choi, Hyewon Lee, Yoonjoo Lee, Juho Kim

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
