
PapagAI: Automated Feedback for Reflective Essays

Project Overview

PapagAI is an open-source automated feedback tool that supports the reflective practice of pre-service teachers through a hybrid AI system combining machine learning with symbolic components. The tool provides constructive feedback on reflective essays, improving educational outcomes while reducing the feedback workload for instructors. The paper details PapagAI's architecture and methodology and argues that, compared with conventional generative large language models (LLMs), it delivers more tailored and effective feedback. The authors also discuss the tool's limitations, offering a balanced view of generative AI in educational settings. Overall, PapagAI exemplifies how generative AI can support both learners and educators through efficient feedback mechanisms.

Key Applications

PapagAI: Automated Feedback for Reflective Essays

Context: Pre-service teacher education in higher education, targeting teacher trainees.

Implementation: A hybrid AI system was developed that uses machine learning and symbolic components to analyze reflective essays and provide feedback based on didactic theory.
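The hybrid design described above can be sketched as a two-stage pipeline: a machine-learning stage estimates properties of the essay (here stubbed with a simple keyword heuristic), and a symbolic rule layer maps that estimate to didactic feedback. All function names, cue words, and feedback texts below are illustrative assumptions, not PapagAI's actual components.

```python
# Hypothetical sketch of a hybrid feedback pipeline. The ML stage is
# stubbed with a keyword heuristic; the symbolic stage is a rule table
# keyed on the predicted reflection level. Not PapagAI's actual API.

# Cue words standing in for an ML classifier's reflection-level output.
REFLECTION_CUES = {
    "describe": 1,    # descriptive writing
    "because": 2,     # reasoning about causes
    "next time": 3,   # forward-looking planning
}

def score_reflection(essay: str) -> int:
    """Stand-in for the ML component: estimate a reflection level 0-3."""
    text = essay.lower()
    return max((lvl for cue, lvl in REFLECTION_CUES.items() if cue in text),
               default=0)

# Symbolic component: didactic rules keyed on the predicted level.
FEEDBACK_RULES = {
    0: "Try describing the classroom situation you experienced.",
    1: "Good description. Can you explain why events unfolded this way?",
    2: "Strong reasoning. What would you do differently next time?",
    3: "Excellent forward-looking reflection. Consider alternatives too.",
}

def give_feedback(essay: str) -> str:
    """Combine both stages: score the essay, then apply the matching rule."""
    return FEEDBACK_RULES[score_reflection(essay)]
```

In the real system the scoring stage would be a trained model over the full essay; the point of the sketch is only the division of labor: learned analysis in, rule-governed didactic feedback out.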

Outcomes: Improved learning outcomes for students, reduced feedback burden on instructors, and enhanced curriculum supervision.

Challenges: Limited accuracy of feedback, processing time issues, and variability of output compared to generative models.

Implementation Barriers

Technical Barrier

The models used in PapagAI do not achieve 100% accuracy, potentially leading to suboptimal feedback. Additionally, PapagAI's output variability is much more limited than that of generative models.

Proposed Solutions: Fine-tuning of models with didactic literature, ongoing evaluation based on user data, and creating multiple variations of feedback prompts to improve output quality.
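One of the proposed mitigations, creating multiple variations of feedback prompts, could look like the following minimal sketch: several phrasings are stored per feedback category and rotated so that repeated feedback does not sound identical. The category name and phrasings are hypothetical.

```python
# Illustrative sketch (not PapagAI's code) of the "multiple feedback
# prompt variations" idea: keep several phrasings per category and
# cycle through them round-robin across successive requests.
import itertools

PROMPT_VARIANTS = {
    "encourage_reasoning": [
        "Can you explain why this happened?",
        "What do you think caused this outcome?",
        "Which factors led to this situation?",
    ],
}

# One round-robin iterator per category.
_cyclers = {cat: itertools.cycle(vs) for cat, vs in PROMPT_VARIANTS.items()}

def next_prompt(category: str) -> str:
    """Return the next phrasing for a category, cycling through variants."""
    return next(_cyclers[category])
```

Each call to `next_prompt("encourage_reasoning")` yields the next phrasing in the list, wrapping around after the last one.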

Processing Time Barrier

Processing time for feedback can be significantly higher for longer texts compared to single generative LLMs.

Proposed Solutions: Optimize the response time and consider user study feedback to improve response dynamics.
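One plausible way to optimize response time for long texts, sketched below under stated assumptions, is to split an essay into sentence chunks and analyze the chunks concurrently. The `analyze` step here is a placeholder word count, not the actual PapagAI pipeline.

```python
# Hedged sketch: chunk a long essay and analyze chunks concurrently.
# The analysis function is a placeholder, not PapagAI's real model call.
from concurrent.futures import ThreadPoolExecutor

def chunk(text: str, size: int = 3) -> list[str]:
    """Split text into groups of `size` sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [". ".join(sentences[i:i + size])
            for i in range(0, len(sentences), size)]

def analyze(part: str) -> int:
    """Placeholder per-chunk analysis: just count words."""
    return len(part.split())

def analyze_parallel(text: str) -> int:
    """Analyze all chunks concurrently and combine the results."""
    parts = chunk(text)
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(analyze, parts))
```

For a real model-backed analysis the per-chunk calls would dominate runtime, so processing chunks in parallel rather than sequentially is where the latency savings would come from.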

Project Team

Veronika Solopova

Researcher

Adrian Gruszczynski

Researcher

Eiad Rostom

Researcher

Fritz Cremer

Researcher

Sascha Witte

Researcher

Chengming Zhang

Researcher

Fernando Ramos López

Researcher

Lea Plößl

Researcher

Florian Hofmann

Researcher

Ralf Romeike

Researcher

Michaela Gläser-Zikuda

Researcher

Christoph Benzmüller

Researcher

Tim Landgraf

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Veronika Solopova, Adrian Gruszczynski, Eiad Rostom, Fritz Cremer, Sascha Witte, Chengming Zhang, Fernando Ramos López, Lea Plößl, Florian Hofmann, Ralf Romeike, Michaela Gläser-Zikuda, Christoph Benzmüller, Tim Landgraf

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
