
Generative AI: Implications and Applications for Education

Project Overview

This document examines the integration of generative AI, particularly chatbots powered by large language models (C-LLMs), within the educational landscape. It delves into the historical background, technological underpinnings, and inherent limitations of C-LLMs while exploring their potential to revolutionize educational practices. The research highlights the application of C-LLMs in AI-driven review and assessment of complex student work, showcasing both the advantages and challenges of this technology. The study further analyzes the effectiveness of AI feedback compared to traditional peer review methods, considering student perspectives. Ultimately, the paper proposes a recalibration approach to improve the reliability and value of C-LLMs in education by incorporating credible sources, established theories, and critical viewpoints, aiming to mitigate biases and enhance the learning experience.

Key Applications

AI-Powered Student Feedback and Assessment

Context: Master's and doctoral students in Learning Design and Leadership, higher education students, trainee teachers, and young children. This includes online courses, storytelling, and academic writing contexts.

Implementation: Various GPT models (GPT-3, ChatGPT) are used to provide feedback and assess student work, including through platforms like CGMap integrated with GPT models. The AI reviews are used to give feedback on students' extended multimodal texts, to simulate student discourse in dialogue with trainee teachers, and to assess student work.
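
As a minimal sketch of how such a review request might be assembled for a chat-based GPT model (the rubric text, function name, and message wording are illustrative assumptions, not the project's actual prompts):

```python
# Sketch of assembling an AI-review request for a chat-based GPT model.
# The rubric and wording below are illustrative assumptions only.

def build_review_messages(student_work: str, rubric: str) -> list[dict]:
    """Build a chat-style message list asking the model to review
    a piece of student work against a rubric."""
    system = (
        "You are a reviewer of extended multimodal student texts. "
        "Give constructive feedback against each rubric criterion."
    )
    user = f"Rubric:\n{rubric}\n\nStudent work:\n{student_work}\n\nPlease review."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_review_messages(
    student_work="An essay on multimodal literacy ...",
    rubric="1. Clarity of argument\n2. Use of evidence",
)
# The message list would then be sent to a chat-completions endpoint;
# the API call itself is omitted since it requires credentials.
```

The prompt-building step is separated from the API call so the same rubric framing can be reused across student submissions.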

Outcomes: AI reviews offer more extensive feedback and are perceived by students as fast, straightforward, practical, and satisfying. They can constructively support literacy development and provide more detailed feedback than human instructors, fluently and coherently summarizing students' performance. AI has also had a significant impact in teacher education.

Challenges: AI reviews can be too general, lacking the human touch, and may not be specific enough to identify particular areas for improvement. Their results also show higher variability than human reviews. Further challenges include determining the originality of student work and the unreliability of GPT detectors.

Implementation Barriers

Epistemological

C-LLMs are deeply harmful to a social understanding of knowledge and learning: the machine buries its sources, has no notion of empirical truth, no conception of a theoretical frame or disciplinary practice, no explicit ethical frames, and no capacity for critical dialogue.

Proposed Solutions: Recalibration of C-LLMs to include an Epistemic Frame, an Empirical Frame, and an Ontological Frame.

Accuracy/Reliability

GPT detectors are unreliable. The machine has no way of distinguishing fact from fake in its sources. The AI review "missed some smaller points" and "was not specific enough in terms of items that needed to be revised."

Proposed Solutions: Using the generative AI to offer students feedback on the basis of a theory of knowledge applicable to their learning. Require the learners to bring verifiable facts to the machine. Bring the theoretical frames of disciplines to the machine.

Quality of Feedback

The AI reviews were too general and lacked the human touch that peer reviewers could provide, making it difficult for students to identify specific areas for improvement.

Proposed Solutions: Prompt engineering. Recalibration of C-LLMs to include an Epistemic Frame, an Empirical Frame, and an Ontological Frame.
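
A recalibration of this kind can be sketched as prompt engineering: the learner's verifiable sources and the discipline's theoretical frame are injected into the request rather than left to the model's buried training data. The frame wording and function below are illustrative assumptions, not the authors' actual implementation:

```python
# Illustrative sketch of a "recalibrated" prompt wrapping the proposed
# frames around a feedback request. Frame texts are assumptions for
# illustration, not the authors' actual wording.

def recalibrated_prompt(work: str, sources: list[str], theory: str) -> str:
    """Compose a feedback prompt grounded in learner-supplied sources
    and an explicit disciplinary frame."""
    epistemic = "Epistemic frame: ground every claim in the sources listed below."
    empirical = ("Empirical frame: verifiable sources supplied by the learner:\n"
                 + "\n".join(f"- {s}" for s in sources))
    ontological = f"Ontological frame: review this work as {theory}."
    return "\n\n".join([epistemic, empirical, ontological,
                        f"Student work:\n{work}"])

prompt = recalibrated_prompt(
    work="Draft chapter on learning design ...",
    sources=["A peer-reviewed article supplied by the learner"],
    theory="a contribution to multiliteracies theory",
)
```

Keeping the three frames as separate components makes it easy for a course designer to swap in the theory and sources appropriate to each discipline.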

Bias and Ethics

C-LLMs depend on massive textual corpora, and the reality of human legacy text is that the sources are rife with racism, sexism, and homophobia, along with other ideologies and social orientations that are today unacceptable in mainstream public life. C-LLMs require extensive filtering.

Proposed Solutions: Human programmers create the filters to override the "truth" of the source texts.

Project Team

Anastasia Olga Tzirides

Researcher

Akash Saini

Researcher

Gabriela Zapata

Researcher

Duane Searsmith

Researcher

Bill Cope

Researcher

Mary Kalantzis

Researcher

Vania Castro

Researcher

Theodora Kourkoulou

Researcher

John Jones

Researcher

Rodrigo Abrantes da Silva

Researcher

Jen Whiting

Researcher

Nikoleta Polyxeni Kastania

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Anastasia Olga Tzirides, Akash Saini, Gabriela Zapata, Duane Searsmith, Bill Cope, Mary Kalantzis, Vania Castro, Theodora Kourkoulou, John Jones, Rodrigo Abrantes da Silva, Jen Whiting, Nikoleta Polyxeni Kastania

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang