
Generative AI: Implications and Applications for Education

Project Overview

This project explores the role of generative AI, particularly chatbots powered by large language models (C-LLMs), in educational settings. It emphasizes the constructive potential of generative AI, including enhancing student learning through detailed feedback and supporting literacy development. However, it also raises concerns about academic integrity and the reliability of AI-generated content, pointing to the disruptive challenges these technologies pose for traditional educational frameworks. The document underscores the need for careful integration of AI into educational practice, acknowledging both its transformative benefits and its limitations in producing consistently reliable knowledge, and calls for a balanced approach that leverages the advantages of generative AI while addressing the associated risks to ensure effective and ethical use in education.

Key Applications

CGMap application within the CGScholar platform

Context: Graduate-level education, specifically for masters and doctoral students in Learning Design and Leadership at the University of Illinois, focusing on assessment of complex student work.

Implementation: Students submitted extended written texts which were reviewed by peers, instructors, and an AI tool (CGMap) that provided feedback based on a rubric.

Outcomes: The AI provided more detailed feedback than human instructors, showed high agreement with human assessments, and supported literacy development.

Challenges: Concerns about AI-generated feedback being vague, general, and lacking the depth of human feedback; issues with academic integrity and reliability of AI outputs.
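The rubric-driven review step described above can be sketched as a simple prompt-assembly function that pairs a student submission with rubric criteria before handing it to a C-LLM. This is a minimal illustrative sketch: the function name, rubric criteria, and prompt wording are assumptions, not CGMap's actual interface.

```python
# Hypothetical sketch of rubric-based AI feedback assembly, loosely modeled
# on the workflow described above. Not the project's real API.

def build_feedback_prompt(submission: str, rubric: dict[str, str]) -> str:
    """Combine a student submission with rubric criteria into one review prompt."""
    # List each rubric criterion as a bullet the model can score against.
    criteria = "\n".join(f"- {name}: {descr}" for name, descr in rubric.items())
    return (
        "Review the following student text against each rubric criterion.\n"
        f"Rubric:\n{criteria}\n\n"
        f"Submission:\n{submission}\n\n"
        "For every criterion, give a score (1-5) and specific, actionable feedback."
    )

# Illustrative rubric; the real study's rubric is not reproduced in this overview.
rubric = {
    "Argument": "Claims are supported by evidence and reasoning.",
    "Structure": "Ideas are organized logically with clear transitions.",
}
prompt = build_feedback_prompt("Generative AI is reshaping assessment...", rubric)
```

In the workflow described above, the resulting prompt would be sent to a chat model (the page lists gpt-4o-mini-2024-07-18), and the model's per-criterion feedback would then sit alongside peer and instructor reviews rather than replacing them.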

Implementation Barriers

Academic Integrity

The ability of generative AI to produce high-quality written work raises concerns about plagiarism and the authenticity of student submissions.

Proposed Solutions: Implementing rigorously proctored assessments and encouraging handwritten submissions, although these methods have their own limitations.

Quality of Feedback

AI feedback can be too general or lack context, which can lead to unclear guidance for students on how to improve their work.

Proposed Solutions: Combining AI feedback with human reviews to provide a more comprehensive evaluation that leverages the strengths of both approaches.

Project Team

Anastasia Olga Tzirides

Researcher

Akash Saini

Researcher

Gabriela Zapata

Researcher

Duane Searsmith

Researcher

Bill Cope

Researcher

Mary Kalantzis

Researcher

Vania Castro

Researcher

Theodora Kourkoulou

Researcher

John Jones

Researcher

Rodrigo Abrantes da Silva

Researcher

Jen Whiting

Researcher

Nikoleta Polyxeni Kastania

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Anastasia Olga Tzirides, Akash Saini, Gabriela Zapata, Duane Searsmith, Bill Cope, Mary Kalantzis, Vania Castro, Theodora Kourkoulou, John Jones, Rodrigo Abrantes da Silva, Jen Whiting, Nikoleta Polyxeni Kastania

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
