
RATAS framework: A comprehensive GenAI-based approach to rubric-based marking of real-world textual exams

Project Overview

RATAS is a generative AI framework for rubric-based automated grading in educational settings. It improves grading efficiency and consistency by using advanced AI models to evaluate open-ended questions, which have traditionally been difficult to grade because of their subjective nature. By addressing the limited generalizability and lack of explainability of conventional automated grading methods, RATAS provides structured feedback and interpretable scoring that benefit both educators and students. Its effectiveness is demonstrated through evaluation on real-world datasets, showing high reliability and accuracy across a variety of student responses. Overall, the use of generative AI in this context represents a promising advancement in educational assessment, with the potential to streamline grading while maintaining fairness and clarity in evaluating student performance.

Key Applications

RATAS (Rubric Automated Tree-based Answer Scoring)

Context: Automated grading of open-ended questions in educational settings for university-level project-based courses.

Implementation: RATAS uses a rubric-based scoring system and integrates large language models (LLMs) such as GPT-4o to process responses and generate scores and feedback.

Outcomes: High grading accuracy and reliability, the ability to handle longer responses, and structured feedback provided to students and instructors.

Challenges: Complexity of grading rubrics, integration of diverse exam formats, and the need for substantial training data to achieve high performance.
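The rubric-based scoring described above can be sketched as follows. This is an illustrative outline only, not the paper's actual prompts or schema: the prompt wording, the JSON format, and the helper names (`build_grading_prompt`, `total_score`) are assumptions. A real deployment would send the prompt to an LLM (e.g. GPT-4o); here a mocked model reply stands in for that call.

```python
import json

def build_grading_prompt(question, rubric, answer):
    """Assemble a grading prompt from a question, a rubric, and a student answer.

    Hypothetical prompt format, not the one used by RATAS.
    """
    criteria = "\n".join(
        f"- {c['name']} (max {c['max_points']} pts): {c['description']}"
        for c in rubric
    )
    return (
        "You are a grader. Score the student answer against each rubric "
        "criterion and reply with JSON of the form "
        '{"scores": [{"criterion": ..., "points": ..., "feedback": ...}]}.\n\n'
        f"Question: {question}\n\nRubric:\n{criteria}\n\nAnswer: {answer}"
    )

def total_score(model_reply_json, rubric):
    """Sum per-criterion points from the model's JSON reply,
    clamping each score to that criterion's maximum."""
    reply = json.loads(model_reply_json)
    max_points = {c["name"]: c["max_points"] for c in rubric}
    return sum(
        min(s["points"], max_points.get(s["criterion"], 0))
        for s in reply["scores"]
    )

rubric = [
    {"name": "Correctness", "max_points": 5,
     "description": "Answer is factually correct."},
    {"name": "Clarity", "max_points": 3,
     "description": "Explanation is clear and well organized."},
]

# Mocked LLM reply, in place of an actual API call:
mock_reply = (
    '{"scores": ['
    '{"criterion": "Correctness", "points": 4, "feedback": "Mostly right."}, '
    '{"criterion": "Clarity", "points": 3, "feedback": "Clear."}]}'
)
print(total_score(mock_reply, rubric))  # → 7
```

Returning structured JSON per criterion, rather than a single free-text grade, is what makes the feedback interpretable: each point awarded traces back to a named rubric criterion.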

Implementation Barriers

Technical barrier

Existing automated grading systems often lack adaptability across different subjects and grading criteria, requiring retraining for each new exam.

Proposed Solutions: RATAS proposes a subject-agnostic framework capable of handling diverse grading rubrics without the need for extensive retraining.

Data availability barrier

Scarcity of labeled datasets for training and evaluating automated grading systems, especially for open-ended responses.

Proposed Solutions: The authors created a unique, contextualized dataset from university-level courses to rigorously evaluate RATAS, demonstrating that developing purpose-built datasets can address the scarcity issue.

Project Team

Masoud Safilian

Researcher

Amin Beheshti

Researcher

Stephen Elbourn

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Masoud Safilian, Amin Beheshti, Stephen Elbourn

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
