
BERT-Based Approach for Automating Course Articulation Matrix Construction with Explainable AI

Project Overview

This document describes a BERT-based methodology for automating the construction of Course Articulation Matrices (CAM), which map Course Outcomes (COs) to Program Outcomes (POs) and Program-Specific Outcomes (PSOs). This mapping is vital for curriculum coherence and overall educational effectiveness, but is typically produced by hand. By applying transfer learning with pretrained BERT models, the study automates the alignment task with high accuracy, while Explainable AI techniques keep the model's predictions interpretable. The findings indicate that automating CAM construction saves educators valuable time and makes alignment processes within institutions more consistent, ultimately contributing to improved educational quality.

Key Applications

BERT-based models for automating Course Articulation Matrix (CAM) construction

Context: Higher education, particularly in engineering and technical disciplines, targeting faculty involved in curriculum design and assessment.

Implementation: Utilized transfer learning with pretrained BERT models to automate the alignment of COs with POs and PSOs, employing a scoring system for alignment assessment.

Outcomes: Achieved high accuracy (98.66%), precision (98.67%), recall (98.66%), and F1-score (98.66%) in predicting alignment scores, enhancing curriculum mapping efficiency.

Challenges: Manual alignment processes are time-consuming, subjective, and prone to inconsistencies; challenges include the need for standardized datasets and improved model interpretability.
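The alignment task described above can be framed as pair classification: each (CO, PO) pair is assigned a discrete alignment level, and the matrix of levels forms the CAM. The sketch below illustrates that framing only. The paper fine-tunes a pretrained BERT model to produce the scores; here a hypothetical keyword-overlap scorer stands in for the trained classifier so the example stays self-contained, and the thresholds and sample outcome statements are invented for illustration.

```python
def alignment_score(co: str, po: str) -> int:
    """Map a (CO, PO) pair to an alignment level from 0 (none) to 3 (strong).

    In the paper's setup, a fine-tuned BERT model would encode the pair
    (e.g. "[CLS] co [SEP] po [SEP]") and classify it. This toy stand-in
    scores lexical overlap instead, purely to show the interface.
    """
    co_terms = set(co.lower().split())
    po_terms = set(po.lower().split())
    overlap = len(co_terms & po_terms) / max(len(po_terms), 1)
    if overlap >= 0.6:
        return 3
    if overlap >= 0.3:
        return 2
    if overlap > 0.0:
        return 1
    return 0

def build_cam(cos: list[str], pos: list[str]) -> list[list[int]]:
    """Assemble the Course Articulation Matrix: one row per CO, one column per PO."""
    return [[alignment_score(co, po) for po in pos] for co in cos]

# Illustrative outcome statements (not from the paper).
cos = ["apply machine learning models to engineering data"]
pos = ["apply engineering knowledge", "communicate effectively"]
cam = build_cam(cos, pos)
```

Replacing `alignment_score` with a BERT classifier changes only that one function; the CAM assembly loop is unchanged, which is what makes the scoring model swappable.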

Implementation Barriers

Technical Barrier

The lack of standardized datasets and terminologies across institutions complicates the automation process.

Proposed Solutions: Implementing a synonym-based data augmentation method to enhance linguistic diversity and address class imbalance.
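A minimal sketch of how synonym-based augmentation can both add linguistic diversity and oversample under-represented alignment classes. The synonym table below is a small hand-built lexicon invented for illustration; the paper's actual synonym source is not specified here.

```python
import random

# Illustrative synonym lexicon (hypothetical, not from the paper).
SYNONYMS = {
    "design": ["develop", "construct"],
    "analyze": ["examine", "evaluate"],
    "apply": ["use", "employ"],
}

def augment(sentence: str, rng: random.Random, p: float = 0.5) -> str:
    """Replace each word that has a known synonym with probability p."""
    words = []
    for w in sentence.lower().split():
        if w in SYNONYMS and rng.random() < p:
            words.append(rng.choice(SYNONYMS[w]))
        else:
            words.append(w)
    return " ".join(words)

def oversample_minority(examples: list[str], target: int, seed: int = 0) -> list[str]:
    """Grow an under-represented class to `target` items via augmented copies,
    one way to address the class imbalance mentioned above."""
    rng = random.Random(seed)
    out = list(examples)
    while len(out) < target:
        out.append(augment(rng.choice(examples), rng))
    return out
```

Because replacements are sampled per word, repeated calls yield distinct paraphrases of the same outcome statement, which is what balances minority classes without duplicating sentences verbatim.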

Interpretability Barrier

Models can be seen as black boxes, making it difficult for educators to understand automated decisions.

Proposed Solutions: Using Explainable AI techniques like LIME to provide insights into model predictions and enhance transparency.
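The core LIME idea can be sketched compactly: perturb an input by masking words, query the black-box model on each perturbation, and fit a local linear surrogate whose coefficients indicate per-word importance. The `black_box` scorer below is a hypothetical stand-in for the BERT classifier, and for brevity this sketch fits an unweighted surrogate, whereas real LIME weights samples by their proximity to the original instance.

```python
import numpy as np

def black_box(words: list[str]) -> float:
    # Toy stand-in for the classifier: a probability-like score
    # driven by two keywords (invented for illustration).
    return 0.4 * ("design" in words) + 0.5 * ("engineering" in words) + 0.1

def lime_weights(sentence: str, n_samples: int = 500, seed: int = 0) -> dict:
    """Estimate per-word importance for black_box's score on `sentence`."""
    words = sentence.split()
    rng = np.random.default_rng(seed)
    # Random binary masks: 1 = keep the word, 0 = drop it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    preds = np.array([
        black_box([w for w, keep in zip(words, m) if keep]) for m in masks
    ])
    # Fit a linear surrogate preds ~ masks @ coef + intercept by least squares.
    X = np.hstack([masks, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return dict(zip(words, coef[:-1]))

weights = lime_weights("design engineering solutions")
```

The recovered coefficients attribute the score to "design" and "engineering" rather than "solutions", which is the kind of per-word evidence that lets educators audit why a CO-PO pair was judged aligned.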

Project Team

Natenaile Asmamaw Shiferaw

Researcher

Simpenzwe Honore Leandre

Researcher

Aman Sinha

Researcher

Dillip Rout

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Natenaile Asmamaw Shiferaw, Simpenzwe Honore Leandre, Aman Sinha, Dillip Rout

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
