
FAIREDU: A Multiple Regression-Based Method for Enhancing Fairness in Machine Learning Models for Educational Applications

Project Overview

The document discusses the integration of generative AI in education, its transformative potential, and its key applications. A central focus is FAIREDU, a method for enhancing fairness in machine learning models used in educational contexts. FAIREDU addresses fairness across multiple sensitive attributes, including gender, race, and age, and outperforms existing fairness-enhancing techniques while sustaining model performance. The findings underscore the need to assess fairness across diverse sensitive features and highlight the tension between fairness and model performance in educational datasets. Overall, the document illustrates how generative AI can help create more equitable educational experiences, while acknowledging the complexities of implementing these technologies responsibly and effectively.

Key Applications

FAIREDU - A multiple regression-based method for enhancing fairness

Context: Educational applications involving machine learning models that impact diverse groups

Implementation: FAIREDU uses multivariate regression to detect linear dependencies between sensitive and non-sensitive features, then removes those dependent components from the data before model training.

Outcomes: Improved fairness across multiple sensitive features without significantly compromising model performance.

Challenges: The method may not capture non-linear relationships inherent in datasets and may overlook other aspects of fairness.
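The regression-based debiasing idea above can be sketched in a few lines. This is a minimal illustration, not the authors' exact algorithm: the data is synthetic and the variable names are our own. Each non-sensitive feature is regressed on the sensitive attributes, and only the residuals, the part the sensitive attributes do not linearly explain, are kept for training.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins: S holds encoded sensitive attributes (e.g. gender, race),
# X holds non-sensitive features that are partly explained by S.
rng = np.random.default_rng(0)
S = rng.integers(0, 2, size=(200, 2)).astype(float)
X = S @ np.array([[0.8, 0.1], [0.2, 0.5]]) + rng.normal(size=(200, 2))

# Multivariate linear regression of X on S; the residuals are the part of X
# that S does not linearly explain, and they replace X in model training.
reg = LinearRegression().fit(S, X)
X_fair = X - reg.predict(S)

# OLS residuals are orthogonal to the regressors, so the linear correlation
# between each sensitive attribute and each debiased feature is near zero.
corr = np.corrcoef(S.T, X_fair.T)[:2, 2:]
```

Because ordinary least squares residuals are orthogonal to the regressors by construction, this removes all linear association with the sensitive attributes, which is exactly why non-linear dependencies (noted under Challenges) can survive the procedure.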

Implementation Barriers

Technical barrier

FAIREDU relies on linear regression, which may not effectively address non-linear relationships in data, potentially leaving biases unaddressed.

Proposed Solutions: Future research should explore the use of non-linear methods and composite sensitive features.
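The linearity limitation can be illustrated with a short sketch. This is our own synthetic example, and the random forest is one illustrative choice of non-linear regressor, not a method proposed in the paper: when a feature depends non-linearly on a sensitive attribute such as age, linear residualization leaves structure behind that a non-linear model can remove.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic example: a feature with a sinusoidal dependence on age.
rng = np.random.default_rng(1)
age = rng.uniform(18, 65, size=(300, 1))
x = np.sin(age.ravel() / 5.0) + rng.normal(scale=0.1, size=300)

# Linear residualization misses most of the non-linear dependence...
lin_resid = x - LinearRegression().fit(age, x).predict(age)

# ...while a non-linear regressor captures and removes far more of it
# (residuals here are measured on the training data, for illustration only).
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(age, x)
rf_resid = x - rf.predict(age)
```

Comparing the residual variances shows the non-linear model explaining dependence that the linear fit leaves untouched, which is the motivation for the proposed non-linear extensions.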

External validity barrier

The datasets used for evaluation may not represent the full diversity of real-world educational environments, limiting generalizability.

Proposed Solutions: Testing FAIREDU across a broader range of datasets and educational contexts.

Construct validity barrier

Complex interactions between intersectional identities may not be fully captured by FAIREDU.

Proposed Solutions: Future work should address the model's ability to handle a broader array of sensitive features and interactions.

Project Team

Nga Pham, Researcher

Minh Kha Do, Researcher

Tran Vu Dai, Researcher

Pham Ngoc Hung, Researcher

Anh Nguyen-Duc, Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Nga Pham, Minh Kha Do, Tran Vu Dai, Pham Ngoc Hung, Anh Nguyen-Duc

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
