Giulio Morina

Location: Stats common room (MSB 1.02)

Auditing and achieving intersectional fairness in classification problems

Surprise: this talk is not about the Bernoulli Factory (T_T)! I will be speaking about the paper I worked on during my internship at QuantumBlack, available at https://arxiv.org/abs/1911.01468 (joint work with Viktoriia Oliinyk, Julian Waton, Ines Marusic, Konstantinos Georgatzis). Here is the paper abstract:

Machine learning algorithms are extensively used to make increasingly consequential decisions, so achieving optimal predictive performance can no longer be the only focus. This paper explores intersectional fairness, that is, fairness when intersections of multiple sensitive attributes (such as race, age, and nationality) are considered. Previous research has mainly focused on fairness with respect to a single sensitive attribute, with intersectional fairness being comparatively less studied despite its critical importance for modern machine learning applications. First, we introduce intersectional fairness metrics by extending prior work, and provide different methodologies to audit discrimination in a given dataset or model outputs. Second, we develop novel post-processing techniques to mitigate any detected bias in a classification model. Our proposed methodology does not rely on any assumptions about the underlying model and aims to guarantee fairness while preserving good predictive performance. Finally, we give guidance on a practical implementation, showing how the proposed methods perform on a real-world dataset.
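To make the auditing idea concrete, here is a minimal illustrative sketch (not the paper's actual metrics or method): it computes positive-prediction rates for every intersection of two hypothetical sensitive attributes and reports the min/max rate ratio, where a ratio near 1.0 indicates parity across subgroups. All data and function names below are invented for illustration.

```python
from itertools import product

# Hypothetical toy data: model predictions plus two sensitive attributes.
records = [
    {"pred": 1, "race": "A", "age": "young"},
    {"pred": 0, "race": "A", "age": "young"},
    {"pred": 1, "race": "A", "age": "old"},
    {"pred": 1, "race": "B", "age": "young"},
    {"pred": 0, "race": "B", "age": "old"},
    {"pred": 0, "race": "B", "age": "old"},
]

def subgroup_rates(records, attrs):
    """Positive-prediction rate for each intersection of the given attributes."""
    values = {a: sorted({r[a] for r in records}) for a in attrs}
    rates = {}
    for combo in product(*(values[a] for a in attrs)):
        group = [r for r in records
                 if all(r[a] == v for a, v in zip(attrs, combo))]
        if group:  # skip empty intersections
            rates[combo] = sum(r["pred"] for r in group) / len(group)
    return rates

def disparity_ratio(rates):
    """Ratio of the smallest to the largest nonzero subgroup rate (1.0 = parity)."""
    positive = [v for v in rates.values() if v > 0]
    return min(positive) / max(positive) if positive else float("nan")

rates = subgroup_rates(records, ["race", "age"])
print(rates)                  # rate per (race, age) intersection
print(disparity_ratio(rates)) # 0.5 on this toy data
```

Note that auditing over intersections rather than single attributes can reveal disparities that each attribute hides on its own, which is the motivation the abstract gives for studying intersectional fairness directly.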
