Bias-Aware Curriculum Sampling For Fair Ranking

Abstract

Neural ranking models are widely used to retrieve and rank relevant documents. However, these models may inherit and amplify biases present in the training data, posing challenges for both fairness and relevance in ranking outputs. In this paper, we propose a novel curriculum-based training approach that manages bias exposure throughout the training process. We design a bias-aware curriculum that staggers the model's exposure to biased samples over the course of training, allowing it to first establish a fair relevance baseline. We conduct extensive experiments across multiple LLMs and datasets to evaluate the effectiveness of our approach. Our results demonstrate that the proposed strategy outperforms existing bias-mitigation methods on both fairness and relevance measures, without sacrificing retrieval effectiveness.

Publication
Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval

This work introduces bias-aware curriculum sampling for fair ranking systems.
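At a high level, the staged bias exposure described in the abstract can be pictured as a sampler whose probability of drawing a flagged biased example grows as training progresses, so early epochs are dominated by neutral examples. The sketch below is a hypothetical illustration of that idea, not the paper's actual algorithm; the linear schedule, the 0.5 cap, and the function name are all assumptions.

```python
import random

def curriculum_sample(neutral_pool, biased_pool, epoch, total_epochs, rng=random):
    """Hypothetical bias-aware curriculum sampler: early epochs draw
    mostly neutral examples so the model can establish a fair relevance
    baseline; biased examples are phased in gradually over training."""
    # Assumed schedule: the fraction of biased samples grows linearly
    # from 0 at the first epoch to 0.5 at the last epoch.
    biased_frac = 0.5 * epoch / max(total_epochs - 1, 1)
    if rng.random() < biased_frac:
        return rng.choice(biased_pool)
    return rng.choice(neutral_pool)
```

Under this sketch, a training loop would call the sampler per step, passing the current epoch so that the mix of biased examples ramps up only after the fairness baseline has been learned.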

Shirin Seyedsalehi
PhD Student (Alumna)

Shirin Seyedsalehi is a former PhD student and alumna of the Human-Centered Machine Intelligence Lab.

Hai Son Le
Master’s Student

Hai Son Le is a Master’s student in the Human-Centered Machine Intelligence Lab, working on research projects in machine learning and data science.

Morteza Zihayat
Principal Investigator

Dr. Morteza Zihayat is a Canada Research Chair (CRC) in Human-Centered AI and Associate Professor at Toronto Metropolitan University, Faculty of Engineering and Architectural Science. He also holds appointments as Adjunct Associate Professor at the University of Waterloo (Management Sciences) and IBM Faculty Fellow at the IBM Centre for Advanced Studies. He is the Director of the Human-Centered Machine Intelligence Lab.