Neural ranking models are widely used to retrieve and rank relevant documents. However, these models may inherit and amplify biases present in their training data, posing challenges for both fairness and relevance in ranking outputs. In this paper, we propose a novel curriculum-based training approach that manages bias exposure throughout training. We design a bias-aware curriculum that stages the model's exposure to biased samples, allowing it to first establish a fair relevance baseline. We conduct extensive experiments across different LLMs and datasets to evaluate the effectiveness of our approach. Our results demonstrate that the proposed strategy outperforms other bias-reduction methods on both fairness and relevance, without sacrificing retrieval effectiveness.
This work introduces bias-aware curriculum sampling for fair ranking systems.
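The abstract does not specify how the curriculum schedules biased samples; the following is a minimal, hypothetical sketch of what "staging exposure to biased samples" could look like, assuming a simple linear ramp: early epochs draw only from a bias-free pool so the model can settle on a fair relevance baseline, and later epochs mix in an increasing share of bias-flagged samples. The function name, the 50% cap, and the split into `neutral` and `biased` pools are all illustrative assumptions, not details from the paper.

```python
import random


def curriculum_batches(neutral, biased, epochs, batch_size, seed=0):
    """Yield (epoch, batch) pairs with a linearly ramped bias exposure.

    `neutral` and `biased` are pools of training examples (hypothetical
    split; the paper does not describe how samples are flagged). The
    fraction of each batch drawn from `biased` grows from 0 in the first
    epoch to 0.5 in the last (an assumed cap, not from the paper).
    """
    rng = random.Random(seed)
    for epoch in range(epochs):
        # Linear schedule: 0.0 at epoch 0, 0.5 at the final epoch.
        bias_frac = 0.5 * epoch / max(epochs - 1, 1)
        n_biased = int(batch_size * bias_frac)
        batch = rng.sample(biased, n_biased) + rng.sample(
            neutral, batch_size - n_biased
        )
        rng.shuffle(batch)  # avoid grouping biased samples at one end
        yield epoch, batch
```

Any monotone schedule (stepwise, exponential) would fit the same interface; the linear ramp is just the simplest choice for illustration.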