Nice review, on montrealethics.ai, of our paper with Philipp Ratz and François Hu. Ask a group of people which biases in machine learning should be reduced, and you are likely to be showered with suggestions, making it difficult to decide where to start. To enable an objective discussion, we study a way to remove biases sequentially and propose a tool that can efficiently analyze how the order of correction affects outcomes. A Sequentially Fair Mechanism for Multiple Sensitive …
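To see why the order of correction can matter, here is a toy sketch (not the paper's actual mechanism, which relies on more refined tools): scores depend on two correlated binary sensitive attributes, and each correction step simply shifts group means to the overall mean. All variable names and effect sizes below are illustrative assumptions. Removing the bias for one attribute can partially reintroduce a gap for the other, so the two orders do not yield the same result.

```python
import random

def mean_gap(scores, attr):
    """Difference in mean score between the attr==1 and attr==0 groups."""
    g1 = [s for s, a in zip(scores, attr) if a == 1]
    g0 = [s for s, a in zip(scores, attr) if a == 0]
    return sum(g1) / len(g1) - sum(g0) / len(g0)

def repair(scores, attr):
    """Shift each group's scores so both group means equal the overall mean."""
    overall = sum(scores) / len(scores)
    means = {}
    for v in (0, 1):
        grp = [s for s, a in zip(scores, attr) if a == v]
        means[v] = sum(grp) / len(grp)
    return [s - means[a] + overall for s, a in zip(scores, attr)]

# Synthetic data: two correlated binary sensitive attributes (illustrative).
random.seed(0)
n = 2000
A = [random.random() < 0.5 for _ in range(n)]
B = [a if random.random() < 0.8 else not a for a in A]  # B correlated with A
A = [int(a) for a in A]
B = [int(b) for b in B]
scores = [random.gauss(0, 1) + 1.0 * a + 0.5 * b for a, b in zip(A, B)]

# Correct for A first, then B -- and in the reverse order.
s_ab = repair(repair(scores, A), B)
s_ba = repair(repair(scores, B), A)

print("order A->B: gap_A=%.4f gap_B=%.4f" % (mean_gap(s_ab, A), mean_gap(s_ab, B)))
print("order B->A: gap_A=%.4f gap_B=%.4f" % (mean_gap(s_ba, A), mean_gap(s_ba, B)))
```

The last-corrected attribute ends up with a zero gap, while the attribute corrected first can see part of its gap reappear, so the two orders produce different final scores. This is the kind of order effect the proposed tool is designed to analyze.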