Fairness-Aware Mixture of Experts with Interpretability Budgets
Published in International Conference on Discovery Science (DS 2023), 2023
Recommended citation: Germino, J., Moniz, N., Chawla, N.V. (2023). Fairness-Aware Mixture of Experts with Interpretability Budgets. In: Bifet, A., Lorena, A.C., Ribeiro, R.P., Gama, J., Abreu, P.H. (eds) Discovery Science. DS 2023. Lecture Notes in Computer Science, vol 14276. Springer, Cham. https://doi.org/10.1007/978-3-031-45275-8_23
Abstract: As artificial intelligence becomes more pervasive, explainability and the need to interpret machine learning models’ behavior emerge as critical issues. Discussions are usually polarized between those who argue that interpretable models must be the rule and those who contend that non-interpretable models’ ability to capture more complex patterns warrants their use. In this paper, we argue that interpretability should not be viewed as a binary property but rather as a continuous, domain-informed notion. With this aim, we leverage the well-known Mixture of Experts architecture with user-defined budgets for the controlled use of non-interpretable models. We extend this idea with a counterfactual fairness module to ensure the selection of consistently fair experts: FairMOE. We compare our proposal to contemporary approaches on fairness-related data sets and demonstrate that FairMOE is competitive with state-of-the-art methods in the trade-off between predictive performance and fairness, while providing competitive scalability and, most importantly, greater interpretability.
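To make the notion of an interpretability budget concrete, the following is a minimal sketch (not the paper's actual FairMOE implementation) of budget-constrained routing in a mixture of experts: each sample is assigned to its best-scoring expert, but the fraction of samples served by non-interpretable experts is capped at a user-defined budget. The function name, score matrix, and greedy budget-spending rule are illustrative assumptions.

```python
import numpy as np

def budgeted_routing(gate_scores, interpretable, budget):
    """Route each sample to one expert, capping the fraction of samples
    assigned to non-interpretable experts at `budget`.

    gate_scores   : (n_samples, n_experts) gate confidence per expert
    interpretable : boolean mask over experts, True = interpretable
    budget        : max fraction of samples non-interpretable experts may serve

    NOTE: hypothetical sketch of a budgeted MoE gate, not the published method.
    """
    gate_scores = np.asarray(gate_scores, dtype=float)
    interpretable = np.asarray(interpretable, dtype=bool)
    n = gate_scores.shape[0]
    max_noninterp = int(np.floor(budget * n))

    # Best expert overall, and best among interpretable experts only.
    best_overall = gate_scores.argmax(axis=1)
    interp_scores = np.where(interpretable[None, :], gate_scores, -np.inf)
    best_interp = interp_scores.argmax(axis=1)

    # Gain from using a non-interpretable expert over the best interpretable one.
    rows = np.arange(n)
    gain = gate_scores[rows, best_overall] - interp_scores[rows, best_interp]
    wants_noninterp = ~interpretable[best_overall]

    # Default to interpretable experts; spend the budget where the gain is largest.
    assignment = best_interp.copy()
    candidates = np.where(wants_noninterp)[0]
    chosen = candidates[np.argsort(-gain[candidates])][:max_noninterp]
    assignment[chosen] = best_overall[chosen]
    return assignment
```

With budget 0.0 every sample goes to an interpretable expert; with budget 1.0 the gate is unconstrained, so the budget interpolates between the two poles the abstract describes.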