Publications

FairMOE: counterfactually-fair mixture of experts with levels of interpretability

Published in Machine Learning, 2024

In this paper, we extend our previous work arguing that interpretability should not be treated as a binary property but rather as a continuous, domain-informed notion. Building on that work, we augment the well-known Mixture of Experts architecture with a counterfactual fairness module to ensure the selection of consistently fair experts: FairMOE. We expand on the previous paper with a detailed analysis of how predictions are assigned to individual experts, providing further insight into their respective strengths and weaknesses.

Recommended citation: Germino, Joe, Nuno Moniz, and Nitesh V. Chawla. "FairMOE: counterfactually-fair mixture of experts with levels of interpretability." Machine Learning (2024): 1-21.

A Community Focused Approach Towards Making Healthy and Affordable Daily Diet Recommendations

Published in Frontiers in Big Data, 6, 2023

This paper demonstrates how integrating data from local grocery stores with federal government databases can help specific communities meet their unique health and budget challenges.

Recommended citation: Germino, J., Szymanski, A., Metoyer, R., & Chawla, N. V. (2023). A Community Focused Approach Towards Making Healthy and Affordable Daily Diet Recommendations. Frontiers in Big Data, 6, 1086212. https://doi.org/10.3389/fdata.2023.1086212

Fairness-Aware Mixture of Experts with Interpretability Budgets

Published in International Conference on Discovery Science (DS 2023), 2023

In this paper, we argue that interpretability should not be treated as a binary property but rather as a continuous, domain-informed notion. With this aim, we leverage the well-known Mixture of Experts architecture with user-defined budgets for the controlled use of non-interpretable models. We extend this idea with a counterfactual fairness module to ensure the selection of consistently fair experts: FairMOE.

Recommended citation: Germino, J., Moniz, N., Chawla, N.V. (2023). Fairness-Aware Mixture of Experts with Interpretability Budgets. In: Bifet, A., Lorena, A.C., Ribeiro, R.P., Gama, J., Abreu, P.H. (eds) Discovery Science. DS 2023. Lecture Notes in Computer Science, vol 14276. Springer, Cham. https://doi.org/10.1007/978-3-031-45275-8_23