Enhancing Fairness in Disease Prediction by Optimizing Multiple Domain Adversarial Networks
Bin Li, PhD student, Computer and Information Sciences, Temple University
Xinghua Shi, PhD, Computer and Information Sciences, Temple University
Hongchang Gao, PhD, Computer and Information Sciences, Temple University
Xiaoqian Jiang, PhD, School of Biomedical Informatics, UTHealth Houston
Kai Zhang, PhD, School of Biomedical Informatics, UTHealth Houston
Arif O Harmanci, School of Biomedical Informatics, UTHealth Houston
Bradley Malin, PhD, Department of Biomedical Informatics, Vanderbilt University
Poster # 67
In health informatics, predictive models play an increasingly important role. Ensuring that these models deliver equitable and reliable outcomes across diverse populations is of paramount importance, yet highly challenging. Biases in medical predictions can exacerbate health disparities, underscoring the urgent need for robust techniques to address them. In this context, we introduce the Multiple Domain Adversarial Neural Network (MDANN), a novel framework designed to enhance fairness by concurrently mitigating biases associated with multiple sensitive features. Here, sensitive features denote individual traits (e.g., race, gender, age) that should not unfairly influence predictions or decisions. Our MDANN framework is underpinned by three core contributions:

Deep Data Representations: We extract deep representations of input data from the embedding layer of a pre-trained convolutional autoencoder (CAE). These enriched representations improve both prediction accuracy and fairness in disease prediction tasks.

AUC-induced Minimax Loss Function: We adopt an area under the receiver operating characteristic curve (AUC)-induced minimax loss function in place of conventional accuracy-induced losses, which fall short of producing equitable predictions when labels are highly imbalanced. Empirical evidence suggests that this approach excels at handling imbalanced data, improving both classification performance and fairness, especially for minority groups.

Adversarial Module for Bias Mitigation: Our MDANN framework incorporates an adversarial module that achieves robust adversarial learning by back-propagating negative gradients across multiple sensitive features.
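The adversarial module described above can be illustrated with a minimal PyTorch sketch: a shared encoder feeds a disease-prediction head, while one adversary head per sensitive attribute receives the representation through a gradient-reversal layer, so training the adversaries pushes the encoder to hide those attributes. All class names, layer sizes, and the two-attribute setup below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the
    backward pass (the standard domain-adversarial trick)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class MultiDomainAdversarialNet(nn.Module):
    """Sketch of a multi-adversary network: one shared encoder, one
    disease-prediction head, and one adversary head per sensitive
    attribute (e.g., sex and race, each with 2 classes here)."""
    def __init__(self, in_dim=32, hid_dim=16, n_sensitive=(2, 2)):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.predictor = nn.Linear(hid_dim, 1)  # disease label head
        self.adversaries = nn.ModuleList(
            [nn.Linear(hid_dim, k) for k in n_sensitive]
        )

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        y_hat = self.predictor(z)
        # Each adversary sees the reversed-gradient representation, so
        # minimizing its loss drives the encoder to obscure that attribute.
        a_hats = [adv(GradReverse.apply(z, lambd)) for adv in self.adversaries]
        return y_hat, a_hats
```

In training, the prediction loss and all adversary losses are simply summed; the reversal layer turns that sum into the minimax game described in the abstract.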
This design mitigates biases and enhances fairness by addressing multiple sensitive features simultaneously, in contrast to traditional approaches that tackle biases sequentially or in isolation. In empirical evaluations, MDANN outperforms state-of-the-art techniques in predicting disease progression from brain imaging data in Alzheimer's Disease and Autism populations. These results underscore the potential of MDANN as an effective method for enhancing fairness in disease prediction, especially when multiple sensitive attributes are considered.
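The AUC-induced loss in the abstract can be sketched as a pairwise ranking surrogate: rather than scoring each example against its label, it penalizes every positive example that is not scored sufficiently above every negative example, which is what makes it robust to label imbalance. The squared-hinge form and the `margin` parameter below are a common illustrative choice, not the authors' exact formulation.

```python
import torch

def pairwise_auc_loss(scores, labels, margin=1.0):
    """Pairwise squared-hinge surrogate for AUC: penalizes each
    positive example scored less than `margin` above a negative one.
    `scores` is a 1-D tensor of model outputs; `labels` is 0/1."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.sum() * 0.0  # no valid pairs; keep the graph alive
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)  # all positive-negative pairs
    return torch.clamp(margin - diff, min=0).pow(2).mean()
```

Because the loss depends only on the ranking of positives over negatives, a model can achieve zero loss even when positives are a small minority, unlike accuracy-induced losses that are dominated by the majority class.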