Purpose: Explainability and fairness are two key factors for the effective and ethical clinical implementation of deep learning models in healthcare settings. However, there has been limited work on investigating how unfair performance manifests in explainable artificial intelligence (XAI) methods, and how XAI can be used to investigate potential reasons for unfairness. Thus, the aim of this work was to analyze the effects of previously established sociodemographic-related confounders on classifier performance and explainability methods.

Approach: A convolutional neural network (CNN) was trained to predict biological sex from T1-weighted brain MRI datasets of 4547 9- to 10-year-old adolescents from the Adolescent Brain Cognitive Development study. Performance disparities of the trained CNN between White and Black subjects were analyzed, and saliency maps were generated for each subgroup at the intersection of sex and race.

Results: The classification model demonstrated a significant difference in the percentage of correctly classified White male (90.3% ± 1.7%) and Black male (81.1% ± 4.5%) children. Conversely, slightly higher performance was found for Black female (89.3% ± 4.8%) compared with White female (86.5% ± 2.0%) children. Saliency maps showed subgroup-specific differences corresponding to brain regions previously associated with pubertal development. In line with this finding, average pubertal development scores of the subjects used in this study differed significantly between Black and White females (p < 0.001) and between Black and White males (p < 0.001).

Conclusions: We demonstrate that a CNN with significantly different sex classification performance between Black and White adolescents identifies different important brain regions when subgroup saliency maps are compared. Importance scores vary substantially between subgroups within brain structures associated with pubertal development, a race-associated confounder for predicting sex. We illustrate that unfair models can produce different XAI results between subgroups and that these results may point to potential reasons for biased performance.
KEYWORDS: Data modeling, Brain, Neuroimaging, Performance modeling, Machine learning, Data centers, Magnetic resonance imaging, Solid modeling, Medical research, Feature extraction
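The abstract does not specify which saliency method was used; one common choice for voxel-level attribution in 3D CNNs is vanilla gradient saliency. The sketch below (PyTorch; the trained model `cnn` and the subgroup iterable are hypothetical placeholders) illustrates how per-subject maps and subgroup-average maps of this kind could be computed:

```python
import torch

def gradient_saliency(model, volume, target_class):
    """Voxel-wise vanilla gradient saliency for one T1-weighted volume.

    volume: tensor of shape (1, 1, D, H, W); model returns class logits.
    Returns a (D, H, W) importance map.
    """
    model.eval()
    volume = volume.detach().clone().requires_grad_(True)
    logits = model(volume)
    # Backpropagate the logit of the target class down to the input voxels.
    logits[0, target_class].backward()
    return volume.grad.abs().squeeze()

# A subgroup map is the average of the per-subject maps, e.g. (hypothetical
# iterable of (volume, label) pairs for one sex-by-race subgroup):
# maps = torch.stack([gradient_saliency(cnn, v, y) for v, y in subgroup])
# subgroup_map = maps.mean(dim=0)
```

Comparing such subgroup-average maps, e.g. White male versus Black male, is one way to localize where importance scores diverge between subgroups.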
Limited access to medical datasets, due to regulations that protect patient data, is a major hindrance to the development of machine learning models for computer-aided diagnosis tools using medical images. Distributed learning is an alternative to training machine learning models on centrally collected data that avoids these data-sharing issues. The main idea of distributed learning is to train models remotely at each medical center rather than collecting the data in a central database, so that no data are shared between centers and model developers. In this work, we propose a travelling model that performs distributed learning for biological brain age prediction using morphological measurements of different brain structures. We specifically investigate the impact of nonidentically distributed data between collaborators on the performance of the travelling model. Our results, based on a large dataset of 2058 magnetic resonance imaging scans, demonstrate that transferring the model weights between the centers more frequently achieves results (mean age prediction error = 5.89 years) comparable to central learning implementations (mean age prediction error = 5.93 years), which were trained using the data from all sites hosted together at a central location. Moreover, we show that our model does not suffer from catastrophic forgetting and that the data distribution is less important than the number of times the model travels between collaborators.
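The travelling-model protocol described above amounts to cyclic weight transfer: the model trains locally at one site, then its weights move on to the next, so raw data never leaves a center. A minimal sketch, assuming each collaborator exposes a local PyTorch DataLoader over (features, age) pairs; all names here are hypothetical, not the authors' implementation:

```python
import torch

def travelling_model(model, site_loaders, rounds, local_epochs, lr=1e-3):
    """Cyclic weight transfer: one shared model trains at each site in
    turn; only the weights travel between collaborators."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()  # mean absolute error for age regression
    for _ in range(rounds):           # more rounds = more frequent transfers
        for loader in site_loaders:   # "travel" to the next collaborator
            for _ in range(local_epochs):
                for features, ages in loader:
                    optimizer.zero_grad()
                    loss = loss_fn(model(features).squeeze(-1), ages)
                    loss.backward()
                    optimizer.step()
    return model
```

Increasing `rounds` while decreasing `local_epochs` corresponds to transferring the weights between centers more frequently, the regime the abstract reports as matching central learning.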
Attention deficit/hyperactivity disorder (ADHD) is characterized by symptoms of inattention, hyperactivity, and impulsivity and affects an estimated 10.2% of children and adolescents in the United States. However, correctly diagnosing the condition can be challenging, with failure rates of up to 20%. Machine learning models making use of magnetic resonance imaging (MRI) have the potential to serve as a clinical decision support system that aids the diagnosis of ADHD in youth and improves diagnostic validity. The purpose of this study was to develop and evaluate an explainable deep learning model for automatic ADHD classification. A total of 254 T1-weighted brain MRI datasets of youth aged 9 to 11 were obtained from the Adolescent Brain Cognitive Development (ABCD) Study, and the Child Behavior Checklist DSM-Oriented ADHD Scale was used to partition subjects into ADHD and non-ADHD groups. A fully convolutional neural network (CNN) adapted from a state-of-the-art adult brain age regression model was trained to distinguish between neurologically normal children and children with ADHD. Voxel-wise saliency attribution maps were generated to identify brain regions relevant to the classification task. The proposed model achieved an accuracy of 71.1%, a sensitivity of 68.4%, and a specificity of 73.7%. The saliency maps highlighted the orbitofrontal cortex, entorhinal cortex, and amygdala as important regions for the classification, consistent with previous literature linking these regions to significant structural differences in youth with ADHD. To the best of our knowledge, this is the first study to apply artificial intelligence explainability methods such as saliency maps to the classification of ADHD with a deep learning model. The proposed deep learning classification model has the potential to aid the clinical diagnosis of ADHD while providing interpretable results.
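For reference, the reported accuracy, sensitivity, and specificity follow directly from the confusion matrix of the binary ADHD versus non-ADHD predictions. A short NumPy sketch with hypothetical label arrays, where 1 denotes ADHD:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (ADHD recall), and specificity (non-ADHD
    recall) for binary labels with 1 = ADHD, 0 = non-ADHD."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```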