Magnetic resonance spectroscopy (MRS) is one of the few non-invasive imaging modalities capable of making neurochemical and metabolic measurements in vivo. Traditionally, the clinical utility of MRS has been narrow. The most common use has been the "single-voxel spectroscopy" variant to discern the presence of a lactate peak in the spectrum at a single location in the brain, typically to evaluate for ischemia in neonates. Thus, the reduction of rich spectral data to a binary variable has not classically necessitated much signal processing. However, scanners have become more powerful and MRS sequences more advanced, increasing data complexity and adding two to three spatial dimensions in addition to the spectral one. The result is a spatially and spectrally varying MRS image ripe for image processing innovation. Despite this potential, the logistics of robustly accessing and manipulating MRS data across different scanners, data formats, and software standards remain unclear. Thus, as research into MRS advances, there is a clear need to better characterize its image processing considerations to facilitate innovation from scientists and engineers. Building on established neuroimaging standards, we describe a framework for manipulating these images that generalizes to the voxel, spectral, and metabolite level across space and multiple imaging sites while integrating with LCModel, a widely used quantitative MRS peak-fitting platform. In doing so, we provide examples that demonstrate the advantages of such a workflow in relation to recent publications and with new data. Overall, we hope our characterizations will lower the barrier of entry to MRS processing for neuroimaging researchers.
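As a rough illustration of voxel-level spectral access in such a workflow, the sketch below loads a 4D spectroscopic volume, pulls the free-induction decay (FID) from a single voxel, and Fourier-transforms it into a spectrum. The file name, axis ordering, and voxel indices are assumptions made for illustration, not a prescribed format or the paper's implementation.

```python
# Minimal sketch of voxel-level MRSI access, assuming the spectroscopic data are
# stored as a 4D NIfTI volume (x, y, z, FID time points). The file name
# "mrsi_data.nii.gz" and the voxel indices are illustrative assumptions.
import numpy as np
import nibabel as nib

img = nib.load("mrsi_data.nii.gz")           # hypothetical MRSI acquisition
data = img.get_fdata()                       # shape: (nx, ny, nz, n_points)

# Pull the FID from one voxel and convert it to a frequency-domain spectrum.
fid = data[12, 20, 8, :]                     # arbitrary voxel indices
spectrum = np.fft.fftshift(np.fft.fft(fid))

# Magnitude spectrum that could, for example, be exported for LCModel fitting.
magnitude = np.abs(spectrum)
print(magnitude.shape)
```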
7T magnetic resonance imaging (MRI) has the potential to drive our understanding of human brain function through new contrast and enhanced resolution. Whole brain segmentation is a key neuroimaging technique that allows for region-by-region analysis of the brain. Segmentation is also an important preliminary step that provides spatial and volumetric information for running other neuroimaging pipelines. Spatially localized atlas network tiles (SLANT) is a popular 3D convolutional neural network (CNN) tool that breaks the whole brain segmentation task into localized subtasks. Each subtask covers a specific spatial location handled by an independent 3D convolutional network, together providing high resolution whole brain segmentation results. SLANT has been widely used to generate whole brain segmentations from structural scans acquired on 3T MRI. However, the use of SLANT for whole brain segmentation from structural 7T MRI scans has not been successful due to the inhomogeneous image contrast usually seen across the brain in 7T MRI. For instance, we demonstrate that the mean percent difference in SLANT label volumes between a 3T scan-rescan pair is approximately 1.73%, whereas the corresponding 3T-7T scan-rescan difference is considerably higher, at approximately 15.13%. Our approach to address this problem is to register the whole brain segmentation performed on 3T MRI to 7T MRI and use this information to finetune SLANT for structural 7T MRI. With the finetuned SLANT pipeline, we observe a lower mean relative difference in label volumes of approximately 8.43% on structural 7T MRI data. The Dice similarity coefficient between the SLANT segmentation of the 3T MRI scan and the finetuned SLANT segmentation of the 7T MRI scan increased from 0.79 to 0.83 (p < 0.01). These results suggest that finetuning SLANT is a viable solution for improving whole brain segmentation on high resolution 7T structural imaging.
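The two evaluation metrics reported above, the mean percent difference in per-label volumes between a scan-rescan pair and the Dice similarity coefficient, can be sketched as follows. This is a minimal illustration assuming the segmentations are co-registered integer-valued label arrays; the function and variable names are placeholders, not the SLANT pipeline's actual code.

```python
# Scan-rescan agreement metrics for labeled segmentations (illustrative sketch).
import numpy as np

def mean_percent_volume_difference(seg_a, seg_b, labels):
    """Mean percent difference in label volumes between two segmentations."""
    diffs = []
    for lbl in labels:
        vol_a = np.sum(seg_a == lbl)
        vol_b = np.sum(seg_b == lbl)
        if vol_a + vol_b > 0:
            diffs.append(abs(vol_a - vol_b) / ((vol_a + vol_b) / 2.0) * 100.0)
    return float(np.mean(diffs))

def dice(seg_a, seg_b, lbl):
    """Dice similarity coefficient for a single label."""
    a = seg_a == lbl
    b = seg_b == lbl
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

# Toy example with random 3D label arrays.
seg1 = np.random.randint(0, 4, (8, 8, 8))
seg2 = np.random.randint(0, 4, (8, 8, 8))
print(mean_percent_volume_difference(seg1, seg2, labels=[1, 2, 3]))
print(dice(seg1, seg2, lbl=1))
```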
Batch size is a key hyperparameter in training deep learning models. Conventional wisdom suggests larger batches produce improved model performance. Here we present evidence to the contrary, particularly when using autoencoders to derive meaningful latent spaces from data with spatially global similarities and local differences, such as electronic health records (EHR) and medical imaging. We investigate batch size effects in both EHR data from the Baltimore Longitudinal Study of Aging and medical imaging data from the multimodal brain tumor segmentation (BraTS) challenge. We train fully connected and convolutional autoencoders to compress the EHR and imaging input spaces, respectively, into 32-dimensional latent spaces via reconstruction losses for various batch sizes between 1 and 100. Under the same hyperparameter configurations, smaller batches improve loss performance for both datasets. Additionally, latent spaces derived by autoencoders with smaller batches capture more biologically meaningful information. Qualitatively, we visualize 2-dimensional projections of the latent spaces and find that with smaller batches, the EHR network better separates the sex of the individuals, and the imaging network better captures the right-left laterality of tumors. Quantitatively, the analogous sex classification and laterality regression analyses using the latent spaces demonstrate statistically significant improvements in performance at smaller batch sizes. Finally, visualizations of representative data reconstructions show improved local individual variation at lower batch sizes. Taken together, these results suggest that smaller batch sizes should be considered when designing autoencoders to extract meaningful latent spaces from EHR and medical imaging data driven by global similarities and local variation.
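A minimal sketch of the batch-size comparison described above: a fully connected autoencoder with a 32-dimensional bottleneck is trained under several batch sizes with an otherwise identical configuration. This is not the study's implementation; the network depth, learning rate, and the synthetic data X are illustrative assumptions standing in for the EHR inputs.

```python
# Train the same fully connected autoencoder under several batch sizes
# (illustrative sketch; data and hyperparameters are placeholders).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

n_features = 256
X = torch.randn(1000, n_features)             # placeholder data, not BLSA/BraTS

def make_autoencoder():
    return nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(),
        nn.Linear(128, 32),                   # 32-dimensional latent space
        nn.Linear(32, 128), nn.ReLU(),
        nn.Linear(128, n_features),
    )

for batch_size in (1, 10, 100):               # batch sizes to compare
    model = make_autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(TensorDataset(X), batch_size=batch_size, shuffle=True)
    for (xb,) in loader:                      # one epoch per setting, for brevity
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xb), xb)  # reconstruction loss
        loss.backward()
        opt.step()
    print(f"batch_size={batch_size}, final reconstruction loss={loss.item():.4f}")
```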
7T MRI provides unprecedented resolution for examining human brain anatomy in vivo. For example, 7T MRI enables deep thickness measurement of laminar subdivisions in the right fusiform area. Existing laminar thickness measurement at 7T is labor intensive and error prone, since visual inspection of the image is typically performed along one of the three orthogonal planes (axial, coronal, or sagittal view). To overcome this, we propose a new analytics tool that allows flexible quantification of cortical thickness on a 2D plane that is orthogonal to the cortical surface (beyond the axial, coronal, and sagittal views), based on a 3D computational surface reconstruction. The proposed method further distinguishes high-quality 2D planes from low-quality ones by automatically inspecting the angles between the surface normals and the slice direction. In our approach, we acquired a pair of 3T and 7T scans from the same subject. We extracted the brain surfaces from the 3T scan using MaCRUISE and projected the surfaces into the 7T scan's space. After computing the angles between the surface normals and the axial direction vector, we found that 18.58% of surface points had normals angled at more than 80° from the axial direction vector and yielded 2D axial planes with visually distinguishable cortical layers. In contrast, 15.12% of surface points, whose normal vectors were angled at 30° or less from the axial direction, had poor 2D axial slices for visual inspection of the cortical layers. This effort promises to dramatically extend the area of cortex that can be quantified with ultra-high resolution in-plane imaging methods.
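The surface-normal screening step lends itself to a short sketch: compute the angle between each vertex normal and the axial (slice) direction, then flag vertices whose normals are nearly perpendicular to it as having usable in-plane views. The random normals below are placeholders for the MaCRUISE-derived surface normals; the 80° and 30° thresholds follow the abstract.

```python
# Screen surface vertices by the angle between their normals and the slice
# direction (illustrative sketch with placeholder normals).
import numpy as np

normals = np.random.randn(10000, 3)                      # placeholder vertex normals
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

axial = np.array([0.0, 0.0, 1.0])                        # axial (slice) unit vector
angles = np.degrees(np.arccos(np.clip(np.abs(normals @ axial), 0.0, 1.0)))

good = angles > 80.0   # nearly perpendicular to the axial vector: usable 2D planes
poor = angles < 30.0   # nearly parallel: unreliable for visual laminar inspection
print(f"{good.mean():.2%} usable planes, {poor.mean():.2%} poor planes")
```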