Epicardial Adipose Tissue (EAT) volume has been associated with risk of cardiovascular events, but manual annotation is time-consuming and only performed on gated Computed Tomography (CT). We developed a Deep Learning (DL) model to segment EAT from gated and ungated CT, then evaluated the association between EAT volume and death or Myocardial Infarction (MI). We included 7,712 patients from three sites, two with ungated CT and one with gated CT. Of those, 500 patients from one site with ungated CT were used for model training and validation, and 3,701 patients from the remaining two sites were used for external testing. The threshold for abnormal EAT volume (≥144 mL) was derived in the internal population based on Youden's index. DL EAT measurements were obtained in <2 seconds, compared to approximately 15 minutes for expert annotations. There was excellent Spearman correlation between DL and expert reader measurements in an external subset of N=100 gated (0.94, p<0.001) and N=100 ungated (0.91, p<0.001) scans. During a median follow-up of 3.1 years (IQR 2.1-4.0), 306 (8.3%) patients experienced death or MI in the external testing populations. Elevated EAT volume was associated with an increased risk of death or MI for gated (hazard ratio [HR] 1.72, 95% CI 1.11-2.67) and ungated CT (HR 1.57, 95% CI 1.20-2.07), with no significant difference in risk (interaction p-value 0.692). EAT volume measurements provide similar risk stratification from gated and ungated CT. These measurements could be obtained from chest CT performed for a wide variety of indications, potentially improving risk stratification.
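As a rough illustration of the cutoff derivation described above, the sketch below computes a Youden-optimal threshold from outcome labels with scikit-learn; the variable names and library choice are assumptions, not details taken from the study.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(values, labels):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(labels, values)
    return thresholds[np.argmax(tpr - fpr)]

# eat_volume: DL-derived EAT volume (mL); event: death/MI label (0/1)
# cutoff = youden_threshold(eat_volume, event)   # reported as 144 mL
```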
Coronary artery calcium (CAC) scores are a well-established marker of the extent of coronary atherosclerosis. We aimed to compare a state-of-the-art vision transformer for medical image segmentation with convolutional long short-term memory (ConvLSTM) networks for automatic CAC quantification, with external validation.
KEYWORDS: Transformers, Heart, Education and training, Computed tomography, Atherosclerosis, Angiography, Network architectures, Deep learning, Medicine, Medical research
Background: To compare the performance of 2 novel deep learning networks (convolutional long short-term memory and transformer) for artificial intelligence-based quantification of plaque volume and stenosis severity from CCTA. Methods: This was an international multicenter study of patients undergoing CCTA at 11 sites. The deep learning (DL) convolutional neural networks were trained to segment coronary plaque in 921 patients (5,045 lesions). The training dataset was further split temporally into training (80%) and internal validation (20%) datasets. The primary DL architecture was a hierarchical convolutional long short-term memory (ConvLSTM) network. This was compared against a TransUNet network, which combines the abilities of the Vision Transformer with U-Net, enabling the capture of in-depth localization information while modeling long-range dependencies. Following training and internal validation, both DL networks were applied to an external validation cohort of 162 patients (1,468 lesions) from the SCOT-HEART trial. Results: In the external validation cohort, agreement between DL and expert reader measurements was stronger with the ConvLSTM network than with TransUNet, for both per-lesion total plaque volume (ICC 0.953 vs 0.830) and percent diameter stenosis (ICC 0.882 vs 0.735; both p<0.001). The ConvLSTM network showed higher per-cross-section overlap with expert reader segmentations (as measured by the Dice coefficient) than TransUNet for vessel wall (0.947 vs 0.946), lumen (0.93 vs 0.92), and calcified plaque (0.87 vs 0.86; p<0.0001 for all), with similar execution times. Conclusions: In a direct comparison with external validation, the ConvLSTM network yielded higher agreement with expert readers for quantification of total plaque volume and stenosis severity compared to TransUNet, with faster execution times.
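For readers unfamiliar with the recurrent unit named above, here is a minimal PyTorch sketch of a single ConvLSTM cell; the published hierarchical network is substantially more elaborate, so this is illustrative only.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: convolutional analogue of an LSTM cell."""
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        # A single convolution computes all four gates at once.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state  # hidden/cell states carry context between slices
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Cross-sections along the vessel centerline can be fed as a sequence so the
# cell accumulates context from neighboring slices:
# cell = ConvLSTMCell(in_ch=1, hidden_ch=32)
# h = c = torch.zeros(1, 32, 64, 64)
# for xs in slices:                       # xs: (1, 1, 64, 64)
#     h, (h, c) = cell(xs, (h, c))
```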
KEYWORDS: Positron emission tomography, Single photon emission computed tomography, Machine learning, Data modeling, Deep learning, Detection and tracking algorithms
Cardiac PET, less common than SPECT, is rapidly growing and offers the additional benefit of first-pass absolute myocardial blood flow measurements. However, multicenter cardiac PET databases are not well established. We used multicenter SPECT data to improve PET cardiac risk stratification via a deep learning knowledge transfer mechanism.
Contrast computed tomography angiography (CTA) is utilized in a wide variety of applications, ranging from routine clinical practice to emerging technologies. However, radiation exposure, the necessity of contrast administration, and the overall complexity of the acquisition are major limitations. We aimed to generate pseudo-contrast CTA utilizing a conditional generative adversarial network (cGAN). The network synthesizes realistic contrast CTA from perfectly registered thin-slice non-contrast computed tomography (NCCT). Our method may substitute contrast CTA with pseudo-contrast CTA for certain clinical applications, such as assessment of cardiac anatomy.
Purpose: Quantitative lung measures derived from computed tomography (CT) have been demonstrated to improve prognostication in coronavirus disease 2019 (COVID-19) patients but are not part of clinical routine because the required manual segmentation of lung lesions is prohibitively time-consuming. We aim to automatically segment ground-glass opacities and high opacities (comprising consolidation and pleural effusion). Approach: We propose a new fully automated deep-learning framework for fast multi-class segmentation of lung lesions in COVID-19 pneumonia from both contrast and non-contrast CT images using convolutional long short-term memory (ConvLSTM) networks. Utilizing the expert annotations, model training was performed using five-fold cross-validation to segment COVID-19 lesions. The performance of the method was evaluated on CT datasets from 197 patients with a positive reverse transcription polymerase chain reaction test result for SARS-CoV-2, 68 unseen test cases, and 695 independent controls. Results: Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score of 0.89 ± 0.07; excellent correlations of 0.93 and 0.98 for ground-glass opacity (GGO) and high opacity volumes, respectively, were obtained. In the external testing set of 68 patients, we observed a Dice score of 0.89 ± 0.06 as well as excellent correlations of 0.99 and 0.98 for GGO and high opacity volumes, respectively. Computations for a CT scan comprising 120 slices were performed in under 3 s on a computer equipped with an NVIDIA TITAN RTX GPU. Diagnostically, the automated quantification of the percentage lung burden discriminated COVID-19 patients from controls with an area under the receiver operating characteristic curve of 0.96 (0.95-0.98). Conclusions: Our method allows for rapid, fully automated quantitative measurement of pneumonia burden from CT, which can be used to rapidly assess the severity of COVID-19 pneumonia on chest CT.
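The two headline metrics above (Dice overlap and percentage pneumonia burden) are straightforward to compute from binary masks; a minimal NumPy sketch, with assumed mask conventions, follows.

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def pneumonia_burden(ggo_mask, high_opacity_mask, lung_mask):
    """Lesion volume as a percentage of total lung volume."""
    lesion = np.logical_or(ggo_mask, high_opacity_mask) & lung_mask
    return 100.0 * lesion.sum() / lung_mask.sum()
```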
Background: Coronary computed tomography angiography (CCTA) allows non-invasive assessment of luminal stenosis and coronary atherosclerotic plaque. We aimed to develop and externally validate an artificial intelligence-based deep learning (DL) network for CCTA-based measures of plaque volume and stenosis severity. Methods: This was an international multicenter study of 1,183 patients undergoing CCTA at 11 sites. A novel DL convolutional neural network was trained to segment coronary plaque in 921 patients (5,045 lesions). The DL architecture consisted of a novel hierarchical convolutional long short-term memory (ConvLSTM) network. The training set was further split temporally into training (80%) and internal validation (20%) datasets. Each coronary lesion was assessed in a 3D slab about the vessel centrelines. Following training and internal validation, the model was applied to an independent test set of 262 patients (1,469 lesions), which included an external validation cohort of 162 patients. Results: In the test set, there was excellent agreement between DL and clinician expert reader measurements of total plaque volume (intraclass correlation coefficient [ICC] 0.964) and percent diameter stenosis (ICC 0.879; both p<0.001). The average per-patient DL plaque analysis time was 5.7 seconds, versus 25-30 minutes taken by experts. There was significantly higher overlap as measured by the Dice coefficient (DC) for ConvLSTM compared to UNet (DC for vessel 0.94 vs 0.83, p<0.0001; DC for lumen and plaque 0.90 vs 0.83, p<0.0001) or DeepLabv3 (DC for vessel both 0.94; DC for lumen and plaque 0.89 vs 0.84, p<0.0001). Conclusions: A novel, externally validated artificial intelligence-based network provides rapid measurements of plaque volume and stenosis severity from CCTA which agree closely with clinician expert readers.
We aimed to develop a novel deep-learning-based method for automatic coronary artery calcium (CAC) quantification in low-dose ungated computed tomography attenuation correction (CTAC) maps. In this study, we used a convolutional long short-term memory deep neural network (ConvLSTM) to automatically derive CAC scores from both standard CAC scans and low-dose ungated scans (CTAC maps). We trained the ConvLSTM to segment CAC using 9,543 scans. A U-Net model was trained as a reference method. Both models were validated in the OrCaCs dataset (n=32) and in a held-out cohort (n=507) of patients without prior coronary interventions who had CTAC and standard CAC scans acquired contemporaneously. Cohen's kappa coefficients and concordance matrices were used to assess agreement in four CAC score categories (very low: <10; low: 10-100; moderate: 101-400; high: >400). The median time to derive results on a central processing unit (CPU) was significantly shorter for the ConvLSTM model (6.18 s, interquartile range [IQR]: 5.99-6.3 s) than for U-Net (10.1 s, IQR: 9.82-15.9 s; p<0.0001). Memory consumption during training was much lower for our model (13.11 GB) than for U-Net (22.31 GB). ConvLSTM performed comparably to U-Net in terms of agreement with expert annotations, but with significantly shorter inference times and lower memory consumption.
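For context, Agatston scoring itself follows a fixed recipe: threshold at 130 HU, group lesions, and weight each lesion's per-slice area by its peak density. The sketch below is a simplified illustration (per-slice scoring, assumed helper names), not the study's implementation.

```python
import numpy as np
from scipy import ndimage

def density_weight(peak_hu):
    """Standard Agatston density factor from a lesion's peak HU."""
    if peak_hu < 200: return 1
    if peak_hu < 300: return 2
    if peak_hu < 400: return 3
    return 4

def agatston_score(ct_hu, pixel_area_mm2, min_area_mm2=1.0):
    """ct_hu: (slices, rows, cols) HU volume with ~3 mm slices (assumed)."""
    labels, _ = ndimage.label(ct_hu >= 130)       # 3D connected components
    score = 0.0
    for z in range(ct_hu.shape[0]):               # Agatston is scored per slice
        for lab in np.unique(labels[z]):
            if lab == 0:
                continue
            region = labels[z] == lab
            area = region.sum() * pixel_area_mm2
            if area >= min_area_mm2:              # ignore sub-millimetre specks
                score += area * density_weight(ct_hu[z][region].max())
    return score
```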
We propose a fast and robust multi-class deep learning framework for segmenting COVID-19 lesions, ground-glass opacities and high opacities (including consolidation and pleural effusion), from non-contrast CT scans using a convolutional long short-term memory network for self-attention. Our method allows rapid quantification of pneumonia burden from CT with performance equivalent to expert readers. The mean Dice score across 5 folds was 0.8776, with a standard deviation of 0.0095. The low standard deviation between folds indicates the models were trained equally well regardless of the training fold. The cumulative per-patient mean Dice score (0.8775±0.075) for N=167 patients, after concatenation, is consistent with the results from each of the 5 folds. We obtained excellent Pearson correlations (expert vs. automatic) of 0.9396 (p<0.0001) and 0.9843 (p<0.0001) for ground-glass opacity and high opacity volumes, respectively. Our model outperforms Unet2d (p<0.05) and Unet3d (p<0.05) in segmenting high opacities, performs comparably to Unet2d in segmenting ground-glass opacities, and significantly outperforms Unet3d (p<0.0001) in segmenting ground-glass opacities. Our model also runs faster on both CPU and GPU than Unet2d and Unet3d. For the same number of input slices, our model consumes 0.83× and 0.26× the memory of Unet2d and Unet3d, respectively.
Background: Coronary computed tomography angiography (CTA) allows quantification of stenosis. However, such quantitative analysis is not part of clinical routine. We evaluated the feasibility of utilizing deep learning for quantifying coronary artery disease from CTA. Methods: A total of 716 diseased segments in 156 patients (66 ± 10 years) who underwent CTA were analyzed. Minimal luminal area (MLA), percent diameter stenosis (DS), and percent contrast density difference (CDD) were measured using semi-automated software (Autoplaque) by an expert reader. Using the expert annotations, deep learning was performed with convolutional neural networks using 10-fold cross-validation to segment CTA lumen and calcified plaque. MLA, DS, and CDD computed using the deep-learning-based approach were compared to expert reader measurements. Results: There was excellent correlation between the expert reader and deep learning for all quantitative measures (r=0.984 for MLA; r=0.957 for DS; and r=0.975 for CDD; p<0.001 for all). Expert reader and deep learning measurements were not significantly different for MLA (median 4.3 mm2 for both, p=0.68) or CDD (11.6 vs 11.1%, p=0.30), and were significantly different for DS (26.0 vs 26.6%, p<0.05); however, the ranges of all the quantitative measures were within the inter-observer variability between 2 expert readers. Conclusions: Our deep-learning-based method allows accurate quantitative measurement of coronary artery disease segments from CTA and may enhance clinical reporting.
Mathieu Rubeaux, Nikhil Joshi, Marc Dweck, Alison Fletcher, Manish Motwani, Louise Thomson, Guido Germano, Damini Dey, Daniel Berman, David Newby, Piotr Slomka
Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has recently been shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac-gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG-gated bins; subsequently, these gated bins were registered using demons and level-set methods, guided by the coronary arteries extracted from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR), and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from the PET acquisition), cardiac motion registration using either the level-set or demons technique almost halved image noise, owing to the use of counts from the full PET acquisition, and increased the TBR difference between 18F-NaF-positive and -negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), compared to the level-set method, which presents between 4 and 8% singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields than the level-set method, with motion that is physiologically plausible; the level-set technique will therefore likely require additional post-processing steps. On the other hand, the observed TBR increases were highest for the level-set technique. Further investigation of the optimal registration technique for this novel coronary PET imaging approach is warranted.
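A hedged sketch of the two ingredients discussed above, demons registration and a singularity check, using SimpleITK; the parameter values are placeholders, and the study's actual pipeline (coronary-guided, ECG-gated) is more involved.

```python
import SimpleITK as sitk

def demons_register(fixed, moving, iterations=100, smooth_sigma=1.5):
    """Register one gated bin (moving) to another (fixed); images as float."""
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(smooth_sigma)   # Gaussian smoothing of field
    return demons.Execute(fixed, moving)         # displacement field image

def singularity_percent(displacement_field):
    """Percent of voxels with det(J) <= 0, i.e., implausible folding."""
    jac = sitk.DisplacementFieldJacobianDeterminant(displacement_field)
    arr = sitk.GetArrayFromImage(jac)
    return 100.0 * float((arr <= 0).mean())
```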
CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing 82-Rb PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity threshold for calcium identification (130 HU). Five scans were removed from analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape, and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The dataset was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm³ of 730 mm³ (64%) of CAC, with 36 mm³ false-positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0-10, 11-100, 101-400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ = 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large-scale studies evaluating the clinical value of CAC scoring in CTAC data.
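The candidate-extraction and classification steps described above map naturally onto SciPy and scikit-learn primitives; the sketch below is an assumed reconstruction with an illustrative feature set, not the published code.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import ExtraTreesClassifier

def cac_candidates(ct_hu, spacing_mm):
    """Threshold at 130 HU, 3D connected components, per-candidate features."""
    labels, n = ndimage.label(ct_hu >= 130)
    voxel_vol = float(np.prod(spacing_mm))
    feats = []
    for lab in range(1, n + 1):
        mask = labels == lab
        centroid = np.argwhere(mask).mean(axis=0)        # location
        volume = mask.sum() * voxel_vol                  # size
        hu = ct_hu[mask]
        feats.append([*centroid, volume, hu.mean(), hu.max()])  # intensity
    return np.asarray(feats), labels

# clf = ExtraTreesClassifier(n_estimators=300).fit(train_feats, train_is_cac)
# cac_probability = clf.predict_proba(test_feats)[:, 1]
```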
Pericardial fat volume (PFV) is emerging as an important parameter for cardiovascular risk stratification. We propose a hybrid approach for automated PFV quantification from water/fat-resolved whole-heart noncontrast coronary magnetic resonance angiography (MRA). Ten coronary MRA datasets were acquired. Image reconstruction and phase-based water-fat separation were conducted offline. Our proposed algorithm first roughly segments the heart region on the original image using a simplified atlas-based segmentation with four cases in the atlas. To get exact boundaries of pericardial fat, a three-dimensional graph-based segmentation is used to generate fat and nonfat components on the fat-only image. The algorithm then selects the components that represent pericardial fat. We validated the quantification results on the remaining six subjects and compared them with manual quantifications by an expert reader. The PFV quantified by our algorithm was 62.78±27.85 cm3, compared to 58.66±27.05 cm3 by the expert reader, which were not significantly different (p=0.47) and showed excellent correlation (R=0.89,p<0.01). The mean absolute difference in PFV between the algorithm and the expert reader was 9.9±8.2 cm3. The mean value of the paired differences was −4.13 cm3 (95% confidence interval: −14.47 to 6.21). The mean Dice coefficient of pericardial fat voxels was 0.82±0.06. Our approach may potentially be applied in a clinical setting, allowing for accurate magnetic resonance imaging (MRI)-based PFV quantification without tedious manual tracing.
Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours, and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta, and aortic root are located by multi-atlas registration followed by active contour refinement. Regional coronary artery territories (left anterior descending, left circumflex, and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications in these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001; volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30; volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
Visual identification of coronary arterial lesions from three-dimensional coronary computed tomography angiography (CTA) remains challenging. We aimed to develop a robust automated algorithm for computer detection of coronary artery lesions using machine learning techniques. A structured learning technique is proposed to detect all coronary arterial lesions with stenosis ≥25%. Our algorithm consists of two stages: (1) two independent base decisions indicating the existence of lesions in each arterial segment, and (2) a final decision combining the base decisions. One base decision is a support vector machine (SVM) based learning algorithm, which divides each artery into small volume patches and integrates several quantitative geometric and shape features for arterial lesions in each patch. The other base decision is a formula-based analytic method. The final decision in the second stage applies SVM-based decision fusion to combine the two base decisions from the first stage. The proposed algorithm was applied to 42 CTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis ≥25%. Visual identification of lesions with stenosis ≥25% by three expert readers, using consensus reading, was considered the reference standard. Our method performed with high sensitivity (93%), specificity (95%), and accuracy (94%), with a receiver operating characteristic area under the curve of 0.94. The proposed algorithm shows promising results in the automated detection of obstructive and nonobstructive lesions from CTA.
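A compact sketch of the two-stage decision fusion described above, with scikit-learn SVMs; feature extraction and the analytic base decision are stubbed out as assumptions, and the published feature set is richer.

```python
from sklearn.svm import SVC

def segment_has_lesion(patch_feats, analytic_score, base_svm, fusion_svm):
    """Stage 1: per-patch SVM scores plus a formula-based analytic score.
    Stage 2: a second SVM fuses the two base decisions for the segment."""
    svm_score = base_svm.decision_function(patch_feats).max()  # worst patch
    return bool(fusion_svm.predict([[svm_score, analytic_score]])[0])

# base_svm = SVC(kernel="rbf").fit(train_patch_feats, patch_labels)
# fusion_svm = SVC(kernel="linear").fit(train_score_pairs, segment_labels)
```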
Epicardial fat volume (EFV) is now regarded as a significant imaging biomarker for cardiovascular risk stratification. Manual or semi-automated quantification of EFV requires tedious and careful contour drawing of the pericardium on fine image features. We aimed to develop and validate a fully automated, accurate algorithm for EFV quantification from non-contrast CT using active contours and multi-atlas registration. This is a knowledge-based model that can segment both the heart and pericardium accurately by initializing the location and shape of the heart at large scale from multiple co-registered atlases and locking itself onto the pericardium actively. The deformation process is driven by pericardium detection, extracting only the white contours representing the pericardium in the CT images. Following this step, we calculate fat volume within this region (epicardial fat) using the standard fat attenuation range. We validated our algorithm on CT datasets from 15 patients who underwent routine assessment of coronary calcium. Epicardial fat volume quantified by the algorithm (69.15 ± 8.25 cm3) and the expert (69.46 ± 8.80 cm3) showed excellent correlation (r = 0.96, p < 0.0001) with no significant differences by comparison of individual data points (p = 0.9). The algorithm achieved a Dice overlap of 0.93 (range 0.88-0.95). The total time was less than 60 sec on a standard Windows computer. Our results show that fast, accurate, automated, knowledge-based quantification of epicardial fat volume from non-contrast CT is feasible. To our knowledge, this is also the first fully automated algorithm reported for this task.
KEYWORDS: 3D modeling, Image segmentation, Data modeling, Motion models, Statistical modeling, Principal component analysis, 3D image processing, Shape analysis, Cardiovascular magnetic resonance imaging, Magnetic resonance imaging
In this work, we automatically segment the left ventricle (LV) in cardiac MR images in the end-diastole (ED) and end-systole (ES) phases using a novel approach that combines statistical and deterministic deformable models. A 3D Active Appearance Model (AAM) is used to segment the ED phase. The AAM texture model is trained on radial samples from gradient magnitude images to make the fitting process faster and more discriminative. A trained ED-to-ES shape correspondence model is used to map a given ED shape to an ES shape. Once the AAM converges to a shape in ED, the correspondence model is used to obtain an approximate ES shape. We segment the LV in the ES phase by first fitting a deformable superquadric to the AAM-converged shape (in ED) using data range forces, and then tracking the LV using image and data range forces (for the ES shape obtained from the correspondence model). We test our approach by performing leave-one-out training on 35 patient datasets. The data comprise 19 normal patients and 16 patients with heart abnormalities (cardiomyopathy and myocardial infarction). This composition makes it a challenging data collection with significant shape variation. The performance of our method is evaluated by measuring the mismatch between automatically segmented and expert-delineated contours using the Mean Perpendicular Distance (MPD) and Dice metrics. The average MPD is 2.6 mm for ED and 3.7 mm for ES (error mostly towards the apex and base). The average Dice is 0.9 for ED and 0.8 for ES. These results show good potential for clinical use.
Our aim in this study was to optimize and validate an adaptive denoising algorithm based on Block-Matching 3D (BM3D) for reducing image noise and improving assessment of left ventricular function from low-radiation-dose coronary CTA. In this paper, we describe the denoising algorithm and its validation with low-radiation-dose coronary CTA datasets from 7 consecutive patients. We validated the algorithm using a novel method, with the myocardial mass from the low-noise cardiac phase as a reference standard, and objective measurement of image noise. After denoising, the myocardial mass was not statistically different by comparison of individual datapoints with Student's t-test (130.9±31.3 g in the low-noise 70% phase vs 142.1±48.8 g in the denoised 40% phase, p=0.23). Image noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p<0.0001) and myocardium (p<0.0001). In conclusion, we optimized and validated an adaptive BM3D denoising algorithm for coronary CTA. This new method reduces image noise and has the potential to improve assessment of left ventricular function from low-dose coronary CTA.
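A minimal sketch of the validation statistics described above: noise measured as the standard deviation of HU within a region of interest, compared across phases with a paired Student's t-test. The ROI convention and variable names are assumptions.

```python
import numpy as np
from scipy import stats

def roi_noise(ct_hu, roi_mask):
    """Image noise as the SD of HU within an ROI (e.g., blood pool)."""
    return float(np.std(ct_hu[roi_mask]))

# noise_raw, noise_denoised: per-patient noise values in the 40% phase
# t, p = stats.ttest_rel(noise_raw, noise_denoised)  # paired Student's t-test
```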
KEYWORDS: Image segmentation, Heart, Magnetic resonance imaging, Computed tomography, Single photon emission computed tomography, 3D modeling, 3D image processing, Visualization, Expectation maximization algorithms, Medicine
Computer-aided segmentation of cardiac images obtained by various modalities plays an important role and is a prerequisite for a wide range of cardiac applications by facilitating the delineation of anatomical regions of interest. Numerous computerized methods have been developed to tackle this problem. Recent studies employ sophisticated techniques using available cues from cardiac anatomy such as geometry, visual appearance, and prior knowledge. In addition, new minimization and computational methods have been adopted with improved computational speed and robustness. We provide an overview of cardiac segmentation techniques, with a goal of providing useful advice and references. In addition, we describe important clinical applications, imaging modalities, and validation methods used for cardiac segmentation.
Changes in myocardial function signatures such as wall motion and thickening are typically computed separately from myocardial perfusion SPECT (MPS) stress and rest studies to assess for stress-induced function abnormalities. The standard approach may suffer from variability in contour placement and image orientation when subtle changes in motion and thickening between stress and rest scans are being evaluated. We have developed a new measure of regional change of function signature (motion and thickening) computed directly from registered stress and rest gated MPS data. In our novel approach, endocardial surfaces at the end-diastolic and end-systolic frames for stress and rest studies were registered by matching ventricular surfaces. Furthermore, we propose a new global registration method based on finding the optimal rotation of the myocardial best-fit ellipsoid to minimize the indexing disparities between the two surfaces from the stress and rest studies. Myocardial stress-rest function changes were computed, and normal limits of change were determined as the mean and standard deviation of the training set for each polar sample. Normal limits were utilized to quantify the stress-rest function change for each polar map sample, and the accumulated quantified function signature values were used for abnormality assessments in territorial regions. To evaluate the effectiveness of our novel method, we examined the agreement of our results against visual scores for motion change in vessel territorial regions obtained by human experts on a test group of 623 cases, and were able to show that our detection method has improved sensitivity on a per-vessel-territory basis compared to results obtained by human experts using gated MPS data.
Visual analysis of three-dimensional (3D) coronary computed tomography angiography (CCTA) remains challenging due to the large number of image slices and the tortuous character of the vessels. We aimed to develop an accurate, automated algorithm for detection of significant and subtle coronary artery lesions compared to expert interpretation. Our knowledge-based automated algorithm consists of centerline extraction, which also classifies the 3 main coronary arteries and the small branches of each main coronary artery, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. Presence and location of lesions are identified using a multi-pass algorithm which considers expected or "normal" vessel tapering and luminal stenosis from the segmented vessel. Expected luminal diameter is derived from the scan by automated piecewise least squares line fitting over the proximal and mid segments (67%) of the coronary artery, considering small branch locations. We applied this algorithm to 21 CCTA patient datasets, acquired with dual-source CT, where 7 datasets had 17 lesions with stenosis ≥25%. The reference standard was provided by visual and quantitative identification of lesions with ≥25% stenosis by an experienced expert reader. Our algorithm identified 16 of the 17 lesions confirmed by the expert. There were 16 additional lesions detected (average 0.13/segment); 6 of these 16 were actual lesions with <25% stenosis. On a per-segment basis, sensitivity was 94%, specificity was 86%, and accuracy was 87%. Our algorithm shows promising results in the high-sensitivity detection and localization of significant and subtle CCTA arterial lesions.
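The "expected diameter" step above can be illustrated with piecewise least-squares line fits split at branch locations over the proximal/mid 67% of the vessel; the sketch below is an assumed simplification with illustrative variable names.

```python
import numpy as np

def expected_diameter(pos_mm, diam_mm, branch_pos_mm):
    """Piecewise linear fit of lumen diameter vs. centerline position,
    split at branch points, over the proximal/mid 67% of the vessel."""
    limit = 0.67 * pos_mm.max()
    cuts = [0.0, *sorted(p for p in branch_pos_mm if p < limit), limit]
    expected = np.full(diam_mm.shape, np.nan)
    for a, b in zip(cuts[:-1], cuts[1:]):
        seg = (pos_mm >= a) & (pos_mm <= b)
        if seg.sum() >= 2:                     # need two points for a line
            slope, intercept = np.polyfit(pos_mm[seg], diam_mm[seg], 1)
            expected[seg] = slope * pos_mm[seg] + intercept
    return expected

# Percent diameter stenosis: 100 * (expected - measured) / expected
```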
Transient ischemic dilation (TID) of the left ventricle, measured by myocardial perfusion single photon emission computed tomography (SPECT) and defined as the ratio of stress myocardial blood volume to rest myocardial blood volume, has been shown to be highly specific for detection of severe coronary artery disease. This work investigates automated quantification of TID from cardiac computed tomography (CT) perfusion images. To date, TID has not been computed from CT. Previous studies computing TID have assumed accurate segmentation of the left ventricle and performed subsequent analysis of volume change mainly on static, or less often on gated, myocardial perfusion SPECT. This, however, may limit the accuracy of TID due to potential errors from segmentation, perfusion defects, or volume measurement in both images. In this study, we propose to use registration methods to determine TID from cardiac CT scans, where the deformation field within the structure of interest is used to measure the local volume change between stress and rest. Promising results have been demonstrated with 7 datasets, showing the potential of this approach as a comparative method for measuring TID.
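One way to realize the deformation-field idea above is to integrate the Jacobian determinant of the stress-to-rest mapping over the structure of interest; the NumPy sketch below is illustrative, not the study's implementation.

```python
import numpy as np

def mean_volume_change(u, mask, spacing):
    """u: displacement field, shape (3, Z, Y, X), in mm; spacing: (dz, dy, dx).
    Returns the mean det(J) of x -> x + u(x) over the masked structure."""
    J = np.array([[np.gradient(u[i], spacing[j], axis=j) for j in range(3)]
                  for i in range(3)])          # J[i, j] = du_i / dx_j
    for i in range(3):
        J[i, i] += 1.0                         # Jacobian of the full mapping
    det = (J[0,0]*(J[1,1]*J[2,2] - J[1,2]*J[2,1])
         - J[0,1]*(J[1,0]*J[2,2] - J[1,2]*J[2,0])
         + J[0,2]*(J[1,0]*J[2,1] - J[1,1]*J[2,0]))
    return float(det[mask].mean())             # >1 indicates local dilation
```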
Automated segmentation of the 3D heart region from non-contrast CT is a prerequisite for automated quantification of coronary calcium and pericardial fat. We aimed to develop and validate an automated, efficient atlas-based algorithm for segmentation of the heart and pericardium from non-contrast CT. A co-registered non-contrast CT atlas is first created from multiple manually segmented non-contrast CT datasets. Non-contrast CT data included in the atlas are co-registered to each other using iterative affine registration, followed by a deformable transformation using the iterative demons algorithm; the final transformation is also applied to the segmented masks. New CT datasets are segmented by first co-registering to an atlas image, and by voxel classification using a weighted decision function applied to all co-registered/pre-segmented atlas images. This automated segmentation method was applied to 12 CT datasets, with a co-registered atlas created from 8 datasets. Algorithm performance was compared to expert manual quantification. Cardiac region volume quantified by the algorithm (609.0 ± 39.8 cc) and the expert (624.4 ± 38.4 cc) were not significantly different (p=0.1, mean percent difference 3.8 ± 3.0%) and showed excellent correlation (r=0.98, p<0.0001). The algorithm achieved a mean voxel overlap of 0.89 (range 0.86-0.91). The total time was <45 sec on a standard Windows computer (100 iterations). Fast, robust, automated atlas-based segmentation of the heart and pericardium from non-contrast CT is feasible.
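The weighted voxel classification step can be illustrated as weighted label fusion across the co-registered atlas masks; the weighting scheme in this sketch is a placeholder (e.g., local image similarity could supply the weights).

```python
import numpy as np

def weighted_label_fusion(atlas_masks, weights, threshold=0.5):
    """atlas_masks: binary volumes warped to the target scan;
    weights: one scalar per atlas (placeholder weighting)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    consensus = sum(wi * m for wi, m in zip(w, atlas_masks))
    return consensus >= threshold          # final heart/pericardium mask
```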
KEYWORDS: Image segmentation, Cardiovascular magnetic resonance imaging, Data modeling, Medical imaging, Magnetic resonance imaging, Heart, Principal component analysis, Cardiology, Statistical modeling, Image quality
Automated image segmentation has been playing a critical role in medical image analysis. Recently, level set methods have shown efficacy and efficiency in various imaging modalities. In this paper, we present a novel segmentation approach to jointly delineate the boundaries of the epi- and endocardium of the left ventricle in magnetic resonance imaging (MRI) images in a variational framework using level sets, which is in great demand as a clinical application in cardiology. One strategy to tackle segmentation under undesirable conditions, such as subtle boundaries and occlusions, is to exploit prior knowledge specific to the object to segment, in this case knowledge about heart anatomy. While most left ventricle segmentation approaches incorporate a shape prior obtained by a training process from an ensemble of examples, we exploit a novel shape constraint using implicit shape prior knowledge, which assumes shape similarity between the epi- and endocardium allowing variation under a Gaussian distribution. Our approach does not demand a training procedure, which is usually subject to the training examples and is also laborious and time-consuming in generating the shape prior. Instead, we model a shape constraint by a statistical distance between the shapes of the epi- and endocardium employing signed distance functions. We applied this technique to cardiac MRI data with quantitative evaluations performed on 10 subjects. The experimental results show the robustness and effectiveness of our shape constraint within a Mumford-Shah segmentation model in the segmentation of the left ventricle from cardiac MRI images in comparison with manual segmentation results.
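A hedged sketch of the implicit shape constraint: signed distance functions for the two boundaries, with a Gaussian-style penalty on their offset difference. The exact energy used in the paper may differ; the offset and sigma here are assumed parameters.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Negative inside the mask, positive outside (a common SDF convention)."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def shape_penalty(epi_mask, endo_mask, mean_offset, sigma):
    """Penalize deviation of the epi-to-endo SDF gap from its expected offset,
    assuming Gaussian-distributed variation between the two shapes."""
    d = signed_distance(endo_mask) - signed_distance(epi_mask)
    return float(np.mean((d - mean_offset) ** 2) / (2 * sigma ** 2))
```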
Coronary CT angiography (CTA) with multi-slice helical scanners is becoming an integral part of major diagnostic pathways for coronary artery disease. In addition, coronary CTA has demonstrated substantial potential in quantitative coronary plaque characterization. If serial comparisons of plaque progression or regression are to be made, accurate 3D volume registration of these volumes would be particularly useful. In this work, we propose registration of paired coronary CTA scans using feature-based non-rigid volume registration. We achieve this with a combined registration strategy, which uses global rigid registration as an initialization, followed by local non-rigid volume registration with a volume-preserving constraint. We exploit the extracted coronary trees to localize and emphasize the region of interest, as unnecessary regions hinder the registration process and can lead to incorrect registration results. The extracted binary masks of each coronary tree may not be the same due to initial segmentation errors, which could lead to subsequent bias in the registration process. Therefore, we utilize a blur mask, generated by convolving a Gaussian function with the binary coronary tree mask, to take the neighboring vessel region into account. A volume-preserving constraint is imposed so that the total volume of the binary mask before and after co-registration remains constant. To validate the proposed method, we performed experiments with data from 3 patients with available serial CT scans (6 scans in total) and measured the distance of anatomical landmarks between the registered serial scans of the same patient.
KEYWORDS: Magnetic resonance imaging, 3D metrology, Visualization, Image segmentation, 3D modeling, Motion models, Magnetism, 3D image processing, Visual analytics, Cardiovascular magnetic resonance imaging
The aim of our work is to present a robust 3D automated method for measuring regional myocardial thickening using cardiac magnetic resonance imaging (MRI) based on Laplace's equation. Multiple slices of the myocardium in short-axis orientation at the end-diastolic and end-systolic phases were considered for this analysis. Automatically assigned 3D epicardial and endocardial boundaries were fitted to short-axis and long-axis slices corrected for breath-hold-related misregistration, and final boundaries were edited by a cardiologist if required. Myocardial thickness was quantified at the two cardiac phases by computing the distances between the myocardial boundaries over the entire volume using Laplace's equation. The distance between the surfaces was found by computing normalized gradients that form a vector field. The vector fields represent tangent vectors along field lines connecting both boundaries. 3D thickening measurements were transformed into a polar map representation, and 17-segment model (American Heart Association) regional thickening values were derived. The thickening results were then compared with standard 17-segment, 6-point visual scoring of wall motion/wall thickening (0 = normal; 5 = greatest abnormality) performed by a consensus of two experienced imaging cardiologists. Preliminary results on eight subjects indicated a strong negative correlation (r=-0.8, p<0.0001) between the average thickening obtained using the Laplace-based method and the summed segmental visual scores. Additionally, quantitative ejection fraction measurements also correlated well with average thickening scores (r=0.72, p<0.0001). For segmental analysis, we obtained an overall correlation of -0.55 (p<0.0001), with higher agreement in the mid and apical regions (r=-0.6). In conclusion, the 3D Laplace-based method can be used to quantify myocardial thickening in 3D.
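A 2D toy sketch of the Laplace-based distance idea above: relax Laplace's equation between the endocardial (phi = 0) and epicardial (phi = 1) boundaries by Jacobi iteration, then measure thickness along the resulting gradient field lines. Mask conventions and iteration count are assumptions.

```python
import numpy as np

def solve_laplace(myo, endo, epi, iters=2000):
    """myo/endo/epi: boolean masks; returns the potential phi on the wall."""
    phi = np.zeros(myo.shape, dtype=float)
    phi[epi] = 1.0
    for _ in range(iters):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                    + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi[myo] = avg[myo]                 # relax the interior only
        phi[endo], phi[epi] = 0.0, 1.0      # keep boundary conditions fixed
    return phi

# Thickness follows by tracing field lines of grad(phi)/|grad(phi)| from endo
# to epi; comparing end-diastolic and end-systolic thickness gives regional
# thickening, which can then be mapped to the 17-segment AHA polar map.
```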
We have implemented two hardware-accelerated Thin Plate Spline (TPS) warping algorithms. The first algorithm is a hardware-software approach (HW-TPS) that uses OpenGL Vertex Shaders to perform a grid warp. The second is a graphics-processor-based approach (GPU-TPS) that uses the OpenGL Shading Language to perform all warping calculations on the GPU. Comparison with a software TPS algorithm was used to gauge the speed and quality of both hardware algorithms. Quality was analyzed visually and using the Sum of Absolute Difference (SAD) similarity metric. Warping was performed using 92 user-defined displacement vectors for 512×512×173 serial lung CT studies, matching normal-breathing and deep-inspiration scans. On a 2.2 GHz Xeon machine with an ATI Radeon 9800XT GPU, the GPU-TPS required 26.1 seconds to perform a per-voxel warp compared to 148.2 seconds for the software algorithm. The HW-TPS needed 1.63 seconds to warp the same study, while the GPU-TPS required 1.94 seconds and the software grid transform required 22.8 seconds. The SAD values calculated between the outputs of each algorithm and the target CT volume were 15.2%, 15.4% and 15.5% for the HW-TPS, GPU-TPS and both software algorithms, respectively. The computing power of ubiquitous 3D graphics cards can be exploited in medical image processing to provide order-of-magnitude acceleration of nonlinear warping algorithms without sacrificing output quality.
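For comparison with the hardware implementations above, a CPU reference TPS warp can be written with SciPy's thin-plate-spline RBF interpolator; this is an assumed modern equivalent, not the original software algorithm.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp_coords(src_pts, dst_pts, query_pts):
    """Thin-plate-spline interpolation of landmark displacements.
    src_pts, dst_pts: (n, 3) corresponding landmarks; query_pts: (m, 3)."""
    tps = RBFInterpolator(src_pts, dst_pts - src_pts,
                          kernel="thin_plate_spline")
    return query_pts + tps(query_pts)      # warped sampling coordinates

# Grid coordinates for a 512x512x173 volume can be built with np.mgrid and
# evaluated in chunks; the result drives resampling (e.g., map_coordinates).
```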
Interactive multimodality 4D volume rendering of cardiac images is challenging due to several factors. Animated rendering of fused volumes with multiple lookup tables (LUT) and interactive adjustments of relative volume positions and orientations must be performed in real time. In addition it is difficult to visualize the myocardium separated from the surrounding tissue on some modalities, such as MRI. In this work we propose to use software techniques combined with hardware capabilities of modern consumer video cards for real-time visualization of time-varying multimodality fused cardiac volumes for diagnostic purposes.
An automatic registration technique for gated cardiac SPECT to gated MRI is presented. Prior to registration, the MRI data set is subjected to a preprocessing technique that automatically isolates the heart. During preprocessing, voxels in the MRI volume are designated as either dynamic or static based on their change in intensity over the course of the cardiac cycle. This allows the elimination of the external organs in the MRI dataset, leaving the heart as the main feature of the volume. To separate the left ventricle (LV) from the remainder of the heart, optimal thresholding is used. A mutual-information-based algorithm is used to register the two studies. The registration technique was tested with fourteen patient data sets, and the results were compared to those of manual registration by an expert. The pre-processing step significantly improved the accuracy of the registration when compared to automatic registration performed without pre-processing.
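The dynamic/static preprocessing step described above can be sketched as temporal-variance thresholding over the gated frames; the threshold fraction here is an assumed placeholder, not the published value.

```python
import numpy as np

def dynamic_voxel_mask(gated_volumes, frac=0.2):
    """gated_volumes: array (T, Z, Y, X) over the cardiac cycle.
    Returns True where intensity varies over the cycle (the beating heart),
    suppressing static external structures before registration."""
    temporal_sd = gated_volumes.std(axis=0)
    return temporal_sd > frac * temporal_sd.max()
```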
The constrained, localized warping (CLW) algorithm was developed to minimize the registration errors caused by hypoperfusion lesions. SPECT brain perfusion images from 21 Alzheimer patients and 35 controls were analyzed. CLW automatically determines homologous landmarks on patient and template images. CLW was constrained by anatomy and where lesions were probable. CLW was compared with 3rd-degree, polynomial warping (AIR 3.0). Accuracy was assessed by correlation, overlap, and variance. 16 lesion types were simulated, repeated with 5 images. The errors in defect volume and intensity after registration were estimated by comparing the images resulting from warping transforms calculated when the defects were or were not present. Registration accuracy of normal studies was very similar between CLW and polynomial warping methods, and showed marked improvement over linear registration. The lesions had minimal effect on the CLW algorithm accuracy, with small errors in volume (> -4%) and intensity (< +2%). The accuracy improvement compared with not warping was nearly constant regardless of defect: +1.5% overlap and +0.001 correlation. Polynomial warping caused larger errors in defect volume (< -10%) and intensity (> +2.5%) for most defects. CLW is recommended because it caused small errors in defect estimation and improved the registration accuracy in all cases.
We present an operator-independent software technique for segmentation, realignment, and analysis of brain perfusion images, with both voxel-wise and regional quantitation methods. Inter-subject registration with normalized mutual information was tested with simulated defects. Brain perfusion images (HMPAO-SPECT) from 56 subjects (21 AD; 35 controls) were retrospectively analyzed. Templates were created from the 3-D registration of the controls. Automatic segmentation was developed to remove extraneous activity that disrupts registration. Two new registration methods, robust least squares (RLS) and normalized mutual information (NMI), were implemented and compared with the sum of absolute differences (CD). The automatic segmentation method caused a registration displacement of 0.4 ± 0.3 pixels compared with manual segmentation. NMI registration proved to be less adversely affected by simulated defects than RLS or CD. The error in quantitating the patient-template parietal ratio due to misregistration was 2.0% and 0.5% for 70% and 85% hypoperfusion defects, respectively. The registration processing time was 1.6 min (233 MHz Pentium). The most accurate discriminant utilized a logistic equation parameterized by mean counts of the parietal and temporal regions of the map (91 ± 8% sensitivity, 97 ± 5% specificity). BRASS is a fast, objective software package for single-step analysis of brain SPECT, suitable to aid diagnosis of AD.
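Normalized mutual information, reported above as the registration measure least affected by simulated defects, can be computed from a joint histogram as follows; the bin count is an assumption.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B); higher means stronger dependence."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # Shannon entropy
    return (h(px) + h(py)) / h(pxy)
```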
KEYWORDS: Image registration, Ultrasonography, 3D image processing, Doppler effect, Visualization, Arteries, Magnetic resonance angiography, Computer simulations, 3D displays, 3D acquisition
To allow a more objective interpretation of 3D carotid bifurcation images, we have implemented and evaluated on patient data automated volume registration of 3D magnetic resonance angiography (MRA), 3D power Doppler (PD) ultrasound, and 3D B-mode ultrasound. Our algorithm maximizes the mutual information between the thresholded intensities of the MRA and PD images. The B-mode images, acquired simultaneously in the same orientation as PD, are registered to the MRA using the transformation obtained from the MRA-PD registration. To test the algorithm, we misaligned clinical ultrasound images and simulated mismatches between the datasets due to different appearances of diseased vessels by removing 3D sections of voxels from each of the paired scans. All registrations were assessed visually using integrated 3D volume, surface, and 2D slice displays. 97% of images misaligned within a range of 40 degrees and 40 pixels were correctly registered. The deviation from the mean registration parameters due to the simulated defects was 1.6 ± 2.5 degrees, 1.5 ± 1.6 pixels in X and Y, and 0.7 ± 0.7 pixels in the Z direction. The algorithm can be used to register carotid images with a misalignment range of 40 pixels in the X and Y directions, 10 pixels in the Z direction, and 40-degree rotations, even in the case of different image appearances due to vessel stenoses.
In medical imaging practice, images and reports often need be reviewed and edited from many locations. We have designed and implemented a Java-based Remote Viewing and Reporting System (JaRRViS) for a nuclear medicine department, which is deployed as a web service, at the fraction of the cost dedicated PACS systems. The system can be extended to other imaging modalities. JaRRViS interfaces to the clinical patient databases of imaging workstations. Specialized nuclear medicine applets support interactive displays of data such as 3-D gated SPECT with all the necessary options such as cine, filtering, dynamic lookup tables, and reorientation. The reporting module is implemented as a separate applet using Java Foundation Classes (JFC) Swing Editor Kit and allows composition of multimedia reports after selection and annotation of appropriate images. The reports are stored on the server in the HTML format. JaRRViS uses Java Servlets for the preparation and storage of final reports. The http links to the reports or to the patient's raw images with applets can be obtained from JaRRViS by any Hospital Information System (HIS) via standard queries. Such links can be sent via e-mail or included as text fields in any HIS database, providing direct access to the patient reports and images via standard web browsers.
We developed a novel clinical tool (PERFIT) for automated 3-D voxel-based quantification of myocardial perfusion, validated it with a wide spectrum of angiographically correlated cases, compared it to previous approaches, and tested its agreement with visual expert reading. A multistage, 3-D iterative inter-subject registration of patient images to normal stress and rest cardiac templates was applied, including automated masking of external activity before the final fit. The reference templates were adjusted to the individual left ventricles by template erosion, for further shape correction. 125 angiographically correlated cases, including multi-vessel disease, infarction, and dilated ventricles, were tested. In addition, standard polar maps were generated automatically from the registered data. Results of consensus visual reading (V) and PERFIT (P) were compared. The iterative fitting was successful in 245/250 (99%) stress and rest images. PERFIT found defects on stress in 2/29 normal patients and 95/96 abnormal patients. Overall correlation between V and P findings was r = 0.864. In all abnormal groups (n = 96), PERFIT average defect sizes expressed as a percentage of the myocardial volume were 9.6% for rest and 22.3% for stress, versus 11.4% (rest) and 23% (stress) for visual reading. Automatic quantification by PERFIT is consistent with visual analysis; it can be applied to the analysis of the whole spectrum of clinical images, and can aid physicians in interpretation of myocardial perfusion.
A major limitation of the use of endoscopes in minimally invasive surgery is the lack of relative context between the endoscope and its surroundings. The purpose of this work is to map endoscopic images to surfaces obtained from 3D preoperative MR or CT data, for assistance in surgical planning and guidance. To test our methods, we acquired preoperative CT images of a standard brain phantom from which object surfaces were extracted. Endoscopic images were acquired using a neuro-endoscope tracked with an optical tracking system, and the optical properties of the endoscope were characterized using a simple calibration procedure. Registration of the phantom and CT images was accomplished using markers that could be identified both on the physical object and in the pre-operative images. The endoscopic images were rectified for radial lens distortion and then mapped onto the extracted surfaces via a ray-traced texture-mapping algorithm, which explicitly accounts for surface obliquity. The optical tracker has an accuracy of about 0.3 mm, which allows the endoscope tip to be localized to within millimeters. The mapping operation allows the endoscopic images to be effectively 'painted' onto the surfaces as they are acquired. Panoramic and stereoscopic visualization and navigation of the painted surfaces may then be performed from arbitrary orientations, not necessarily those from which the original endoscopic views were acquired.