Joseph Ross Mitchell, Konstantinos Kamnitsas, Kyle Singleton, Scott Whitmire, Kamala Clark-Swanson, Sara Ranjbar, Cassandra Rickertsen, Sandra Johnston, Kathleen Egan, Dana Rollison, John Arrington, Karl Krecke, Theodore Passe, Jared Verdoorn, Alex Nagelschneider, Carrie Carr, John Port, Alice Patton, Norbert Campeau, Greta Liebo, Laurence Eckel, Christopher Wood, Christopher Hunt, Prasanna Vibhute, Kent Nelson, Joseph Hoxworth, Ameet Patel, Brian Chong, Jeffrey Ross, Jerrold Boxerman, Michael Vogelbaum, Leland Hu, Ben Glocker, Kristin Swanson
Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations.
Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted series, a T2-weighted fluid-attenuated inversion recovery (FLAIR) series, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and by the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases, scoring each segmentation between 0 (poor) and 10 (perfect). Agreement between technician and DL segmentations was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap).
Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p < 0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases.
Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its “human teachers” and produced output that was better, on average, than its training data.
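The Dice coefficient used for the quantitative evaluation is straightforward to compute for binary segmentation masks. A minimal sketch (the function and array names are our own, not from the paper):

```python
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping square "lesions" on a small grid.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True  # 16 voxels
print(dice_coefficient(a, b))  # overlap is 3x3 = 9, so 18/32 = 0.5625
```

A score of 0.87, as reported for the DL system, therefore indicates that on average well over 80% of each mask pair coincides.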
We propose a Markov Random Field (MRF) formulation for the intensity-based N-view 2D-3D registration problem. The transformation aligning the 3D volume to the 2D views is estimated through iterative updates obtained by discrete optimization of the proposed MRF model. We employ a pairwise MRF model with a fully connected graph, in which the nodes represent the parameter updates and the edges encode the image similarity costs resulting from variations of the values of adjacent nodes. A label-space refinement strategy is employed to achieve sub-millimeter accuracy. Evaluation on real and synthetic data, and comparison to a state-of-the-art method, demonstrates the potential of our approach.
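This is not the paper's pairwise MRF (which would require a discrete MRF solver over the fully connected parameter graph), but the core idea of iterative discrete updates with label-space refinement can be sketched on a toy 2D translation problem, here with a greedy per-parameter search and an assumed SSD similarity:

```python
import numpy as np

# Synthetic example: a smooth 2D image and a shifted copy of it.
y, x = np.mgrid[0:32, 0:32]
fixed = np.exp(-((y - 16) ** 2 + (x - 16) ** 2) / 40.0)
true_shift = (5, -3)                              # ground-truth (dy, dx)
moving = np.roll(fixed, true_shift, axis=(0, 1))

def cost(dy, dx):
    """Image-similarity cost (SSD) after applying a candidate update."""
    return float(np.sum((np.roll(moving, (-dy, -dx), axis=(0, 1)) - fixed) ** 2))

# Iterative discrete optimization with label-space refinement: at each
# level, evaluate a small discrete label set of updates per parameter,
# keep the best one, then halve the step so the accuracy improves.
dy, dx = 0, 0
for step in (8, 4, 2, 1):
    dy = min((dy + d for d in (-step, 0, step)), key=lambda v: cost(v, dx))
    dx = min((dx + d for d in (-step, 0, step)), key=lambda v: cost(dy, v))
print(dy, dx)
```

The refinement schedule mirrors the abstract's strategy of shrinking the label space between iterations to reach fine accuracy with only a few coarse labels per level.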
Recent technological advances in magnetic resonance imaging (MRI) have led to shorter acquisition times, making it an attractive whole-body imaging modality. Acquisition time can be reduced further by acquiring images with a large field-of-view (FOV), so that fewer scan stations are necessary. Images with a large FOV are, however, disrupted by severe geometric distortion artifacts, which become more pronounced toward the image boundaries. The current trend in MRI toward shorter and wider-bore magnets also makes images more prone to geometric distortion.
In a previous work,4 we proposed a method to correct these artifacts using simultaneous deformable registration. In the future, we would like to integrate prior knowledge about the distortion field into this process. For this purpose, we scan a specially designed phantom consisting of small spheres arranged in a cube. In this article, we focus on the automatic extraction of the sphere centers, which are of particular interest for the calculation of the distortion field.
The extraction is not trivial because of the significant intensity inhomogeneity within the images. We propose to use the local phase for the extraction, which has the advantage of providing structural information invariant to intensity. We use the monogenic signal to calculate the phase. We then detect the centers using, in one variant, a Hough transform and, in another, a direct maxima search. Moreover, we use a gradient- and variance-based approach for radius estimation. We applied our extraction to several phantom scans and obtained good results.
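The intensity invariance of the local phase can be illustrated with a standard monogenic-signal construction (radial log-Gabor band-pass plus Riesz transform in the Fourier domain). This is a generic textbook sketch, not the paper's implementation; the function name and parameter values are our own assumptions:

```python
import numpy as np

def monogenic_phase(img, wavelength=8.0, sigma=0.55):
    """Local phase of a 2D image via the monogenic signal: a radial
    log-Gabor band-pass filter plus the Riesz transform, both applied
    in the Fourier domain."""
    rows, cols = img.shape
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0  # avoid log(0) / division by zero at DC
    f0 = 1.0 / wavelength
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma) ** 2))
    log_gabor[0, 0] = 0.0            # kill the DC component (absolute intensity)
    H1 = 1j * U / radius             # Riesz transform, x-component
    H2 = 1j * V / radius             # Riesz transform, y-component
    F = np.fft.fft2(img)
    even = np.real(np.fft.ifft2(F * log_gabor))
    odd1 = np.real(np.fft.ifft2(F * log_gabor * H1))
    odd2 = np.real(np.fft.ifft2(F * log_gabor * H2))
    return np.arctan2(np.sqrt(odd1 ** 2 + odd2 ** 2), even)  # phase in [0, pi]

# Phase is unchanged under brightness rescaling and bias.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * xx / 8.0)       # pattern matching the filter wavelength
p1 = monogenic_phase(img)
p2 = monogenic_phase(3.0 * img + 10.0)   # rescaled, biased copy: same structure
```

Because scaling the image scales the even and odd responses equally, and a constant bias only affects the DC term that the band-pass removes, the two phase maps agree; this is what makes phase-based detection robust to the intensity inhomogeneity described above.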
Today, hepatic artery catheterizations are performed under live 2D X-ray fluoroscopy guidance, where visualization of the blood vessels requires the injection of contrast agent. Projecting a 3D static roadmap of the complex branches of the hepatic artery system onto the 2D fluoroscopy images can aid catheter navigation and minimize the use of contrast agent. However, significant hepatic motion due to the patient's respiration necessitates real-time motion correction to keep the projected vessels aligned. The objective of our work is to introduce dynamic roadmaps into the clinical workflow for hepatic artery catheterizations and to allow continuous visualization of the vessels in 2D fluoroscopy images without additional contrast injection. To this end, we propose a method for real-time estimation of the apparent displacement of the hepatic arteries in 2D fluoroscopy images. Our approach approximates the respiratory motion of the hepatic arteries from the catheter motion observed in the 2D fluoroscopy images. The proposed method consists of two main steps. First, filtering is applied to the 2D fluoroscopy images to enhance the catheter and reduce the noise level. Then, a part of the catheter is tracked in the filtered images using template matching; a dynamic template update strategy makes the method robust to deformations. The accuracy and robustness of the algorithm are demonstrated in experimental studies on 22 simulated and 4 clinical sequences containing 330 and 571 image frames, respectively.
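Template matching with a dynamic template update can be sketched as follows. This is a generic illustration of the technique, not the paper's pipeline; the search radius, update threshold, and synthetic data are all our own assumptions:

```python
import numpy as np

def ncc(patch, template):
    """Zero-normalized cross-correlation of two equal-size patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track(frames, template, y0, x0, search=6, update_thresh=0.9):
    """Track the template through the frames by exhaustive NCC search in a
    window around the last position; re-sample the template whenever the
    match is confident (dynamic template update, to follow deformation)."""
    h, w = template.shape
    y, x = y0, x0
    path = []
    for frame in frames:
        best, by, bx = -2.0, y, x
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= frame.shape[0] - h and 0 <= xx <= frame.shape[1] - w:
                    score = ncc(frame[yy:yy + h, xx:xx + w], template)
                    if score > best:
                        best, by, bx = score, yy, xx
        y, x = by, bx
        if best >= update_thresh:          # confident match: refresh template
            template = frame[y:y + h, x:x + w].copy()
        path.append((y, x))
    return path

# Toy sequence: a textured 6x6 "catheter tip" drifting one pixel per frame
# over a noisy background.
rng = np.random.default_rng(1)
tip = np.linspace(0.0, 1.0, 36).reshape(6, 6)
frames = []
for t in range(5):
    f = 0.1 * rng.random((40, 40))
    f[10 + t:16 + t, 12 + t:18 + t] += tip
    frames.append(f)
path = track(frames, frames[0][10:16, 12:18].copy(), 10, 12)
```

Updating the template only on confident matches lets the tracked appearance follow slow deformation while avoiding drift from poor matches.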
Colon motility disorders are a very common problem, yet a precise diagnosis with current methods is almost unachievable. This makes it extremely difficult for clinical experts to decide on the right intervention, such as colon resection. Cine MRI is a very promising technique for visualizing colon motility; combined with image segmentation and quantitative motion analysis, it could provide the appropriate diagnostic solution. In this work, we defined the necessary steps in the image-processing workflow to obtain valuable measurements for computer-aided diagnosis of colon motility disorders, and for each step we developed methods to deal with the dynamic image data. Breathing motion must be compensated for, since no respiratory gating could be used. We segment the colon using a graph-cuts approach in 2D and 3D for further analysis and visualization. The motility of the large bowel is analyzed by tracking the extension of the colon during a propagating peristaltic wave. The main objective of this work is to extract a motion model and define a clinical index for the diagnosis of large-bowel motility dysfunction. We aim at the classification and localization of such pathologies.
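Graph-cut segmentation poses binary labeling as an s-t min-cut: unary terms (how well a pixel fits the foreground or background model) become terminal edge capacities, and pairwise smoothness terms become neighbor edge capacities. A toy sketch using networkx max-flow (the intensity models and smoothness weight are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
import networkx as nx

def graph_cut_segment(img, mu_fg=1.0, mu_bg=0.0, smooth=0.25):
    """Binary MRF segmentation of a 2D image by s-t min-cut: unary terms
    are squared distances to foreground/background intensity models, and
    pairwise Potts terms of weight `smooth` link 4-connected neighbors."""
    rows, cols = img.shape
    G = nx.DiGraph()
    s, t = "s", "t"
    for y in range(rows):
        for x in range(cols):
            p = (y, x)
            # Cutting s->p pays the background cost; cutting p->t pays
            # the foreground cost (nodes left on the s-side are foreground).
            G.add_edge(s, p, capacity=float((img[y, x] - mu_bg) ** 2))
            G.add_edge(p, t, capacity=float((img[y, x] - mu_fg) ** 2))
            for q in ((y + 1, x), (y, x + 1)):   # 4-neighbour smoothness
                if q[0] < rows and q[1] < cols:
                    G.add_edge(p, q, capacity=smooth)
                    G.add_edge(q, p, capacity=smooth)
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    seg = np.zeros((rows, cols), dtype=bool)
    for node in source_side:
        if node != s:
            seg[node] = True
    return seg

# A bright block with one ambiguous pixel: its intensity alone (0.4) favors
# background, but the smoothness term pulls it into the foreground region.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
img[4, 4] = 0.4
seg = graph_cut_segment(img)
```

The ambiguous pixel illustrates why a global min-cut is preferred over per-pixel thresholding for noisy dynamic MRI: the pairwise terms regularize isolated misclassifications away.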
Conference Committee Involvement (8)
Image Processing
15 February 2021 | Online Only, California, United States
Image Processing
17 February 2020 | Houston, Texas, United States
Image Processing
19 February 2019 | San Diego, California, United States
Image Processing
11 February 2018 | Houston, Texas, United States
Image Processing Posters
12 February 2017 | Orlando, Florida, United States
Image Processing
12 February 2017 | Orlando, Florida, United States
Image Processing
1 March 2016 | San Diego, California, United States
Image Processing
24 February 2015 | Orlando, Florida, United States