Medical image registration aims to identify the spatial deformation between images of the same anatomical region and is fundamental to image-based diagnostics and therapy. To date, the majority of deep learning-based registration methods employ regularizers that enforce global spatial smoothness, e.g., the diffusion regularizer. However, such regularizers are not tailored to the data and might not be capable of reflecting the complex underlying deformation. In contrast, physics-inspired regularizers promote physically plausible deformations. One such regularizer is the linear elastic regularizer, which models the deformation of elastic material. These regularizers are driven by parameters that define the material's physical properties. For biological tissue, a wide range of estimates of such parameters can be found in the literature, and it remains an open challenge to identify suitable parameter values for successful registration. To overcome this problem and to incorporate physical properties into learning-based registration, we propose to use a hypernetwork that learns the effect of the physical parameters of a physics-inspired regularizer on the resulting spatial deformation field. In particular, we adapt the HyperMorph framework to learn the effect of the two elasticity parameters of the linear elastic regularizer. Our approach enables the efficient discovery of suitable, data-specific physical parameters at test time. To the best of our knowledge, we are the first to use a hypernetwork to learn physics-inspired regularization for medical image registration. We evaluate our approach on 3D intrapatient lung CT images. The results show that the linear elastic regularizer can yield results comparable to the diffusion regularizer in unsupervised learning-based registration while predicting deformations with fewer foldings. With our method, the adaptation of the physical parameters to the data can successfully be performed at test time. Our code is available at https://github.com/annareithmeir/elastic-regularization-hypermorph.
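For intuition, the linear elastic regularizer penalizes the symmetric strain of the displacement field, weighted by the two Lamé parameters mu and lambda, which are exactly the physical parameters the hypernetwork is conditioned on. A minimal NumPy sketch of this energy (an illustrative implementation, not the authors' code; it assumes a dense 3D displacement field and unit voxel spacing):

```python
import numpy as np

def linear_elastic_energy(u, mu, lam):
    """Linear elastic regularization energy of a displacement field.

    u   : (3, D, H, W) array with components u_x, u_y, u_z
    mu  : shear modulus (Lame's second parameter)
    lam : Lame's first parameter
    """
    # Jacobian of the displacement: grads[i][j] = d u_i / d x_j
    grads = [np.gradient(u[i]) for i in range(3)]

    strain_term = 0.0
    for i in range(3):
        for j in range(3):
            # symmetric strain tensor e_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
            e_ij = 0.5 * (grads[i][j] + grads[j][i])
            strain_term += np.sum(e_ij ** 2)

    divergence = grads[0][0] + grads[1][1] + grads[2][2]
    return mu * strain_term + 0.5 * lam * np.sum(divergence ** 2)
```

In a hypernetwork setting, (mu, lam) are sampled during training and fed both to this loss term and to the network that generates the registration model's weights, so that a single trained model covers the whole parameter range.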
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate the biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose supervoxel-based regularization of motion, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing with fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
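The structure-preserving smoothing step can be illustrated with the classic box-filter guided filter of He et al.; the sketch below (NumPy/SciPy, not the paper's exact implementation) filters one component of the estimated displacement field using the fixed image as guide, so the smoothing adapts to anatomical boundaries such as sliding interfaces:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving smoothing of `src` (e.g., one displacement
    component) steered by `guide` (e.g., the fixed image)."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)

    mean_I, mean_p = mean(guide), mean(src)
    corr_Ip = mean(guide * src)
    var_I = mean(guide * guide) - mean_I ** 2

    # local linear model: filtered output = a * guide + b
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)
```

Replacing the Gaussian smoothing of Demons-style registration with this filter, applied per displacement component each iteration, yields locally adaptive regularization at roughly comparable computational cost.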
In the case of lung cancer, an assessment of regional lung function has the potential to guide more accurate radiotherapy treatment. This could spare well-functioning parts of the lungs and also be used for follow-up. In this paper we present a novel approach for regional lung ventilation estimation from dynamic lung CT imaging that might be used during radiotherapy planning. Our method combines a supervoxel-based image representation with deformable image registration performed between peak breathing phases, tracking changes in intensity of previously extracted supervoxels. Such a region-oriented approach has the potential to mimic the lung anatomy and is thus expected to be more physiologically consistent than previous methods relying on voxel-wise relationships. Our results are compared with static ventilation images acquired from hyperpolarized Xenon-129 MRI (XeMRI). In our study we use three patient datasets consisting of 4DCT and XeMRI. Based on global correlation coefficients, we achieve a higher average correlation (0.487) than the commonly used voxel-wise method for estimating ventilation (0.423). We also achieve higher correlation values when ventilated and non-ventilated regions of the lungs are investigated separately. Increasing the number of supervoxel layers further improves our results: one layer achieves a correlation of 0.393, compared to 0.487 for 15 layers. Overall, we have shown that our method achieves higher correlation values with XeMRI than the previously used approach.
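A hedged sketch of the region-oriented ventilation estimate: given registered exhale/inhale CT volumes in Hounsfield units and precomputed supervoxel labels, ventilation per supervoxel can be derived from mean air fractions. The specific-ventilation formula below is one common HU-based variant (derived from air/tissue mixing, assuming air = -1000 HU and tissue = 0 HU), not necessarily the paper's exact formula:

```python
import numpy as np

def supervoxel_ventilation(hu_ex, hu_in, labels_ex):
    """Per-supervoxel specific ventilation from registered exhale/inhale
    CT. `labels_ex` holds supervoxel labels on the exhale grid and
    `hu_in` is the inhale scan warped to that grid."""
    vent = {}
    for label in np.unique(labels_ex):
        mask = labels_ex == label
        # mean HU per supervoxel rather than a voxel-wise estimate
        m_ex, m_in = hu_ex[mask].mean(), hu_in[mask].mean()
        # air fraction per region, assuming air = -1000 HU, tissue = 0 HU
        f_ex, f_in = -m_ex / 1000.0, -m_in / 1000.0
        # change in air volume relative to the exhale air volume
        vent[label] = (f_in - f_ex) / (f_ex * (1.0 - f_in) + 1e-8)
    return vent
```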
We propose a novel mediastinal lymph node detection and segmentation method for chest CT volumes based on fully convolutional networks (FCNs). Most lymph node detection methods are based on filters for blob-like structures, which are not specific to lymph nodes. The 3D U-Net is a recent example of state-of-the-art 3D FCNs; it can be trained to learn the appearance of lymph nodes and output lymph node likelihood maps for input CT volumes. However, it is prone to oversegmentation of each lymph node due to the strong data imbalance between lymph nodes and the remainder of the CT volume. To moderate the size imbalance between the target classes, we train the 3D U-Net using not only lymph node annotations but also other anatomical structures (lungs, airways, aortic arches, and pulmonary arteries) that can be extracted robustly in an automated fashion. We applied the proposed method to 45 cases of contrast-enhanced chest CT volumes. Experimental results showed that 95.5% of lymph nodes were detected with 16.3 false positives per CT volume. The segmentation results showed that the proposed method prevents oversegmentation, achieving an average Dice score of 52.3 ± 23.1%, compared to 49.2 ± 23.8% for the baseline method.
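The effect of the extra anatomical labels can be made concrete with a multi-class soft Dice score: averaging Dice over all classes gives a small structure such as a lymph node the same weight as the lungs, which counteracts the voxel-count imbalance. A minimal NumPy sketch (illustrative; the loss actually used to train the 3D U-Net may differ):

```python
import numpy as np

def multiclass_soft_dice(prob, onehot, eps=1e-6):
    """Mean soft Dice over classes (e.g., background, lymph node, lung,
    airway, aortic arch, pulmonary artery).

    prob   : (C, D, H, W) softmax output of the network
    onehot : (C, D, H, W) one-hot reference labels
    """
    inter = (prob * onehot).sum(axis=(1, 2, 3))
    denom = prob.sum(axis=(1, 2, 3)) + onehot.sum(axis=(1, 2, 3))
    dice_per_class = (2.0 * inter + eps) / (denom + eps)
    # each class contributes equally, regardless of its voxel count
    return dice_per_class.mean()
```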
This paper presents a local intensity structure analysis based on an intensity-targeted radial structure tensor (ITRST) and the blob-like structure enhancement filter built on it (ITRST filter) for mediastinal lymph node detection in chest computed tomography (CT) volumes. Although the filter based on conventional radial structure tensor analysis (RST filter) can be utilized to detect lymph nodes, some lymph nodes adjacent to regions with extremely high or low intensities cannot be detected. Therefore, we propose the ITRST filter, which integrates prior knowledge of the detection target's intensity range into the RST filter. Our lymph node detection algorithm consists of two steps: (1) obtaining candidate regions using the ITRST filter and (2) removing false positives (FPs) using a support vector machine classifier. We evaluated the lymph node detection performance of the ITRST filter on 47 contrast-enhanced chest CT volumes and compared it with the RST and Hessian filters. The detection rate of the ITRST filter was 84.2% with 9.1 FPs/volume for lymph nodes whose short axis was at least 10 mm, outperforming both the RST and Hessian filters.
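The key idea of intensity targeting can be caricatured as gating a blob-enhancement response with the expected lymph-node intensity band; the sketch below is a deliberate simplification of the ITRST filter, with `lo` and `hi` as assumed intensity bounds:

```python
import numpy as np
from scipy import ndimage

def intensity_targeted_response(blobness, volume, lo, hi, sigma=1.0):
    """Suppress blob responses outside the target intensity band so that
    candidates adjacent to very bright or very dark structures are not
    dominated by their high-contrast neighbours (illustrative only)."""
    in_band = ((volume >= lo) & (volume <= hi)).astype(float)
    # soft spatial weighting of the band membership
    return blobness * ndimage.gaussian_filter(in_band, sigma)
```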
We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the voxel-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. Our work overcomes some of these limitations by posing the problem on a graph created from adjacent supervoxels, reducing the number of nodes from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset shows that our approach compares very favorably with state-of-the-art methods in continuous and discrete image registration, achieving a target registration error of 1.16 mm on average per landmark.
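A sketch of the graph construction (illustrative; the paper's pipeline has additional components): with 3D SLIC supervoxels from scikit-image, nodes are supervoxels and edges connect labels that touch along any axis, shrinking the graph from millions of voxels to a few thousand nodes for the graph-cut solver:

```python
import numpy as np
from skimage.segmentation import slic

def supervoxel_graph(volume, n_segments=2000):
    """Node labels and adjacency edges for a graph-cut solver that
    operates on supervoxels instead of voxels."""
    # channel_axis=None treats the 3D array as a grayscale volume;
    # compactness needs tuning to the intensity range
    labels = slic(volume, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    edges = set()
    for axis in range(volume.ndim):
        a = np.moveaxis(labels, axis, 0)[:-1].ravel()
        b = np.moveaxis(labels, axis, 0)[1:].ravel()
        for i, j in zip(a[a != b].tolist(), b[a != b].tolist()):
            edges.add((min(i, j), max(i, j)))  # undirected edge
    return labels, edges
```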
In this paper, we propose a novel supervoxel segmentation method designed for mediastinal lymph nodes by embedding Hessian-based feature extraction. Starting from a popular supervoxel segmentation method, SLIC, which computes supervoxels by minimising differences in intensity and distance, we overcome this method's limitation of merging neighboring regions with similar intensity by introducing Hessian-based feature analysis into the supervoxel formation. We call this structure-oriented voxel clustering, which allows more accurate division into distinct regions having blob-, line- or sheet-like structures. This way, different tissue types in chest CT volumes can be segmented individually, even if neighboring tissues have similar intensity or are of non-spherical extent. We demonstrate the performance of the Hessian-assisted supervoxel technique by applying it to mediastinal lymph node detection in 47 chest CT volumes, resulting in false positive reductions from lymph node candidate regions. 89% of lymph nodes whose short axis is at least 10 mm could be detected with 5.9 false positives per case using our method, compared to our previous method's 83% detection rate with 6.4 false positives per case.
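To make the structure analysis concrete, blob-, line- and sheet-likeness can be computed from the magnitudes of the Hessian eigenvalues (|l1| <= |l2| <= |l3|). The sketch below uses a Westin-style geometric decomposition; the paper's exact Hessian features may differ:

```python
import numpy as np
from scipy import ndimage

def hessian_shape_measures(volume, sigma=2.0):
    """Per-voxel blob-, line- and sheet-likeness from the eigenvalues
    of the Gaussian-smoothed Hessian at scale `sigma`."""
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            # Gaussian second derivative d^2 I / (dx_i dx_j)
            H[..., i, j] = ndimage.gaussian_filter(volume, sigma, order=order)
    l = np.sort(np.abs(np.linalg.eigvalsh(H)), axis=-1)
    l3 = l[..., 2] + 1e-8
    blob = l[..., 0] / l3                 # all three eigenvalues comparable
    line = (l[..., 1] - l[..., 0]) / l3   # two dominant curvature directions
    sheet = (l[..., 2] - l[..., 1]) / l3  # one dominant curvature direction
    return blob, line, sheet
```

Feeding such measures into the SLIC distance term steers cluster boundaries toward structural, rather than purely intensity-based, divisions.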
Dynamic contrast-enhanced MRI is an imaging technique now widely used for cancer imaging. Changes in tumour microvasculature are typically quantified by pharmacokinetic modelling of the contrast uptake curves. Reliable pharmacokinetic parameter estimation depends on the measurement of the arterial input function, which can be obtained from arterial blood sampling or extracted directly from the image data. However, arterial blood sampling poses additional risks to the patient, and extracting the input function from MR intensities is not reliable. In this work, we propose to compute a perfusion CT based arterial input function, which is then employed for dynamic contrast-enhanced MRI pharmacokinetic parameter estimation. Here, parameter estimation is performed simultaneously with intra-sequence motion correction by using nonlinear image registration. Ktrans maps obtained with this approach were compared with those obtained using a population-averaged arterial input function, i.e., the Orton model. The dataset comprised 5 rectal cancer patients who had been imaged with both perfusion CT and dynamic contrast-enhanced MRI, before and after the administration of a radiosensitising drug. Ktrans distributions pre- and post-therapy were computed using both the perfusion CT and the Orton arterial input function. Perfusion CT derived arterial input functions can be used for pharmacokinetic modelling of dynamic contrast-enhanced MRI data when perfusion CT images of the same patients are available. Compared to the Orton model, perfusion CT functions have the potential to give a more accurate separation between responders and non-responders.
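As a worked example of the pharmacokinetic modelling step, the standard Tofts model expresses the tissue concentration as Ktrans times the arterial input function convolved with an exponential washout. A minimal SciPy sketch of this general approach (an assumed illustration; the paper estimates the parameters jointly with motion correction, which is not shown here):

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts_conc(t, ktrans, ve, t_aif, c_aif):
    """Standard Tofts model: Ktrans times the AIF convolved with
    exp(-Ktrans/ve * t), evaluated at time points `t`."""
    dt = t_aif[1] - t_aif[0]           # assumes uniform AIF sampling
    kernel = np.exp(-(ktrans / ve) * t_aif)
    conv = np.convolve(c_aif, kernel)[: len(t_aif)] * dt
    return ktrans * np.interp(t, t_aif, conv)

def fit_ktrans(t, c_tissue, t_aif, c_aif):
    """Fit (Ktrans, ve) to one tissue uptake curve; `t_aif`/`c_aif` is
    the arterial input function, e.g. resampled from the perfusion CT."""
    popt, _ = curve_fit(
        lambda tt, kt, ve: tofts_conc(tt, kt, ve, t_aif, c_aif),
        t, c_tissue, p0=(0.1, 0.3), bounds=([1e-4, 1e-3], [5.0, 1.0]))
    return popt  # Ktrans [1/min], ve [dimensionless]
```

Swapping `c_aif` between the perfusion CT curve and the Orton population model is what produces the two Ktrans maps compared in the study.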
Dynamic Positron Emission Tomography (PET) is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction to maintain the validity of the dynamic measurements; this correction can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator) in order to validate the proposed method's ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
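For reference, the conventional baseline's similarity measure can be written compactly; below is a small NumPy sketch of Studholme's normalised mutual information computed from a joint histogram (a generic implementation, not the study's code):

```python
import numpy as np

def normalised_mutual_information(a, b, bins=64):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), computed from a joint
    intensity histogram of two image frames."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

In low-SNR early frames the joint histogram is dominated by noise, which gives some intuition for why a purely intensity-driven criterion fails where the kinetics-informed method does not.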
KEYWORDS: Digital breast tomosynthesis, Breast, Detection and tracking algorithms, Sensors, Mammography, X-rays, Reconstruction algorithms, 3D image reconstruction, Image sensors, 3D image processing
We present a novel method for the detection and reconstruction in 3D of microcalcifications in digital breast tomosynthesis (DBT) image sets. From a list of microcalcification candidate regions (that is, real microcalcification points or noise points) found in each DBT projection, our method: (1) finds the set of corresponding points of a microcalcification in all the other projections; (2) locates its 3D position in the breast; (3) highlights noise points; and (4) identifies the failure of microcalcification detection in one or more projections, in which case the method predicts the image locations of the microcalcification in the projections in which it is missed.
From the geometry of the DBT acquisition system, an "epipolar curve" is derived for the 2D positions of a microcalcification in each projection generated at different angular positions. Each epipolar curve represents a single microcalcification point in the breast. By examining the n projections of m microcalcifications in DBT, one ideally expects m epipolar curves, each comprising n points. Since each microcalcification point is at a different 3D position, each epipolar curve will be at a different position in the same 2D coordinate system. By plotting all the microcalcification candidates in the same 2D plane simultaneously, one can easily extract a representation of the number of microcalcification points in the breast (the number of epipolar curves) and their 3D positions, the detected noise points (isolated points not forming any epipolar curve), and the microcalcification points missed in some projections (epipolar curves with fewer than n points).
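The geometric core of the method can be sketched as follows: for an assumed simplified DBT geometry (source rotating in a plane above a stationary detector at z = 0), the predicted detector positions of a 3D point across tube angles trace its epipolar curve. Candidates from different projections that fall on a common curve are then grouped as one microcalcification:

```python
import numpy as np

def epipolar_curve(q, angles_deg, src_dist=600.0):
    """Predicted 2D detector positions of a 3D breast point `q` = (x, y, z)
    for each tube angle (simplified geometry: source in the x-z plane
    at `src_dist` mm above the detector plane z = 0)."""
    q = np.asarray(q, dtype=float)
    pts = []
    for a in np.deg2rad(angles_deg):
        s = np.array([src_dist * np.sin(a), 0.0, src_dist * np.cos(a)])
        t = s[2] / (s[2] - q[2])   # ray from source through q hits z = 0
        p = s + t * (q - s)
        pts.append(p[:2])          # (u, v) detector coordinates
    return np.array(pts)
```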
Cumulative residual entropy (CRE) [1,2] has recently been advocated as an alternative to differential entropy for describing the complexity of an image. CRE has been used to construct an alternate form of mutual information (MI) [3,4], called symmetric cumulative mutual information (SCMI) [5] or cross-CRE (CCRE) [6]. This alternate form of MI has exhibited superior performance to traditional MI in a variety of ways [6]. However, like traditional MI, SCMI suffers from sensitivity to the changing size of the overlap between images over the course of registration. Alternative similarity measures based on differential entropy, such as normalized mutual information (NMI) [7], the entropy correlation coefficient (ECC) [8,9] and modified mutual information (M-MI) [10], have been shown to exhibit superior performance to MI with respect to the overlap sensitivity problem. In this paper, we show how CRE can be used to compute versions of NMI, ECC, and M-MI that we call the normalized cumulative mutual information (NCMI), the cumulative residual entropy correlation coefficient (CRECC), and the modified symmetric cumulative mutual information (M-SCMI). We use publicly available CT, PET, and MR brain images with known ground truth transformations to evaluate the performance of these CRE-based similarity measures for rigid multimodal registration. Results show that the proposed similarity measures provide a statistically significant improvement in target registration error (TRE) over SCMI.
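CRE itself is straightforward to compute; for a discrete image it replaces the density in the entropy integral with the survival function P(X > x). A small NumPy sketch of this discrete approximation (illustrative, not the paper's implementation):

```python
import numpy as np

def cumulative_residual_entropy(image, bins=256):
    """CRE(X) = -sum over bins of P(X > x) * log P(X > x) * dx."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()
    survival = 1.0 - np.cumsum(p)   # P(X > x) per bin
    dx = np.diff(edges)
    keep = survival > 0
    s, dx = survival[keep], dx[keep]
    return -np.sum(s * np.log(s) * dx)
```

The CRE-based measures in the paper substitute quantities of this kind for the differential entropies in the NMI, ECC, and M-MI formulas.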
KEYWORDS: Magnetic resonance imaging, Image registration, Brain, Image fusion, 3D image processing, In vitro testing, In vivo imaging, Spatial resolution, Medical imaging, Image resolution
Introduction - Fusion of histology and MRI is frequently demanded in biomedical research to study in vitro tissue properties in an in vivo reference space. Distortions and artifacts caused by the cutting and staining of histological slices, as well as differences in spatial resolution, make even the rigid fusion a difficult task. State-of-the-art methods start with a mono-modal restacking yielding a histological pseudo-3D volume; the 3D information of the MRI reference is considered subsequently. However, consistency of the histology volume and consistency with the corresponding MRI seem to be diametrically opposed goals. Therefore, we propose a novel fusion framework that optimizes histology/histology and histology/MRI consistency at the same time, finding a balance between the two goals.
Method - Direct slice-to-slice correspondence, even in irregularly-spaced cutting sequences, is achieved by registration-based interpolation of the MRI. By introducing a weighted multi-image mutual information metric (WI), adjacent histology and the corresponding MRI are taken into account at the same time. Thus, the reconstruction of the histological volume and its fusion with the MRI are done in a single step.
Results - Based on two data sets with more than 110 single registrations in all, the results are evaluated quantitatively using Tanimoto overlap measures and qualitatively by showing the fused volumes. In comparison to other multi-image metrics, the reconstruction based on WI is significantly improved. We evaluated different parameter settings with emphasis on the weighting term steering the balance between intra- and inter-modality consistency.
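One plausible reading of the WI metric, sketched in NumPy (the weighting and exact formulation in the paper may differ): each histological slice is scored against its histological neighbours (intra-modality consistency) and against its corresponding, registration-interpolated MRI slice (inter-modality consistency), with a weight `w` steering the balance:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two 2D images from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def weighted_multi_image_metric(hist_slice, hist_neighbours, mri_slice, w=0.5):
    """WI-style score: `w` balances histology/histology consistency
    against histology/MRI consistency."""
    intra = np.mean([mutual_information(hist_slice, n) for n in hist_neighbours])
    inter = mutual_information(hist_slice, mri_slice)
    return w * intra + (1.0 - w) * inter
```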
Intensity-based registration algorithms have proved to be accurate and robust for 3D-3D registration tasks. However, these methods utilise the information content within an image, and their performance therefore suffers when the image data are sparse. This is the case for the registration of a single image slice to a 3D image volume. There are some important applications that could benefit from improved slice-to-volume registration, for example, the planning of magnetic resonance (MR) scans or cardiac MR imaging, where images are acquired as stacks of single slices. We have developed and validated an information-based slice-to-volume registration algorithm that uses vector-valued probabilistic images of tissue classification derived from the original intensity images. We believe that using such methods inherently incorporates more information about the images into the registration framework, especially for images containing severe partial volume artifacts. Initial experimental results indicate that the suggested method can achieve a more robust registration than standard intensity-based methods for the rigid registration of a single thick brain MR slice, containing severe partial volume artifacts in the through-plane direction, to a complete 3D MR brain volume.
We present initial results from evaluating the accuracy with which biomechanical breast models (BBMs) based on finite element methods can predict the displacements of tissue within the breast. We investigate the influence of different tissue elasticity values, Poisson's ratios, boundary conditions, finite element solvers and mesh resolutions on one data set. MR images were acquired before and after gently compressing a volunteer's breast. These images were aligned using a 3D non-rigid registration algorithm. The boundary conditions were derived from the result of the non-rigid registration or by assuming no patient motion at the deep or medial side. Three linear and two non-linear elastic material models were tested. The accuracy of the BBMs was assessed by the Euclidean distance of twelve corresponding anatomical landmarks. Overall, none of the tested material models was clearly superior to the others for the investigated parameter values. A major increase in average error was noted for partially inaccurate boundary conditions at high Poisson's ratios, due to the introduced volume change. Maximal errors remained high, however, for low Poisson's ratios, due to the landmarks' closeness to the inaccurate boundary conditions. The choice of finite element solver or mesh resolution had almost no effect on the performance outcome.
This work presents a validation study for non-rigid registration of 3D contrast-enhanced magnetic resonance mammography images. We use our previously developed methodology for simulating physically plausible, biomechanical tissue deformations using finite element methods to compare two non-rigid registration algorithms, based on single-level and multi-level free-form deformations using B-splines and normalized mutual information. We constructed four patient-specific finite element models and applied the solutions to the original post-contrast scans of the patients, simulating tissue deformation between image acquisitions. The original image pairs were registered to the FEM-deformed post-contrast images using different free-form deformation mesh resolutions. The target registration error was computed for each experiment with respect to the simulated gold standard on a voxel basis. Registration error and single-level free-form deformation resolution were found to be intrinsically related: the smaller the control-point spacing, the higher the localized errors, indicating local registration failure. For multi-level free-form deformations, the registration errors improved with increasing mesh resolution. This study forms an important milestone in making our non-rigid registration framework applicable for routine clinical use.
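For intuition about mesh resolution, a free-form deformation turns a coarse grid of control-point displacements into a dense field via cubic B-spline interpolation, and a multi-level FFD sums such fields over grids of increasing resolution. The sketch below approximates the B-spline evaluation with SciPy's cubic spline zoom (an illustration, not the registration software used in the study):

```python
import numpy as np
from scipy.ndimage import zoom

def ffd_displacement(control_disp, image_shape):
    """Dense displacement from a coarse control grid.

    control_disp : (3, cx, cy, cz) control-point displacements
    image_shape  : target volume shape (X, Y, Z)
    """
    factors = [s / c for s, c in zip(image_shape, control_disp.shape[1:])]
    # cubic spline upsampling approximates B-spline FFD evaluation
    return np.stack([zoom(control_disp[d], factors, order=3)
                     for d in range(3)])

def multilevel_ffd(levels, image_shape):
    """Multi-level FFD: sum of displacement fields from control grids
    of increasing resolution (coarse to fine)."""
    return sum(ffd_displacement(lvl, image_shape) for lvl in levels)
```

The coarse-to-fine summation is what lets the multi-level variant refine locally without the instability seen when a single fine grid is used alone.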
The generation of patient-specific meshes for finite element methods (FEM) with application to brain deformation is a time-consuming process, but is essential for modeling intra-operative deformation of the brain during neurosurgery using FEM techniques. We present an automatic method for the generation of FEM meshes fitting patient data. The method is based on non-rigid registration of patient MR images to an atlas brain image, followed by deformation of a high-quality mesh of this atlas brain. We demonstrate the technique on brain MRI images from 12 patients undergoing neurosurgery and show that the FEM meshes generated by our technique are of good quality and suitable for modeling the types of deformation observed during neurosurgery. We then demonstrate the utility of these meshes by simulating simple neurosurgical scenarios on example patients; the deformations predicted by a simple loading scenario match well with those observed following the actual surgery. This paper does not attempt an exhaustive study of brain deformation, but does provide an essential tool for such a study - a method of rapidly generating finite element meshes fitting individual subject brains.
We present a new method for the detection and measurement of volume changes in human hippocampi in serial Magnetic Resonance Imaging (MRI). The method follows a two-stage approach: (1) precise co-registration and intensity matching of the initial (baseline) and follow-up scans, and (2) refinement and segmentation propagation of the hippocampi outlines, drawn in the baseline scan by an expert observer, to the matched scan (the co-registered and intensity-matched follow-up scan of the time series). The first step is performed using MRreg, a rigid registration tool based on cross-correlation and intensity matching, and the second step makes use of active contour models for tracking the hippocampi outlines in the time series.
In this paper we present a hierarchical multiscale shape description tool based on active contour models, which enables data-driven quantitative and qualitative shape studies of MR brain images at multiple scales. At large scales, global shape properties are extracted from the image while smaller-scale features are suppressed; at lower scales, the detailed shape characteristics become more prominent. Extracting a shape at different levels of scale yields a hierarchical multiscale shape stack, which can be used to localize and characterize shape changes such as deformations and abnormalities at different levels of scale. The shape description is performed as a set of implicit segmentation steps at multiple scales, yielding descriptions of an object at various levels of detail. Implicit segmentation is carried out using the well-known active contour model: starting from an initial active contour, several implicit optimization processes with differently regularized energy functions are performed, where the energy functions are parameterized by scale. The presented algorithm for shape focusing and description shows promising results in extracting and characterizing complex shapes in MR brain images across a large set of scales.
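A hedged sketch of the shape-stack idea using scikit-image's active contour: running the snake with progressively weaker smoothing and regularization exposes finer shape detail at each level (the regularization schedule here is an assumption for illustration, not the paper's parameterization):

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def multiscale_shape_stack(image, init_snake, scales=(8.0, 4.0, 2.0, 1.0)):
    """Hierarchical shape stack: each level re-optimizes the contour on
    a less-smoothed image with weaker regularization.

    init_snake : (N, 2) array of initial (row, col) contour points
    """
    stack, snake = [], init_snake
    for s in scales:
        smoothed = gaussian(image, sigma=s)
        # stronger alpha/beta at coarse scale -> smoother, more global shape
        snake = active_contour(smoothed, snake, alpha=0.05 * s, beta=2.0 * s)
        stack.append(snake)
    return stack
```

Differences between consecutive levels of the stack localize where shape detail (e.g., a deformation or abnormality) enters at a given scale.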