We propose a deep learning-based method for single-heartbeat 4D cardiac CT reconstruction, in which a single cardiac cycle is split into multiple phases for reconstruction. First, we pre-reconstruct each phase using the projection data from that phase and its neighboring phases. The pre-reconstructions are fed into a supervised registration network to generate the deformation fields between different phases. The deformation fields are trained so that they match the ground-truth images of the corresponding phases. The deformation fields are then used in the FBP-and-warp method for motion-compensated reconstruction, and a subsequent network removes residual artifacts. The proposed method was validated with simulation data from 40 4D cardiac CT scans and demonstrated improved RMSE and SSIM and less blurring compared with FBP and PICCS.
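A minimal sketch (PyTorch) of the pipeline described above: a registration network predicts deformation fields between pre-reconstructed phases, the fields warp a neighboring phase toward the target phase (the FBP-and-warp step), and a second network removes residual artifacts. The network architectures, layer sizes, and fusion rule here are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of the motion-compensated reconstruction pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    """Predicts a 2-channel (dx, dy) deformation field from a pair of phase images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )
    def forward(self, moving, target):
        return self.net(torch.cat([moving, target], dim=1))

def warp(image, flow):
    """Warp `image` (N,1,H,W) with a dense flow field (N,2,H,W) via grid_sample."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1).to(image)
    # Assumption: the flow is expressed in normalized [-1, 1] coordinates.
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

class ArtifactRemovalNet(nn.Module):
    """Removes residual artifacts from the motion-compensated reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)  # residual correction

# Toy forward pass: compensate one neighboring phase toward the target phase.
reg, refine = RegistrationNet(), ArtifactRemovalNet()
target_prerecon = torch.randn(1, 1, 128, 128)
neighbor_prerecon = torch.randn(1, 1, 128, 128)
flow = reg(neighbor_prerecon, target_prerecon)          # deformation field
compensated = warp(neighbor_prerecon, flow)             # FBP-and-warp step
final = refine(0.5 * (compensated + target_prerecon))   # fuse, then de-artifact
```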
NeuralCT [1] has recently been proposed as an implicit neural representation-based image reconstruction method that can produce time-resolved images from CT sinograms and reduce motion artifacts, even for objects undergoing complex motion. NeuralCT requires neither a prior motion model nor an estimate of object motion. Instead, it uses a network to implicitly represent the time-varying object boundary by a signed distance function and optimizes the network via differentiable rendering. In this work, we modify the NeuralCT framework to reconstruct scenes that contain multiple moving objects with distinct attenuation levels. We show that the performance of NeuralCT reconstruction depends on the quality of the network initialization (in this case, object segmentation in the motion-corrupted FBP image). We show how spatially aware object segmentation can improve motion-corrected reconstruction of moving objects with multiple attenuation levels despite high angular motion and complex topological changes.
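A minimal sketch (PyTorch) of the implicit-representation idea behind the multi-object extension: an MLP maps (x, y, t) to one signed distance value per moving object, and a soft occupancy of each signed distance function is weighted by that object's attenuation to form the time-varying attenuation map that would then be rendered into sinogram space. The layer sizes, number of objects, and attenuation values are illustrative assumptions, not the NeuralCT implementation.

```python
# Hypothetical sketch of a multi-object, time-varying SDF representation.
import torch
import torch.nn as nn

class TimeVaryingSDF(nn.Module):
    """MLP mapping (x, y, t) -> one signed distance value per object."""
    def __init__(self, num_objects=2, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_objects),
        )
    def forward(self, xyt):
        return self.mlp(xyt)

def attenuation_map(sdf_values, attenuations, sharpness=50.0):
    """Convert per-object SDF values into a single attenuation per point.

    Inside an object (SDF < 0) the soft occupancy approaches 1, so that
    object's attenuation dominates; overlaps simply sum in this toy version.
    """
    occupancy = torch.sigmoid(-sharpness * sdf_values)   # (N, num_objects)
    return (occupancy * attenuations).sum(dim=-1)        # (N,)

# Toy usage: evaluate the scene at one time point on a coordinate grid.
model = TimeVaryingSDF(num_objects=2)
attenuations = torch.tensor([0.2, 1.0])   # two distinct attenuation levels (assumed)
xs = torch.linspace(-1, 1, 64)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
t = torch.full((grid.shape[0], 1), 0.25)  # normalized acquisition time
mu = attenuation_map(model(torch.cat([grid, t], dim=1)), attenuations)
# `mu` would be forward-projected with a differentiable Radon transform and
# compared against the measured sinogram to optimize the network weights.
```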
Detection of left ventricular (LV) wall motion abnormalities (WMA) from 4DCT by visual interpretation is challenging, and quantitative assessment requires complex computation on multiple frames with large data sizes. Volume rendering (VR) of the LV in CT across the cardiac cycle can enable evaluation of 3D wall motion with significantly reduced data size. We propose a deep learning (DL) framework to automate WMA detection in volume-rendered videos of clinical 4DCT studies. For 253 cardiac 4DCT studies, 6 VR videos depicting the LV were automatically generated, corresponding to views rotated every 60 degrees around the long axis. Ground-truth WMA classification was performed for each LV-view video by evaluating the extent of impaired regional shortening visible in that view. For DL prediction, videos were first processed by a pre-trained CNN, Inception V3, to extract image features. The extracted features from multiple frames were then concatenated into a matrix and input into a long short-term memory (LSTM) network for binary classification of WMA presence in the video. Studies were classified as abnormal if ≥2 of the 6 videos were abnormal. Studies were split chronologically so that the first 174 patients were used in 5-fold cross-validation and the final 79 studies were used for testing. VR significantly compressed the data size (~800-fold). DL classification of WMA had high (≥89%) per-video and per-study accuracy, sensitivity, and specificity in both the cross-validation and testing cohorts. This novel method may offer a simple and accurate way to screen CT cases for WMA from highly compressed data.
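A minimal sketch (PyTorch/torchvision) of the described two-stage classifier: a pre-trained Inception V3 backbone extracts one feature vector per video frame, the per-frame features are stacked into a sequence, and an LSTM produces a binary WMA prediction per video; a study is called abnormal when at least 2 of its 6 view videos are predicted abnormal. Apart from the 2-of-6 rule stated in the abstract, the shapes, hidden sizes, and decision threshold are illustrative assumptions.

```python
# Hypothetical sketch of per-video WMA classification and the study-level rule.
import torch
import torch.nn as nn
from torchvision.models import inception_v3, Inception_V3_Weights

# Feature extractor: Inception V3 with its classification head removed.
backbone = inception_v3(weights=Inception_V3_Weights.DEFAULT)
backbone.fc = nn.Identity()     # keep the 2048-d pooled features per frame
backbone.eval()

class VideoWMAClassifier(nn.Module):
    """LSTM over per-frame Inception features -> probability of WMA in a video."""
    def __init__(self, feat_dim=2048, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, frame_features):          # (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(frame_features)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

@torch.no_grad()
def classify_study(videos, clf, threshold=0.5):
    """`videos`: list of 6 tensors, each (frames, 3, 299, 299), one per VR view."""
    abnormal_votes = 0
    for frames in videos:
        feats = backbone(frames)                 # (frames, 2048)
        prob = clf(feats.unsqueeze(0))           # add batch dimension
        abnormal_votes += int(prob.item() >= threshold)
    return abnormal_votes >= 2                   # study-level 2-of-6 rule

# Toy usage with random frames (real input would be the rendered LV videos).
clf = VideoWMAClassifier()
dummy_videos = [torch.randn(20, 3, 299, 299) for _ in range(6)]
is_abnormal = classify_study(dummy_videos, clf)
```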