Motion tracking aims to accurately localize the moving lesion during radiotherapy to ensure the accuracy of radiation delivery. Ultrasound (US) imaging is a promising imaging modality for guiding radiation therapy in real time. This study proposed a deep learning-based motion tracking method to track the moving lesion in US images. To reduce the search region, a box regression-based method is adopted to predefine a region of interest (ROI). Within the ROI, a feature pyramid network (FPN) that uses a top-down architecture with lateral connections was adopted to extract image features, and a region proposal network (RPN) that learns the attention mechanism of the annotated anatomical landmarks was then used to yield a number of proposals. The training of the networks was supervised by three training objectives: a bounding box regression loss, a proposal classification loss, and a classification loss. In addition, we employed long short-term memory (LSTM) to capture temporal features from the US image sequence. The weights from transfer learning were used as the initial values of our network. Two-dimensional liver US images from 24 patients and the corresponding annotated anatomical landmarks were used to train our proposed method. In testing experiments on 11 patients, our method achieved a mean tracking error of 0.58 mm with a standard deviation of 0.44 mm at a temporal resolution of 69 frames per second. Our proposed method provides an effective and clinically feasible solution to monitor lesion motion in radiation therapy.
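To make the temporal modeling step concrete, here is a minimal sketch (assuming PyTorch) in which per-frame CNN features are passed through an LSTM that regresses the landmark position for each US frame. The small convolutional encoder stands in for the FPN/RPN pipeline, and all class names, layer sizes, and the two-coordinate output are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: LSTM over per-frame features for landmark tracking in US sequences.
import torch
import torch.nn as nn

class TemporalLandmarkTracker(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # stand-in for the FPN/RPN feature extractor: a small per-frame CNN
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)   # (x, y) landmark per frame

    def forward(self, frames):                 # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)              # temporal features across frames
        return self.head(seq)                  # (B, T, 2) landmark coordinates

# usage: coords = TemporalLandmarkTracker()(torch.randn(2, 8, 1, 128, 128))
```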
The coronavirus pandemic, also known as the COVID-19 pandemic, has led to tens of millions of cases and over half a million deaths as of August 2020. Chest CT is an important imaging tool to evaluate the severity of lung involvement, which often correlates with the severity of the disease. Quantitative analysis of CT lung images requires the localization of the infection area on the image, or the identification of the region of interest (ROI). In this study, we propose automatic ROI identification based on a recent feature selection method, the concrete autoencoder, which learns the parameters of concrete distributions from the given data to choose pixels from the images. To improve the discrimination of these features, we propose a discriminative concrete autoencoder (DCA) by adding a classification head to the network. This classification head is used to perform the image classification. We conducted a study with 30 CT image sets from 15 COVID-19-positive and 15 COVID-19-negative cases. When we used the DCA to select the pixels of the suspected area, the classification accuracy was 76.27% for the image sets. Without DCA feature selection, a traditional neural network achieved an accuracy of 69.41% on the same image sets. Hence, the proposed DCA can detect significant features to identify the COVID-19-infected area of the lung. Future work will focus on collecting more data and designing an area selection layer toward group selection.
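The following is a hedged sketch (assuming PyTorch) of the DCA idea: a concrete selector layer chooses k pixels via a Gumbel-Softmax relaxation, a decoder reconstructs the image from them, and an added classification head predicts the class label. The layer sizes, temperature handling, and loss weighting are assumptions, not the published configuration.

```python
# Hedged sketch of a discriminative concrete autoencoder (selector + decoder + classifier).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcreteSelector(nn.Module):
    def __init__(self, n_pixels, k, temperature=1.0):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(k, n_pixels) * 0.01)
        self.temperature = temperature

    def forward(self, x):                       # x: (B, n_pixels)
        # relaxed one-hot selection per selected feature (Gumbel-Softmax)
        sel = F.gumbel_softmax(self.logits, tau=self.temperature, dim=-1)
        return x @ sel.t()                      # (B, k) selected pixels

class DiscriminativeConcreteAE(nn.Module):
    def __init__(self, n_pixels=64 * 64, k=256, n_classes=2):
        super().__init__()
        self.selector = ConcreteSelector(n_pixels, k)
        self.decoder = nn.Sequential(nn.Linear(k, 512), nn.ReLU(),
                                     nn.Linear(512, n_pixels))
        self.classifier = nn.Linear(k, n_classes)   # the added classification head

    def forward(self, x):
        z = self.selector(x)
        return self.decoder(z), self.classifier(z)

# combined objective: reconstruction + classification (the weight alpha is illustrative)
def dca_loss(x, y, model, alpha=0.5):
    recon, logits = model(x)
    return F.mse_loss(recon, x) + alpha * F.cross_entropy(logits, y)
```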
Reconstructing catheters on medical images is a crucial step in high-dose-rate (HDR) brachytherapy for treating prostate cancer. However, manually identifying the catheters is labor-intensive. With its superior soft-tissue contrast, magnetic resonance imaging (MRI) can provide superior anatomic visualization of the prostate gland and its surrounding tissues such as the rectum and the bladder. The use of MRI-guided HDR prostate brachytherapy has increased considerably over the past decades. By incorporating MRI into the prostate brachytherapy procedure, the therapeutic ratio could be improved owing to MRI's capability of differentiating dominant prostatic lesions. However, recognizing multiple catheters in MRI remains challenging because catheters used in routine HDR prostate brachytherapy appear dark and can easily be confused with blood vessels. In this study, we developed a deep learning-based catheter reconstruction method to tackle this challenge. Specifically, a 3D mask scoring regional convolutional neural network was implemented to automatically identify all the catheters in MR images acquired after catheter insertion during HDR prostate brachytherapy. The network was trained using paired MR images and binary catheter annotation images provided by experienced medical physicists as ground truth. After the network was trained, the locations, sizes, and shapes of all the catheters could be predicted given the MR images of a new prostate cancer patient receiving HDR brachytherapy. Quantities including catheter tip and shaft errors were computed to assess our proposed method. Our method detected 164 catheters from 11 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.62±1.83 mm and a catheter shaft error of 0.94±0.52 mm. The proposed multi-catheter reconstruction method is capable of precisely localizing the tips and shafts of catheters in 3D MR images for HDR prostate brachytherapy. It paves the way for elevating the quality and treatment outcome of MRI-guided HDR prostate brachytherapy.
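As an illustration of the evaluation metrics reported above, the sketch below computes a tip error as the Euclidean distance between predicted and reference catheter tips, and a shaft error as the mean nearest-point distance between predicted and reference shaft points; the authors' exact definitions may differ.

```python
# Hedged sketch of catheter tip and shaft error computation (NumPy assumed).
import numpy as np

def tip_error(pred_tip, ref_tip):
    """Distance (mm) between predicted and reference tip coordinates."""
    return float(np.linalg.norm(np.asarray(pred_tip) - np.asarray(ref_tip)))

def shaft_error(pred_shaft, ref_shaft):
    """Mean nearest-point distance (mm) from predicted to reference shaft.

    pred_shaft: (N, 3) predicted shaft points in mm.
    ref_shaft:  (M, 3) reference shaft points in mm.
    """
    pred = np.asarray(pred_shaft)[:, None, :]      # (N, 1, 3)
    ref = np.asarray(ref_shaft)[None, :, :]        # (1, M, 3)
    dists = np.linalg.norm(pred - ref, axis=-1)    # (N, M) pairwise distances
    return float(dists.min(axis=1).mean())
```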
Digitizing all the needles in ultrasound (US) images is a crucial step of treatment planning for US-guided high-dose-rate (HDR) prostate brachytherapy. However, current computer-aided technologies focus largely on single-needle digitization, while manual digitization of all needles is labor-intensive and time-consuming. In this paper, we propose a deep learning-based workflow for fast automatic multi-needle digitization, including needle shaft detection and needle tip detection. The workflow is composed of two components: a large margin Mask R-CNN model (LMMask R-CNN), which adopts a large margin loss to reformulate Mask R-CNN for needle shaft localization, and a needle-based density-based spatial clustering of applications with noise (DBSCAN) algorithm, which integrates needle priors to model one needle per iteration for needle shaft refinement and tip detection. In addition, we use skip connections in the neural network architecture to improve supervision in the hidden layers. Our workflow was evaluated on 23 patients who underwent US-guided HDR prostate brachytherapy, with 339 needles tested in total. Our method detected 98% of the needles with a shaft error of 0.0911±0.0427 mm and a tip error of 0.3303±0.3625 mm. Compared with using Mask R-CNN alone or LMMask R-CNN alone, the proposed method achieves a significant improvement in accuracy of both shaft and tip localization. The proposed method automatically digitizes all needles of a patient within one second. It streamlines the workflow of US-guided HDR prostate brachytherapy and paves the way for the development of a real-time treatment planning system that is expected to further elevate the quality and outcome of HDR prostate brachytherapy.
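To illustrate the clustering and refinement stage, the sketch below (assuming NumPy and scikit-learn) groups candidate shaft voxels into individual needles with DBSCAN, fits each cluster with a straight axis via SVD, and takes the extreme point along that axis as the tip. The eps and min_samples values, and the use of an SVD axis fit, are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch: DBSCAN grouping of candidate needle voxels plus per-needle axis/tip estimation.
import numpy as np
from sklearn.cluster import DBSCAN

def group_and_fit_needles(points, eps=2.0, min_samples=10):
    """points: (N, 3) candidate needle voxel coordinates in mm."""
    points = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    needles = []
    for lab in set(labels) - {-1}:                 # -1 marks noise points
        cluster = points[labels == lab]
        center = cluster.mean(axis=0)
        # principal direction of the cluster approximates the needle axis
        _, _, vt = np.linalg.svd(cluster - center, full_matrices=False)
        axis = vt[0]
        t = (cluster - center) @ axis              # position along the axis
        tip = cluster[np.argmax(t)]                # one extreme point as the tip
        needles.append({"center": center, "axis": axis, "tip": tip})
    return needles
```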
KEYWORDS: Ultrasonography, High dynamic range imaging, Prostate cancer, Prostate, Image segmentation, 3D modeling, 3D image processing, Silver, Binary data
A deep-learning model based on the U-Net architecture was developed to segment multiple needles in 3D transrectal ultrasound (TRUS) images. Attention gates were adopted in our model to improve prediction of the small needle points. Furthermore, the spatial continuity of needles was encoded into our model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with a deep supervision strategy, where binary needle annotation images from simulation CTs were provided as ground truth. The trained network was then used to localize and segment the needles in the TRUS images of a new patient during high-dose-rate (HDR) prostate brachytherapy. The needle shaft and tip errors against the CT-based ground truth were used to evaluate our method, with other methods included for comparison. Our method detected 96% of the 339 needles from 23 HDR prostate brachytherapy patients, with a shaft error of 0.29±0.24 mm and a tip error of 0.442±0.831 mm. For shaft localization, our method localized 96% of needles with less than 0.8 mm error (the needle diameter is 1.67 mm), while for tip localization, it localized 75% of needles with 0 mm error and 21% of needles with 2 mm error (the TRUS image slice thickness is 2 mm). No significant difference was observed (p = 0.83) in tip localization between our results and the ground truth. Compared with U-Net and deeply supervised attention U-Net, the proposed method delivers a significant improvement in both shaft and tip error. To the best of our knowledge, this is the first attempt at multi-needle localization in prostate brachytherapy. The 3D rendering of the needles could help clinicians evaluate needle placement. It paves the way for the development of real-time radiation plan dose assessment tools that can further elevate the quality and outcome of prostate HDR brachytherapy.
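As a sketch of how the TV regularization described above can be attached to a segmentation objective, the function below (assuming PyTorch) computes a 3D total variation penalty on the predicted needle probability map; the weighting against the segmentation loss is an assumption.

```python
# Hedged sketch: 3D total variation penalty encouraging spatial continuity of the needle mask.
import torch

def tv_loss_3d(prob):
    """prob: (B, 1, D, H, W) predicted needle probability map."""
    dz = (prob[:, :, 1:, :, :] - prob[:, :, :-1, :, :]).abs().mean()
    dy = (prob[:, :, :, 1:, :] - prob[:, :, :, :-1, :]).abs().mean()
    dx = (prob[:, :, :, :, 1:] - prob[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

# illustrative combined objective: total = seg_loss + lambda_tv * tv_loss_3d(pred)
```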
KEYWORDS: Associative arrays, 3D image processing, Prostate, Ultrasonography, Cancer, Visualization, 3D acquisition, Detection and tracking algorithms, Reconstruction algorithms, Prostate cancer
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images containing a needle, disregarding the massive database of US images without needles. In this paper, we propose a multi-needle detection workflow that uses the images without needles as auxiliary data. Specifically, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, for which we developed an enhanced sparse dictionary learning method, dubbed order-graph regularized dictionary learning (ORDL), that integrates the spatial continuity of 3D US. Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to determine the needle centers. With the obtained centers, regions of interest (ROIs) are constructed by seeking cylinders. Finally, we detect needles by applying the random sample consensus algorithm (RANSAC) in each ROI and then locate the tips by finding the sharp intensity drop along the detected axis of every needle. Extensive experiments were conducted on a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of our proposed workflow. Specifically, our approach correctly detected 95% of the needles with a tip location error of 1.01 mm on the prostate dataset. This technique could provide accurate needle detection for US-guided high-dose-rate prostate brachytherapy and facilitate the clinical workflow.
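The per-ROI RANSAC step can be sketched as follows (assuming NumPy): two candidate points are repeatedly sampled, a line is fit through them, and the line with the most inliers is kept. The iteration count and inlier threshold are illustrative, not the paper's settings.

```python
# Hedged sketch: RANSAC fit of a 3D needle axis to candidate points within one ROI.
import numpy as np

def ransac_line_3d(points, n_iters=500, inlier_thresh=1.0, rng=None):
    """points: (N, 3) candidate needle points in one ROI (mm)."""
    points = np.asarray(points, dtype=float)
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        direction = p2 - p1
        norm = np.linalg.norm(direction)
        if norm < 1e-6:
            continue                                   # degenerate sample
        direction /= norm
        # point-to-line distance for every candidate point
        rel = points - p1
        dists = np.linalg.norm(rel - np.outer(rel @ direction, direction), axis=1)
        inliers = dists < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p1, direction)
    return best_model, best_inliers                    # (point, direction), inlier mask
```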
KEYWORDS: Computed tomography, Associative arrays, 3D image processing, Prostate, 3D modeling, Process modeling, Reconstruction algorithms, Ultrasonography, High dynamic range imaging, Visualization
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided prostate high-dose-rate (HDR) brachytherapy. In this paper, we propose a workflow for multi-needle detection in 3D US images with the corresponding CT images used for supervision. Since the CT images do not exactly match the US images, we propose a novel sparse model, dubbed Bidirectional Convolutional Sparse Coding (BiCSC), to tackle this weakly supervised problem. BiCSC extracts latent features from US and CT and then formulates a relationship between them in which the features learned from US conform to the features from CT. The resulting images allow for clear visualization of the needles while reducing image noise and artifacts. On the reconstructed US images, a clustering algorithm is employed to find the cluster centers, which correspond to the true needle positions. Finally, the random sample consensus algorithm (RANSAC) is used to model a needle in each region of interest (ROI). Experiments were conducted on prostate image datasets from 10 patients. Visualization and quantitative results show the efficacy of our proposed workflow. This learning-based technique could provide accurate needle detection for US-guided HDR prostate brachytherapy and further enhance the clinical workflow.
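As a rough stand-in for the center-finding step on the reconstructed residuals, the sketch below (assuming SciPy) thresholds each residual slice and takes connected-component centroids as candidate needle centers; the paper's actual clustering algorithm and threshold are not specified here, so this is only an assumption-laden illustration.

```python
# Hedged sketch: slice-wise candidate needle centers from reconstruction residuals.
import numpy as np
from scipy import ndimage

def slice_centers(residual_slice, threshold=0.5):
    """residual_slice: 2D array of reconstruction residual magnitudes."""
    mask = np.asarray(residual_slice) > threshold
    labels, n = ndimage.label(mask)                     # connected components
    # centroid of each component serves as a candidate needle center
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))
```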