A routine 3D transrectal ultrasound (TRUS) volume is usually captured with a large slice thickness (e.g., 2-5 mm). The resulting low out-of-slice resolution hampers contouring and needle/seed detection in prostate brachytherapy. The purpose of this study is to develop a deep-learning-based method to construct high-resolution images from routinely captured prostate ultrasound images for brachytherapy. We propose to integrate a deeply supervised attention model into a Generative Adversarial Network (GAN)-based framework to improve ultrasound image resolution. The deep attention GAN enables end-to-end encoding-and-decoding learning, with an attention model retrieving the most relevant information from the encoder. A residual network learns the difference between low- and high-resolution images. The technique was validated with 20 patients using leave-one-out cross-validation. High-resolution TRUS images reconstructed from down-sampled volumes were compared with the original images for quantitative evaluation. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) of image intensity profiles between reconstructed and original images were 6.5 ± 0.5 and 38.0 ± 2.4 dB, respectively.
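To make the architecture description concrete, the following PyTorch sketch shows an attention gate that weights encoder features by relevance before they are reused by the decoder, a generator that predicts only the residual between the low-resolution input and the high-resolution target, and the MAE and PSNR metrics used for evaluation. Layer widths, module names, and the exact wiring are illustrative assumptions rather than the authors' published implementation.

```python
# Illustrative sketch only: layer sizes, names, and wiring are assumptions,
# not the published network.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Weights encoder (skip) features by relevance to the decoder's gating signal."""

    def __init__(self, enc_channels, gate_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv3d(enc_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv3d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv3d(inter_channels, 1, kernel_size=1)

    def forward(self, enc_feat, gate):
        # Bring the gating signal to the encoder feature's spatial size, then
        # compute a voxel-wise attention map in [0, 1].
        g = F.interpolate(self.phi(gate), size=enc_feat.shape[2:],
                          mode="trilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(enc_feat) + g)))
        return enc_feat * attn  # pass only the most relevant encoder information


class ResidualAttentionGenerator(nn.Module):
    """Shallow encoder-decoder that learns the difference between LR and HR volumes."""

    def __init__(self, channels=1, width=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(channels, width, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU())
        self.gate = AttentionGate(width, 2 * width, width)
        self.up = nn.ConvTranspose3d(2 * width, width, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(2 * width, width, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(width, channels, 3, padding=1))

    def forward(self, lr_volume):
        e1 = self.enc1(lr_volume)
        e2 = self.enc2(e1)
        skip = self.gate(e1, e2)              # attention-weighted skip connection
        d = self.up(e2)
        residual = self.dec(torch.cat([skip, d], dim=1))
        return lr_volume + residual           # add the learned LR/HR difference back


def mae_and_psnr(recon, original, data_range=255.0):
    """Mean absolute error and peak signal-to-noise ratio between two image volumes."""
    err = recon.astype(np.float64) - original.astype(np.float64)
    mae = float(np.abs(err).mean())
    psnr = float(10.0 * np.log10(data_range ** 2 / np.mean(err ** 2)))
    return mae, psnr
```

Predicting only the residual lets the generator focus its capacity on the missing high-frequency detail, since the low-resolution input already carries most of the image content; the attention gate suppresses encoder features that are irrelevant to that detail.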