Coronary heart disease is a leading cause of death. To treat arterial stenosis caused by the accumulation of atheromatous plaques, stents are implanted to support the narrowed vessel. The relative position between the stent and the vascular wall is a critical factor in evaluating the treatment. However, the low signal-to-noise ratio (SNR) of fluoroscopy sequences makes it difficult for physicians to observe stents clearly. To improve stent clarity effectively, this paper describes a novel deep-neural-network algorithm for stent marker detection that enables time-domain stacking. First, a response map generation model with a weighted loss is designed to concentrate on small-object detection, where annotations are heavily imbalanced between background and targets. In addition, a focus conversion learning algorithm based on a deblurring network is proposed to sharpen edges and improve spatial resolution, reducing the influence of the focus size. The method locates marker pairs successfully in both phantom and clinical images, achieving an 84.04% correct detection rate in marker detection and reducing the mean squared error in the focus conversion step. Quantitative comparison and visual inspection show that the proposed algorithm effectively enhances stents without manual annotation, providing assistance for accurate treatment evaluation.
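The weighted loss for imbalanced response-map regression can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the function name, the squared-error form, and the `fg_weight` value below are assumptions: rare foreground (marker) pixels are up-weighted so the model cannot trivially minimize the loss by predicting background everywhere.

```python
import numpy as np

def weighted_response_loss(pred, target, fg_weight=10.0):
    """Pixel-wise weighted squared error for response-map regression.

    `pred` and `target` are 2D response maps in [0, 1]. Foreground
    (marker) pixels, which are rare, are up-weighted by `fg_weight`
    so background pixels do not dominate the loss. All names and
    values here are illustrative, not taken from the paper.
    """
    # Per-pixel weight: fg_weight where the target marks a marker, 1 elsewhere
    weights = np.where(target > 0.5, fg_weight, 1.0)
    return float(np.mean(weights * (pred - target) ** 2))
```

With `fg_weight=10.0`, a missed marker pixel costs ten times as much as an equally wrong background pixel, which is one common way to counter the background/target imbalance the abstract mentions.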
Radiation dose is an important consideration for x-ray fluoroscopy imaging with interventional C-arm systems. Low-dose imaging is always desirable, but it results in noisy images, making noise reduction an important topic for fluoroscopy. Recently, advances in deep learning have achieved outstanding denoising results for x-ray images. However, most existing methods denoise 2D images frame by frame independently, and removing temporal noise from image sequences remains a challenging problem. Our goal is to reduce both spatial and temporal noise in fluoroscopic image sequences within a unified framework. In this paper, we propose a deep learning algorithm that extensively utilizes temporal information to maximize the efficiency of noise reduction. The proposed convolutional neural network (CNN) is based on DenseNet and DnCNN but with improved multi-channel input layers for image sequences. This architecture not only enables spatial-domain learning from each individual frame, but also makes full use of temporally correlated information among adjacent frames for temporal-domain learning. To further suppress temporal noise, which appears as visual flicker in the image sequence, an additional term is introduced into the network loss function: besides the two conventional terms of L2 and perceptual losses, the newly proposed term penalizes the statistical variance of the network output caused by random temporal variation in the imaging. The developed algorithm is evaluated on fluoroscopic phantom images and clinical patient data, showing superior spatio-temporal denoising performance.
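The loss described above combines a per-frame fidelity term with a penalty on frame-to-frame variation of the result. The abstract does not spell out the exact formula, so the sketch below is an assumption: it keeps the L2 term, omits the perceptual term for brevity, and models the temporal-variance term as the variance of the residual across adjacent frames; `lam_var` is a hypothetical weighting hyperparameter.

```python
import numpy as np

def spatio_temporal_loss(preds, targets, lam_var=0.1):
    """Sketch of a combined spatio-temporal denoising loss.

    `preds` and `targets` are (T, H, W) stacks of T adjacent denoised
    frames and their references. The perceptual term from the paper is
    omitted here; the temporal term penalizes the per-pixel variance of
    the residual across frames, which corresponds to visible flicker.
    """
    residual = preds - targets
    l2 = np.mean(residual ** 2)                     # spatial fidelity
    temporal_var = np.mean(np.var(residual, axis=0))  # flicker penalty
    return float(l2 + lam_var * temporal_var)
```

A residual that is large but constant across frames is penalized only by the L2 term, while a residual that fluctuates from frame to frame additionally pays the variance penalty, which is the intuition behind suppressing temporal flicker.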