Accurate lung nodule localization during Video-Assisted Thoracic Surgery (VATS) for the treatment of early-stage lung cancer is a surgical challenge. Recently, a new minimally invasive approach for nodule localization during VATS has been proposed, which consists in using a biomechanical model to compensate for the very large lung deformations occurring before and during surgery. Estimating these deformations makes it possible to transfer the position of the nodule, visible on the preoperative CT, to an intraoperative acquisition of the lung performed with a Cone-Beam CT scanner (CBCT2). However, this approach requires an additional CBCT acquisition (CBCT1) just after the patient is placed in the operative position, in order to estimate the deformations due to the change of the patient's position, from supine during the CT acquisition to lateral decubitus in the operating room. Our goal is to simplify this procedure and thus reduce the radiation dose delivered to the patient. To this end, we propose to replace the CBCT1 acquisition by a model that predicts these deformations. This model combines the lung state information extracted from CBCT2 with a general statistical motion model built from the position-change deformations already observed on other patients. Data from 17 patients are available, and the method is evaluated with a leave-one-out cross-validation on its ability to reproduce the observed deformations. It reduces the average prediction error from 12.12 mm without compensation to 8.09 mm when predicting with the average deformation, and to 6.33 mm when fitting our model to CBCT2 only.
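A minimal sketch of the kind of statistical motion model the abstract describes: deformation fields from other patients, resampled to a common template and flattened to vectors, are decomposed by PCA, and the model is then fitted to the partial lung state observable in CBCT2 by least squares on the mode coefficients. The function and variable names (train_deformations, observed_idx, observed_values) are illustrative assumptions, not the authors' implementation.

import numpy as np

def build_model(train_deformations):
    """PCA model from the deformations observed on other patients
    (array of shape n_patients x n_dofs)."""
    mean = train_deformations.mean(axis=0)
    centered = train_deformations - mean
    # principal deformation modes via SVD; rows of vt are the modes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt, s

def predict(mean, modes, observed_idx, observed_values, n_modes=5):
    """Fit the first modes to the partial state extracted from CBCT2
    (least squares on the observed components), then predict the full field."""
    A = modes[:n_modes, observed_idx].T           # (n_observed, n_modes)
    b = observed_values - mean[observed_idx]      # residual w.r.t. mean motion
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ modes[:n_modes]        # predicted deformation vector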
Image-guided thermal ablations have become an important therapeutic option for patients with cardiac arrhythmias: they are minimally invasive and provide better and faster patient recovery. However, to enhance ablation guidance, the therapist needs to register the intraoperative images to the high-resolution preoperative anatomical imaging, in which the ablation path has been defined. In this work, we present a convolutional neural network (CNN) framework for transesophageal ultrasound/computed tomography image registration, addressing the high computation time of classical iterative methods, which makes them unsuitable for real-time applications. We propose the following process: the input moving and fixed image pairs are first passed through a siamese architecture of convolutional layers, extracting feature maps of the moving and fixed images analogous to dense local descriptors; the feature maps are then matched; and the resulting correspondence map is finally passed to a registration network, which directly outputs the parameter set of the rigid registration. Accuracy of the registration is quantified by the Target Registration Error (TRE) for specific anatomical landmarks. The registration process achieves a median TRE of 2.2 mm over all fiducial points, with a computation time of around 3 ms, compared to around 70 seconds per image pair for classical iterative methods. In future work, we will extend our approach to 2D/3D learning-based registration to refine the estimation of the transesophageal probe pose in the 3D preoperative volume.
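A minimal PyTorch sketch of the pipeline the abstract outlines: a shared ("siamese") convolutional encoder, a matching step, and a small regression head that outputs the rigid-transform parameters. The layer sizes, the simple element-wise correlation standing in for the paper's matching step, and the 6-parameter output (3 rotations + 3 translations) are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class RigidRegNet(nn.Module):
    def __init__(self, n_params=6):
        super().__init__()
        # one encoder, applied to both images = shared (siamese) weights
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_params),              # rigid registration parameters
        )

    def forward(self, moving, fixed):
        f_m, f_f = self.encoder(moving), self.encoder(fixed)
        corr = f_m * f_f                          # crude correspondence map
        return self.head(corr)

# usage: params = RigidRegNet()(us_batch, ct_batch)  # shape (batch, 6)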
In this paper, we address the particularly challenging problem of calibrating a stereo pair of low-resolution (80 × 60) thermal cameras. We propose a new calibration method for such a setup, based on sub-pixel image analysis of an adequate calibration pattern and on bootstrap methods. The experiments show that the method achieves robust calibration with a quarter-pixel re-projection error for an optimal set of 35 input stereo pairs of the calibration pattern, notably outperforming the standard OpenCV stereo calibration procedure.
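For reference, a minimal OpenCV sketch of the standard stereo calibration procedure the paper compares against; the sub-pixel pattern analysis and the bootstrap selection of the optimal 35 pairs are not reproduced here. The inputs (obj_pts, left_pts, right_pts) are the usual per-view 3D pattern points and detected 2D corners.

import cv2

size = (80, 60)                                   # low-resolution thermal frames

def calibrate_stereo(obj_pts, left_pts, right_pts):
    # calibrate each camera individually first
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # then refine jointly and recover the extrinsic relation (R, T)
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_ASPECT_RATIO)
    return rms, (K1, d1), (K2, d2), (R, T)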
Video-Assisted Thoracoscopic Surgery (VATS) is a promising surgical treatment for early-stage lung cancer. With respect to standard thoracotomy, it is less invasive and provides better and faster patient recovery. However, a main issue is the accurate localization of small, subsolid nodules. While intraoperative Cone-Beam CT (CBCT) images can be acquired, they cannot be directly compared with preoperative CT images because of the very large lung deformations occurring before and during surgery. This paper focuses on quantifying the deformations due to the change of patient positioning, from supine during CT acquisition to lateral decubitus in the operating room. A method is first introduced to segment the lung cavity in both CT and CBCT. The images are then registered in three steps: an initial alignment, followed by rigid registration and finally non-rigid registration, from which the deformations are measured. Accuracy of the registration is quantified by the Target Registration Error (TRE) between paired anatomical landmarks. Registration errors are on the order of 1.01 mm in median, with minimum and maximum errors of 0.35 mm and 2.34 mm. Deformations of the parenchyma were measured to be up to 14 mm, and approximately 7 mm on average over the whole lung structure. While this study is only a first step towards image-guided therapy, it highlights the importance of accounting for lung deformation between preoperative and intraoperative images, which is crucial for intraoperative nodule localization.
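A minimal SimpleITK sketch of the three-step CT/CBCT registration described above (initial alignment, rigid, then non-rigid). The metric, optimizer, and B-spline grid settings are illustrative assumptions, not the paper's exact configuration.

import SimpleITK as sitk

def register(fixed, moving):
    # 1. initial alignment from the images' moments (centers of mass)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.MOMENTS)

    # 2. rigid registration with mutual information (multi-modal CT/CBCT)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(init)
    rigid = reg.Execute(fixed, moving)

    # 3. non-rigid (B-spline) registration initialized by the rigid result
    bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
    reg.SetMovingInitialTransform(rigid)
    reg.SetInitialTransform(bspline)
    return reg.Execute(fixed, moving)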
The finite mixture model based on the Gaussian distribution is a flexible and powerful tool for image segmentation. However, the intensity distributions of ultrasound images are non-symmetric, whereas the Gaussian distribution is symmetric. In this study, a new finite bounded Rayleigh mixture model is proposed. One advantage of the proposed model is that the Rayleigh distribution is non-symmetric and can therefore fit the shape of medical ultrasound data. Another advantage is that each bounded component of the proposed model is well suited to ultrasound image segmentation. We apply the bounded Rayleigh mixture model to improve accuracy and reduce computation time. Experiments show that the proposed model outperforms state-of-the-art methods in both computation time and accuracy.
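A minimal sketch of EM for a plain Rayleigh mixture on pixel intensities (flattened to a 1-D array x), to illustrate the kind of model the abstract builds on; the bounded-support normalization that distinguishes the proposed model is omitted for brevity.

import numpy as np

def rayleigh_pdf(x, sigma2):
    return (x / sigma2) * np.exp(-x**2 / (2 * sigma2))

def em_rayleigh_mixture(x, k=3, n_iter=50):
    pi = np.full(k, 1.0 / k)
    sigma2 = np.linspace(x.var() / k, x.var(), k)     # crude initialization
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        resp = pi * rayleigh_pdf(x[:, None], sigma2)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: Rayleigh MLE  sigma^2 = sum(g * x^2) / (2 * sum(g))
        nk = resp.sum(axis=0)
        sigma2 = (resp * x[:, None]**2).sum(axis=0) / (2 * nk)
        pi = nk / len(x)
    return pi, sigma2, resp.argmax(axis=1)            # labels = segmentation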
In prostate cancer external beam radiotherapy, pelvic structures must be identified in computed tomography (CT) images for treatment planning, a task performed manually by experts. Manual delineation of the prostate in CT is time-consuming and prone to observer variability. We propose a fully automated process combining Random Forests (RF) classification and Spherical Harmonics (SPHARM) to identify the prostate boundaries. The proposed method outperformed a classical atlas-based approach from the literature. Combining RF to detect the prostate with SPHARM for shape regularization provided promising results for automatic prostate segmentation.
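A minimal scikit-learn sketch of the voxel-wise Random Forests step; the features and labels here are synthetic placeholders, and the SPHARM regularization of the resulting boundary is not shown.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                    # per-voxel feature vectors (toy)
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # toy prostate/background labels

clf = RandomForestClassifier(n_estimators=100, max_depth=10, n_jobs=-1)
clf.fit(X, y)
prob = clf.predict_proba(X)[:, 1]                 # prostate probability per voxel
# A SPHARM fit to the thresholded probability map would then regularize the shape.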
KEYWORDS: Visualization, Image quality, Medical imaging, Data modeling, Taxonomy, Image visualization, Visual process modeling, Data visualization, Visual system, Medicine
Among the several stages of medical imaging (acquisition, reconstruction, etc.), visualization is the last one, on which decisions are generally based. Scientific visualization tools transform complex data into a visible and understandable graphical form, the goal being to provide new insight. Although the evaluation of procedures is a crucial issue and a main concern in medicine, visualization techniques, predominantly in three-dimensional imaging, have paradoxically not been the subject of many evaluation studies. This is perhaps because the visualization process involves the human visual and cognitive systems, which makes evaluation especially difficult. However, as elsewhere in medical imaging, the quality evaluation of a specific visualization remains a major challenge. While a few studies on specific cases have already been published, there is still a great need for defining and systematizing evaluation methodologies. The goal of our study is to propose such a framework, making it possible to take into account all the parameters involved in the evaluation of a visualization technique. Concerning quality evaluation in data visualization in general, and in medical data visualization in particular, three concepts appear fundamental: the type and level of the components used to convey the information contained in the data to the user, the type and level at which evaluation can be performed, and the methodologies used to perform such evaluation. We propose a taxonomy involving the types of methods that can be used to perform evaluation at different levels.
The visual analysis of stereoelectroencephalographic (SEEG) signals in their anatomical context aims at understanding the spatio-temporal dynamics of epileptic processes. The magnitude of these signals may be encoded by graphical glyphs, which has a direct impact on the perception of the values. Our study is devoted to evaluating the quantitative visualization of these signals, specifically the influence of the glyphs' coding scheme on the understanding and analysis of the signals. This work describes an experiment conducted with human observers to evaluate three different coding schemes used to visualize the magnitude of SEEG signals in their 3D anatomical context. We investigated whether any of these coding schemes allows better performance of the human observers in two respects: accuracy and speed. A protocol was developed to measure these aspects. The results presented in this work were obtained from 40 human observers. The three coding schemes were first compared through an Exploratory Data Analysis (EDA); the statistical significance of this comparison was then established using nonparametric methods. The influence of several other factors on the observers' performance was also investigated.
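A minimal SciPy sketch of the kind of nonparametric comparison described above: per-scheme accuracy scores compared with a Kruskal-Wallis test. The data here are synthetic placeholders; the paper's actual measurements and choice of tests are not reproduced.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# toy accuracy scores for 40 observers under each of the three coding schemes
scheme_a, scheme_b, scheme_c = (rng.normal(loc=m, scale=0.1, size=40)
                                for m in (0.80, 0.75, 0.70))
h, p = stats.kruskal(scheme_a, scheme_b, scheme_c)
print(f"H = {h:.2f}, p = {p:.4f}")                # significance of scheme effect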