Lobectomy is a common and effective procedure for treating early-stage lung cancers. However, for patients with compromised pulmonary function (e.g., COPD), lobectomy can lead to major postoperative pulmonary complications. A technique for quantitatively predicting postoperative pulmonary function is needed to assist surgeons in assessing a candidate's suitability for lobectomy. We present a framework for quantitatively predicting postoperative lung physiology and function using a combination of lung biomechanical modeling and machine learning. A set of 10 patients undergoing lobectomy was used for this purpose. The image input consisted of pre- and postoperative breath-hold CTs. An automated lobe segmentation and lobectomy simulation framework was developed using a constrained generative adversarial network (GAN) approach. Using the segmented lobes, a patient-specific, GPU-based linear elastic biomechanical and airflow model with surgery simulation was then assembled to quantitatively predict lung deformation during the forced expiration maneuver. The lobe in question was then removed by simulating a volume reduction and computing the elastic stress on the surrounding residual lobes and the chest wall. Using the deformed lung anatomy representing the postoperative lung geometry, the forced expiratory volume in 1 second (FEV1, the amount of air exhaled in the first second of forced expiration starting from maximum inhalation) and the forced vital capacity (FVC, the total amount of air forcibly exhaled from maximum inhalation) were then modeled. Our results demonstrate that the proposed approach quantitatively predicts the postoperative lobe-wise lung function in terms of FEV1 and FEV1/FVC.
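The two spirometric indices referenced above can be computed directly from an exhaled-volume curve. A minimal Python sketch follows; the sampling scheme, interpolation, and the exponential toy curve are illustrative assumptions, not the paper's implementation:

```python
import math

def fev1_fvc(times, volumes):
    """Compute FEV1 and FVC from a cumulative exhaled-volume curve.

    times   : seconds since the start of forced expiration (ascending)
    volumes : cumulative exhaled volume in liters at each time point
    """
    fvc = volumes[-1]  # total volume exhaled by the end of the maneuver
    # FEV1: exhaled volume at t = 1 s, linearly interpolated between samples
    fev1 = None
    for i in range(1, len(times)):
        if times[i] >= 1.0:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = volumes[i - 1], volumes[i]
            fev1 = v0 + (v1 - v0) * (1.0 - t0) / (t1 - t0)
            break
    if fev1 is None:  # maneuver shorter than 1 s
        fev1 = fvc
    return fev1, fvc

# Toy exponential-emptying curve: 4 L vital capacity, 0.5 s time constant
times = [i * 0.1 for i in range(61)]                       # 0 .. 6 s
volumes = [4.0 * (1.0 - math.exp(-t / 0.5)) for t in times]
fev1, fvc = fev1_fvc(times, volumes)
ratio = fev1 / fvc
```

The FEV1/FVC ratio used in the lobe-wise comparison is then simply the quotient of the two values.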
Adaptive radiotherapy is an effective procedure for the treatment of cancer, in which the patient's daily anatomical changes are quantified and the dose delivered to the tumor is adapted accordingly. Deformable image registration (DIR) inaccuracies, together with delays in retrieving on-board cone-beam CT (CBCT) datasets from the treatment system and registering them with the planning kilovoltage CT (kVCT), have restricted the adaptive workflow to a small number of patients. In this paper, we present an approach for improving DIR accuracy using machine learning coupled with biomechanically guided validation. For a given set of 11 planning prostate kVCT datasets and their segmented contours, we first assembled a biomechanical model to generate synthetic abdominal motions, bladder volume changes, and physiological regression. For each synthetic CT dataset, we then injected noise and artifacts into the images using a novel procedure designed to closely mimic CBCT datasets. The simulated CBCT images were then used to train neural networks that predicted the noise- and artifact-removed CT images. For this purpose, we employed a constrained generative adversarial network (cGAN) consisting of two deep neural networks, a generator and a discriminator: the generator produced the artifact-removed CT images, while the discriminator scored their accuracy. The DIR results were finally validated using model-generated landmarks. Results showed that the artifact-removed CT matched the planning CT closely. Comparisons using image similarity metrics yielded a normalized cross correlation of >0.95 for the cGAN-based image enhancement, and when DIR was performed, the landmarks matched within 1.1 +/- 0.5 mm. This demonstrates that adversarial DNN-based CBCT enhancement improves DIR accuracy and thereby bolsters the adaptive radiotherapy workflow.
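The normalized cross correlation reported above is a standard image similarity metric and is straightforward to compute. A minimal pure-Python sketch, where the "planning CT" and "artifact-removed CT" patches are synthetic stand-ins:

```python
import math

def ncc(a, b):
    """Normalized cross correlation between two equal-size images (flat intensity lists)."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# Stand-in "planning CT" patch and a mildly corrupted "artifact-removed" version
clean = [float(i % 7) for i in range(100)]
noisy = [v + 0.1 * ((i * 37) % 5 - 2) for i, v in enumerate(clean)]
score = ncc(clean, noisy)
```

A score near 1.0, as in the >0.95 figure quoted above, indicates that the two images agree up to an affine intensity mapping.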
KEYWORDS: Optical coherence microscopy, Optical coherence tomography, Microelectromechanical systems, Image resolution, Scanners, 3D image processing, 3D metrology, In vivo imaging, Real time imaging, Metrology, Microscopes, Image processing, Graphics processing units
Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critical for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus and enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that provides a 12-fold reduction in volume and a 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging, noninvasive real-time imaging with histologic resolution, GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm3, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.
KEYWORDS: Image registration, Lung, Algorithms, Optimization (mathematics), Monte Carlo methods, Radio optics, Detection and tracking algorithms, Adaptive optics, Optical flow, Radiotherapy, Data modeling, Motion models, 3D modeling, Annealing
Deformable image registration (DIR) is an important step in radiotherapy treatment planning, and an optimal set of input registration parameters is critical to achieving the best registration performance with a given algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling (FSA-AMC) algorithm was investigated for solving this complex non-convex parameter optimization problem. The registration error for a given parameter set was computed as the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization were performed using 4DCT datasets generated with breathing motion models as well as open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimal optical-flow parameters, closely matching the best registration parameters obtained with an exhaustive parameter search.
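The core of a fast simulated annealing search with Monte Carlo candidate sampling can be sketched generically. The schedule, proposal rule, and toy objective below are illustrative assumptions standing in for the mTRE, not the paper's FSA-AMC implementation:

```python
import math
import random

def anneal(objective, x0, lo, hi, iters=3000, t0=1.0, seed=7):
    """Fast simulated annealing over one bounded parameter.

    A generic stand-in for tuning a DIR parameter against a registration
    error metric; the paper's adaptive sampling is not reproduced here.
    """
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    for k in range(1, iters + 1):
        t = t0 / k                          # fast-annealing schedule: T_k = T_0 / k
        step = max((hi - lo) * t, 0.3)      # proposal width shrinks with temperature
        cand = min(hi, max(lo, x + rng.uniform(-step, step)))
        fc = objective(cand)
        # Metropolis rule: keep improvements, occasionally accept uphill moves
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Toy non-convex "registration error" whose global minimum lies near x = 2
def toy_error(x):
    return (x - 2.0) ** 2 + 0.3 * math.sin(8.0 * x)

x_opt, f_opt = anneal(toy_error, x0=-4.0, lo=-5.0, hi=5.0)
```

The large early proposal widths let the search escape the sinusoidal ripples that would trap a purely local optimizer, which is the property that makes annealing suitable for the non-convex parameter landscape described above.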
Breast radiation therapy is typically delivered with the patient in either the supine or the prone position. Each of these positioning approaches has limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that combines the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that supports both the design of the breast immobilization device and a control module for the device during everyday positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage, and a GPU-based deformable model is then generated for the breast. A mass-spring-damper approach is employed for the deformable model, with the springs modeled to represent hyperelastic tissue behavior. Each voxel of the CT scan is associated with a mass element, which gives the model its high-resolution nature. The subject-specific elasticity is then estimated from a CT scan in the prone position. Our results show that the model can compute more than 60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer-designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution, and the reproducibility of the desired tumor location.
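A single element of a mass-spring-damper model can be advanced with a semi-implicit Euler step. The sketch below uses a linear Hookean spring as a simplified stand-in for the hyperelastic spring law described above, with illustrative parameter values:

```python
def msd_step(x, v, rest, k, c, m, dt):
    """One semi-implicit Euler step of a 1-D mass-spring-damper element.

    x, v : current displacement and velocity of the mass element
    rest : rest position of the spring
    k, c : spring stiffness and viscous damping coefficient
    m    : lumped mass associated with one voxel element
    """
    f = -k * (x - rest) - c * v  # Hooke spring force plus viscous damping
    a = f / m
    v = v + a * dt               # update velocity first (semi-implicit)
    x = x + v * dt               # then advance position with the new velocity
    return x, v

# Pull one element 5 mm off its rest position and let it settle over 0.5 s
x, v = 5.0, 0.0
for _ in range(5000):
    x, v = msd_step(x, v, rest=0.0, k=50.0, c=4.0, m=0.01, dt=1e-4)
```

Updating the velocity before the position keeps the explicit integration stable at small time steps, which matters when thousands of voxel-attached elements must be stepped at more than 60 deformations per second.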
Fast, robust, nondestructive 3D imaging is needed for characterization of microscopic structures in industrial and clinical applications. A custom micro-electromechanical system (MEMS)-based 2D scanner system was developed to achieve 55 kHz A-scan acquisition in a Gabor-domain optical coherence microscopy (GD-OCM) instrument with a novel multilevel GPU architecture for high-speed imaging. GD-OCM yields high-definition volumetric imaging with dynamic depth of focusing through a bio-inspired liquid lens-based microscope design, which has no moving parts and is suitable for use in a manufacturing setting or in a medical environment. A dual-axis MEMS mirror was chosen to replace two single-axis galvanometer mirrors; as a result, the astigmatism caused by the mismatch between the optical pupil and the scanning location was eliminated and a 12x reduction in volume of the scanning system was achieved. Imaging at an invariant resolution of 2 μm was demonstrated throughout a volume of 1 × 1 × 0.6 mm3, acquired in less than 2 minutes. The MEMS-based scanner resulted in improved image quality, increased robustness and lighter weight of the system – all factors that are critical for on-field deployment. A custom integrated feedback system consisting of a laser diode and a position-sensing detector was developed to investigate the impact of the resonant frequency of the MEMS and the driving signal of the scanner on the movement of the mirror. Results on the metrology of manufactured materials and characterization of tissue samples with GD-OCM are presented.
We have developed a cellular-resolution imaging modality, Gabor-domain optical coherence microscopy (GD-OCM), which combines the high lateral resolution of confocal microscopy with the high sectioning capability of optical coherence tomography to image deep layers in tissue with high contrast and a volumetric resolution of 2 μm. A novelty of the custom microscope is its biomimetic design, which incorporates a liquid lens, as in whales' eyes, for robust and rapid volumetric imaging of layers down to 2 mm deep in tissue, thus overcoming the tradeoff between lateral resolution and depth of focus. The system incorporates a handheld scanning optical imaging head and fits on a movable cart, offering flexibility across biomedical applications and clinical settings, including ophthalmology. In the latter, the microscope has successfully revealed micro-structures within the cornea, in particular the endothelial cell microenvironment, an important step in understanding the mechanisms of Fuchs' dystrophy, a leading cause of the loss of corneal transparency. The system was also able to image the edge of soft contact lenses in high definition, which is important for lens fitting and patient comfort. Overall, the imaging modality provides the opportunity to observe the three-dimensional features of different structures at micrometer resolution, opening a wide range of future applications.
A PMMA-based plastic optical fibre sensor for use in real-time radiotherapy dosimetry is presented. The optical fibre tip is coated with a scintillation material, terbium-doped gadolinium oxysulfide (Gd2O2S:Tb), which fluoresces when exposed to ionising radiation (X-rays). The emitted visible light is captured by the sensor optical fibre and propagates along the transmitting fibre, at the end of which it is remotely monitored using a fluorescence spectrometer. The results demonstrate good repeatability, with a maximum percentage error of 0.5%, and a response that is independent of dose rate.
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
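The per-GPU task assignment described above can be sketched as dealing fixed-size A-scan batches to per-GPU work queues. The batching scheme, batch size, and names below are illustrative, not the authors' control mechanism:

```python
def partition_ascans(n_ascans, n_gpus, batch):
    """Split n_ascans into contiguous batches and deal them to GPUs round-robin.

    Returns one list of (start, end) index ranges per GPU.
    """
    queues = [[] for _ in range(n_gpus)]
    start, i = 0, 0
    while start < n_ascans:
        end = min(start + batch, n_ascans)
        queues[i % n_gpus].append((start, end))
        start, i = end, i + 1
    return queues

# 1000 x 1000 A-scans dealt to 4 GPUs in batches of 125,000 A-scans each
queues = partition_ascans(1000 * 1000, n_gpus=4, batch=125000)
```

In a real pipeline the batch size would be tuned per GPU, as the abstract notes, so that each device's memory and core throughput are saturated; round-robin dealing is just the simplest load-balancing choice for equal devices.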
The emergence of several trends, including the increased availability of wireless networks, the miniaturization of electronics and sensing technologies, and novel input and output devices, is creating a demand for integrated, full-time displays for use across a wide range of applications, including collaborative environments. In this paper, we present and discuss emerging visualization methods we are developing, particularly as they relate to deployable displays and displays worn on the body to support mobile users, as well as optical imaging technology that may be coupled to 3D visualization in the context of medical training and guided surgery.
A framework for real-time visualization of tumor-influenced lung dynamics is presented in this paper. This framework potentially allows clinicians to visualize in 3D the morphological changes of the lungs under different breathing conditions, and may thus provide a sensitive and accurate assessment tool for pre-operative and intra-operative clinical guidance. The proposed simulation method extends work previously developed for modeling and visualizing normal 3D lung dynamics. The model accounts for changes in regional lung functionality and in the global motor response due to the presence of a tumor. For real-time deformation, we use a Green's function (GF), a physically based approach that allows real-time multi-resolution modeling of the lung deformations. This function also allows an analytical estimation of the GF's deformation parameters from the 4D lung datasets at different levels of detail of the lung model. Once estimated, the subject-specific GF facilitates the simulation of tumor-influenced lung deformations under any breathing condition modeled by a parametric pressure-volume (PV) relation.
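A parametric pressure-volume relation of the kind mentioned above is often written as a sigmoid in airway pressure. The functional form and parameter values below are a common modeling choice offered for illustration, not necessarily the relation used in the paper:

```python
import math

def pv_volume(p, v_min, dv, p0, k):
    """Sigmoidal pressure-volume relation: lung volume as a function of pressure.

    v_min : minimum lung volume (L)
    dv    : total volume excursion (L)
    p0    : pressure at the curve's inflection point (cm H2O)
    k     : pressure range governing the steepness of inflation
    """
    return v_min + dv / (1.0 + math.exp(-(p - p0) / k))

# Lung volume sampled across a breathing cycle, illustrative parameter values
volumes = [pv_volume(p, v_min=1.2, dv=4.0, p0=8.0, k=4.0) for p in range(0, 31, 5)]
```

Driving the estimated Green's function with a curve like this lets a single parametric input reproduce different breathing conditions, which is what makes the simulation controllable in real time.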