Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image - e.g., contrast bolus or a surgical implant - that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision ("missing tissue"). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection), a surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.
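As a concrete point of reference, the NCC figure of merit used above to assess registration accuracy can be sketched in a few lines of Python. This is a minimal illustration over 1D intensity lists with made-up values, not the authors' implementation:

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross correlation between two equal-length intensity lists."""
    assert len(a) == len(b)
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sqrt(sum((x - ma) ** 2 for x in a))
    db = sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Identical structure up to an intensity offset gives NCC = 1;
# content mismatch (material present in one image only) lowers the score.
fixed  = [10.0, 20.0, 30.0, 40.0]
moving = [15.0, 25.0, 35.0, 45.0]  # same structure, offset intensities
score = ncc(fixed, moving)
```

Because NCC is invariant to linear intensity shifts, it is well suited to comparing registered images whose gray levels are not on an identical scale.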
Purpose: An increasingly popular minimally invasive approach to resection of oropharyngeal / base-of-tongue cancer is made possible by a transoral technique conducted with the assistance of a surgical robot. However, the highly deformed surgical setup (neck flexed, mouth open, and tongue retracted) compared to the typical patient orientation in preoperative images poses a challenge to guidance and localization of the tumor target and adjacent critical anatomy. Intraoperative cone-beam CT (CBCT) can account for such deformation, but due to the low soft-tissue contrast of CBCT, direct localization of the target and critical tissues in CBCT images can be difficult. Such structures may be more readily delineated in preoperative CT or MR images, so a method to deformably register such information to intraoperative CBCT could offer significant value. This paper details the initial implementation of a deformable registration framework to align preoperative images with the deformed intraoperative scene and gives a preliminary evaluation of the geometric accuracy of registration in CBCT-guided TORS. Method: The deformable registration aligns preoperative CT or MR to intraoperative CBCT by integrating two established approaches. The volume of interest is first segmented (specifically, the region of the tongue from the tip to the hyoid), and a Gaussian mixture (GM) model of surface point clouds is used for rigid initialization (GMRigid) as well as an initial deformation (GMNonRigid). Next, refinement of the registration is performed using the Demons algorithm applied to distance transformations of the GM-registered and CBCT volumes. The registration accuracy of the framework was quantified in preliminary studies using a cadaver emulating preoperative and intraoperative setups. Geometric accuracy of registration was quantified in terms of target registration error (TRE) and surface distance error.
Results: With each step of the registration process, the framework demonstrated improved alignment, achieving mean TRE of 3.0 mm following the GMRigid step, 1.9 mm following the GMNonRigid step, and 1.5 mm at the output of the registration process. Analysis of surface distance demonstrated a corresponding improvement, with mean surface distance error of 2.2, 0.4, and 0.3 mm, respectively. The evaluation of registration error revealed accurate alignment in the region of interest for base-of-tongue robotic surgery owing to point-set selection in the GM steps and refinement in the deep aspect of the tongue in the Demons step. Conclusions: A promising framework has been developed for CBCT-guided TORS in which intraoperative CBCT provides a basis for registration of preoperative images to the highly deformed intraoperative setup. The registration framework is invariant to imaging modality (accommodating preoperative CT or MR) and is robust against CBCT intensity variations and artifacts, provided a corresponding segmentation of the volume of interest. The approach could facilitate overlay of preoperative planning data directly in stereo-endoscopic video in support of CBCT-guided TORS.
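The TRE values reported above are mean distances over corresponding target points. A minimal sketch, assuming TRE is computed as the mean Euclidean distance between registered and reference target positions (the point coordinates below are hypothetical):

```python
from math import sqrt

def tre(registered, reference):
    """Mean target registration error: mean Euclidean distance (mm)
    between corresponding target points after registration."""
    dists = [sqrt(sum((p - q) ** 2 for p, q in zip(r, t)))
             for r, t in zip(registered, reference)]
    return sum(dists) / len(dists)

# Hypothetical target positions (mm) in the intraoperative frame and the
# same targets mapped through an illustrative registration.
truth = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
reg   = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0)]
err = tre(reg, truth)  # (1.0 + 2.0) / 2 = 1.5 mm
```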
Localizing sub-palpable nodules in minimally invasive video-assisted thoracic surgery (VATS) presents a significant
challenge. To overcome inherent problems of preoperative nodule tagging using CT fluoroscopic guidance, an
intraoperative C-arm cone-beam CT (CBCT) image-guidance system has been developed for direct localization of
subpalpable tumors in the OR, including real-time tracking of surgical tools (including thoracoscope), and video-CBCT
registration for augmentation of the thoracoscopic scene. Acquisition protocols for nodule visibility in the inflated and
deflated lung were delineated in phantom and animal/cadaver studies. Motion compensated reconstruction was
implemented to account for motion induced by the ventilated contralateral lung. Experience in CBCT-guided targeting of
simulated lung nodules included phantoms, porcine models, and cadavers. Phantom studies defined low-dose acquisition
protocols providing contrast-to-noise ratio sufficient for lung nodule visualization, confirmed in porcine specimens with
simulated nodules (3-6 mm diameter PE spheres, ~100-150 HU contrast, 2.1 mGy). Nodule visibility in CBCT of the
collapsed lung, with reduced contrast according to air volume retention, was more challenging, but initial studies
confirmed visibility using scan protocols at slightly increased dose (~4.6-11.1 mGy). Motion compensated reconstruction
employing a 4D deformation map in the backprojection process reduced artifacts associated with motion blur.
Augmentation of thoracoscopic video with renderings of the target and critical structures (e.g., pulmonary artery) showed
geometric accuracy consistent with camera calibration and the tracking system (2.4 mm registration error). Initial results
suggest a potentially valuable role for CBCT guidance in VATS, improving precision in minimally invasive, lung-conserving surgeries, avoiding critical structures, obviating the burdens of preoperative localization, and improving patient safety.
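The low-dose protocols above were judged against contrast-to-noise ratio (CNR) for nodule visibility. As a minimal sketch, one common CNR definition (mean ROI contrast over background noise) can be computed as follows; the HU samples are invented for illustration:

```python
from statistics import mean, pstdev

def cnr(nodule_roi, background_roi):
    """Contrast-to-noise ratio: absolute mean intensity difference between
    the nodule ROI and background, divided by background noise (std dev)."""
    return abs(mean(nodule_roi) - mean(background_roi)) / pstdev(background_roi)

# Hypothetical HU samples from a simulated PE-sphere nodule and deflated lung.
nodule = [120.0, 130.0, 110.0, 140.0]
lung   = [-800.0, -820.0, -780.0, -800.0]
value = cnr(nodule, lung)
```

Reduced air retention in the collapsed lung raises the background HU toward that of the nodule, shrinking the numerator and hence the CNR, which is why the deflated-lung protocols required somewhat higher dose.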
Conventional surgical tracking configurations carry a variety of limitations in line-of-sight, geometric accuracy, and
mismatch with the surgeon's perspective (for video augmentation). With increasing utilization of mobile C-arms,
particularly those allowing cone-beam CT (CBCT), there is opportunity to better integrate surgical trackers at bedside to
address such limitations. This paper describes a tracker configuration in which the tracker is mounted directly on the C-arm.
To maintain registration within a dynamic coordinate system, a reference marker visible across the full C-arm
rotation is implemented, and the "Tracker-on-C" configuration is shown to provide improved target registration error
(TRE) over a conventional in-room setup - (0.9±0.4) mm vs (1.9±0.7) mm, respectively. The system can also generate
digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool ("x-ray flashlight"), the tracker, or the
C-arm ("virtual fluoroscopy"), with geometric accuracy in virtual fluoroscopy of (0.4±0.2) mm. Using a video-based
tracker, planning data and DRRs can be superimposed on the video scene from a natural perspective over the surgical
field, with geometric accuracy (0.8±0.3) pixels for planning data overlay and (0.6±0.4) pixels for DRR overlay across all
C-arm angles. The field-of-view of fluoroscopy or CBCT can also be overlaid on real-time video ("Virtual Field Light")
to assist C-arm positioning. The fixed transformation between the x-ray image and tracker facilitated quick, accurate
intraoperative registration. The workflow and precision associated with a variety of realistic surgical tasks were
significantly improved using the Tracker-on-C - for example, nearly a factor of 2 reduction in time required for C-arm
positioning, reduction or elimination of dose in "hunting" for a specific fluoroscopic view, and confident placement of
the x-ray FOV on the surgical target. The proposed configuration streamlines the integration of C-arm CBCT with real-time
tracking and demonstrated utility in a spectrum of image-guided interventions (e.g., spine surgery) benefiting from
improved accuracy, enhanced visualization, and reduced radiation exposure.
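The DRR capability underlying "virtual fluoroscopy" and the "x-ray flashlight" amounts to computing line integrals of attenuation through the CT volume from a chosen viewpoint. A minimal sketch under a parallel-beam assumption (a real C-arm DRR casts divergent cone-beam rays through the tracked geometry; the toy volume is invented):

```python
import numpy as np

def drr(volume, axis=0):
    """Simplified digitally reconstructed radiograph: sum attenuation along
    one volume axis (parallel-beam approximation of the cone-beam case)."""
    return volume.sum(axis=axis)

# Toy attenuation volume: a soft-tissue block containing one dense voxel.
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 0.2   # soft tissue
vol[2, 2, 2] = 2.0         # metal-like feature
image = drr(vol, axis=0)   # 4x4 projection image
```

GPU acceleration of this projection step (as in the system described) matters because a clinically sized volume requires hundreds of millions of such ray sums per rendered view.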
Intraoperative cone-beam CT (CBCT) could offer an important advance to thoracic surgeons in directly localizing
subpalpable nodules during surgery. An image-guidance system is under development using mobile C-arm CBCT to
directly localize tumors in the OR, potentially reducing the cost and logistical burden of conventional preoperative
localization and facilitating safer surgery by visualizing critical structures surrounding the surgical target (e.g.,
pulmonary artery, airways, etc.). To utilize the wealth of preoperative image/planning data and to guide targeting under
conditions in which the tumor may not be directly visualized, a deformable registration approach has been developed that
geometrically resolves images of the inflated (i.e., inhale or exhale) and deflated states of the lung. This novel technique
employs a coarse model-driven approach using lung surface and bronchial airways for fast registration, followed by an
image-driven registration using a variant of the Demons algorithm to improve target localization to within ~1 mm. Two
approaches to model-driven registration are presented and compared - the first involving point correspondences on the
surface of the deflated and inflated lung and the second a mesh evolution approach. Intensity variations (i.e., higher
image intensity in the deflated lung) due to expulsion of air from the lungs are accounted for using an a priori lung
density modification, and the resulting improvement in the performance of the intensity-driven Demons algorithm is
demonstrated. Preliminary results of the combined model-driven and intensity-driven registration process demonstrate
accuracy consistent with requirements in minimally invasive thoracic surgery in both target localization and critical
structure avoidance.
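The a priori lung density modification can be illustrated with a crude mass-conservation model: if lung tissue mass is preserved while air is expelled, density (and hence HU relative to water) scales with the inflated-to-deflated volume ratio. This sketch is a simplified stand-in for the paper's method, with illustrative numbers:

```python
def density_modified_hu(hu_inflated, volume_ratio):
    """Crude a priori intensity correction for lung deflation: assume tissue
    mass is conserved while air is expelled, so density scales with the
    inflated/deflated volume ratio (simplified model, not the paper's exact
    formulation)."""
    rho = 1.0 + hu_inflated / 1000.0      # HU -> density (water = 1 g/cm^3)
    rho_deflated = rho * volume_ratio     # compression raises density
    return 1000.0 * (rho_deflated - 1.0)  # density -> HU

# Inflated parenchyma near -850 HU, lung collapsed to half its volume:
hu_corrected = density_modified_hu(-850.0, 2.0)  # denser, i.e. higher HU
```

Pre-modifying the inflated-lung intensities this way brings the two images onto comparable gray scales, so the intensity-driven Demons forces act on anatomy rather than on the inflation-related intensity mismatch.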
The ability to perform fast, accurate, deformable registration with intraoperative images featuring surgical excisions was
investigated for use in cone-beam CT (CBCT) guided head and neck surgery. Existing deformable registration methods
generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the
images with no ability to account for tissue that is removed (or introduced) between scans. We have thus developed an
approach in which an extra dimension is added during the registration process to act as a sink for voxels removed during
the course of the procedure. A series of cadaveric images acquired using a prototype CBCT-capable C-arm were used to
model tissue deformation and excision occurring during a surgical procedure, and the ability of deformable registration
to correctly account for anatomical changes under these conditions was investigated. Using a previously developed
version of the Demons deformable registration algorithm, we identify the difficulties that traditional registration
algorithms encounter when faced with excised tissue and present a modified version of the algorithm better suited for
use in intraoperative image-guided procedures. Studies were performed for different deformation and tissue excision
tasks, and registration performance was quantified in terms of the ability to accurately account for tissue excision while
avoiding spurious deformations arising around the excision.
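For context, the classic Demons displacement update that the modified algorithm builds on is u = (m - f) ∇f / (|∇f|² + (m - f)²). A minimal 1D sketch of one such update (illustrative arrays; in the excision-aware variant described above, the images additionally gain an extra pseudo-spatial dimension acting as a sink layer into which removed voxels can flow):

```python
import numpy as np

def demons_update(fixed, moving):
    """One Thirion-style Demons displacement update, in 1D for clarity:
    u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)."""
    diff = moving - fixed
    grad = np.gradient(fixed)
    denom = grad ** 2 + diff ** 2
    # Guard against division by zero where both the gradient and the
    # intensity difference vanish (no force is exerted there).
    return np.where(denom > 0, diff * grad / np.maximum(denom, 1e-12), 0.0)

f = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # fixed image: an edge
m = np.array([0.0, 0.0, 0.0, 1.0, 1.0])   # moving image: edge shifted right
u = demons_update(f, m)                    # nonzero force only at the edge
```

The update is purely local and conserves "material" by construction, which is exactly why excised tissue causes spurious deformation in the unmodified algorithm: the only place the removed voxels can go is sideways into neighboring anatomy, unless a sink dimension is provided.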
Intraoperative imaging modalities have become increasingly prevalent in recent years, and the need to integrate these modalities
with surgical guidance is rising, creating new possibilities as well as challenges. In the context of such emerging
technologies and new clinical applications, a software architecture for cone-beam CT (CBCT) guided surgery has been
developed with emphasis on binding open-source surgical navigation libraries and integrating intraoperative CBCT with
novel, application-specific registration and guidance technologies. The architecture design is focused on accelerating
translation of task-specific technical development in a wide range of applications, including orthopaedic, head-and-neck,
and thoracic surgeries. The surgical guidance system is interfaced with a prototype mobile C-arm for high-quality CBCT
and through a modular software architecture, integration of different tools and devices consistent with surgical workflow
in each of these applications is realized. Specific modules are developed according to the surgical task, such as: 3D-3D
rigid or deformable registration of preoperative images, surgical planning data, and up-to-date CBCT images; 3D-2D
registration of planning and image data in real-time fluoroscopy and/or digitally reconstructed radiographs (DRRs);
compatibility with infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements;
augmented overlay of image and planning data in endoscopic or in-room video; real-time "virtual fluoroscopy" computed
from GPU-accelerated DRRs; and multi-modality image display. The platform aims to minimize offline data processing
by exposing quantitative tools that analyze and communicate factors of geometric precision. The system was
translated to preclinical phantom and cadaver studies for assessment of fiducial registration error (FRE) and target registration error (TRE),
showing sub-mm accuracy in targeting and video overlay within intraoperative CBCT. The work culminates in the development
of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm
CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.
Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, the image-based video-CBCT registration first localizes the endoscope with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration - e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance - first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.
A prototype mobile C-arm for cone-beam CT (CBCT) has been translated to a prospective clinical trial in head and neck
surgery. The flat-panel CBCT C-arm was developed in collaboration with Siemens Healthcare, and demonstrates both
sub-mm spatial resolution and soft-tissue visibility at low radiation dose (e.g., <1/5th of a typical diagnostic head CT).
CBCT images are available ~15 seconds after scan completion (~1 min acquisition) and reviewed at bedside using
custom 3D visualization software based on the open-source Image-Guided Surgery Toolkit (IGSTK). The CBCT C-arm
has been successfully deployed in 15 head and neck cases and streamlined into the surgical environment using human
factors engineering methods and expert feedback from surgeons, nurses, and anesthetists. Intraoperative imaging is
implemented in a manner that maintains operating field sterility, reduces image artifacts (e.g., carbon fiber OR table) and
minimizes radiation exposure. Image reviews conducted with surgical staff indicate bony detail and soft-tissue
visualization sufficient for intraoperative guidance, with additional artifact management (e.g., metal, scatter) promising
further improvements. Clinical trial deployment suggests a role for intraoperative CBCT in guiding complex head and
neck surgical tasks, including planning mandible and maxilla resection margins, guiding subcranial and endonasal
approaches to skull base tumours, and verifying maxillofacial reconstruction alignment. Ongoing translational research
into complementary image-guidance subsystems includes novel methods for real-time tool tracking, fusion of endoscopic
video and CBCT, and deformable registration of preoperative volumes and planning contours with intraoperative CBCT.
Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been
developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and
tissue excision. The system is based on a prototype mobile C-arm for intraoperative CBCT that provides low-dose 3D
image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for
real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical
plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT
images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and
electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal
length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera
are calibrated from images of a planar calibration checkerboard (2.5×2.5 mm2 squares) obtained at different
perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the
endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT
[surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical
structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical
sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in
CBCT-guided head and neck surgery.
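The intrinsic parameters estimated from the checkerboard images determine how a tracked 3D point maps into the endoscopic video frame. A minimal sketch of that pinhole projection with one radial distortion coefficient (the parameter values are illustrative, not calibrated values from the system):

```python
def project_point(X, f, c, k1=0.0):
    """Project a 3D point (camera frame) through a pinhole model:
    f = (fx, fy) focal lengths in pixels, c = (cx, cy) principal point,
    k1 = first radial distortion coefficient. These are the intrinsics
    recovered from checkerboard calibration."""
    x, y = X[0] / X[2], X[1] / X[2]   # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                 # radial distortion factor
    return (f[0] * d * x + c[0], f[1] * d * y + c[1])

# A point 100 mm ahead of an endoscope with fx = fy = 800 px, center (320, 240):
u, v = project_point((10.0, 5.0, 100.0), (800.0, 800.0), (320.0, 240.0))
```

The extrinsic parameters (rotation, translation) reported by the tracker transform CBCT-frame points into this camera frame before projection, which is what allows CBCT slices and planning data to be rendered in correct perspective over the video.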
High-quality intraoperative 3D imaging systems such as cone-beam CT (CBCT) hold considerable promise for image-guided
surgical procedures in the head and neck. With a large amount of preoperative imaging and planning information
available in addition to the intraoperative images, it becomes desirable to be able to integrate all sources of imaging
information within the same anatomical frame of reference using deformable image registration. Fast intensity-based
algorithms are available which can perform deformable image registration within a period of time short enough for
intraoperative use. However, CBCT images often contain voxel intensity inaccuracies, which can hinder registration
accuracy - for example, due to x-ray scatter, truncation, and/or erroneous scaling normalization within the 3D
reconstruction algorithm. In this work, we present a method of integrating an iterative intensity matching step within the
operation of a multi-scale Demons registration algorithm. Registration accuracy was evaluated in a cadaver model and
showed that a conventional Demons implementation (with either no intensity match or a single histogram match)
introduced anatomical distortion and degradation in target registration error (TRE). The iterative intensity matching
procedure, on the other hand, provided robust registration across a broad range of intensity inaccuracies.
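The intensity-matching step can be illustrated by quantile (histogram) matching: mapping each source intensity to the value at the same rank in the reference distribution. This is a simple stand-in for the matching used in the paper, which interleaves such a step iteratively with the multi-scale Demons passes; the arrays here are invented:

```python
import numpy as np

def match_intensities(source, reference):
    """Map source intensities onto the reference distribution by matching
    empirical quantiles (simplified histogram matching)."""
    src = np.asarray(source, dtype=float).ravel()
    ref = np.sort(np.asarray(reference, dtype=float).ravel())
    ranks = np.argsort(np.argsort(src))            # rank of each source voxel
    quantiles = ranks / max(len(src) - 1, 1)       # rank -> quantile in [0, 1]
    matched = np.interp(quantiles, np.linspace(0.0, 1.0, len(ref)), ref)
    return matched.reshape(np.shape(source))

# Source volume shifted/scaled relative to the reference CT gray scale:
src = [0.0, 10.0, 20.0, 30.0]
ref = [100.0, 110.0, 120.0, 130.0]
out = match_intensities(src, ref)
```

Re-estimating this mapping at each iteration, rather than once up front, is what lets the registration stay robust when scatter or truncation perturbs the CBCT gray scale spatially and a single global histogram match no longer suffices.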
A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in
head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and
soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative
imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel
subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and
endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a
newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and
registration / visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video.
Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor
excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of
intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage
to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial
involving 12 patients undergoing head and neck surgery - the first implementation of the research prototype in the
clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable
basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such
systems in complex OR environments.