Tonsillectomy, one of the most common surgical procedures worldwide, is often associated with postoperative complications, particularly bleeding. Tonsil laser ablation has been proposed as a safer alternative; however, its adoption has been limited because it is difficult for a surgeon to visually control the thermal interactions that occur between the laser and the tissue. In this study, we propose to monitor CO2 laser ablation of ex-vivo tonsil tissue using photoacoustic imaging. The distinct photoacoustic spectra of soft tissue were used to distinguish between ablated and non-ablated tissue. Our results suggest that photoacoustic imaging can visualize necrosis formation and quantify the necrotic extent, offering the potential for improved tonsil laser ablation outcomes.
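The kind of spectrum-based discrimination described above can be sketched as a per-pixel comparison against reference spectra. The snippet below is an illustrative assumption, not the authors' code: array shapes, wavelength sampling, and the reference spectra are hypothetical placeholders.

import numpy as np

def classify_ablation(pa_stack, ref_ablated, ref_native):
    """pa_stack: (n_wavelengths, H, W) PA amplitudes; refs: (n_wavelengths,)."""
    n_wl, h, w = pa_stack.shape
    spectra = pa_stack.reshape(n_wl, -1)                    # one spectrum per pixel
    # Normalize so classification depends on spectral shape, not amplitude
    spectra = spectra / (np.linalg.norm(spectra, axis=0, keepdims=True) + 1e-12)
    refs = np.stack([ref_ablated, ref_native], axis=1)
    refs = refs / np.linalg.norm(refs, axis=0, keepdims=True)
    # Cosine similarity to each reference spectrum; pick the closer one
    scores = refs.T @ spectra                               # (2, H*W)
    labels = np.argmax(scores, axis=0).reshape(h, w)        # 0 = ablated, 1 = native
    return labels

The necrotic extent could then be estimated, for example, as the fraction of pixels labeled as ablated within the imaged region of interest.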
To date, the role of lasers in surgery has been mostly limited to tissue cutting (laser scalpels), which roboticists have supported by developing micro-mechanical systems for accurate, tremor-free laser aiming. As medical science evolves, we keep discovering new ways in which laser light can be used for surgical treatment: new types of laser-based treatment are being pioneered in which thermal necrosis, not resection, is the goal. For these treatments to work, it is vital not only to aim the laser but also to monitor and control the interactions between the laser and the tissue. These interactions, however, are notoriously hard to control, by humans and machines alike, as they involve fast, highly nonlinear physical phenomena that are challenging to model and even to perceive adequately. My research vision is to enable a new generation of surgical robots capable of intelligently monitoring and controlling surgical laser-tissue interactions. These robots will continuously monitor the status of a procedure and assist physicians in regulating laser delivery to achieve the desired clinical outcomes.
This paper presents the design, fabrication, and experimental validation of a photoacoustic (PA) imaging probe for robotic surgery. PA is an emerging imaging modality that combines the high penetration depth of ultrasound (US) imaging with high optical contrast. When equipped with a PA probe, a surgical robot can provide intraoperative guidance to the operating physician, alerting them to the presence of vital subsurface anatomy (e.g., nerves or blood vessels) invisible to the naked eye. Our probe is designed to work with the da Vinci surgical system to produce three-dimensional PA images: the robot performs Remote Center-of-Motion (RCM) scanning across a region of interest, and successive PA tomographic images are acquired and interpolated to produce a three-dimensional PA volume. To assess the accuracy of robot-actuated 3D PA tomographic scanning, we conducted an experimental study imaging a multi-layer wire phantom. The computed Target Registration Error (TRE) between the acquired PA image and the phantom was 1.5567±1.3605 mm. An ex vivo study further demonstrated the ability of the proposed laparoscopic device to detect vasculature in 3D. These results indicate the potential of our PA system to be incorporated into clinical robotic surgery for functional anatomical guidance.
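For context, a TRE of the kind reported (mean ± standard deviation of point-wise distances) can be computed as sketched below. The point correspondences and the rigid transform registering the PA frame to the phantom frame are illustrative assumptions.

import numpy as np

def target_registration_error(pa_points, phantom_points, R, t):
    """pa_points, phantom_points: (N, 3) corresponding targets; R (3x3), t (3,):
    rigid transform registering the PA image frame to the phantom frame."""
    registered = pa_points @ R.T + t            # map PA-detected targets to the phantom frame
    errors = np.linalg.norm(registered - phantom_points, axis=1)
    return errors.mean(), errors.std()

# Example with hypothetical wire-crossing targets:
# mean_tre, std_tre = target_registration_error(pa_targets, wire_targets, R, t)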
Teleoperated robotic technology has great potential to deliver healthcare such as surgery, diagnosis, and nursing remotely, as an alternative to in-person encounters with specialists. Most teleoperated procedures rely heavily on visual feedback from cameras to observe the situation. Ideally, the camera position and orientation would be adapted to the task and circumstances; in practice, however, the operator must adjust the view manually, either by devoting attention to camera control or by pausing the task during the adjustment. There is therefore a demand for a more intuitive telepresence method to improve the efficiency and performance of remote operation. This paper proposes a hands-free approach to controlling the camera view for an improved telepresence experience. The system comprises an RGBD camera mounted on a robotic arm and a motion-tracking virtual reality (VR) head-mounted display (HMD); the user's head and upper-body motion is mapped to the robotic arm for immersive teleoperation. Based on this setup, an Augmented Head Motion Mapping (AHMM) mode is introduced. In this mode, the user can choose to control the camera either by following head motion directly or by pivoting about a remote center of motion (RCM) at the target location, expanding the reachable visual field. In a user study with seven subjects, we compared the proposed method with conventional methods in terms of reachable visual field, control intuitiveness, and task efficiency. The possibility of further enlarging the reachable visual field by introducing a motion scaling factor is investigated in simulation. The results demonstrate that, with limited training, an operator using the proposed system can examine a larger area of a given object within a similar amount of time.
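One way to picture an RCM-pivoting camera mode with motion scaling is sketched below. This is an assumption-laden illustration, not the paper's implementation: the spherical parameterization, angle conventions, and the scaling factor are placeholders.

import numpy as np

def rcm_camera_pose(head_yaw, head_pitch, rcm_point, radius, scale=1.0):
    """head_yaw/head_pitch: tracked HMD angles (radians); rcm_point: (3,) target;
    radius: camera-to-target distance; scale: hypothetical motion scaling factor."""
    yaw, pitch = scale * head_yaw, scale * head_pitch
    # Camera position on a sphere around the RCM, driven by the scaled head angles
    offset = radius * np.array([np.cos(pitch) * np.sin(yaw),
                                np.sin(pitch),
                                np.cos(pitch) * np.cos(yaw)])
    cam_pos = rcm_point + offset
    # The optical axis always points back at the RCM, so the target stays centered
    look_dir = (rcm_point - cam_pos) / np.linalg.norm(rcm_point - cam_pos)
    return cam_pos, look_dir

With scale > 1, a small head rotation sweeps the camera through a larger arc about the target, which is the mechanism by which a scaling factor could enlarge the reachable visual field.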
Office-based endoscopic laser surgery is an increasingly popular option for the treatment of many benign and premalignant tumors of the vocal folds. While these procedures have been shown to be generally safe and effective, recent clinical studies have revealed a number of challenging locations inside the larynx where laser light cannot be easily delivered due to line-of-sight limitations. In this paper, we explore whether these challenges can be overcome through the use of side-firing laser fibers. Our study is conducted in simulation, using three-dimensional models of the human larynx generated from X-ray microtomography scans. Using computer graphics techniques (ray-casting), we simulate the application of laser pulses with different types of laser fibers and compare the total anatomical coverage attained by each fiber. We consider four fiber types: a traditional “forward-looking” fiber, similar to those currently used in clinical practice, and three side-firing fibers that emit light at angles of 45, 70, and 90 degrees, respectively. Results show that side-firing fibers enable a ∼70% increase in accessible anatomy compared to forward-looking fibers.
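A ray-casting coverage estimate in the spirit of this study could be sketched as follows: cast a ray from each candidate fiber pose, tilt the emission direction by the fiber's firing angle, and count which mesh faces are reachable. The use of the trimesh library, the sampling of fiber poses, and the single-ray-per-pose simplification are illustrative assumptions.

import numpy as np
import trimesh

def coverage_fraction(mesh, fiber_origins, fiber_axes, firing_angle_deg):
    """mesh: larynx surface model; fiber_origins/fiber_axes: (N, 3) candidate poses;
    firing_angle_deg: 0 for a forward-looking fiber, 45/70/90 for side-firing fibers."""
    hit_faces = set()
    angle = np.deg2rad(firing_angle_deg)
    for origin, axis in zip(fiber_origins, fiber_axes):
        axis = axis / np.linalg.norm(axis)
        # Build a direction perpendicular to the fiber axis and tilt the beam by the firing angle
        perp = np.cross(axis, [0.0, 0.0, 1.0])
        if np.linalg.norm(perp) < 1e-8:
            perp = np.cross(axis, [0.0, 1.0, 0.0])
        perp = perp / np.linalg.norm(perp)
        direction = np.cos(angle) * axis + np.sin(angle) * perp
        face = mesh.ray.intersects_first(origin[None, :], direction[None, :])[0]
        if face >= 0:
            hit_faces.add(int(face))
    return len(hit_faces) / len(mesh.faces)

A fuller simulation would also sweep the side-firing direction azimuthally around the fiber axis; comparing the resulting coverage fractions across firing angles mirrors the per-fiber comparison described in the abstract.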
Cholesteatomas are benign lesions that form in the middle ear (ME). They can cause debilitating side effects including hearing loss, recurrent ear infection and drainage, and balance disruption. The current approach for positively identifying cholesteatomas requires intraoperative visualization, either by lifting the ear drum or by passing an endoscope through the ear canal and tympanic membrane; these procedures are typically done in an operating room with the patient under general anesthesia. We are developing a novel endoscope that can be inserted trans-nasally and could potentially be used in an outpatient setting, allowing clinicians to easily detect and visualize cholesteatomas and other middle ear conditions. A crucial part of designing this device is determining the degrees of freedom necessary to visualize the regions of interest in the middle ear space. To permit virtual evaluation of scope designs, in this work we propose to create a library of models of the most difficult-to-visualize region of the middle ear, the retrotympanum (RT), which is located deep and posterior to the tympanic membrane. We have designed a semi-automated atlas-based approach for segmentation of the RT. Our approach required 2-3 minutes of manual interaction for each of the 20 cases tested, and each result was verified to be accurate by an experienced otologist. These results show the method is efficient and accurate enough to be applied to a large-scale dataset. We also created a statistical shape model from the resulting segmentations that can be used to synthesize new plausible RT shapes for comprehensive virtual evaluation of endoscope designs, and we show that it can represent new RT shapes with average errors of 0.5 mm.
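A statistical shape model of the kind described is commonly built with principal component analysis over corresponding vertex sets. The sketch below assumes pre-aligned shapes in point-to-point correspondence and is not the authors' implementation.

import numpy as np

def build_ssm(shapes):
    """shapes: (n_subjects, n_vertices, 3), aligned and in point-to-point correspondence."""
    n, v, _ = shapes.shape
    X = shapes.reshape(n, -1)                      # flatten each shape into one vector
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt                                     # principal modes of shape variation
    stdev = S / np.sqrt(n - 1)                     # per-mode standard deviation
    return mean, modes, stdev

def synthesize(mean, modes, stdev, n_modes=5, rng=None):
    """Sample mode weights within +/-2 standard deviations to generate a plausible new RT shape."""
    rng = rng or np.random.default_rng()
    b = rng.uniform(-2.0, 2.0, size=n_modes) * stdev[:n_modes]
    shape = mean + b @ modes[:n_modes]
    return shape.reshape(-1, 3)

New shapes synthesized this way could then be used to exercise candidate endoscope designs across the plausible range of RT anatomy, as the abstract proposes.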
Miniature infrared cameras have recently come to market in a form factor that facilitates packaging in endoscopic or other minimally invasive surgical instruments. If absolute temperature measurements can be made with these cameras, they may be useful for non-contact monitoring of electrocautery-based vessel sealing or other thermal surgical processes, such as thermal ablation of tumors. As a first step in evaluating the feasibility of optical medical thermometry with these new cameras, in this paper we explore how accurately temperature measurements can be made with them. These cameras measure the raw flux of incoming IR radiation, and we perform a calibration procedure to map their readings to absolute temperature values in the range of 40 to 150 °C. Furthermore, we propose and validate a method to estimate the spatial extent of the heat spread created by a cautery tool based on the thermal images.
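A radiometric calibration of this kind could be sketched as fitting a mapping from raw flux readings to known reference temperatures (e.g., from a blackbody source or thermocouple) over the 40-150 °C range and then applying it per pixel. The cubic polynomial model and variable names below are assumptions for illustration, not the paper's method.

import numpy as np

def fit_calibration(raw_counts, ref_temps_c, degree=3):
    """raw_counts: raw IR flux readings recorded at known reference temperatures (°C)."""
    return np.polyfit(raw_counts, ref_temps_c, degree)

def to_temperature(raw_frame, coeffs):
    """Convert a raw thermal frame to absolute temperature (°C), pixel-wise."""
    return np.polyval(coeffs, raw_frame)

# The spatial extent of heat spread from a cautery tool could then be estimated by
# thresholding a calibrated frame, e.g.:
# heated_area = np.count_nonzero(to_temperature(raw_frame, coeffs) > 60.0) * pixel_area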