Many cardiac interventional procedures (e.g., radiofrequency ablation) require fluoroscopy to navigate catheters through veins toward the heart. However, this image guidance method lacks depth information and increases radiation exposure risks for both patients and operators. To overcome these challenges, we developed a robotic visual servoing system that maintains visualization of segmented photoacoustic signals from a cardiac catheter tip. This system was tested in two in vivo cardiac catheterization procedures, with ground truth position information provided by fluoroscopy and electromagnetic tracking. The 1D root mean square localization errors within the vein ranged from 1.63 to 2.28 mm in the first experiment and from 0.25 to 1.18 mm in the second experiment. The 3D root mean square localization error for the second experiment ranged from 1.24 to 1.54 mm. The mean contrast of photoacoustic signals from the catheter tip ranged from 29.8 to 48.8 dB when the tip was visualized in the heart. These results indicate that robotic photoacoustic imaging is a promising alternative to fluoroscopic guidance, as it provides depth information for cardiac interventions and enables enhanced visualization of catheter tips within the beating heart.
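As a point of reference for the error metrics reported above, the sketch below shows how 1D (per-axis) and 3D root mean square localization errors between tracked and ground-truth catheter tip positions are conventionally computed; the function name and the example positions are illustrative and are not taken from the experiments.

```python
import numpy as np

def rms_localization_error(estimated, ground_truth):
    """Per-axis (1D) and 3D RMS error between estimated catheter tip
    positions (e.g., from segmented photoacoustic signals) and ground
    truth (e.g., electromagnetic tracking). Inputs are (N, 3) arrays."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    rmse_1d = np.sqrt(np.mean(diff ** 2, axis=0))          # per axis
    rmse_3d = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))  # Euclidean
    return rmse_1d, rmse_3d

# Illustrative positions in mm (not measured data):
est = np.array([[10.2, 5.1, 30.4], [11.0, 5.3, 31.1], [11.9, 5.0, 32.2]])
ref = np.array([[10.0, 5.0, 30.0], [10.8, 5.5, 31.0], [12.0, 5.2, 32.0]])
print(rms_localization_error(est, ref))
```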
Abdominal surgeries carry considerable risk of gastrointestinal and intra-abdominal hemorrhage, which can cause patient death. Photoacoustic imaging is one solution to this challenge, providing visualization of major blood vessels during surgery. We investigated the feasibility of in vivo blood vessel visualization for photoacoustic-guided liver and pancreas surgeries. In vivo photoacoustic imaging of major blood vessels in these two abdominal organs was successfully achieved after a laparotomy was performed on two swine. Three-dimensional photoacoustic imaging with a robot-controlled ultrasound (US) probe and color Doppler imaging were used to confirm vessel locations. Blood vessels in the in vivo liver were visualized with energies of 20 to 40 mJ, resulting in 10 to 15 dB vessel contrast. Similarly, an energy of 36 mJ was sufficient to visualize vessels in the pancreas with up to 17.3 dB contrast. We observed that photoacoustic signals were more focused when the light source encountered a major vessel in the liver; this observation can be used to distinguish major blood vessels in the image plane from the more diffuse signals associated with smaller blood vessels in the surrounding tissue. A postsurgery histopathological analysis was performed on resected pancreatic and liver tissues to explore possible laser-related damage. Results are generally promising for photoacoustic-guided abdominal surgery when the US probe is fixed and the light source is used to interrogate the surgical workspace. These findings are additionally applicable to other procedures that may benefit from photoacoustic-guided interventional imaging of the liver and pancreas (e.g., biopsy and guidance of radiofrequency ablation lesions in the liver).
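The dB contrast values quoted above are conventionally computed from the ratio of mean signal amplitude inside a vessel region of interest to that of the surrounding tissue; a minimal sketch follows (the exact ROI definitions used in the study are an assumption here).

```python
import numpy as np

def contrast_db(envelope_image, vessel_mask, background_mask):
    """Contrast of a vessel ROI relative to surrounding tissue in a
    beamformed photoacoustic envelope image: 20*log10(S_roi / S_bg)."""
    s_roi = np.mean(envelope_image[vessel_mask])
    s_bg = np.mean(envelope_image[background_mask])
    return 20.0 * np.log10(s_roi / s_bg)
```

With this definition, the 10 to 15 dB liver vessel contrast reported above corresponds to roughly a 3- to 6-fold amplitude ratio between the vessel interior and the background tissue.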
Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, our group is exploring photoacoustic imaging in conjunction with robotic visual servoing, which requires segmentation of catheter tips. However, typical segmentation algorithms are susceptible to reflection artifacts. To address this challenge, signal sources can be identified in the presence of reflection artifacts using a deep neural network, as we previously demonstrated with a linear array ultrasound transducer. This paper extends our previous work to detect photoacoustic sources received by a phased array transducer, which is more common in cardiac applications. We trained a convolutional neural network (CNN) with simulated photoacoustic channel data to identify point sources. The network was tested with an independent simulated validation data set not used during training, as well as with in vivo data acquired during a pig catheterization procedure. When tested on the independent simulated validation data set, the CNN correctly classified 84.2% of sources with a misclassification rate of 0.01%, and the mean absolute location errors of correctly classified sources were 0.095 mm and 0.462 mm in the axial and lateral dimensions, respectively. When applied to in vivo data, the network correctly classified 91.4% of sources with a 7.86% misclassification rate. These results indicate that a CNN is capable of identifying photoacoustic sources recorded by phased array transducers, which is promising for cardiac applications.
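To illustrate the training setup described above, the sketch below defines a generic convolutional classifier that labels patches of simulated photoacoustic channel data as source or artifact; the layer sizes, input shape, and training call are assumptions for this sketch and do not reproduce the paper's actual network (which also localizes the detected sources).

```python
import tensorflow as tf

# Generic source-vs-artifact classifier on channel data (samples x elements).
# All hyperparameters below are illustrative, not from the paper.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024, 64, 1)),       # raw channel data
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 1 = source, 0 = artifact
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(simulated_channel_data, labels, epochs=..., validation_split=...)
```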
Liver surgeries carry considerable risk of injury to major blood vessels, which can lead to hemorrhaging and possibly patient death. Photoacoustic imaging is one solution that enables intraoperative visualization of blood vessels, with the potential to reduce the risk of accidental vessel injury during surgery. This paper presents our initial results of a feasibility study, performed during laparotomy procedures on two pigs, to determine in vivo vessel visibility for photoacoustic-guided liver surgery. Delay-and-sum beamforming and coherence-based beamforming were used to display photoacoustic images and differentiate the signal inside blood vessels from surrounding liver tissue. Color Doppler was used to confirm vessel locations. Results lend insight into the feasibility of photoacoustic-guided liver surgery when the ultrasound probe is fixed and the light source is used to interrogate the surgical workspace.
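As context for the beamformers compared above, a minimal numpy sketch of delay-and-sum applied to photoacoustic channel data follows; all parameter names are illustrative, and the key point is that photoacoustic delays are one-way (source to element).

```python
import numpy as np

def das_photoacoustic(channel_data, x_elem, x_grid, z_grid, fs, c=1540.0):
    """Delay-and-sum beamforming of photoacoustic channel data.

    channel_data:   (num_samples, num_elements) received RF data
    x_elem:         (num_elements,) lateral element positions [m]
    x_grid, z_grid: 1D pixel coordinates [m]
    fs:             sampling frequency [Hz]
    c:              assumed speed of sound [m/s]
    Delays are one-way because photoacoustic sources emit directly after
    laser excitation (there is no transmit path from the transducer)."""
    num_samples, num_elements = channel_data.shape
    elem_idx = np.arange(num_elements)
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            dist = np.sqrt((x_elem - x) ** 2 + z ** 2)   # one-way path [m]
            samp = np.round(dist / c * fs).astype(int)   # sample indices
            valid = samp < num_samples
            image[iz, ix] = channel_data[samp[valid], elem_idx[valid]].sum()
    return image
```

Coherence-based beamformers such as short-lag spatial coherence replace this summation with a measure of signal coherence across the receive aperture, which is what enables the vessel-interior differentiation noted above.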
We previously proposed a deep learning method for removing reflection artifacts in photoacoustic images. Our approach relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. This 19.4 percentage point increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. These results are promising for developing CNN-based image displays that remove artifacts, rather than only displaying network-identified sources as previously proposed.
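To make the wavefront-shape cue concrete, the sketch below computes idealized one-way arrival times of a point source across a receive aperture; wavefront curvature flattens with source depth, so a reflection artifact, which appears deeper than its true source while retaining the shallower source's curvature, is separable from a true source at that depth. The aperture dimensions are illustrative.

```python
import numpy as np

def arrival_times(x_src, z_src, x_elem, c=1540.0):
    """Idealized one-way arrival times (s) of a photoacoustic point
    source at (x_src, z_src) across receive elements at x_elem [m]."""
    return np.sqrt((x_elem - x_src) ** 2 + z_src ** 2) / c

x_elem = np.linspace(-0.019, 0.019, 128)    # illustrative 128-element aperture
t_20mm = arrival_times(0.0, 0.020, x_elem)  # source at 20 mm: curved wavefront
t_40mm = arrival_times(0.0, 0.040, x_elem)  # source at 40 mm: flatter wavefront
# An artifact of the 20 mm source appearing near 40 mm keeps the t_20mm
# curvature (shifted later in time), mismatching a true 40 mm source.
print(np.ptp(t_20mm), np.ptp(t_40mm))       # time spread shrinks with depth
```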