Significance: The estimation of tissue optical properties using diffuse optics has found a range of applications in disease detection, therapy monitoring, and general health care. Biomarkers derived from the estimated optical absorption and scattering coefficients can reflect the underlying progression of many biological processes in tissues.
Aim: Complex light–tissue interactions make it challenging to disentangle the absorption and scattering coefficients, so dedicated measurement systems are required. We aim to help readers understand the measurement principles and practical considerations needed when choosing between different estimation methods based on diffuse optics.
Approach: The estimation methods can be categorized as: steady state, time domain, time frequency domain (FD), spatial domain, and spatial FD. The experimental measurements are coupled with models of light–tissue interactions, which enable inverse solutions for the absorption and scattering coefficients from the measured tissue reflectance and/or transmittance.
Results: The estimation of tissue optical properties has been applied to characterize a variety of ex vivo and in vivo tissues, as well as tissue-mimicking phantoms. Choosing a specific estimation method for a certain application has to trade off its advantages and limitations.
Conclusion: Optical absorption and scattering property estimation is an increasingly important and accessible approach for medical diagnosis and health monitoring.
Significance: Photoacoustic imaging (PAI) promises to measure spatially resolved blood oxygen saturation but suffers from a lack of accurate and robust spectral unmixing methods to deliver on this promise. Accurate blood oxygenation estimation could have important clinical applications from cancer detection to quantifying inflammation.
Aim: We address the inflexibility of existing data-driven methods for estimating blood oxygenation in PAI by introducing a recurrent neural network architecture.
Approach: We created 25 simulated training dataset variations to assess neural network performance. We used a long short-term memory network to implement a wavelength-flexible network architecture and proposed the Jensen–Shannon divergence to predict the most suitable training dataset.
Results: The network architecture can flexibly handle the input wavelengths and outperforms linear unmixing and the previously proposed learned spectral decoloring method. Small changes in the training data significantly affect the accuracy of our method, but we find that the Jensen–Shannon divergence correlates with the estimation error and is thus suitable for predicting the most appropriate training datasets for any given application.
Conclusions: A flexible data-driven network architecture combined with the Jensen–Shannon divergence to predict the best training dataset provides a promising direction that might enable robust data-driven photoacoustic oximetry for clinical use cases.
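The Jensen–Shannon divergence mentioned above compares the distribution of training spectra against that of the target data: a low divergence suggests the training set matches the application. A minimal sketch of the computation follows; the histograms are illustrative only and do not correspond to the features used in the abstract.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric, bounded in [0, ln 2]; 0 means identical distributions.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize to valid probability vectors
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        # Kullback-Leibler divergence with a small epsilon for stability
        return np.sum(a * np.log((a + eps) / (b + eps)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Example: compare a histogram of simulated training spectra against a
# histogram of measured spectra (values here are purely illustrative).
train_hist = [0.1, 0.3, 0.4, 0.2]
test_hist = [0.25, 0.25, 0.25, 0.25]
d = jensen_shannon_divergence(train_hist, test_hist)
```

Because the divergence is symmetric and bounded, it can be compared directly across candidate training datasets to pick the one closest to the target distribution.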
Members of IPASC have created an open-source library for image reconstruction algorithms that are compatible with the IPASC data format. Within this project, we create a testing framework for the evaluation of image reconstruction algorithms to identify their context-dependent strengths and weaknesses. We develop an open-access dataset comprising both simulated and experimental data to facilitate collaboration among all stakeholders associated with photoacoustic imaging and lower the barrier of entry for new researchers in the field by making the project deliverables available open-source.
Double-integrating-sphere (DIS) measurement is a common method for characterizing phantom and tissue optical properties, but can suffer from low accuracy. To investigate the sources of error, we built a digital twin based on Monte Carlo simulations and sphere corrections. We compared simulation and measurement results of phantoms with known optical properties and identified the error sources. After minimizing these sources, the average errors were reduced to −1% and 2.44% in the absorption and reduced scattering coefficient estimates, respectively, highlighting the potential to achieve high accuracy in optical property estimation using a relatively low-cost measurement approach.
Validating processing algorithms for photoacoustic images is complex due to a gap between simulated and experimental data. To address this challenge, we present a multi-device dataset of well-characterised phantoms and investigate the simulation gap using a supervised calibration of the forward model. We use N=15 phantoms for calibration and systematically compare simulated and experimental data from the remaining N=15 phantoms. Our results highlight the importance of the device geometry, impulse response, and noise for accurate simulation. By reducing the simulation gap and providing an open dataset, our work will contribute to advancing data-driven photoacoustic image processing techniques.
Longitudinal mesoscopic photoacoustic imaging of vascular networks requires accurate image co-registration to assess local changes in growing tumours, but remains challenging due to the sparsity of the data and scan-to-scan variability. Here, we compared a set of 5 curated co-registration methods applied to 49 pairs of vascular images of mouse ears and breast cancer xenografts. Images were segmented using a generative adversarial network, and pairs of images and/or segmentations were fed into the 5 tested algorithms. We show the feasibility of co-registering vascular networks accurately using a range of quality metrics, taking a step towards longitudinal characterization of these complex structures.
Significance: Photoacoustic imaging (PAI) provides contrast based on the concentration of optical absorbers in tissue, enabling the assessment of functional physiological parameters such as blood oxygen saturation (sO2). Recent evidence suggests that variation in melanin levels in the epidermis leads to measurement biases in optical technologies, which could potentially limit the application of these biomarkers in diverse populations.
Aim: To examine the effects of skin melanin pigmentation on PAI and oximetry.
Approach: We evaluated the effects of skin tone in PAI using a computational skin model, two-layer melanin-containing tissue-mimicking phantoms, and mice of a consistent genetic background with varying pigmentations. The computational skin model was validated by simulating the diffuse reflectance spectrum using the adding-doubling method, allowing us to assign our simulation parameters to approximate Fitzpatrick skin types. Monte Carlo simulations and acoustic simulations were run to obtain idealized photoacoustic images of our skin model. Photoacoustic images of the phantoms and mice were acquired using a commercial instrument. Reconstructed images were processed with linear spectral unmixing to estimate blood oxygenation. Linear unmixing results were compared with a learned unmixing approach based on gradient-boosted regression.
Results: Our computational skin model was consistent with representative literature for in vivo skin reflectance measurements. We observed consistent spectral coloring effects across all model systems, with an overestimation of sO2 and more image artifacts observed with increasing melanin concentration. The learned unmixing approach reduced the measurement bias, but predictions made at lower blood sO2 still suffered from a skin tone-dependent effect.
Conclusion: PAI demonstrates measurement bias, including an overestimation of blood sO2, in higher Fitzpatrick skin types. Future research should aim to characterize this effect in humans to ensure equitable application of the technology.
Optical and acoustic imaging techniques enable noninvasive visualization of structural and functional tissue properties. Data-driven approaches for quantification of these properties are promising, but they rely on highly accurate simulations due to the lack of ground truth knowledge. We recently introduced the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit, which has quickly been adopted by the community in the context of the IPASC consortium for standardized reconstruction. We present new developments in the toolkit, including improved tissue and device modeling, and provide an outlook on future directions aimed at improving the realism of simulations.
Machine learning-based approaches have shown promise for quantitative photoacoustic oximetry, however, the impact of learned methods is hampered by challenges of usability and generalisability, caused by the strong dependence of learned methods on the training data sets. To address these issues we developed a deep learning-based approach with higher flexibility. The method is trained on a suite of training data sets representing a range of general assumptions. The performance is systematically compared to linear unmixing methods and is validated on in silico, in vitro, and in vivo data representing different use cases.
KEYWORDS: Standards development, Photoacoustic spectroscopy, Photoacoustic imaging, Data acquisition, Outreach programs, Image acquisition, Data modeling, Data analysis, Animal model studies
IPASC organized a roadmapping exercise in 2022 encompassing over 50 participants, which identified eight barriers to clinical translation of PAI: 1) scientific and technological limitations; 2) gaps between technological push and clinical pull; 3) lack of interface with existing standards; 4) poor uptake of phantoms; 5) limited community outreach; 6) poor complementarity of animal models with clinical testing; 7) translation of data-driven methods; and 8) quantitative photoacoustics. Participants defined the scope of each barrier and compared the current state against envisioned goals and outcomes. The resulting roadmaps that define IPASC deliverables in standards development and community engagement will be presented.
KEYWORDS: Photoacoustic spectroscopy, Standards development, Data conversion, Photoacoustic imaging, Interfaces, Imaging systems, Data storage, Data analysis, Data acquisition, Computer programming
IPASC has recently published a data format through a consensus-based process which includes a defined metadata structure that describes: (1) PAI system design parameters such as the illumination and detection geometry; (2) container format metadata; and (3) data acquisition including the optical wavelengths, sampling frequency, or timestamps. The container format is designed to store time-series data and internal quality control mechanisms are included to ensure completeness and consistency. Furthermore, a Python-based open-source software application programming interface (API) was developed to facilitate using the IPASC data format and we aim to partner with prospective users to make improvements.
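As a sketch of how such a container format can be used in practice, the snippet below writes photoacoustic time-series data with acquisition metadata to an HDF5 file via h5py. Note that the dataset and attribute names here are placeholders chosen for illustration; they are not the official IPASC metadata schema, for which the published format specification and API should be consulted.

```python
import os
import tempfile

import h5py
import numpy as np

# Placeholder time-series block: detectors x time samples.
time_series = np.zeros((128, 2048), dtype=np.float32)

path = os.path.join(tempfile.gettempdir(), "example_pa_scan.hdf5")

# Write the time-series data alongside acquisition metadata.
# All names below are illustrative, not the IPASC schema.
with h5py.File(path, "w") as f:
    f.create_dataset("time_series_data", data=time_series)
    f.attrs["sampling_frequency_hz"] = 40e6
    f.attrs["acquisition_wavelengths_nm"] = [700, 750, 800, 850]

# Read it back: HDF5 keeps data and metadata in one self-describing file.
with h5py.File(path, "r") as f:
    shape = f["time_series_data"].shape
    wavelengths = list(f.attrs["acquisition_wavelengths_nm"])
```

Storing metadata as typed attributes next to the raw data is what enables the internal consistency checks the format describes, since a reader can validate that, for example, the number of wavelengths matches the number of acquired frames.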
IPASC has initiated the creation of an open-source library for image reconstruction algorithms that are compatible with the IPASC data format. The goals of the project are to: (1) create a testing framework for evaluation of newly developed image reconstruction algorithms to identify their context-dependent strengths and weaknesses; (2) enable insight into algorithm behavior under different conditions; (3) develop an open-access dataset comprising both simulated and experimental data; (4) facilitate collaboration among all stakeholders associated with photoacoustic imaging; and (5) accelerate developments in the field by making the project deliverables available open-source, lowering the barrier of entry for new researchers.
We developed an open-source Python toolkit for photoacoustic imaging (PAI) data reconstruction and processing. The toolkit implements GPU-accelerated processing algorithms including preprocessing, image reconstruction (backprojection and model-based), and multispectral analysis (linear spectral unmixing and learned spectral decolouring). We implemented methods for the advanced analysis of longitudinal PA data, including standardised analysis of oxygen-enhanced and dynamic contrast-enhanced MSOT data. The toolkit currently works with pre-clinical, clinical, and simulated PA systems, integrating with the IPASC open data format, simulated datasets from the SIMPA toolkit, and iThera Medical MSOT devices. It can easily be extended to support other algorithms and systems.
Significance: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings.
Aim: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards.
Approach: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA’s module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models.
Results: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations.
Conclusions: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
Previous work on 3D freehand photoacoustic imaging has focused on the development of specialized hardware or the use of tracking devices. In this work, we present a novel approach towards 3D volume compounding using an optical pattern attached to the skin. By design, the pattern allows context-aware calculation of the PA image pose in a pattern reference frame, enabling 3D reconstruction while also making the method robust against patient motion. Due to its easy handling optical pattern-enabled context-aware PA imaging could be a promising approach for 3D PA in a clinical environment.
Photoacoustic imaging (PAI) is an emerging medical imaging modality that provides high contrast and spatial resolution. A core unsolved problem to effectively support interventional healthcare is the accurate quantification of the optical tissue properties, such as the absorption and scattering coefficients. The contribution of this work is two-fold. We demonstrate the strong dependence of deep learning-based approaches on the chosen training data and we present a novel approach to generating simulated training data. According to initial in silico results, our method could serve as an important first step related to generating adequate training data for PAI applications.
Photoacoustic imaging is an emerging modality enabling the recovery of functional tissue parameters such as blood oxygenation. However, quantifying these parameters remains challenging, mainly due to the non-linear influence of the light fluence, which makes the underlying inverse problem ill-posed. We tackle this gap with invertible neural networks and present a novel approach to quantifying uncertainties related to reconstructing physiological parameters, such as oxygenation. According to in silico experiments, blood oxygenation prediction with invertible neural networks combined with an interactive visualization could serve as a powerful method to investigate the effect of spectral coloring on blood oxygenation prediction tasks.
The current lack of uniformity in photoacoustic imaging (PAI) data formats hampers inter-device data exchange and comparison. Based on the proposed standardized metadata format of the International Photoacoustic Standardization Consortium (IPASC), IPASC’s Data Acquisition and Management theme has now developed a prototype python software to transform photoacoustic time series data from proprietary data formats into a standardised HDF5 format. The tool provides a centralised application programming interface for vendor-specific conversion module implementation and is available open-source under a commercially friendly licence (BSD-3). By providing this tool, the IPASC hopes to facilitate PAI data management, thereby supporting future developments of the technology.
To accelerate the clinical translation of photoacoustic (PA) imaging, the International Photoacoustic Standardisation Consortium (IPASC) aims to provide open and publicly available reference datasets for testing of data reconstruction and spectral processing algorithms in a widely accepted data format. IPASC has identified and agreed on a list of essential metadata parameters to describe raw time series PA data and used it to develop an initial prototype of a standardized PA data format. We aim to apply the proposed format in an open database that provides reference datasets for testing of processing algorithms, thereby facilitating and advancing PA research and translation.
One of the major applications of multispectral photoacoustic imaging is the recovery of functional tissue properties with the goal of distinguishing different tissue classes. In this work, we tackle this challenge by employing a deep learning-based algorithm called learned spectral decoloring for quantitative photoacoustic imaging. With the combination of tissue classification, sO2 estimation, and uncertainty quantification, powerful analyses and visualizations of multispectral photoacoustic images can be created. Consequently, these could be valuable tools for the clinical translation of photoacoustic imaging.
The International Photoacoustic Standardisation Consortium (IPASC) emerged from SPIE 2018, established to drive consensus on photoacoustic system testing. As photoacoustic imaging (PAI) matures from research laboratories into clinical trials, it is essential to establish best-practice guidelines for photoacoustic image acquisition, analysis and reporting, and a standardised approach for technical system validation. The primary goal of the IPASC is to create widely accepted phantoms for testing preclinical and clinical PAI systems. To achieve this, the IPASC has formed five working groups (WGs). The first and second WGs have defined optical and acoustic properties, suitable materials, and configurations of photoacoustic image quality phantoms. These phantoms consist of a bulk material embedded with targets to enable quantitative assessment of image quality characteristics including resolution and sensitivity across depth. The third WG has recorded details such as illumination and detection configurations of PAI instruments available within the consortium, leading to proposals for system-specific phantom geometries. This PAI system inventory was also used by WG4 in identifying approaches to data collection and sharing. Finally, WG5 investigated means for phantom fabrication, material characterisation and PAI of phantoms. Following a pilot multi-centre phantom imaging study within the consortium, the IPASC settled on an internationally agreed set of standardised recommendations and imaging procedures. This leads to advances in: (1) quantitative comparison of PAI data acquired with different data acquisition and analysis methods; (2) provision of a publicly available reference data set for testing new algorithms; and (3) technical validation of new and existing PAI devices across multiple centres.
As a growing number of research groups exploit photoacoustic imaging (PAI), there is an increasing need to establish common standards for photoacoustic data and images in order to facilitate open access, use, and exchange of data between different groups. As part of a working group within the International Photoacoustic Standardisation Consortium (IPASC), we established a minimal list of metadata parameters necessary to ensure inter-group interpretability of image datasets. To this end, we propose that photoacoustic images should at least contain metadata information regarding acquisition wavelengths, pulse-to-pulse laser energy, and information regarding transducer design and illumination geometry. We also suggest recommendations for a standardized data format for both raw time series data as well as processed photoacoustic image data. Specifically, we recommend using HDF5 as the standard data format for raw time series data, because it is a widely used open and scalable format that enables fast access times. To support long-term clinical translation of photoacoustics, we propose to extend DICOM, the prevailing standardized medical image format, to officially support PA images. An international data format standard for photoacoustics will be an important first step towards accelerated system development by facilitating inter-group data exchange and inter-device performance comparison. This effort will thus form a foundation to integrate basic research with clinical translation of PAI.
Multispectral photoacoustic (PA) imaging is a prime modality to monitor hemodynamics and changes in blood oxygenation (sO2). Although sO2 changes can be an indicator of brain activity both in normal and in pathological conditions, PA imaging of the brain has mainly focused on small animal models with lissencephalic brains. Therefore, the purpose of this work was to investigate the usefulness of multispectral PA imaging in assessing sO2 in a gyrencephalic brain. To this end, we continuously imaged a porcine brain as part of an open neurosurgical intervention with a handheld PA and ultrasonic (US) imaging system in vivo. Throughout the experiment, we varied respiratory oxygen and continuously measured arterial blood gases. The arterial blood oxygenation (SaO2) values derived by the blood gas analyzer were used as a reference to compare the performance of linear spectral unmixing algorithms in this scenario. According to our experiment, PA imaging can be used to monitor sO2 in the porcine cerebral cortex. While linear spectral unmixing algorithms are well-suited for detecting changes in oxygenation, there are limits with respect to the accurate quantification of sO2, especially in depth. Overall, we conclude that multispectral PA imaging can potentially be a valuable tool for change detection of sO2 in the cerebral cortex of a gyrencephalic brain. The spectral unmixing algorithms investigated in this work will be made publicly available as part of the open-source software platform Medical Imaging Interaction Toolkit (MITK).
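Linear spectral unmixing, as compared above, amounts to a least-squares fit of reference chromophore spectra to the measured multispectral signal, from which sO2 is computed as the oxyhemoglobin fraction. The sketch below shows this with a small synthetic endmember matrix; the numerical values are illustrative placeholders, not literature extinction coefficients for hemoglobin.

```python
import numpy as np

# Endmember matrix M: one row per wavelength, columns [HbO2, Hb].
# Values are illustrative only, chosen so that Hb dominates at the
# shortest wavelength and HbO2 at the longest.
M = np.array([
    [0.6, 1.4],   # shorter wavelength: Hb absorbs more
    [1.0, 1.0],   # near-isosbestic point
    [1.4, 0.7],   # longer wavelength: HbO2 absorbs more
])

def unmix_so2(spectrum):
    """Estimate sO2 from a multispectral PA signal at one voxel.

    Solves spectrum = M @ c in the least-squares sense, then returns
    the oxygenated fraction c_HbO2 / (c_HbO2 + c_Hb).
    """
    c, *_ = np.linalg.lstsq(M, np.asarray(spectrum, float), rcond=None)
    c = np.clip(c, 0.0, None)  # crude non-negativity constraint
    return c[0] / (c[0] + c[1])

# A noiseless spectrum generated from 80% HbO2 / 20% Hb:
spectrum = M @ np.array([0.8, 0.2])
so2 = unmix_so2(spectrum)
```

In real tissue the measured spectrum is additionally modulated by the wavelength-dependent fluence, which is exactly the spectral coloring effect that limits quantitative accuracy at depth, as noted in the abstract.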
KEYWORDS: Reconstruction algorithms, Photoacoustic spectroscopy, Signal to noise ratio, Ultrasonography, Transducers, Image resolution, Pulsed laser operation, Chromophores, In vitro testing, In vivo imaging
Reconstruction of photoacoustic images acquired with clinical ultrasound transducers is traditionally performed using the delay and sum (DAS) beamforming algorithm. Recently, the delay multiply and sum (DMAS) beamforming algorithm has been shown to provide increased contrast, signal to noise ratio (SNR) and resolution in PA imaging. The main reason for the continued use of DAS beamforming in photoacoustics is its linearity in reconstructing the PA signal to the initial pressure generated by the absorbed laser pulse. This is crucial for the identification of different chromophores in multispectral PA applications and DMAS has not yet been demonstrated to provide this property. Furthermore, due to its increased computational complexity, DMAS has not yet been shown to work in real time.
We present an open-source real-time variant of the DMAS algorithm which ensures linearity of the reconstruction while still providing increased SNR and therefore enables use of DMAS for multispectral PA applications. This is demonstrated in vitro and in vivo. The DMAS and reference DAS algorithms were integrated in the open-source Medical Imaging Interaction Toolkit (MITK) and are available to the community as real-time capable GPU implementations.
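A simplified single-pixel sketch of DMAS is given below, assuming the channel data have already been delayed to the pixel of interest. It uses the standard signed-square-root formulation, which keeps the output dimensionally consistent with pressure, and exploits the algebraic identity that the sum over all channel pairs equals half of the squared sum minus the sum of squares; the real-time GPU implementation in MITK is, of course, more involved.

```python
import numpy as np

def das_pixel(delayed):
    """Delay-and-sum: plain sum of the per-channel delayed samples."""
    return np.sum(delayed)

def dmas_pixel(delayed):
    """Delay-multiply-and-sum for one pixel.

    Combines sign(s_i * s_j) * sqrt(|s_i * s_j|) over all channel
    pairs i < j, computed via the identity
    sum_{i<j} x_i x_j = ((sum x)^2 - sum x^2) / 2
    applied to the signed square roots of the samples.
    """
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = np.sum(s)
    return 0.5 * (total ** 2 - np.sum(s ** 2))

# Example: combine eight delayed channel samples for one pixel.
channels = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.95, 1.05])
das_value = das_pixel(channels)
dmas_value = dmas_pixel(channels)
```

The pairwise multiplication is what suppresses uncorrelated noise across channels relative to DAS, and it is also the source of the non-linearity that the abstract's real-time variant addresses for multispectral use.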
KEYWORDS: Software development, Blood, Photoacoustic spectroscopy, In vivo imaging, Ultrasonography, Imaging systems, Scanners, Medical imaging, Control systems, Ultrasonics
Photoacoustic (PA) systems based on clinical linear ultrasound arrays have become increasingly popular in translational PA research. Such systems can more easily be integrated in a clinical workflow due to the simultaneous access to ultrasonic imaging and their familiarity to clinicians. In contrast to more complex setups, handheld linear probes can be applied to a large variety of clinical use cases. However, most translational work with such scanners is based on proprietary development and as such not accessible to the community.
In this contribution, we present a custom-built, hybrid, multispectral, real-time photoacoustic and ultrasonic imaging system with a linear array probe that is controlled by software developed within the highly customisable and extendable open-source software platform Medical Imaging Interaction Toolkit (MITK). Our software offers direct control of both the laser and the ultrasonic system and may thus serve as a starting point for various translational research and development. To demonstrate the extensibility of our system, we developed an open-source software plugin for real-time in vivo blood oxygenation measurements. Blood oxygenation is estimated by spectral unmixing of hemoglobin chromophores. The performance is demonstrated on in vivo measurements of the common carotid artery as well as peripheral extremity vessels of healthy volunteers.
KEYWORDS: Sensors, Monte Carlo methods, Image processing, Photoacoustic spectroscopy, Reconstruction algorithms, Computer simulations, Error analysis, Data modeling, Tissues, Medical imaging
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
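The confidence-thresholded averaging described above can be sketched as follows. The threshold value and the assumption that confidence is expressed as a per-voxel score in [0, 1] are illustrative choices, not the paper's exact formulation, which derives confidence intervals from conditional probability densities.

```python
import numpy as np

def confident_mean(estimates, confidence, threshold=0.9):
    """Average voxel-wise estimates only where confidence is high.

    `estimates` and `confidence` are arrays of the same shape, with
    confidence scores assumed to lie in [0, 1]. Falls back to the
    plain mean over the whole region if no voxel passes the threshold.
    """
    estimates = np.asarray(estimates, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    mask = confidence >= threshold
    if not mask.any():
        return estimates.mean()
    return estimates[mask].mean()

# Example: two reliable voxels agree on ~1.0; one low-confidence
# outlier at 5.0 is excluded from the region-of-interest average.
roi_mean = confident_mean([1.0, 1.0, 5.0], [0.95, 0.99, 0.10])
```

Restricting the average to high-confidence voxels is what yields the improvement over plain intensity thresholding reported in the abstract: the mask reflects where the fluence-estimation assumptions hold, not merely where the signal is strong.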