Significance: The proposed binary tomography approach was able to accurately recover vascular structures, potentially enabling the use of binary tomography algorithms in scenarios such as therapy monitoring and hemorrhage detection in different organs.
Aim: Photoacoustic tomography (PAT) involves reconstruction of vascular networks, with direct implications for cancer research, cardiovascular studies, and neuroimaging. Various methods have been proposed for recovering vascular networks in photoacoustic imaging; however, most are two-step (image reconstruction followed by image segmentation) in nature. We propose a binary PAT approach that enables direct reconstruction of the vascular network from the acquired photoacoustic sinogram data.
Approach: The binary tomography approach relies on solving a dual-optimization problem to reconstruct images in which every pixel takes a binary value (i.e., either background or absorber). The binary tomography approach was compared against backprojection, Tikhonov regularization, and sparse recovery-based schemes.
Results: Numerical simulations, a physical phantom experiment, and in-vivo rat brain vasculature data were used to compare the performance of the different algorithms. The results indicate that the binary tomography approach improved vasculature recovery on in-silico data by 10% in terms of the Dice similarity coefficient relative to the other reconstruction methods.
Conclusion: The proposed algorithm demonstrates superior vasculature recovery with limited data both visually and based on quantitative image metrics.
KEYWORDS: Image restoration, Acquisition tracking and pointing, Transducers, Photoacoustic tomography, Numerical simulations, In vivo imaging, Data acquisition, Acoustics, Photoacoustic spectroscopy
The recovery of the initial pressure rise distribution tends to be an ill-posed problem in the presence of noise and when limited independent data are available, necessitating regularization. The standard regularization schemes include Tikhonov, ℓ1-norm, and total variation. These schemes weight the singular values equally irrespective of the noise level present in the data. This work introduces a fractional framework that weights the singular values by a fractional power. The fractional framework was implemented for the Tikhonov, ℓ1-norm, and total-variation regularization schemes and outperformed the standard schemes by 54% in terms of observed contrast/signal-to-noise ratio.
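To make the singular-value weighting concrete, here is a minimal Python sketch of a fractional filter applied through the SVD. The filter form f_i = σ_i^(2γ)/(σ_i^(2γ) + λ^(2γ)) is an illustrative assumption (γ = 1 recovers standard Tikhonov), not necessarily the paper's exact choice.

```python
import numpy as np

def fractional_tikhonov(A, b, lam, gamma):
    """Sketch of a fractional SVD filter: gamma = 1 gives the standard
    Tikhonov filter sigma^2 / (sigma^2 + lam^2); other gamma values damp
    small singular values differently. The paper's exact filter may differ
    (assumption)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # fractional filter factors applied to the spectral coefficients
    f = s**(2 * gamma) / (s**(2 * gamma) + lam**(2 * gamma))
    coeffs = f * (U.T @ b) / s
    return Vt.T @ coeffs
```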
KEYWORDS: Signal to noise ratio, Data modeling, Sensors, Image resolution, Photoacoustic tomography, In vivo imaging, Acoustics, Model-based design, Tissues, Tomography
Photoacoustic tomography tends to be an ill-conditioned problem with noisy limited data, requiring imposition of regularization constraints, such as standard Tikhonov (ST) or total variation (TV), to reconstruct a meaningful initial pressure rise distribution from the tomographic acoustic measurements acquired at the boundary of the tissue. However, these regularization schemes do not account for the nonuniform sensitivity arising from limited detector placement at the boundary of the tissue as well as other system parameters. For the first time, two regularization schemes were developed within the Tikhonov framework to address these issues in photoacoustic imaging: model-resolution-based spatially varying regularization and fidelity-embedded regularization based on orthogonality among the columns of the system matrix. These were systematically evaluated with the help of numerical and in-vivo mice data. It was shown that the performance of the proposed spatially varying regularization schemes was superior (with at least 2 dB, or 1.58 times, improvement in the signal-to-noise ratio) compared to ST-/TV-based regularization schemes.
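As a rough illustration of spatially varying regularization, the sketch below scales the penalty at each pixel by the local sensitivity (column energy of the system matrix); the paper's model-resolution and fidelity-embedded weightings are more sophisticated, so treat this as an assumption-laden toy version.

```python
import numpy as np

def spatially_varying_tikhonov(A, b, lam0):
    """Hedged sketch: scale the regularization at each pixel by the local
    sensitivity (column energy of A) so that the penalty is not uniform
    across well-sensed and poorly sensed regions. The paper's exact
    weighting may differ (assumption)."""
    col_energy = np.sum(A**2, axis=0)               # per-pixel sensitivity
    L = lam0 * np.diag(col_energy / col_energy.max())
    return np.linalg.solve(A.T @ A + L, A.T @ b)
```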
Several algorithms exist to solve the photoacoustic image reconstruction problem depending on the expected reconstructed image features. These reconstruction algorithms typically promote one feature, such as smoothness or sharpness, in the output image. Combining these features using a guided filtering approach, which requires an input and a guiding image, was attempted in this work. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image to improve these results. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve (by as much as 11.23 dB) the signal-to-noise ratio of the reconstructed images, with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them.
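The postprocessing idea can be sketched with the classic He et al. guided filter, taking the Tikhonov/TV reconstruction as the input image and the backprojection result as the guide; the radius and eps values below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-4):
    """He et al.-style guided filter: 'src' is a Tikhonov/TV reconstruction
    and 'guide' a backprojection image. Parameter values are illustrative."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I**2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)            # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```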
As the limited-data photoacoustic tomographic image reconstruction problem is known to be ill-posed, iterative reconstruction methods have proven effective in providing good quality initial pressure distributions. However, these iterative methods often require a large number of iterations to converge, making the image reconstruction procedure computationally inefficient. In this work, two variants of vector polynomial extrapolation techniques were deployed to accelerate two standard iterative photoacoustic image reconstruction algorithms: regularized steepest descent and total variation regularization. It is shown using numerical and experimental phantom cases that the proposed extrapolation methods can provide significant acceleration (as high as 4.7 times) along with the added advantage of improved reconstructed image quality.
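One common vector extrapolation variant, minimal polynomial extrapolation (MPE), is sketched below as an example of how a short window of fixed-point iterates can be combined into an accelerated estimate; whether this matches the paper's exact variants is an assumption.

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation from a list of fixed-point iterates
    X = [x0, x1, ..., x_{k+1}] (one possible variant of the vector
    extrapolation described above)."""
    X = np.asarray(X)
    U = np.diff(X, axis=0).T              # columns u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                   # normalized combination weights
    return X[:-1].T @ gamma               # extrapolated solution
```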
Photoacoustic (PA) signals collected at the boundary of tissue are always band-limited. A deep neural network was proposed to enhance the bandwidth (BW) of the detected PA signal, thereby improving the quantitative accuracy of the reconstructed PA images. A least squares-based deconvolution method that utilizes the Tikhonov regularization framework was used for comparison with the proposed network. The proposed method was evaluated using both numerical and experimental data. The results indicate that the proposed method is capable of enhancing the BW of the detected PA signal, which in turn improves the contrast recovery and quality of the reconstructed PA images without adding any significant computational burden.
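The comparison method (not the proposed network) can be sketched as a Tikhonov-regularized least squares deconvolution of the detected signal by the transducer impulse response; the matrix construction and lam value below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def deconvolve_pa_signal(y, h, lam=0.05):
    """Sketch of the least squares comparison method: Tikhonov-regularized
    deconvolution of a band-limited PA signal y by a transducer impulse
    response h (assumes len(h) <= len(y); lam is illustrative)."""
    n = len(y)
    # lower-triangular (causal) convolution matrix built from h
    H = toeplitz(np.r_[h, np.zeros(n - len(h))], np.zeros(n))
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
```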
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or frequency domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
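Basic frequency-domain deconvolution with a Tikhonov-type spectral filter, which AFF and ASSF refine with analytical filters, can be sketched as follows; the filter and padding choices are assumptions.

```python
import numpy as np

def fourier_deconvolve(ctc, aif, lam=0.1):
    """Hedged sketch of frequency-domain deconvolution of a concentration
    time curve (ctc) by the arterial input function (aif) with a
    Tikhonov-type spectral filter."""
    n = 2 * len(ctc)                      # zero-pad to reduce wrap-around
    C, A = np.fft.rfft(ctc, n), np.fft.rfft(aif, n)
    irf = np.fft.irfft(np.conj(A) * C / (np.abs(A)**2 + lam**2), n)
    # up to scaling: blood flow ~ max(irf), blood volume ~ sum(irf),
    # mean transit time ~ sum(irf) / max(irf)
    return irf[:len(ctc)]
```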
The model-based image reconstruction techniques for photoacoustic (PA) tomography require explicit regularization. An error estimate (η²) minimization-based approach was proposed and developed for determining the regularization parameter for PA imaging. The regularization was used within the Lanczos bidiagonalization framework, which provides the advantage of dimensionality reduction for a large system of equations. It was shown that the proposed method is computationally faster than state-of-the-art techniques and provides similar performance in terms of quantitative accuracy of the reconstructed images. It was also shown that the error estimate (η²) can be utilized to determine a suitable regularization parameter for other popular techniques, such as Tikhonov, exponential, and nonsmooth (ℓ1 and total variation norm-based) regularization methods.
KEYWORDS: Tissues, Sensors, Near infrared, Animal model studies, Signal attenuation, Monte Carlo methods, Absorption, Data modeling, Optical properties, Diffusion
The attenuation of near-infrared (NIR) light intensity as it propagates in a turbid medium like biological tissue is described by the modified Beer–Lambert law (MBLL). The MBLL is generally used to quantify changes in tissue chromophore concentrations in NIR spectroscopic data analysis. Even though the MBLL is effective for qualitative comparison, its applicability is limited across tissue types and tissue dimensions. In this work, we introduce Lambert-W function-based modeling of light propagation in biological tissues, a generalized version of the Beer–Lambert model. The proposed modeling provides a parametrization of tissue properties that includes two attenuation coefficients, μ0 and η. We validated our model against Monte Carlo simulation, the gold standard for modeling NIR light propagation in biological tissue. We included numerous human and animal tissues to validate the proposed empirical model, including an inhomogeneous adult human head model. The proposed model, which has a closed (analytical) form, is the first of its kind to provide accurate modeling of NIR light propagation in biological tissues.
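For illustration, one two-parameter attenuation law whose closed form involves the Lambert-W function arises from the ODE dI/dz = -μ0 I/(1 + ηI), which reduces to the Beer–Lambert law as η → 0; whether this matches the paper's exact parametrization is an assumption.

```python
import numpy as np
from scipy.special import lambertw

def lambertw_attenuation(I0, z, mu0, eta):
    """Illustrative two-parameter model: solves dI/dz = -mu0*I/(1 + eta*I),
    whose solution is I(z) = W(eta*I0*exp(eta*I0 - mu0*z))/eta; it reduces
    to Beer-Lambert (I = I0*exp(-mu0*z)) as eta -> 0. The paper's exact
    parametrization may differ (assumption)."""
    arg = eta * I0 * np.exp(eta * I0 - mu0 * z)
    return np.real(lambertw(arg)) / eta
```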
An optimal measurement selection strategy based on incoherence among the rows (corresponding to measurements) of the sensitivity (or weight) matrix is proposed for near-infrared diffuse optical tomography. As incoherence among measurements can be seen as providing maximally independent information for the estimation of optical properties, it offers a principled way to quantify how independent a particular measurement is from its counterparts. The proposed method was compared with the recently established data-resolution matrix-based approach for the optimal choice of independent measurements and, using simulated and experimental gelatin phantom data sets, shown to be superior, as it does not require an optimal regularization parameter to provide the same information.
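A greedy version of the incoherence criterion is sketched below: measurements (rows of the sensitivity matrix) are added one at a time so that each new row has the smallest normalized inner product with those already chosen; the paper's exact selection rule may differ.

```python
import numpy as np

def select_incoherent_rows(J, m):
    """Greedy sketch: pick m rows of the sensitivity matrix J that are
    mutually most incoherent, as a proxy for maximally independent
    measurements. Starting from the strongest row is an arbitrary choice."""
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)
    chosen = [int(np.argmax(np.linalg.norm(J, axis=1)))]
    while len(chosen) < m:
        coh = np.abs(Jn @ Jn[chosen].T).max(axis=1)  # coherence vs. chosen set
        coh[chosen] = np.inf                         # exclude already-picked rows
        chosen.append(int(np.argmin(coh)))
    return chosen
```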
In this report, we present a Born-ratio type of data normalization for reconstruction of the initial acoustic pressure distribution in photoacoustic tomography (PAT). The normalized Born-ratio data are obtained as the ratio of the photoacoustic pressure obtained with the tissue sample in a coupling medium to that obtained using the coupling medium alone. It is shown that this type of data normalization improves the quantitation (intrinsic contrast) of the reconstructed images in comparison to the traditional (unnormalized) techniques currently available in PAT. Studies are carried out using various tissue samples. The robustness of the proposed method is studied at various noise levels added to the collected data. The improvement in quantitation can enable accurate estimation of the pathophysiological parameter (optical absorption coefficient, μa) of the tissue sample under investigation, leading to better sensitivity in PAT.
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
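For orientation, LSQR with damping is available in SciPy; the sketch below pairs it with a simple discrepancy-style sweep over the damping parameter as a stand-in for the paper's automated selection (an assumption, not the actual algorithm).

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsqr_reconstruct(A, b, noise_level, damps=np.logspace(0, -4, 20)):
    """Minimal sketch: LSQR solves min ||Ax - b||^2 + damp^2 ||x||^2 in a
    reduced (bidiagonalized) subspace; the damping parameter plays the role
    of the Tikhonov regularization parameter. The sweep below is a simple
    discrepancy-style selection, not the paper's method (assumption)."""
    for d in damps:                       # largest damping first
        x = lsqr(A, b, damp=d, iter_lim=200)[0]
        if np.linalg.norm(A @ x - b) <= noise_level:
            return x, d                   # most-regularized fit at the noise level
    return x, d                           # fall back to the least-damped solution
```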
Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed compared with traditional underdetermined cases. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder–Mead simplex method is proposed here to perform the image reconstruction and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient, owing to being regularization free and involving only repeated forward calculations.
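A minimal sketch of the regularization-free idea, assuming a user-supplied forward model and one unknown absorption value per segmented region:

```python
import numpy as np
from scipy.optimize import minimize

def region_wise_recon(forward, measured, n_regions):
    """With structural priors, only one absorption value per segmented
    region is unknown, so a gradient-free simplex search over those few
    values suffices. 'forward(mu_per_region)' is a user-supplied forward
    model (assumption); no Jacobian or regularization term is needed."""
    def misfit(mu):
        return np.linalg.norm(forward(mu) - measured)**2
    x0 = np.full(n_regions, 0.01)         # homogeneous initial guess (cm^-1)
    res = minimize(misfit, x0, method='Nelder-Mead')
    return res.x
```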
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes underdetermined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter based on the regularized minimal residual method (MRM) is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ ℓ2-norm-based regularization, which is known to remove high-frequency components and make the reconstructed images appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, combined with ℓ1-norm-based regularization, can provide better robustness to noise and better contrast recovery compared with conventional ℓ2-based techniques. Moreover, it is shown that the proposed ℓ1-based technique is computationally efficient compared with its ℓ2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame; any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
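A sketch of the linear dynamic framework, assuming the inter-frame change Δx is recovered from difference data Δb by ℓ1-regularized least squares solved with iterative soft thresholding (ISTA):

```python
import numpy as np

def ista_l1(A, db, lam, n_iter=200):
    """Recover the inter-frame change dx from difference data db by solving
    min 0.5*||A dx - db||^2 + lam*||dx||_1 via iterative soft thresholding;
    the step size comes from the spectral norm of A."""
    t = 1.0 / np.linalg.norm(A, 2)**2     # 1 / (largest singular value)^2
    dx = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = dx - t * A.T @ (A @ dx - db)  # gradient step on the data term
        dx = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return dx
```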
Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to implement in real time. Graphics processing units (GPUs) offer desktop massive parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library was used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to 40× using GPUs compared with traditional CPUs for three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts reconstruction to at most 13,377 optical parameters.
Recent interest in the use of dual-modality imaging in the field of optical near-infrared (NIR) tomography has increased, specifically with the use of structural information from, for example, MRI. Although MRI images provide high-resolution structural information about tissue, they lack the contrast and functional information needed to investigate physiology, whereas NIR imaging has been established as a high-contrast modality, but one that suffers from low resolution. To this effect, the use of dual-modality data has been shown to increase the qualitative and quantitative accuracy of clinical information that can be obtained from tissue. Results so far have indicated that, provided accurate a priori structural information is available, such dual-modality imaging techniques can be used for the detection and characterization of breast cancer in vivo, as well as the investigation of brain function and physiology in both human and small-animal studies.
Although there has been much interest in and research into the most suitable and robust use of a priori structural information within the reconstruction of optical properties of tissue, little work has been done to investigate how much accuracy is needed from the structural MRI images to obtain the most clinically reliable information. In this paper, we present and demonstrate the two most common applications of a priori information in image reconstruction, namely soft and hard priors. The effect of inaccuracies in the a priori structural information on the reconstructed NIR images is presented, showing that provided the error of the a priori information is within 20% in terms of size and location, adequate NIR images can be reconstructed.
Manipulation of interstitial fluid pressure (IFP) has clinical potential when used in conjunction with near-infrared spectroscopy for detection and characterization of breast cancer. IFP is a function of blood chemistry, vessel microanatomy, mechanical properties of the tissue, tissue geometry, and external force. IFP has been demonstrated to be higher in tumors than in normal tissue, and it has been suggested that increased IFP can lead to changes in near-infrared absorption and scattering coefficients. While it is known that external forces can increase IFP, the relationship of force to IFP in a viscoelastic, hyperelastic solid such as tissue is complex. Fluid pressure measurements were taken in gelatin phantoms of elastic modulus equivalent to that of adipose and glandular tissues of the breast using a WaveMap pressure transducer. 3D pressure maps were obtained for the volumes of the phantoms with an externally applied force of 10 mmHg, demonstrating the contributions of shear stress, nonlinear mechanical properties, and tissue geometry. Linear elastic computational models were formulated for breast tissue with and without an inclusion of tumor-like mechanical properties. Comparison of experimental and computational model data indicates that light external pressure can lead to a heterogeneous IFP distribution within tissues and increased IFP gradients around tumor-like inclusions.
Two techniques to regularize the diffuse optical tomography inverse problem were compared for a variety of simulated test domains. One method repeats the single-step Tikhonov approach until a stopping criterion is reached, regularizing the inverse problem by scaling the maximum of the diagonal of the inversion matrix by a factor held constant throughout the iterative reconstruction. The second method, a modified Levenberg-Marquardt formulation, uses an identical implementation but reduces the factor at each iteration. Four test geometries of increasing complexity were used to test the performance of the two techniques under a variety of conditions, including varying amounts of data noise, different initial parameter estimates, and different initial values of the regularization factor. It was found that for most cases tested, holding the scaling factor constant provided images that were more robust to both data noise and initial homogeneous parameter estimates. However, the results for a complex test domain that most resembled realistic tissue geometries were less conclusive.
Cramér-Rao bounds (CRB) for the expected variance in the parameter space were examined for diffuse optical tomography (DOT) to define the lower bound (CRLB) of an ideal system. The results show that the relative standard deviation in the optical parameter estimate follows an inverse quadratic function with respect to the signal-to-noise ratio (SNR). The CRLB was estimated for three methods of including spatial constraints. The CRLB estimate decreased by a factor of 10 when parameter reduction using spatial constraints (hard priors) was enforced, whereas inclusion of spatial priors in the regularization matrix (soft priors) decreased the CRLB estimate only by a factor of 4. The maximum reduction in variance from the use of spatial priors occurred in the background of the imaging domain as opposed to localized target regions. As expected, the variance in the recovered properties increased as the number of parameters to be estimated increased. Additionally, increasing the SNR beyond a certain point did not influence the outcome of the optical property estimation when prior information was available.
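The CRLB computation itself is compact; a sketch under the standard Gaussian-noise assumption is shown below (hard priors correspond to a Jacobian with fewer columns, soft priors to an added prior term in the information matrix):

```python
import numpy as np

def crlb_std(J, sigma):
    """For Gaussian noise of standard deviation sigma, the Fisher
    information is J^T J / sigma^2, and the CRLB on each parameter's
    standard deviation is the square root of the corresponding diagonal
    of its inverse. With hard priors, J shrinks to one column per region;
    with soft priors, a prior term L^T L is added to the information."""
    F = J.T @ J / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(F)))
```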
KEYWORDS: 3D metrology, Optical fibers, Near infrared, Absorption, 3D image reconstruction, 3D image processing, 3D modeling, Optical tomography, Tissue optics, Data acquisition
The image resolution and contrast in near-infrared (NIR) tomographic image reconstruction are in part affected by the number of available boundary measurements. In the presented study, singular value decomposition (SVD) of the Jacobian was used to quantify the benefit of the total number of measurements that can be obtained in two-dimensional (2D) and three-dimensional (3D) problems. Reconstructed images improve as the number of measurements increases, with a central anomaly showing more improvement than a more superficial one. It is also shown that, for a 2D model of the domain, the increase in the amount of useful data drops exponentially with an increase in the total number of measurements. For 3D NIR tomography, three fundamentally different data collection strategies are discussed and compared. It is shown that, for a 3D NIR problem, using three planes of data gives more independent information than a single plane of data. Given three planes of data collection fibers, it is shown that although more data can be collected with the out-of-plane data collection strategy than with the in-plane-only case, the additional data do not increase image accuracy dramatically, whereas they do increase the data collection and computation time.
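A sketch of the SVD analysis, assuming a simple relative threshold as the noise-dependent cutoff for counting useful singular values:

```python
import numpy as np

def useful_measurements(J, noise_floor=1e-3):
    """Count singular values of the Jacobian above a noise-dependent
    threshold as a proxy for the number of independent (useful)
    measurements; the relative threshold is illustrative."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s / s[0] > noise_floor))
```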
While near-infrared tomography has advanced considerably over the past decade, key technological designs still limit what can be achieved, especially in terms of image acquisition speed. One of these fundamental limitations is the requirement that the source light be delivered sequentially or through frequency encoding of the time signal. Sequential delivery inherently limits the speed at which images can be acquired. Modulation frequency-dependent encoding of the sources solves the problem by allowing sources near the same location to be turned on simultaneously, thereby improving the acquisition speed, but suffers from dynamic range problems. In this study, we demonstrate an alternative parallel source implementation approach that uses spectral wavelength encoding of the source. This new technique allows many sources to be input into the tissue at the same time, as long as the spectrally encoded signals can be decoded at the output. To test this approach, 8 single-mode laser diodes with wavelengths distributed within a narrow 10-nm range are used, and their light is input into the tissue phantom simultaneously. On the detection side, a high-resolution spectrometer is used to spatially spread out the signals, facilitating parallel detection of the signal from each spectrally encoded source. This robust approach allows rapid parallel sampling of all sources at all detection locations. The implementation of this technique in an NIR tomography application is examined, and preliminary results of video-rate imaging at 30 Hz are presented.
An iterative method for the reconstruction of optical properties of a low-scattering object, which uses a Monte-Carlo-based forward model, is developed. A quick way to construct and update the Jacobian needed to reconstruct a discretized object, based on the perturbation Monte-Carlo (PMC) approach, is demonstrated. The projection data are handled either one view at a time, using a propagation-backpropagation (PBP) strategy, where the dimension of the inverse problem and consequently the computation time are smaller, or, when this approach failed, using all the views simultaneously with a full dataset. The main observations and results are as follows. 1. Whereas PMC gives an accurate and quick method for constructing the Jacobian, the same approach, when adapted to update the computed projection data, does not yield data accurate enough for the iterative reconstruction procedure to converge. 2. A priori knowledge of the location of inhomogeneities in the object reduces the dimension of the problem, leading to faster convergence in all the cases considered, such as an object with multiple inhomogeneities and data handled one view at a time (i.e., the PBP approach). 3. On the other hand, without a priori knowledge of the location of inhomogeneities, the problem was too ill-posed for the PBP approach to converge to meaningful reconstructions when both absorption and scattering coefficients were considered unknowns. Finally, to bring out the effectiveness of this method for reconstructing low-scattering objects, we apply a diffusion equation-based algorithm to a dataset from one of the low-scattering objects and show that it fails to reconstruct the object inhomogeneities.
In this work, we show that for imaging tissue with a low-scattering background, reconstruction algorithms using derivatives calculated through the perturbation Monte-Carlo method work well, whereas diffusion equation-based methods fail. An easy way to estimate the Jacobian using an analytical expression obtained from the perturbation Monte-Carlo method is demonstrated. We successfully reconstructed both absorption and scattering inhomogeneities in objects that fall under the transport equation regime. Experimental data gathered from tissue-equivalent phantoms with a low-scattering background and absorption and/or scattering inhomogeneities were reconstructed using the above method.
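The PMC idea for the absorption Jacobian can be sketched as follows, assuming each detected photon's per-voxel path lengths are stored; for an exponential attenuation model, the derivative of a photon's weight with respect to the μa of voxel k is -(path length in k) × (weight). The data layout below is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

def pmc_absorption_jacobian(paths, weights, n_voxels):
    """Sketch of a perturbation Monte-Carlo absorption Jacobian for one
    source-detector pair: accumulate d(weight)/d(mu_a of voxel k)
    = -(path length in voxel k) * weight over all detected photons.
    'paths[i]' maps voxel index -> path length for photon i (assumed
    data layout)."""
    J = np.zeros(n_voxels)
    for lengths, w in zip(paths, weights):
        for k, L in lengths.items():
            J[k] += -L * w
    return J
```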