Sepsis is responsible for over 50% of hospital deaths. Changes in mitochondrial redox state can be indicative of cellular and organ function; therefore, a method to continuously evaluate mitochondrial oxygen utilization is critical. Cytochrome-c oxidase (CCO) is a mitochondrial enzyme that participates in oxidative phosphorylation and interacts with near-infrared light, potentially yielding an optical indicator of cellular metabolism. In this work, we use a computational framework to study the feasibility of utilizing photoplethysmography (PPG) signals for detecting CCO in the presence of oxygenated (HbO2) and deoxygenated (Hb) hemoglobin. We use a 3D Monte Carlo model of light absorption and transport in tissue to generate optical readouts in the form of temporal PPG signals corresponding to different physiological states. Furthermore, a machine learning model is trained to predict the CCO redox state from PPG signals containing different concentrations of CCO and hemoglobin species.
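The spectroscopic idea underlying this approach can be illustrated with a modified Beer-Lambert unmixing step: given attenuation changes at several near-infrared wavelengths, concentration changes of HbO2, Hb, and the CCO redox difference can be recovered by least squares. The sketch below is illustrative only; the extinction-coefficient values in `E` are placeholder numbers, not published spectra, and the abstract's actual pipeline relies on Monte Carlo simulation plus machine learning rather than this direct inversion.

```python
import numpy as np

# Placeholder extinction coefficients (1/(mM*cm)) at four NIR wavelengths
# for HbO2, Hb, and the CCO oxidized-minus-reduced difference spectrum.
# Real values would come from tabulated chromophore spectra.
wavelengths_nm = [760, 805, 850, 900]
E = np.array([
    [0.60, 1.55, 1.10],   # 760 nm: [HbO2, Hb, dCCO]
    [0.90, 0.85, 2.00],   # 805 nm
    [1.10, 0.70, 2.30],   # 850 nm
    [1.25, 0.80, 1.90],   # 900 nm
])

def unmix_chromophores(delta_A, pathlength_cm=1.0):
    """Recover concentration changes [dHbO2, dHb, dCCO] (mM) from
    attenuation changes delta_A (one value per wavelength) by
    least-squares inversion of the modified Beer-Lambert law."""
    sol, *_ = np.linalg.lstsq(E * pathlength_cm, delta_A, rcond=None)
    return sol

# Forward-simulate attenuation for known concentration changes, then invert.
true_dc = np.array([0.02, -0.01, 0.005])
delta_A = E @ true_dc
print(unmix_chromophores(delta_A))
```

Because the extinction matrix has more wavelengths than chromophores, the least-squares fit is overdetermined, which is what makes separating the small CCO signal from the dominant hemoglobin absorption plausible.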
Measuring hemoglobin (Hb) levels is required for the assessment of various health conditions, such as anemia, a condition in which there are insufficient healthy red blood cells to carry enough oxygen to the body’s tissues. Measuring Hb levels typically requires the extraction of a blood sample, which is then sent to a laboratory for analysis. This invasive procedure complicates the continuous monitoring of Hb levels. Noninvasive techniques, including imaging and photoplethysmography (PPG) signals combined with machine learning, are being investigated for continuous Hb measurement. However, the limited availability of real training data hinders the generalization and implementation of such techniques in healthcare settings. In this work, we present a computational model based on Monte Carlo simulations that can generate multispectral PPG signals covering a broad range of Hb levels. These signals are then used to train a Deep Learning (DL) model to estimate hemoglobin levels. Through this approach, the DL model learns valuable insights about the relationships among PPG signals, oxygen saturation, and Hb levels. The signals were generated by propagating a source through a volume containing the skin tissue properties and the target physiological parameters. The source consisted of plane waves at the 660 nm and 890 nm wavelengths. Hb values ranging from 6 g/dL to 18 g/dL were used to generate 468 PPGs to train a Convolutional Neural Network (CNN). Initial results show high accuracy in detecting low Hb levels. To the best of our knowledge, the complexity of the biological interactions involved in measuring hemoglobin levels has yet to be fully modeled. The presented model offers an alternative approach to studying the effects of changes in Hb levels on PPG signal morphology and its interaction with other physiological parameters present in the optical path of the measured signals.
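As background for how dual-wavelength PPG carries physiological information, the classic pulse-oximetry feature is the "ratio of ratios" computed from the pulsatile (AC) and baseline (DC) components at the two wavelengths (here 660 nm and 890 nm, matching the abstract's source). The sketch below uses synthetic sinusoidal PPG segments for illustration; it is not the paper's CNN pipeline, which learns such relationships directly from the simulated signals.

```python
import numpy as np

def ac_dc_ratio(ppg):
    """AC (pulsatile) amplitude over DC (baseline) of a PPG segment."""
    return (ppg.max() - ppg.min()) / ppg.mean()

def ratio_of_ratios(ppg_660, ppg_890):
    """Two-wavelength feature R = (AC/DC)_660 / (AC/DC)_890, one of the
    quantities a learned model could relate to SpO2 and Hb levels."""
    return ac_dc_ratio(ppg_660) / ac_dc_ratio(ppg_890)

# Synthetic PPG segments: baseline plus cardiac pulsation (illustrative only).
t = np.linspace(0, 4, 400)                      # 4 s sampled at 100 Hz
ppg_660 = 1.0 + 0.020 * np.sin(2 * np.pi * 1.2 * t)
ppg_890 = 1.0 + 0.030 * np.sin(2 * np.pi * 1.2 * t)
print(round(ratio_of_ratios(ppg_660, ppg_890), 3))
```

In practice the CNN operates on the raw multispectral waveforms rather than hand-crafted features, letting it pick up morphology changes that a single scalar ratio cannot capture.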
Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometrical shapes, such as houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with greater shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training sets, or objects with similar reflectance values present a greater challenge to obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and class-dependent sparse representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the per-pixel class probabilities from each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method on different hyperspectral image datasets.
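A minimal numpy sketch of this kind of probability-level decision fusion follows: per-classifier class-probability maps are combined by a weighted sum of log-probabilities and passed through a softmax. The paper learns this combination with a CNN, so the fixed `weights` vector here is a simplified stand-in for the learned fusion, not the authors' architecture.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_decisions(prob_maps, weights):
    """Fuse per-classifier class-probability maps into one decision.

    prob_maps: (n_classifiers, n_pixels, n_classes) probabilities
    weights:   (n_classifiers,) fusion weights (a stand-in for the
               CNN's learned combination in the paper)
    Returns the fused class label per pixel.
    """
    # Weighted sum of log-probabilities acts like a soft voting rule.
    logits = np.tensordot(weights, np.log(prob_maps + 1e-12), axes=1)
    fused = softmax(logits, axis=-1)
    return fused.argmax(axis=-1)

# Toy example: three classifiers, two pixels, three classes.
probs = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],   # e.g. SVM outputs
    [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]],   # e.g. DNN outputs
    [[0.2, 0.5, 0.3], [0.1, 0.7, 0.2]],   # e.g. sparse representation
])
print(fuse_decisions(probs, np.array([1.0, 1.0, 1.0])))
```

With equal weights this reduces to a product-of-experts vote; the advantage of a learned (CNN) fusion is that the weights can vary per class and per spatial context.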
Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images with improved spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high spatial resolution multispectral image with a lower spatial resolution hyperspectral image to generate a hyperspectral image with both high spatial and spectral resolution. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multi-step process that transitions from low to high spatial resolution through an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy using several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and a CNN. The classification results show better performance in both overall and average accuracy for the images generated with the multi-scale LSTM fusion over the CNN fusion.
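The two metrics used for this comparison, overall accuracy (OA, the fraction of correctly labeled pixels) and average accuracy (AA, the mean of per-class recalls), can be computed from a confusion matrix as below. This is a generic sketch of the standard metrics, not code from the paper.

```python
import numpy as np

def overall_and_average_accuracy(y_true, y_pred, n_classes):
    """Compute OA and AA from predicted vs. reference pixel labels.

    OA weights each pixel equally, so large classes dominate; AA
    weights each class equally, exposing failures on small classes.
    """
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: reference, cols: predicted
    oa = np.trace(cm) / cm.sum()           # correctly labeled / total
    per_class_recall = np.diag(cm) / cm.sum(axis=1)
    return oa, per_class_recall.mean()

# Toy labeling of eight pixels over three classes.
y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 2, 0])
oa, aa = overall_and_average_accuracy(y_true, y_pred, 3)
print(round(oa, 3), round(aa, 3))
```

Reporting both metrics, as the abstract does, guards against a fusion method that scores well overall by sacrificing rare land-cover classes.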