Significance: Functional near-infrared spectroscopy (fNIRS), a well-established neuroimaging technique, enables the monitoring of cortical activation while subjects remain unconstrained. However, motion artifacts are a common type of noise that can hamper the interpretation of fNIRS data. Current methods proposed to mitigate motion artifacts in fNIRS data still depend on expert knowledge and post hoc parameter tuning.
Aim: Here, we report a deep learning method for assumption-free motion artifact removal from fNIRS data. To the best of our knowledge, this is the first investigation to report the use of a denoising autoencoder (DAE) architecture for motion artifact removal.
Approach: To facilitate the training of this deep learning architecture, we (i) designed a specific loss function and (ii) generated data to mimic the properties of recorded fNIRS sequences.
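The data-generation step described above can be illustrated with a minimal NumPy sketch. The signal model below (a slow hemodynamic-like oscillation plus baseline drift, corrupted by spike and baseline-shift artifacts) is an illustrative assumption, not the paper's actual generator; amplitudes, rates, and the MSE term are placeholders chosen for clarity.

```python
import numpy as np

def synth_fnirs_pair(n_samples=1024, fs=10.0, n_spikes=3, seed=0):
    """Generate a (clean, corrupted) fNIRS-like pair for DAE training.

    The clean trace is a slow hemodynamic-like oscillation plus baseline
    drift; the corrupted copy adds measurement noise, spike artifacts,
    and a baseline shift. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    clean = 0.5 * np.sin(2 * np.pi * 0.05 * t)           # hemodynamic-like wave
    clean += 0.1 * np.sin(2 * np.pi * 0.01 * t + 1.0)    # slow baseline drift
    noisy = clean + 0.05 * rng.standard_normal(n_samples)  # measurement noise
    for _ in range(n_spikes):                            # spike artifacts
        i = int(rng.integers(0, n_samples - 5))
        noisy[i:i + 5] += rng.uniform(1.0, 2.0)
    j = int(rng.integers(0, n_samples - 1))              # baseline-shift artifact
    noisy[j:] += rng.uniform(-0.5, 0.5)
    return clean, noisy

clean, noisy = synth_fnirs_pair()
# A DAE would be trained to map `noisy` back to `clean`; a mean-squared-error
# term is one common choice for the reconstruction part of the loss.
mse = float(np.mean((noisy - clean) ** 2))
```

Pairs generated this way give the network ground-truth clean targets, which real recordings cannot provide.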
Results: The DAE model outperformed conventional methods in lowering residual motion artifacts, decreasing mean squared error, and increasing computational efficiency.
Conclusion: Overall, this work demonstrates the potential of deep learning models for accurate and fast motion artifact removal in fNIRS data.
Optical neuroimaging is a promising tool for assessing motor skill execution. In particular, functional near-infrared spectroscopy (fNIRS) enables the monitoring of cortical activation in scenarios such as surgical task execution. fNIRS data sets are typically preprocessed to derive a few biomarkers that correlate cortical activation with behavior. Meanwhile, deep learning methods have proven highly useful for processing complex spatiotemporal data in classification and prediction tasks. Here, we report on a deep convolutional model that takes spatiotemporal fNIRS data sets as input to classify subjects performing a Fundamentals of Laparoscopic Surgery (FLS) task used in the board certification of general surgeons in the United States. This convolutional neural network (CNN) uses dilated kernels paired with stacked convolutional layers to capture long-range dependencies in the fNIRS time sequence. The model is trained in a supervised manner on 474 FLS trials obtained from seven subjects and assessed independently by stratified 10-fold cross-validation (CV). Results demonstrate that the model can learn discriminatory features between passed and failed trials, attaining areas under the receiver operating characteristic (ROC) and precision-recall curves of 0.99 and 0.95, respectively. The reported accuracy, sensitivity, and specificity are 97.7%, 81%, and 98.9%, respectively.
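The long-range-dependency claim above rests on a standard property of stacked dilated convolutions: the receptive field grows with the sum of the dilation rates rather than with depth alone. The sketch below is a generic NumPy illustration of that mechanism, not the paper's network; the kernel size and dilation schedule are assumed for the example.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Causal 1-D convolution with a dilated kernel (zero-padded on the left)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[i + pad - j * dilation] for j in range(k))
        for i in range(len(x))
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated convolutions, one layer per rate."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Four layers of kernel size 3 with dilations 1, 2, 4, 8 already cover
# 31 time steps, versus 9 for the same depth without dilation.
rf = receptive_field(3, [1, 2, 4, 8])
```

Doubling the dilation rate per layer therefore gives a receptive field that grows exponentially with depth, which is why dilated stacks are a common way to model long time sequences without pooling.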