We analyze fixations and saccades using an extension of a convolutional neural network (CNN) model and compare the results with a conventional modeling method (gazeNet). Unlike the conventional method, ours can easily be applied to real-time analysis of eye movements during data acquisition, since the results can be fed back in real time to trainees learning image interpretation. Eye movement data were divided into “fixation” and “saccade” sections using the open data sets “Lund2013” and “GazeCom,” which are freely available online and can serve as validation data for interpreting eye movements while viewing medical images. Images to be input into our deep CNN model (DCNN) were created by drawing path lines through 12 consecutive gaze points over a period of 0.1 s, assuming 120 Hz measurements obtained by appropriately downsampling the original recordings (500 Hz for Lund2013 and 250 Hz for GazeCom). Our DCNN model proved largely superior to gazeNet, yielding high sensitivity (97.7% for Lund2013 and 98.2% for GazeCom) and specificity (86.4% and 93.8%, respectively). These findings show that eye movement classification was generally more accurate with our DCNN model than with the previously reported model.
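The preprocessing step described above (downsampling to 120 Hz and rendering each 12-sample, 0.1 s window as a path-line image) can be illustrated with a minimal sketch. The image size, normalization, and drawing library below are assumptions for illustration only; the abstract does not specify these details.

```python
import numpy as np
from PIL import Image, ImageDraw

def downsample(gaze_xy, src_hz, dst_hz=120):
    """Downsample gaze samples (N, 2) from src_hz to dst_hz by index selection."""
    idx = np.round(np.arange(0, len(gaze_xy), src_hz / dst_hz)).astype(int)
    return gaze_xy[idx[idx < len(gaze_xy)]]

def path_line_image(window_xy, size=64):
    """Draw the path through 12 consecutive gaze points as a grayscale image.

    The canvas size (64 x 64) and min-max normalization are illustrative choices,
    not taken from the paper.
    """
    xy = np.asarray(window_xy, dtype=float)
    span = np.ptp(xy, axis=0)
    span[span == 0] = 1.0                      # avoid division by zero for static gaze
    norm = (xy - xy.min(axis=0)) / span * (size - 5) + 2
    img = Image.new("L", (size, size), color=0)
    ImageDraw.Draw(img).line([tuple(p) for p in norm], fill=255, width=1)
    return np.asarray(img)

# Example: a 500 Hz recording (as in Lund2013) downsampled to 120 Hz,
# then split into 12-sample (0.1 s) windows for the CNN input.
gaze = np.cumsum(np.random.randn(500, 2), axis=0)   # placeholder trajectory
gaze_120 = downsample(gaze, src_hz=500)
windows = [gaze_120[i:i + 12] for i in range(len(gaze_120) - 11)]
first_input = path_line_image(windows[0])            # (64, 64) array
```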