We analyze fixations and saccades using an extension of a convolutional neural network (CNN) model and compare the results to a conventional modeling method (gazeNet). Unlike the conventional method, ours can be applied to real-time analysis of eye movements during data acquisition, so that results can be fed back immediately to trainees in image interpretation. Eye movement data were divided into "fixation" and "saccade" sections using the publicly available datasets Lund2013 and GazeCom, which serve as validation data for interpreting eye movements during medical image viewing. Input images for our deep CNN model (DCNN) were created by drawing path lines through 12 consecutive gaze points spanning 0.1 s, assuming 120 Hz measurements; the original recordings (500 Hz for Lund2013 and 250 Hz for GazeCom) were downsampled accordingly. Our DCNN model substantially outperformed gazeNet, yielding high sensitivity (97.7% for Lund2013 and 98.2% for GazeCom) and specificity (86.4% and 93.8%, respectively). These findings show that our DCNN model classified eye movement data more accurately than the previously reported model.
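The preprocessing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image size, normalization, and nearest-index downsampling are assumptions, since the abstract does not specify the exact rasterization or resampling procedure.

```python
import numpy as np

def downsample(samples, src_hz, dst_hz=120):
    """Approximate resampling to dst_hz by nearest-index selection.
    Placeholder only: the paper's exact downsampling method is not stated."""
    n_out = int(len(samples) * dst_hz / src_hz)
    idx = np.round(np.linspace(0, len(samples) - 1, n_out)).astype(int)
    return np.asarray(samples)[idx]

def draw_path_image(points, size=64):
    """Rasterize a gaze path (e.g. 12 consecutive points, ~0.1 s at 120 Hz)
    into a binary image by drawing line segments between successive points.
    The 64x64 size and per-axis normalization are assumptions."""
    img = np.zeros((size, size), dtype=np.uint8)
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.maximum(hi - lo, 1e-6)          # avoid division by zero
    pts = (pts - lo) / span * (size - 1)      # scale each axis into the image
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
        # Sample each segment densely enough to leave no pixel gaps.
        n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        img[ys, xs] = 255
    return img
```

For example, one second of a 500 Hz recording would be reduced to 120 samples, then split into 12-point windows, each rendered as one path-line image for the classifier.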