Tracking gaze position from EEG: Exploring the possibility of an EEG‐based virtual eye‐tracker

Abstract

Introduction: Ocular artifacts have long been viewed as an impediment to the interpretation of electroencephalogram (EEG) signals in basic and applied research. Today, the use of blind source separation (BSS) methods, including independent component analysis (ICA) and second-order blind identification (SOBI), is considered an essential step in improving the quality of neural signals. Recently, we introduced a method consisting of SOBI and a discriminant and similarity (DANS)-based identification method capable of identifying and extracting eye movement-related components. These recovered components can be localized within ocular structures with a high goodness of fit (>95%). This raised the possibility that such EEG-derived SOBI components may be used to build predictive models for tracking gaze position.

Methods: As proof of this new concept, we designed an EEG-based virtual eye-tracker (EEG-VET) for tracking eye movement from EEG alone. The EEG-VET is composed of a SOBI algorithm for separating EEG signals into different components, a DANS algorithm for automatically identifying ocular components, and a linear model to transform the ocular components into gaze positions.

Results: The prototype of the EEG-VET achieved an accuracy of 0.920° and a precision of 1.510° of visual angle in the best participant, and an average accuracy of 1.008° ± 0.357° and precision of 2.348° ± 0.580° of visual angle across all participants (N = 18).

Conclusion: This work offers a novel approach that readily co-registers eye movement and neural signals from a single EEG recording, thus increasing the ease of studying neural mechanisms underlying natural cognition in the context of free eye movement.


INTRODUCTION
Electroencephalography (EEG) and EEG-based source imaging have been widely used in basic, clinical, educational, and commercial neuroscience research and applications (Cavanagh, 2018; Dikker et al., 2017; Donchin et al., 2000; He et al., 2018; Khushaba et al., 2013; Lau-Zhu et al., 2019; Luck, 2014; McLoughlin et al., 2014; Niedermeyer & Lopes Da Silva, 2004; Privitera et al., 2022; Protzak & Gramann, 2018; Zink et al., 2016). Although advances have been made in the development of new electrodes for better signal quality (Casson, 2019) and ease of application (Kam et al., 2019; Lopez-Gordo et al., 2014), the presence of large-amplitude artifacts associated with eye movement continues to constrain the utility of EEG in the investigation of neural processes underlying natural cognition. To address this issue, typical EEG experimental paradigms require fixation of the gaze at a specific location prior to and during stimulus presentation. Consequently, EEG-based brain and behavior research is seldom conducted under the natural conditions of free eye movement, with few exceptions (Dimigen et al., 2011; Nikolaev et al., 2016).
Persistent efforts have been made over the past decades to identify, separate, and remove components associated with eye movements and other artifacts from the EEG signal (Ranjan et al., 2021; Wallstrom et al., 2004). Blind source separation (BSS) methods, including independent component analysis (ICA) and second-order blind identification (SOBI) (Bell & Sejnowski, 1995; Belouchrani et al., 1997), are typically used to separate and remove artifactual components from the original EEG in order to improve the quality of the neural signal (for reviews, see Tang, 2010; Tang et al., 2011; Dimigen et al., 2011; Croft et al., 2000; Mannan et al., 2018; Jiang et al., 2019; Uriguen et al., 2015; Islam et al., 2016; Croft et al., 2005; Jung et al., 2000; Joyce et al., 2004; Bridwell et al., 2018). Today, the identification of ocular components from EEG is achieved through automatic classification or selection algorithms (Sun et al., 2021; Chaumon et al., 2015; Delorme et al., 2001; Dimigen et al., 2011; Joyce et al., 2004; Kierkels et al., 2007; Li et al., 2006; Mognon et al., 2011; Nolan et al., 2010; Plochl et al., 2012; Raduntz et al., 2015; Viola et al., 2009). In one example, Plochl et al. (2012) used eye movement data collected by an eye-tracker to automatically identify and exclude five types of ocular artifact-related components separated by ICA from the EEG signal, with a true-positive success rate of 99%. Moreover, many ICA selection algorithms, such as SASICA (Chaumon et al., 2015), FASTER (Nolan et al., 2010), and ADJUST (Mognon et al., 2011), have been developed to automatically identify different classes of artifacts (such as blinks, vertical and horizontal eye movements, and generic discontinuities) with good success rates. Most previous work treats the separated ocular and artifactual components as contamination to be removed, and shares an implicit assumption that the extracted ocular signals are indeed ocular in origin. However, Castellanos and Makarov (2006) showed that ICA-identified ocular components are actually contaminated by neural signals. Such contamination of artifact components by neural signals has gone largely unexamined, with few studies providing a systematic and quantitative measurement and statistical analysis of the underlying sources of identified artifacts.
Recently, a novel SOBI-discriminant and similarity (DANS)-based approach was developed to quantitatively validate the extracted horizontal and vertical eye movement-related ocular artifact components and to set an upper bound for any potential "contamination" of these components by remaining neural signals, represented by the amount of unexplained variance (Sun et al., 2021). Spatially, these horizontal and vertical eye movement-related components, the H and V components (H and V Comps), can be modeled as a pair of equivalent current dipoles with an ocular, nonneural origin. In the majority of participants studied, the neural contamination of SOBI-identified ocular components was less than 5% for both H and V Comps. Further quantitative validation of the components' ocular origin was provided by the systematic modulation of the amplitudes of the H and V Comps as a function of saccade direction and distance, indicated by an effect size measure; this modulation was found not merely in group statistics but in the statistics for each individual. These components are therefore more than mere artifacts: they potentially contain useful signals indicating the gaze positions of individual participants. This raised the possibility that eye movement-related components recovered from EEG data alone could be used to construct individual-specific models for predicting gaze positions. If this could be achieved, one would be equipped with temporally synchronized neural signals related to cognitive processing and eye movement signals related to behavioral output, thus avoiding the additional demands and constraints associated with using a separate eye-tracker and having to co-register it with the EEG. Building on this previous work (Sun et al., 2021), in an earlier feasibility study (Sun et al., 2020) we proposed a novel method for tracking eye movement using only the ocular components from the EEG signal: an EEG-based virtual eye-tracker (VET). We believe this method can facilitate the study of neural mechanisms supporting visual perception and language functions in the natural context of free eye movement (e.g., Egurtzegi et al., 2022).
Here, we describe our prototype of the EEG-VET to show that it is possible to achieve a reasonable level of accuracy and precision. The EEG-VET requires calibration to separate the EEG signals into different components, to identify the ocular components among the recovered components (i.e., the H and V Comps), and to establish a linear regression model between the ocular components and gaze position.

METHODS

Participants
Based on the sample sizes used in previous related works (Brooks et al., 2019; Niehorster et al., 2020), 18 participants were recruited for this study.

Experimental tasks
Tasks were administered using E-Prime 2.0 software (Psychology Software Tools). Participants sat in front of a computer screen in a quiet room, and their head position was stabilized by a chinrest. The chinrest was adjusted such that each participant's eye level was aligned with the screen center at a viewing distance of 600 mm. The experiment was composed of two tasks: a dot tracking task and a smooth pursuit task. The dot tracking task was used to estimate the model parameters of the EEG-VET, whereas the smooth pursuit task was used to test its performance (i.e., accuracy, precision, and RMSE) (Figure 1a).

Dot tracking task
This task consisted of 32 directed eye movements from a central fixation point to one of 32 black dots presented on a white background, one at a time, at 16 possible locations, with 2 trials for each location (Figure 1b). Among the 16 possible locations, 8 were 12.2° of visual angle away from the screen center and the other 8 were 6.1° away, forming outer and inner rings of long- and short-distance saccade target locations, respectively. In both rings, the eight locations were evenly distributed among all possible directions from the screen center (Figure 1c). Prior to each trial, participants were told to focus on a fixation symbol "O" displayed in the center of the screen. A trial was initiated by the experimenter when the participant was fixated on the symbol "O." The trial then began with a fixation cross "+" displayed at the center of the screen for a random duration between 500 and 1000 ms. Immediately after the offset of the fixation cross, a target dot appeared at one of the 16 possible locations at random, to which the participant was instructed to move their eyes as quickly and accurately as possible while avoiding blinks. Once a stable fixation on the target dot was detected by the eye-tracker, the dot disappeared and participants were allowed to blink until they saw the fixation symbol "O" again. This task took approximately 3 min.
Smooth pursuit task

Prior to each trial, participants looked at the center of the screen, where a fixation symbol "O" was displayed for a minimum of 500 ms. A trial was initiated by the experimenter when the participant was fixated on the symbol "O." The trial then began with a fixation cross "+" displayed at the center of the screen for a random duration between 600 and 900 ms. Participants were instructed to remain fixated until the target dot appeared. Immediately after the offset of the fixation cross "+," a pursuit target dot appeared 2° of visual angle to the left (229, 142.5 mm) or right (275, 142.5 mm) of the fixation cross to prepare the participant to make a smooth pursuit (Braun et al., 2017).
After 256 ms, the fixation cross disappeared, and the pursuit target dot began to move for 512 ms at one of four possible speeds (1, 5, 9, or 19°/s), chosen at random, in the direction opposite the step. After stopping, the pursuit target dot remained stationary for 256 ms. This task took approximately 10-14 min.

Eye movement and EEG data acquisition and processing
Eye movement data were continuously collected at a sampling rate of 60 Hz using an SMI REDn eye-tracker (iMotions), which was attached to a 17.5-inch monitor with a screen resolution of 1280 × 720 pixels.
Because the algorithm of the EEG-VET does not involve the use of eye-tracker data, these data were used only to exclude participants who missed >10% of task trials. Only data from the dominant eye were used for eye-tracking data analysis. EEG data were recorded continuously along with the eye-tracking data using an eegoMylab system (ANT Neuro) with 64 electrodes placed according to the standard 10-20 system, at a sampling rate of 500 Hz, with reference electrodes placed on the mastoids (the bones behind the ears) and electrode impedance kept below 20 kΩ. EEG data were notch-filtered (50 Hz) and high-pass filtered (0.1 Hz) before further processing. E-Prime software communicated with the EEG system via a parallel port.
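For readers who wish to reproduce this preprocessing on their own recordings, the following is a minimal sketch using MNE-Python. The file name and reader are placeholders (any 64-channel recording loadable by MNE will do); this is not the authors' pipeline.

```python
# Minimal preprocessing sketch matching the description above (MNE-Python).
# "session_raw.fif" is a placeholder file name, not from the original study.
import mne

raw = mne.io.read_raw_fif("session_raw.fif", preload=True)
raw.set_montage("standard_1020")         # 64 electrodes, standard 10-20 layout

raw.notch_filter(freqs=50.0)             # 50-Hz notch, as in the paper
raw.filter(l_freq=0.1, h_freq=None)      # 0.1-Hz high-pass, as in the paper

eeg = raw.get_data(picks="eeg")          # (n_channels, n_samples) array at 500 Hz
```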

Design of the EEG-VET
The EEG-VET algorithm includes a SOBI algorithm (Belouchrani et al., 1993, 1997) to decompose the EEG signals, a DANS algorithm (Sun et al., 2021) to identify the horizontal and vertical ocular components, and a linear regression model to map the identified ocular components to gaze positions (see flowchart in Figure 2).

Decomposing EEG using the SOBI algorithm

SOBI was applied to the continuous EEG to decompose the 64-channel data into 64 components, each of which corresponds to a recovered putative source contributing to the scalp-recorded EEG signals. The putative sources ŝ_i(t) are given by ŝ(t) = W x(t), where W = A⁻¹ is the unmixing matrix and A the mixing matrix. SOBI finds W (or A) through an iterative process that minimizes the sum of squared cross-correlations between one recovered component at time t and another at time t + τ, across a set of time delays chosen to cover a reasonably wide interval without extending beyond the support of the autocorrelation function. The full calculation of the mixing matrix A and the source signals ŝ_i(t) is described in Belouchrani et al. (1997).
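The full SOBI procedure jointly diagonalizes covariance matrices at many time lags; rather than reproduce it here, the sketch below illustrates the same second-order principle with the closely related AMUSE algorithm, which whitens the data and diagonalizes a single lagged covariance. It is a simplified stand-in under that stated assumption, not the authors' implementation.

```python
# AMUSE-style second-order separation: a simplified stand-in for SOBI that
# uses one time lag instead of jointly diagonalizing many lagged covariances.
import numpy as np

def amuse(x, lag=1):
    """x: (n_channels, n_samples) EEG array. Returns (W, sources)."""
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening transform from the zero-lag covariance
    c0 = (x @ x.T) / x.shape[1]
    d, e = np.linalg.eigh(c0)
    d = np.clip(d, 1e-12, None)                # guard against tiny eigenvalues
    q = e @ np.diag(1.0 / np.sqrt(d)) @ e.T    # whitening matrix
    z = q @ x
    # Symmetrized covariance of the whitened data at the chosen lag
    c1 = (z[:, :-lag] @ z[:, lag:].T) / (z.shape[1] - lag)
    c1 = (c1 + c1.T) / 2.0
    # Eigenvectors of the lagged covariance supply the rotation
    _, u = np.linalg.eigh(c1)
    w = u.T @ q                                # unmixing matrix W
    return w, w @ x                            # components s(t) = W x(t)

# The scalp projection of component i is column i of A = np.linalg.inv(w).
```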

Identifying ocular components using DANS algorithm
Our previously validated DANS algorithm was applied to automatically identify the H and V Comps among the components recovered by SOBI (Sun et al., 2021). DANS computes two discriminant indices (DI_H and DI_V) for every SOBI component to index temporal response selectivity. DI_H and DI_V were defined as the normalized signed difference between the amplitudes of saccade-related potentials (SRPs) in response to long saccades in opposite directions (DI_H: SRP_Right − SRP_Left; DI_V: SRP_Up − SRP_Down), in proportion to the maximum difference value (maximum DI = 1.0). In this study, SRP refers to the event-related potentials triggered by target onsets in the dot tracking task; SRP amplitude was computed as the median amplitude within 200-1200 ms after target onset, with a baseline correction window of 500 ms prior to target onset. DANS also generated two similarity indices (SI_H and SI_V) to index the resemblance of a component's scalp projection to the known prototypical scalp projection maps of the H and V Comps. SI_H and SI_V were defined as the normalized correlation between the scalp projection of a SOBI component and the projection of a prototypical H or V Comp, respectively (normalization by the maximum value; maximum = 1.0). Examples of prototypical maps of H or V Comps can be found in previous work using BSS for ocular artifact removal (such as in Plochl et al., 2012; Vigário, 1997) or in Figure S1. If an SI is smaller than 0.5, the SI is set to 0, so that the corresponding component is effectively excluded from consideration, because a component with SI < 50% of SI_MAX is unlikely to be the single best candidate capturing the horizontal or vertical saccade-related component. Finally, horizontal and vertical scores (H and V scores), defined as the product of DI and SI, were computed for each component. The components with the largest nonzero H or V scores were selected as the final H and V Comps, respectively. The DANS-selected H and V Comps for each participant are shown in Figure S1; for the grand-averaged plots, see our previous study (Sun et al., 2021).
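To make the selection rule concrete, the sketch below implements the DI/SI scoring just described. The inputs (per-component SRP amplitudes for the long saccades in the four directions, component scalp maps, and prototypical H/V maps) are assumed to be precomputed, and all names and normalization details are our illustrative reading of the text, not the authors' code.

```python
# Sketch of the DANS component-selection logic: score = DI x SI, with SI
# zeroed below 50% of its maximum, and the largest nonzero score selected.
import numpy as np

def dans_select(srp_right, srp_left, srp_up, srp_down, scalp_maps, proto_h, proto_v):
    """srp_*: (n_components,) SRP amplitudes; scalp_maps: (n_components, n_electrodes)."""
    # Discriminant indices: signed SRP differences, scaled so max |DI| = 1.0
    di_h = (srp_right - srp_left) / np.max(np.abs(srp_right - srp_left))
    di_v = (srp_up - srp_down) / np.max(np.abs(srp_up - srp_down))
    # Similarity indices: correlation with the prototypical scalp map,
    # scaled so the maximum SI = 1.0 (assumes the best match is positive)
    si_h = np.array([np.corrcoef(m, proto_h)[0, 1] for m in scalp_maps])
    si_v = np.array([np.corrcoef(m, proto_v)[0, 1] for m in scalp_maps])
    si_h, si_v = si_h / si_h.max(), si_v / si_v.max()
    # Components with SI < 0.5 are excluded from consideration
    si_h[si_h < 0.5] = 0.0
    si_v[si_v < 0.5] = 0.0
    # H/V scores: product of DI magnitude and SI; largest nonzero score wins
    h_comp = int(np.argmax(np.abs(di_h) * si_h))
    v_comp = int(np.argmax(np.abs(di_v) * si_v))
    return h_comp, v_comp
```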

Construction of individual-specific linear regression models of gaze position using dot tracking data

Scatter diagrams of the relationship between the horizontal and vertical SRP amplitudes (Amp_SRP_H, Amp_SRP_V) and the target gaze positions (X_t, Y_t) for each participant are shown in Figure S2A (Amp_SRP_H vs. X_t) and Figure S2B (Amp_SRP_V vs. Y_t).
The scatter diagrams show that SRP amplitude and gaze position are linearly related in all participants. Therefore, linear regression analysis was used to fit the relationship. Target positions (X_t, Y_t) were defined as dependent variables, whereas SRP amplitudes (Amp_SRP_H, Amp_SRP_V) were defined as independent variables. The linear equations can be established as follows:

X_t = a_x · Amp_SRP_H + b_x · Amp_SRP_V + c_x
Y_t = a_y · Amp_SRP_H + b_y · Amp_SRP_V + c_y

Least-squares estimation was used to estimate the slope coefficients of the independent variables (a_x, b_x, a_y, and b_y) and the intercepts (c_x and c_y), which are then applied in the performance testing of the EEG-VET.
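Because the model is an ordinary least-squares fit of two three-parameter equations, it can be written compactly. The sketch below is one straightforward realization with illustrative names, not the authors' code.

```python
# Least-squares estimation of the two linear gaze equations above.
import numpy as np

def fit_gaze_model(amp_h, amp_v, x_t, y_t):
    """amp_h, amp_v: per-trial SRP amplitudes of the H and V components;
    x_t, y_t: target coordinates (mm). All are 1-D arrays, one entry per trial."""
    design = np.column_stack([amp_h, amp_v, np.ones_like(amp_h)])
    (a_x, b_x, c_x), *_ = np.linalg.lstsq(design, x_t, rcond=None)
    (a_y, b_y, c_y), *_ = np.linalg.lstsq(design, y_t, rcond=None)
    return (a_x, b_x, c_x), (a_y, b_y, c_y)

def predict_gaze(coef_x, coef_y, amp_h, amp_v):
    """Apply the fitted equations to new SRP amplitudes."""
    a_x, b_x, c_x = coef_x
    a_y, b_y, c_y = coef_y
    return a_x * amp_h + b_x * amp_v + c_x, a_y * amp_h + b_y * amp_v + c_y
```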
In summary, the EEG-VET method is composed of a SOBI algorithm, a DANS algorithm, and two linear equations. The SOBI algorithm separates the EEG signals into different components; the DANS algorithm automatically identifies the H and V Comps among the SOBI-recovered components; and linear regression analysis establishes the linear equations between the ocular component amplitudes (Amp_SRP_H, Amp_SRP_V) and the gaze positions (X_t, Y_t).
The performance of the EEG-VET was evaluated by its accuracy and precision for saccadic eye movements and by the RMSE between the target trajectory and the movement trajectory measured by the EEG-VET in the smooth pursuit task.

Model performance test using the smooth pursuit task: accuracy and precision
In evaluating the EEG-VET, the individual-specific model constructed from the dot tracking task data was used to (1) predict gaze positions during the left and right saccades that began the smooth pursuit trials and (2) predict gaze positions during the tracking of a moving target in the same smooth pursuit task. In measuring the errors of these two types of prediction, accuracy and precision were computed for prediction (1), and an RMSE in tracking was computed for prediction (2).
Both accuracy and precision are defined in degrees of visual angle.
Since both target and gaze positions are originally given as distances on the display screen, below we show the derivation of the final visual angle measures from the distance measures. The accuracy in mm can be defined as

Accuracy_mm = √((X_t − X_m)² + (Y_t − Y_m)²)

where X_t and Y_t are the horizontal and vertical coordinates of a target location, and X_m and Y_m are the average horizontal and vertical coordinates of the gaze location measured by the eye-tracker while participants were required to fix their eyes on the target point.

The OnscreenDistance refers to the distance from a location on the screen to the center of the screen. The OnscreenDistance of the target point is therefore

OnscreenDistance_t = √((X_t − x_screen/2)² + (Y_t − y_screen/2)²)

whereas the OnscreenDistance of the average gaze location measured by the eye-tracker is

OnscreenDistance_m = √((X_m − x_screen/2)² + (Y_m − y_screen/2)²)

where x_screen and y_screen refer to the width and height of the screen. The visual angle θ corresponding to an on-screen distance can be calculated via

θ = arctan(OnscreenDistance / Distance)

where Distance refers to the distance from the screen to the participant's eye, which was 600 mm in this study. The accuracy in degrees of visual angle can be expressed as the deviation in degrees between the actual gaze direction and the gaze direction measured by the eye-tracker, with the point of origin determined by the position of the eye. It can be estimated via

Accuracy = |θ_t − θ_m|

where θ_t and θ_m are the visual angles of OnscreenDistance_t and OnscreenDistance_m, respectively. Precision (in degrees of visual angle) is defined as the ability of the eye-tracker to reliably reproduce the same gaze point measurement. It is calculated as the root mean square of the n successive angular distances θ_i between gaze locations measured at consecutive samples,

θ_i = arctan(OnScreenDistance_mi,m(i+1) / Distance)

where OnScreenDistance_mi,m(i+1) is the distance between (X_mi, Y_mi) and (X_m(i+1), Y_m(i+1)). The precision in degrees of visual angle is then

Precision = √((1/n) Σᵢ θᵢ²)
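The following sketch turns these definitions into code. It is a minimal numeric transcription of the equations above, assuming coordinates in mm and the 600-mm viewing distance of this study; the function and variable names are ours, not from the authors' implementation.

```python
# Numeric transcription of the accuracy/precision definitions above.
import numpy as np

VIEWING_DISTANCE = 600.0  # mm, screen-to-eye distance in this study

def visual_angle(onscreen_dist):
    """Visual angle (degrees) of an on-screen distance from the screen centre."""
    return np.degrees(np.arctan(onscreen_dist / VIEWING_DISTANCE))

def accuracy_deg(target, gaze_mean, centre):
    """|theta_t - theta_m|: angular deviation of mean measured gaze from target."""
    d_t = np.linalg.norm(np.asarray(target) - np.asarray(centre))
    d_m = np.linalg.norm(np.asarray(gaze_mean) - np.asarray(centre))
    return abs(visual_angle(d_t) - visual_angle(d_m))

def precision_deg(gaze_xy):
    """RMS of angular distances between successive gaze samples of one fixation.
    gaze_xy: (n, 2) array of successive measured gaze positions (mm)."""
    step = np.linalg.norm(np.diff(np.asarray(gaze_xy), axis=0), axis=1)
    theta = visual_angle(step)
    return np.sqrt(np.mean(theta ** 2))
```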

The accuracy and precision of the EEG-VET for individuals were estimated using 64 trials of saccadic eye movements starting from the screen center (252, 142.5 mm) to a left point (229, 142.5 mm) or a right point (275, 142.5 mm), with 32 trials for each.
We also calculated the accuracy and precision across all participants, estimated using the dot tracking tasks of all participants, comprising 576 trials of saccadic eye movements (18 participants with 32 trials each) starting from the screen center (252, 142.5 mm) to each target location (see Figure 1b). These group-level estimates were used to compare the performance of the EEG-VET against the SMI REDn eye-tracker.

RESULTS

Model construction: Goodness of fit of the individual-specific linear regression model in mapping SRP amplitude to gaze position
For each participant, two linear equations with both H and V Comps as independent variables and gaze position X or Y as the dependent variable were established. The correlation coefficients (r) and their respective p values for the linear regression analyses are summarized in Table 1. The correlation coefficient between the SRP amplitudes (Amp_SRP_H, Amp_SRP_V) and X_t is .95 ± .04, which accounts for over 80% of the variance (R²) in horizontal gaze coordinates, whereas that for Y_t is .84 ± .13, which accounts for only just over 60% of the variance in vertical gaze coordinates. Note also that the majority of the p values are below .01. The relatively inferior fit in the vertical dimension is expected, in part because in the current work we did not optimize the removal of blinks or minimize the correlation between eyelid opening and closing during blinks and during up and down eye movements. This is to be further considered in the next stage of EEG-VET development.
To give a concrete impression of the model goodness in visual space, we show the group measures of accuracy and precision for each of the 16 target positions in the dot tracking task (Figure 3), contrasting the performance of the EEG-VET (Figure 3a) with that of the commercial eye-tracker used in the present study (Figure 3b). We note that because these accuracy/precision data were calculated across all participants, we do not recommend comparing them with the above or other accuracy/precision results calculated for single participants separately. While the performance of the current method is not as good as that of the eye-tracker, the two are of the same order of magnitude. Further optimization beyond the present proof of concept is planned to significantly improve EEG-VET performance.
Although the accuracy of commercial eye-tracker systems is often reported by manufacturers to be <0.5° of visual angle, the true gaze-point accuracy of remote eye-trackers is often found to be worse than 1° of visual angle, even in controlled environments (Blignaut et al., 2014; Nyström et al., 2013). Moreover, as Figure 3 shows, the accuracy/precision of the EEG-VET is within twice that of the SMI REDn eye-tracker for most of the target points. Therefore, the accuracy of this first-attempt EEG-VET approaches that of commercial eye-trackers.

Performance of EEG-VET in tracking gaze positions in the smooth pursuit task
Using the models fitted with data from the first dot tracking task, we estimated gaze positions during the subsequent task of smooth pursuit.
While we have previously shown that the DANS-selected H and V components were highly selective for horizontal and vertical movement in the dot tracking task (Sun et al., 2021), here we also show such selectivity in the smooth pursuit task (see an example of a single pursuit trial in one participant in Figure 4). For this example participant, the estimated gaze positions after the initial horizontal saccadic eye movements to the left and right of the fixation point are shown in Figure 5a and 5b, respectively. No significant Speed by Direction interaction effects were found. It is important to point out that, given the nature of the task, the actual gaze position during pursuit falls behind the target movement; thus, the error in the horizontal dimension in part contains this lag. This in part explains why, on average, the Y RMSE appeared smaller than the X RMSE.

DISCUSSION

Concept of an EEG-based virtual eye-tracker
Building upon previous work (Tang, Sutherland, Mckinney, et al., 2006; Sun et al., 2019, 2021), the present study demonstrated, in the context of horizontal eye movement, that it is possible to build individualized predictive models of gaze position from EEG without the aid of an eye-tracker. The results show that it is possible to obtain an accuracy as high as 0.920° of visual angle and a precision of 1.510° of visual angle in one participant, which is comparable to the accuracy and precision of electrooculogram (EOG)-based eye tracking (Young & Sheena, 1975) and combined EOG-EEG-based eye-tracking systems (Joyce et al., 2002). The prediction errors of the EEG-VET across all participants are greater than those of a commercial eye-tracker, but by no more than a factor of two. Such a "resolution" is sufficient for many types of experimental studies and applications.
Ocular artifacts associated with eye movement in EEG present a major problem in cognitive neuroscience and clinical research. Instead of minimizing artifacts by restricting eye movement and removing artifacts from the EEG data, as documented in a number of review papers (Croft & Barry, 2000; Islam et al., 2016; Jiang et al., 2019; Mannan et al., 2018; Urigüen & Garcia-Zapirain, 2015), the present study offers a novel approach to this problem by transforming the artifacts into informative signals for predicting gaze positions. Specifically, the EEG-VET method requires a directed saccadic eye movement task to generate the EEG data that "calibrate" the EEG-VET for predicting gaze positions in subsequent tasks involving eye movement. This work offers a first prototype of the EEG-VET for predicting gaze positions from EEG data alone in horizontal smooth pursuit tasks.

The usage and advantages of the EEG-VET
The method of the EEG-VET is similar to that of commercial eye-trackers in that both require data collected during a sequence of directed saccadic eye movements to known positions in order to build the model for estimating gaze positions; the EEG-VET can then be used for tracking gaze position not only during directed eye movement but also during any subsequent tasks within the same EEG session, including free eye movement.

The EEG-VET can potentially provide many advantages over widely used commercial eye-trackers. Most significant is that the EEG-VET facilitates the tracking of eye movement during EEG studies without the need for any additional hardware. Beyond predicting eye movement, this system can provide ocular-artifact-free EEG signal by summing the non-ocular components, which is especially suitable for brain-computer interface applications, as it provides eye movement information and ocular-contamination-free EEG signal simultaneously. Finally, because the system relies on information recovered from internal signals (i.e., EEG), it can be used to measure eye movements in contexts where the use of an eye-tracker would prove impractical or impossible, such as during sleep or free viewing without the limitation of a screen.

As for computation time, the calculation speed of SOBI largely depends on the speed of the computing device, the number of EEG channels, the sampling rate, and the recording length. The dot tracking task served as the calibration task (calibration is normally required by most commercial eye-trackers). In a preliminary exploration using data from a single participant, we found that it is possible to achieve asymptotic performance within four repetitions of the dot positions (Sun et al., 2020); a general recommendation will only be available after further studies with more participants.
For the readers' reference, the calibration task took around 2-3 min in the present study. Most of the model-estimation time is spent using SOBI to derive the H and V components from the 64-channel EEG collected during this roughly 3-min dot tracking task; the computation time needed by the DANS algorithm and the linear regression analysis is negligible. Therefore, the time required for model parameter estimation is no more than a few minutes. After the model is established, the time needed to compute gaze positions is negligible.

Limitations and future work
As a newly emerging method, the current performance of the EEG-VET is not yet as good as that of commercially available eye-trackers, especially in the vertical direction, and the performance has so far been evaluated only in the narrow context of a horizontal smooth pursuit task.

It is interesting to note that the recovered H Comps were more informative than the V Comps: H Comps had higher correlation coefficients with the X gaze position (r = .95 ± .04) than V Comps with the Y gaze position (r = .84 ± .13). We suspect that signals due to eye blinks may be mixed with signals due to vertical eye movement, for looking up and down naturally involves the opening and closing of the eyelids. Because the V Comp is one of the independent variables in the linear regression model, the final model performance measures reported here should be considered conservative; better performance can potentially be expected if blinks and vertical eye movements could be dissociated in the behavioral task of dot tracking.

Future work may fall into four categories. The first is to further improve the efficiency and effectiveness of the entire EEG-VET process, from the design, evaluation criteria, and optimal length of the calibration task to the use of potentially even better algorithms. The second is to expand the work beyond the tracking of a moving target in the horizontal direction by recognizing and considering the unique properties of vertical versus horizontal eye movements in the EEG-VET design. The third is to remove the constraint on head movement by introducing innovations that allow the computation to take head direction information into consideration. The fourth is to expand the performance evaluation to a more diverse range of application scenarios. In summary, we hope that the potential technical capability afforded by this new concept of a VET will inspire new areas of investigation, such as neurophysiological investigation of natural reading, learning and memory during sleep, better devices for neural feedback control, and other clinical neuroscience problems.

Conclusion
In conclusion, we offered a proof of concept that the use of a novel EEG-VET can enable the simultaneous collection of EEG and eye movement data without the need for additional hardware. This work is the first to utilize the information contained in typically discarded ocular artifact components in order to track eye movement from the EEG signal alone. The current version of the EEG-VET achieves levels of accuracy and precision that approach those of commercial eye-trackers.
FIGURE 1 Methods: (a) study design flowchart; (b) positions of the target dots in the dot tracking task; (c) dot tracking task; and (d) smooth pursuit task.
FIGURE 2 Flowchart of model calibration and performance testing of the electroencephalogram-based virtual eye-tracker (EEG-VET). The EEG-VET consists of three parts: second-order blind identification (SOBI), discriminant and similarity (DANS), and linear regression analysis. It estimates the model parameters (W matrix, coefficients of the independent variables, and intercepts) using data from the dot tracking task. The performance of the EEG-VET (accuracy, precision, and root mean square error [RMSE]) is assessed using data from both the dot tracking and smooth pursuit tasks. Note that each model is individual-specific and does not require pooling of group data.

FIGURE 3 Comparison of accuracy/precision between the electroencephalogram-based virtual eye-tracker (EEG-VET) (red bars) and the SMI REDn eye-tracker (blue bars) at the 16 target points: (a) accuracy/precision combined across the X and Y axes as a general description; (b) accuracy/precision separated along the X and Y axes. The bars indicate the accuracy values, whereas the error lines indicate the precision values. The average accuracy of the EEG-VET across all participants and all target points is 2.05° ± 0.93°, whereas the precision is 4.76° ± 0.55° of visual angle. For the SMI REDn eye-tracker, the accuracy is 1.19° ± 0.67°, whereas the precision is 2.23° ± 0.61° of visual angle. This difference is significant (accuracy: t(15) = 5.66, p < .01; precision: t(15) = 13.97, p < .01). Accuracy is better when the target points are in the inner ring than in the outer ring for both the EEG-VET and the commercial eye-tracker (EEG-VET: t(7) = 6.23, p < .01; eye-tracker: t(7) = 2.71, p = .03).

FIGURE 4 An example illustrating the selective contribution of the horizontal ocular component to the tracking of a moving target, while the other neuronal components (lines 2, 4, and 5) are insensitive to horizontal eye movement. Gray block: smooth pursuit period.
The RMSE between the target trajectory and the eye movement trajectory estimated by the EEG-VET in the smooth pursuit task was computed for each pursuit trial according to the following equation, where i indexes the time sampling points and the total number of sampling points is 180:

RMSE = √( Σᵢ (Estimated coordinateᵢ − Target coordinateᵢ)² / total sampling points )

Two-way repeated-measures analysis of variance (ANOVA) with four speeds (1, 5, 9, and 19°/s) and two directions (left and right) as within-subject factors was used to test the effects of speed, direction, and their interaction. Analyses were conducted using SPSS (Version 22, IBM) with a significance level of .05.
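As a minimal illustration of this definition (the names are ours, not the authors'), the per-trial RMSE reduces to a one-line computation:

```python
# Per-trial RMSE between estimated and target trajectories (180 samples/trial).
import numpy as np

def trajectory_rmse(estimated, target):
    """estimated, target: 1-D arrays of one coordinate (X or Y) over a trial."""
    err = np.asarray(estimated) - np.asarray(target)
    return np.sqrt(np.mean(err ** 2))
```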

Figure 5a and 5b illustrate the performance of the EEG-VET for one participant, showing the predicted gaze positions (blue) relative to the left and right target positions (red), respectively, in the smooth pursuit task. Across all 18 participants, the accuracy was 1.008° ± 0.357° of visual angle, whereas the precision was 2.347° ± 0.580° of visual angle. A repeated-measures ANOVA was performed on the RMSE of the EEG-VET with Direction and Speed as within-subject factors. Tracking speed had a significant effect on the X RMSE (F(3, 51) = 29.408; p < .001; partial η² = .634) and the Y RMSE (F(3, 51) = 4.010; p = .012; partial η² = .191). There was no significant effect of direction on the X RMSE (F(1, 17) = .215; p = .648; partial η² = .013) or the Y RMSE (F(1, 17) = .612; p = .445; partial η² = .035), indicating no difference in the performance of the EEG-VET between leftward and rightward pursuit.

FIGURE 5 Performance of the EEG-VET in estimating gaze position during the tracking of a moving target in the smooth pursuit task: (a) prediction error in the horizontal coordinate and (b) prediction error in the vertical coordinate. * and ** indicate significant speed effects on the estimation error. Error bars are standard deviations (N = 18).

TABLE 1 Goodness of fit of the individual-specific linear models using saccade-related potential amplitudes (Amp_SRP_H, Amp_SRP_V) to predict target gaze position (X_t, Y_t) for each participant's data from the dot tracking task. For each participant, the table reports r and p both for using Amp_SRP_H and Amp_SRP_V to predict X_t (mm) and for using Amp_SRP_H and Amp_SRP_V to predict Y_t (mm).
Abbreviations: #P, identifier number of participant; p, significance value; r, correlation coefficient; SRP_H, horizontal saccade-related potential; SRP_V, vertical saccade-related potential.