• auditory system;
  • gaze control;
  • models;
  • saccadic system;
  • spatial behaviour


Orienting the eyes towards a peripheral sound source calls for a transformation of the head-centred sound coordinates into an oculocentric motor command, which requires an estimate of current eye position. Current models of saccadic control explain spatial accuracy by oculocentric transformations that rely on efference copies of relative eye-displacement signals, rather than on absolute eye position in the orbit. In principle, the gaze-control system could keep track of instantaneous eye position by vector addition of intervening eye-displacement commands. However, given that each motor update is endowed with some noise, the neural estimate of eye orientation is then expected to become noisier with the number of intervening saccades. As a consequence, the localization response will also become noisier. According to the alternative, in which target updates rely on feedback of the current eye position, such an increase in errors would be absent. In an attempt to dissociate these hypotheses, we studied how the accumulation of oculomotor commands prior to a sound-localization response affects its accuracy. Head-restrained subjects generated voluntary eye movements in darkness in random directions for a period between 0.2 and 15 s, after which they rapidly reoriented the eyes towards a brief sound burst. The results demonstrate that the audiomotor system programmes the orienting response on the basis of actual eye position, rather than on an accumulated estimate from intervening eye displacements.
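The contrasting noise predictions of the two hypotheses can be made concrete with a minimal simulation (this sketch is not part of the study; the per-saccade noise level and trial count are illustrative assumptions). Under the displacement-integration model, the eye-position estimate is the running sum of noisy efference-copy signals, so its RMS error should grow roughly with the square root of the number of intervening saccades; under the position-feedback model, the error is a single noise sample, independent of saccade count:

```python
import numpy as np

rng = np.random.default_rng(0)

SIGMA = 0.5      # assumed motor/readout noise per signal (deg); illustrative
N_TRIALS = 20000  # Monte Carlo trials per condition

def localization_rms_error(n_saccades, use_feedback):
    """RMS error (deg) of the eye-position estimate after n_saccades.

    Displacement-integration model: position is the vector sum of noisy
    displacement commands, so errors accumulate across saccades.
    Position-feedback model: position is read out directly, with one
    noise sample regardless of how many saccades intervened.
    """
    if use_feedback:
        err = rng.normal(0.0, SIGMA, N_TRIALS)
    else:
        # sum of n_saccades independent noisy displacement commands
        err = rng.normal(0.0, SIGMA, (N_TRIALS, n_saccades)).sum(axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

for n in (1, 4, 16):
    print(f"{n:2d} saccades: integration {localization_rms_error(n, False):.2f} deg,"
          f" feedback {localization_rms_error(n, True):.2f} deg")
```

The integration error scales as SIGMA·√n while the feedback error stays flat, which is the dissociation the experiment tests: the observed constancy of localization noise across intervening saccades favours the feedback account.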