Inferring visual space from ultra-fine extra-retinal knowledge of gaze position

Zhetuo Zhao, Ehud Ahissar, Jonathan D Victor, Michele Rucci*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

It has long been debated how humans resolve fine details and perceive a stable visual world despite the incessant fixational motion of their eyes. Current theories assume that these processes rely solely on the visual input to the retina, without contributions from motor and/or proprioceptive sources. Here we show that, contrary to this widespread assumption, the visual system has access to high-resolution extra-retinal knowledge of fixational eye motion and uses it to deduce spatial relations. Building on recent advances in gaze-contingent display control, we created a spatial discrimination task in which the stimulus configuration was entirely determined by oculomotor activity. Our results show that humans correctly infer geometrical relations in the absence of spatial information on the retina and accurately combine high-resolution extra-retinal monitoring of gaze displacement with retinal signals. These findings reveal a sensory-motor strategy for encoding space, in which fine oculomotor knowledge is used to interpret the fixational input to the retina.
Original language: English
Article number: 269
Number of pages: 12
Journal: Nature Communications
Volume: 14
DOIs
Publication status: Published - 17 Jan 2023

Bibliographical note

This work was supported by the National Institutes of Health grants EY18363 (M.R.) and EY07977 (J.V.). We thank Claudia Cherici and David Richters for their help in preliminary experiments and Janis Intoy and Martina Poletti for helpful comments and discussions.

Publisher Copyright:
© 2023, The Author(s).
