Abstract: | Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that predominates in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with the gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in the lateral intraparietal area (LIP). These explanations have been considered mutually incompatible because retinocentric updating is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on a bidirectional mapping between a retinocentric and a body-centered representation and that enables the transformation of multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation with every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations. |