Similar Articles
Found 20 similar articles.
1.
Neurons in sensory systems can represent information not only by their firing rate, but also by the precise timing of individual spikes. For example, certain retinal ganglion cells, first identified in the salamander, encode the spatial structure of a new image by their first-spike latencies. Here we explore how this temporal code can be used by downstream neural circuits for computing complex features of the image that are not available from the signals of individual ganglion cells. To this end, we feed the experimentally observed spike trains from a population of retinal ganglion cells to an integrate-and-fire model of post-synaptic integration. The synaptic weights of this integration are tuned according to the recently introduced tempotron learning rule. We find that this model neuron can perform complex visual detection tasks in a single synaptic stage that would require multiple stages for neurons operating instead on neural spike counts. Furthermore, the model computes rapidly, using only a single spike per afferent, and can signal its decision in turn by just a single spike. Extending these analyses to large ensembles of simulated retinal signals, we show that the model can detect the orientation of a visual pattern independent of its phase, an operation thought to be one of the primitives in early visual processing. We analyze how these computations work and compare the performance of this model to other schemes for reading out spike-timing information. These results demonstrate that the retina formats spatial information into temporal spike sequences in a way that favors computation in the time domain. Moreover, complex image analysis can be achieved already by a simple integrate-and-fire model neuron, emphasizing the power and plausibility of rapid neural computing with spike times.
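A minimal sketch of a tempotron-style readout trained on latency-coded spike patterns (one first spike per afferent), in the spirit of the model described above. This is an illustrative reconstruction, not the authors' code; the kernel time constants, learning rate, and threshold are assumed values.

```python
import numpy as np

def psp_kernel(t, tau=15.0, tau_s=3.75):
    """Causal double-exponential PSP kernel (time in ms), peak-normalized to 1."""
    t = np.asarray(t, dtype=float)
    t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)
    v0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))
    return np.where(t > 0, v0 * (np.exp(-t / tau) - np.exp(-t / tau_s)), 0.0)

def voltage_trace(w, latencies, t_grid):
    """Membrane potential: weighted sum of one PSP per afferent spike."""
    return (w[:, None] * psp_kernel(t_grid[None, :] - latencies[:, None])).sum(0)

def tempotron_train(patterns, labels, n_epochs=200, lr=3e-3, theta=1.0, seed=0):
    """patterns: (n_patterns, n_afferents) first-spike latencies in ms;
    labels: booleans indicating whether the neuron should fire."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, patterns.shape[1])
    t_grid = np.linspace(0.0, 100.0, 500)
    for _ in range(n_epochs):
        for lat, y in zip(patterns, labels):
            v = voltage_trace(w, lat, t_grid)
            t_max = t_grid[np.argmax(v)]
            if (v.max() >= theta) != y:
                # Error trial: nudge each weight by its afferent's PSP at t_max.
                w += (lr if y else -lr) * psp_kernel(t_max - lat)
    return w
```

In the setting of the abstract, the patterns would be first-spike latency vectors from recorded ganglion cells and the labels the presence or absence of the target image feature.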

2.
Changing the relative phase of the frequency components of a stimulus usually also produces local contrast variations. Using stimuli composed of the product of a sinusoid (carrier) and a spatial envelope, an attempt was made to distinguish between the visual system's ability to code spatial phase on the one hand and local contrast and position cues on the other. The experiments assess the ability of observers to detect which of two stimuli is farther to the left. In the main experiments a large, easily detectable envelope shift is presented on every trial, and performance is measured as a function of the size of a carrier shift in the same direction. Increasing the size of the carrier shift gradually increases the phase difference between the two stimuli in a trial but simultaneously reduces the contrast change in the bars of the stimulus. If the visual system can code phase directly, the ability of observers to detect a change in location should improve as the size of the carrier shift increases; but if local contrast is coded, performance over a small range of carrier shifts should be poorer than that obtained without a carrier shift. A region of poorer performance is indeed obtained, and it is therefore concluded that the visual system does not code spatial phase explicitly.
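For concreteness, a short sketch of the stimulus construction the experiments rely on: shifting the carrier alone changes the phase of the bars relative to the envelope while simultaneously altering local bar contrast. Units and parameter values are illustrative, not those of the original study.

```python
import numpy as np

def carrier_envelope_stimulus(x, env_shift=0.0, car_shift=0.0,
                              sf=2.0, sigma=0.5, contrast=0.5):
    """1-D luminance profile: Gaussian envelope times sinusoidal carrier."""
    envelope = np.exp(-((x - env_shift) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * sf * (x - car_shift))
    return 1.0 + contrast * envelope * carrier  # mean luminance of 1.0

x = np.linspace(-2, 2, 1001)  # position in degrees (illustrative)
reference = carrier_envelope_stimulus(x)
# Envelope and carrier shifted in the same direction, by different amounts:
comparison = carrier_envelope_stimulus(x, env_shift=0.2, car_shift=0.05)
```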

3.
Journal of Physiology, 2013, 107(5): 338–348
Ganglion cells in the vertebrate retina integrate visual information over their receptive fields. They do so by pooling presynaptic excitatory inputs from typically many bipolar cells, which themselves collect inputs from several photoreceptors. In addition, inhibitory interactions mediated by horizontal cells and amacrine cells modulate the structure of the receptive field. In many models, this spatial integration is assumed to occur in a linear fashion. Yet, it has long been known that spatial integration by retinal ganglion cells also incurs nonlinear phenomena. Moreover, several recent examples have shown that nonlinear spatial integration is tightly connected to specific visual functions performed by different types of retinal ganglion cells. This work discusses these advances in understanding the role of nonlinear spatial integration and reviews recent efforts to quantitatively study the nature and mechanisms underlying spatial nonlinearities. These new insights point towards a critical role of nonlinearities within ganglion cell receptive fields for capturing responses of the cells to natural and behaviorally relevant visual stimuli. In the long run, nonlinear phenomena of spatial integration may also prove important for implementing the actual neural code of retinal neurons when designing visual prostheses for the eye.
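A common way to make the linear-versus-nonlinear distinction concrete is a subunit model in which bipolar-cell-like signals are rectified before being pooled. The sketch below is illustrative (weights, sizes, and the choice of a contrast-reversing grating are assumptions), showing the kind of response at both contrast phases that linear pooling cannot produce.

```python
import numpy as np

def pooled_responses(stimulus, subunit_weights, pool_weights):
    """Linear pooling vs. rectified-subunit pooling of the same drive."""
    drive = subunit_weights @ stimulus                  # subunit activations
    linear = pool_weights @ drive                       # linear RF model
    nonlinear = pool_weights @ np.maximum(drive, 0.0)   # rectify, then pool
    return linear, nonlinear

n_pix, n_sub = 64, 8
grating = np.cos(np.linspace(0, 4 * np.pi, n_pix, endpoint=False))
subunits = np.zeros((n_sub, n_pix))
for i in range(n_sub):
    subunits[i, i * 8:(i + 1) * 8] = 1.0 / 8            # tiled local subunits
pool = np.ones(n_sub) / n_sub
lin_a, non_a = pooled_responses(grating, subunits, pool)
lin_b, non_b = pooled_responses(-grating, subunits, pool)
# lin_a and lin_b are ~0 (half-cycles cancel over the receptive field), while
# non_a and non_b are both positive: a response at either contrast phase.
```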

4.
Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar’s position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina’s population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar’s position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
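A sketch of the kind of linear decoder used here: a regularized least-squares readout that maps binned population spike counts to the bar's position. The ridge penalty and array shapes are assumptions for illustration.

```python
import numpy as np

def fit_linear_decoder(spike_counts, positions, ridge=1e-3):
    """spike_counts: (n_timebins, n_cells); positions: (n_timebins,).
    Returns weights (including a bias term) for position ~ counts @ w."""
    X = np.hstack([spike_counts, np.ones((spike_counts.shape[0], 1))])
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ positions)

def decode_position(spike_counts, w):
    X = np.hstack([spike_counts, np.ones((spike_counts.shape[0], 1))])
    return X @ w
```

Because population redundancy is high, the same fit applied to disjoint subsets of the cells (disjoint column blocks of spike_counts) can be used to check that each subset alone still tracks the trajectory.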

5.
Lesions to the posterior parietal cortex in monkeys and humans produce spatial deficits in movement and perception. In recording experiments from area 7a, a cortical subdivision of the posterior parietal cortex in monkeys, we have found neurons whose responses are a function of both the retinal location of visual stimuli and the position of the eyes in the orbits. By combining these signals, area 7a neurons code the location of visual stimuli with respect to the head. However, these cells respond over only limited ranges of eye positions (eye-position-dependent coding). To code location in craniotopic space at all eye positions (eye-position-independent coding), an additional step in neural processing is required that uses information distributed across populations of area 7a neurons. We describe here a neural network model, based on back-propagation learning, that both demonstrates how spatial location could be derived from the population response of area 7a neurons and accurately accounts for the observed response properties of these neurons.
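A toy version of such a back-propagation model, with scalar retinal and eye-position inputs combined additively into a head-centered target. Network size, input coding, and learning rate are assumptions; the original model used richer population input codes.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_batch(n=256):
    retinal = rng.uniform(-20, 20, n)      # stimulus location on the retina
    eye = rng.uniform(-20, 20, n)          # eye position in the orbit
    x = np.stack([retinal, eye], axis=1) / 20.0
    y = ((retinal + eye) / 40.0)[:, None]  # head-centered location target
    return x, y

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
n_hidden, lr = 16, 0.5
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for _ in range(2000):                      # plain backpropagation on MSE
    x, y = make_batch()
    h = sigmoid(x @ W1 + b1)
    err = (h @ W2 + b2) - y
    dh = (err @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(0)
    W1 -= lr * (x.T @ dh) / len(x); b1 -= lr * dh.mean(0)
# After training, hidden units show eye-position-modulated responses to
# retinal input, qualitatively like the area 7a gain fields described above.
```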

6.
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter-strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems, and their eyes are presumably subject to the same adaptive problems as the vertebrate eye, yet they lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that the bell contractions used for swimming in these medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements evolved at the earliest metazoan stage to compensate for this intrinsic property of photoreceptors. Since the timing and amplitude of the rhopalial movements match the spatial and temporal resolution of the eye, this circumvents the need for post-processing in the central nervous system to remove image blur.

7.
Visual motion contains a wealth of information about self-motion as well as the three-dimensional structure of the environment. Therefore, it is of utmost importance for any organism with eyes. However, visual motion information is not explicitly represented at the photoreceptor level; rather, it has to be computed by the nervous system from the changing retinal images as one of the first processing steps. Two prominent models have been proposed to account for this neural computation: the Reichardt detector and the gradient detector. While the Reichardt detector correlates the luminance levels derived from two adjacent image points, the gradient detector provides an estimate of the local retinal image velocity by dividing the temporal by the spatial luminance gradient. As a consequence of their different internal processing structure, the two models differ in a number of functional aspects, such as their dependence on the spatial structure of the pattern and their sensitivity to photon noise. These differences led to the proposal that an ideal motion detector should be of the Reichardt type at low luminance levels but of the gradient type at high luminance levels. However, experiments on the fly visual system have provided unambiguous evidence in favour of the Reichardt detector under all luminance conditions. Does this mean that the fly nervous system uses suboptimal computations, or is a functional aspect missing from the optimality criterion? In the following, I argue for the latter, showing that Reichardt detectors have an automatic gain control that allows them to dynamically adjust their input–output relationships to the statistical range of velocities presented, while gradient detectors do not have this property. As a consequence, Reichardt detectors, but not gradient detectors, always provide a maximum amount of information about stimulus velocity over a large range of velocities. This property may explain why Reichardt-type computations have been shown to underlie the extraction of motion information in the fly visual system at all luminance levels.
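The two competing models can be stated in a few lines each. Below is a hedged sketch (filter time constant, point spacing, and test signal are illustrative): the Reichardt detector correlates a delayed (low-pass filtered) signal from one point with the undelayed signal from its neighbor and subtracts the mirror-symmetric arm, while the gradient detector divides the temporal by the spatial luminance gradient.

```python
import numpy as np

def reichardt(sig_a, sig_b, tau, dt):
    """Correlation-type detector on two neighboring luminance signals."""
    def lowpass(s):                      # first-order low-pass as the delay
        out, y, a = np.empty_like(s), 0.0, dt / (tau + dt)
        for i, v in enumerate(s):
            y += a * (v - y)
            out[i] = y
        return out
    return lowpass(sig_a) * sig_b - lowpass(sig_b) * sig_a

def gradient_detector(frames, dx, dt, eps=1e-9):
    """Velocity estimate = -(dI/dt) / (dI/dx); frames: (time, space)."""
    return -np.gradient(frames, dt, axis=0) / (np.gradient(frames, dx, axis=1) + eps)

# Moving sinusoid sampled at two nearby points:
t = np.arange(0.0, 2.0, 1e-3)
v, sf, dx = 2.0, 1.0, 0.05
a = np.sin(2 * np.pi * sf * (0.0 - v * t))
b = np.sin(2 * np.pi * sf * (dx - v * t))
r = reichardt(a, b, tau=0.05, dt=1e-3)
# The mean sign of r indicates direction, but its amplitude depends on contrast
# and spatial pattern, unlike the gradient detector's direct velocity estimate.
```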

8.
Saccadic adaptation [1] is a powerful experimental paradigm for probing the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that this visual component is spatially selective in external, not retinal, coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position); but when it was in the same retinal or cranial position yet a different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals.

9.
In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.

10.
Sensing is often implicitly assumed to be the passive acquisition of information. However, part of the sensory information is generated actively when animals move. For instance, humans shift their gaze actively in a sequence of saccades towards interesting locations in a scene. Likewise, many insects shift their gaze by saccadic turns of body and head, keeping their gaze fixed between saccades. Here we employ a novel panoramic virtual reality stimulator and show that motion computation in a blowfly visual interneuron is tuned to make efficient use of the characteristic dynamics of retinal image flow. The neuron is able to extract information about the spatial layout of the environment by utilizing intervals of stable vision resulting from the saccadic viewing strategy. The extraction is possible because the retinal image flow evoked by translation, containing information about object distances, is confined to low frequencies. This flow component can be derived from the total optic flow between saccades because the residual intersaccadic head rotations are small and encoded at higher frequencies. Information about the spatial layout of the environment can thus be extracted by the neuron in a computationally parsimonious way. These results on neuronal function based on naturalistic, behaviourally generated optic flow are in stark contrast to conclusions based on conventional visual stimuli that the neuron primarily represents a detector for yaw rotations of the animal.
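The frequency separation described here can be illustrated with a simple temporal filtering step: low-pass filtering the intersaccadic flow signal keeps the translation-dominated component and leaves the faster residual-rotation component in the remainder. The cutoff and test signal are assumptions for illustration.

```python
import numpy as np

def split_intersaccadic_flow(flow, fps=200.0, cutoff_hz=10.0):
    """Split a 1-D optic-flow time series into slow (translation-like) and
    fast (residual-rotation-like) components by spectral low-pass filtering."""
    freqs = np.fft.rfftfreq(len(flow), d=1.0 / fps)
    spectrum = np.fft.rfft(flow)
    slow = np.fft.irfft(spectrum * (freqs <= cutoff_hz), n=len(flow))
    return slow, flow - slow

fps = 200.0
t = np.arange(0.0, 1.0, 1.0 / fps)
flow = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
translation_like, rotation_like = split_intersaccadic_flow(flow, fps=fps)
```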

11.
Certain experiments on the detection of low-contrast gratings, occasionally cited as evidence of Fourier analysis within the visual system, are interpreted without the assumption of Fourier analysis. Theoretical curves are obtained and compared with the published experimental points, showing mostly satisfactory agreement. The computations utilize Gaussian receptive fields (on-center and off-center) for the retinal ganglion cells, spatial summation, center-surround antagonism, quasilinear response at low contrasts (X-cells), and the assumption that the first significant convergence is primarily between cells of like response type and like receptive field geometry.
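The core ingredient of such an account is the linear gain of a difference-of-Gaussians (DoG) receptive field for gratings, which yields a band-pass contrast sensitivity curve without any explicit Fourier analysis in the model itself. A brief sketch with illustrative center/surround parameters:

```python
import numpy as np

def dog_grating_gain(sf, sigma_c=0.1, sigma_s=0.4, k_s=0.8):
    """Response gain of a DoG receptive field to a grating of spatial
    frequency sf (cycles/deg). The Fourier transform of a Gaussian is
    Gaussian, so the gain has a simple closed form."""
    center = np.exp(-2.0 * (np.pi * sigma_c * sf) ** 2)
    surround = k_s * np.exp(-2.0 * (np.pi * sigma_s * sf) ** 2)
    return center - surround

sfs = np.linspace(0.1, 20.0, 100)
sensitivity = dog_grating_gain(sfs)   # band-pass shape, as for X-cells
```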

12.
The dual reciprocal and antagonistic organization of B- and D-neurons of the afferent visual system is obtained by using differentiation and integration as mathematical equivalents of visual information processing with an impulse-frequency code. The spatial and temporal derivatives lead to the transient responses. A constant term and a time-dependent term proportional to the luminance distribution describe the sustained response components and the shift-effect of retinal on- and off-center ganglion cells. Receptive field properties of lateral geniculate cells and their antagonistic shift-effect are obtained by passing the retinal output, i.e. the difference between B- and D-neurons' activity, once again through the same operations; the factor of proportionality, however, is applied to the retina alone. This explains the surprisingly small difference between retinal and geniculate receptive field properties on the one hand, and the dramatic change from a synergistic to an antagonistic shift-effect on the other. The theory offers an understanding of a possible functional significance of the shift-effect as a mechanism of transient restoration of visual information, which prevents the system from total fading by means of shifts of the retinal image, normally produced by eye movements.
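As a hedged reconstruction of the general form of such an operator pair (the exact equations are in the original paper; the coefficients below are placeholders), the B-neuron response to a luminance distribution L(x,t) combines derivative (transient) and luminance-proportional (sustained) terms, with the D-neuron as its antagonist:

```latex
R_{B}(x,t) \;=\;
\underbrace{\alpha\,\partial_t L(x,t) + \beta\,\partial_x L(x,t)}_{\text{transient terms}}
\;+\;
\underbrace{\gamma\,L(x,t) + c}_{\text{sustained terms}},
\qquad
R_{D}(x,t) \;=\; -\,R_{B}(x,t).
```

Passing the retinal output R_B - R_D through the same differentiation and integration operations once more, without the retinal proportionality factor, gives the geniculate stage described in the text.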

13.
Scene content selected by active vision
The primate visual system actively selects visual information from the environment for detailed processing through mechanisms of visual attention and saccadic eye movements. This study examines the statistical properties of the scene content selected by active vision. Eye movements were recorded while participants free-viewed digitized images of natural and artificial scenes. Fixation locations were determined for each image, and image patches were extracted around the observed fixation locations. Measures of local contrast, local spatial correlation, and spatial frequency content were calculated on the extracted image patches. Replicating previous results, local contrast was found to be greater at the points of fixation than for image patches extracted at random locations or at the observed fixation locations in an image-shuffled database. Contrary to some results and in agreement with others in the literature, a significant decorrelation of image intensity is observed between the locations of fixation and neighboring locations. A discussion and analysis of methodological techniques is given that explains this discrepancy in results. Our analyses indicate that both the local contrast and the correlation at the points of fixation are a function of image type and, furthermore, that the magnitude of these effects depends on the levels of contrast and correlation present overall in the images. Finally, the largest effect sizes in local contrast and correlation are found at distances of approximately 1 deg of visual angle, which agrees well with measures of optimal spatial-scale selectivity in the visual periphery, where visual information for potential saccade targets is processed.
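A sketch of the core measurement: RMS contrast of image patches centered on fixated locations, compared against patches at random (or shuffled) control locations. Patch size and the stand-in image are placeholders.

```python
import numpy as np

def local_rms_contrast(image, points, patch=32):
    """RMS contrast of square patches centered on each (row, col) point."""
    half = patch // 2
    out = []
    for r, c in points:
        p = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
        p = p.astype(float)
        out.append(p.std() / (p.mean() + 1e-9))
    return np.array(out)

rng = np.random.default_rng(0)
img = rng.random((512, 512))                    # stand-in for a natural image
fixations = [(100, 200), (300, 300), (250, 120)]
controls = [tuple(rng.integers(32, 480, 2)) for _ in fixations]
c_fix = local_rms_contrast(img, fixations)
c_ctrl = local_rms_contrast(img, controls)      # random-location control
```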

14.
In the mammalian visual system, retinal axons undergo temporal and spatial rearrangements as they project bilaterally to targets in the brain. Retinal axons cross the neuraxis to form the optic chiasm on the hypothalamus, in a position defined by overlapping domains of regulatory gene expression. However, the downstream molecules that direct these processes remain largely unknown. Here we use a novel in vitro paradigm to study possible roles of the Eph family of receptor tyrosine kinases in chiasm formation. In vivo, Eph receptors and their ligands distribute in complex patterns in the retina and hypothalamus. In vitro, retinal axons are inhibited by reaggregates of isolated hypothalamic, but not dorsal diencephalic or cerebellar, cells. Furthermore, temporal retinal neurites are more inhibited than nasal neurites by hypothalamic cells. Addition of soluble EphA5-Fc to block Eph "A" subclass interactions decreases both the inhibition and the differential response of retinal neurites to hypothalamic reaggregates. These data show that isolated hypothalamic cells elicit specific, position-dependent inhibitory responses from retinal neurites in culture. Moreover, these responses are mediated, in part, by Eph interactions. Together with the in vivo distributions, these data suggest possible roles for Eph family members in directing retinal axon growth and/or reorganization during optic chiasm formation.

15.
Shi Z, Nijhawan R. PLoS ONE, 2012, 7(3): e33651
Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed that overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: the scotoma to dim light and the scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, extrapolation at motion-termination was found only with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis in the region of highest spatial acuity, the fovea.

16.
A mathematical model of the primary visual cortex is presented. The model comprises two features. First, in analogy with the principle of computerized tomography (CT), it assumes that the simple cells in each hypercolumn are not merely detecting line segments in images as features, but rather that, taken as a whole, they represent the local image in a particular encoding. Second, it assumes that each hypercolumn performs spatial frequency analyses of local images using that encoding, and that the resulting spectra are represented by complex cells. The model is analyzed using numerical simulations, and its advantages are discussed from the viewpoint of visual information processing. It is shown that (1) the proposed processing is tolerant to shifts in the position of input images, and (2) spatial-frequency filtering operations can be performed easily in the model.
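A compact sketch of the model's second feature: tiling the image into hypercolumn-like patches and representing each patch by its local amplitude spectrum, which discards phase and is therefore tolerant to small positional shifts. Patch size, step, and windowing are assumptions.

```python
import numpy as np

def local_amplitude_spectra(image, patch=16, step=16):
    """Windowed 2-D amplitude spectrum for each image tile ("hypercolumn")."""
    h, w = image.shape
    window = np.outer(np.hanning(patch), np.hanning(patch))
    spectra = {}
    for r in range(0, h - patch + 1, step):
        for c in range(0, w - patch + 1, step):
            tile = image[r:r + patch, c:c + patch] * window
            spectra[(r, c)] = np.abs(np.fft.fft2(tile))  # phase discarded
    return spectra
```

Because only the amplitude spectrum is kept per patch, translating the input by a few pixels within a patch leaves each local representation nearly unchanged, which is the shift tolerance claimed for the model.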

17.
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, a phenomenon called saccadic suppression of displacement (SSD). How does the brain weigh the memorized information about the presaccadic scene against the actual visual feedback from the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we tested how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to a horizontal saccade and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data with a Bayesian causal inference mechanism in which, at the trial level, an optimal mixing of two possible strategies (integration versus separation of the presaccadic memory and the postsaccadic sensory signals) is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
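A per-trial sketch of the kind of Bayesian causal inference described here, with two causal structures: the pre- and postsaccadic signals share a common cause (integrate) or they do not (keep the presaccadic memory separate), combined by model averaging. All noise parameters and the prior range are illustrative placeholders, not fitted values.

```python
import numpy as np

def localize_presaccadic(x_pre, x_post, s_pre=2.0, s_post=1.0,
                         p_common=0.6, prior_range=40.0):
    """Model-averaged estimate of the presaccadic target position."""
    # Likelihood of the measured pre/post discrepancy under each structure.
    var = s_pre ** 2 + s_post ** 2
    like_c1 = np.exp(-(x_pre - x_post) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    like_c2 = 1.0 / prior_range          # independent causes: flat over range
    p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Structure-conditional estimates: reliability-weighted integration vs.
    # relying on the presaccadic memory alone.
    s_int = (x_pre / s_pre ** 2 + x_post / s_post ** 2) / (1 / s_pre ** 2 + 1 / s_post ** 2)
    return p_c1 * s_int + (1 - p_c1) * x_pre

print(localize_presaccadic(0.0, 0.5))   # small displacement: mostly integrated
print(localize_presaccadic(0.0, 8.0))   # large displacement: mostly separated
```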

18.
Synchronized firing in neural populations has been proposed to constitute an elementary aspect of the neural code, but a complete understanding of its origins and significance has been elusive. Synchronized firing has been extensively documented in retinal ganglion cells, the output neurons of the retina. However, differences in synchronized firing across species and cell types have led to varied conclusions about its mechanisms and role in visual signaling. Recent work on two identified cell populations in the primate retina, the ON-parasol and OFF-parasol cells, permits a more unified understanding. Intracellular recordings reveal that synchronized firing in these cell types arises primarily from common synaptic input to adjacent pairs of cells. Statistical analysis indicates that local pairwise interactions can explain the pattern of synchronized firing in the entire parasol cell population. Computational analysis reveals that the aggregate impact of synchronized firing on the visual signal is substantial. Thus, in the parasol cells, the origin and impact of synchronized firing on the neural code may be understood as locally shared input which influences the visual signals transmitted from eye to brain.
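The "local pairwise interactions" analysis alluded to here is commonly formulated as a pairwise maximum-entropy (Ising-like) model over binary spike words; whether that is exactly the analysis used in the work summarized above is not stated, so the sketch below is generic. Enumeration is exhaustive and thus only practical for small populations; the fields h and couplings J would normally be fit to match measured firing rates and pairwise correlations.

```python
import numpy as np
from itertools import product

def pairwise_maxent_probs(h, J):
    """P(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j),
    for binary spike words x in {0,1}^n."""
    n = len(h)
    states = np.array(list(product([0, 1], repeat=n)), dtype=float)
    Jup = np.triu(J, 1)
    energy = states @ h + np.einsum('ki,ij,kj->k', states, Jup, states)
    p = np.exp(energy)
    return states, p / p.sum()

h = np.array([-1.0, -1.2, -0.8])     # biases set the firing rates
J = np.zeros((3, 3)); J[0, 1] = 0.5  # one positive pairwise coupling
states, p = pairwise_maxent_probs(h, J)
```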

19.
Carriers of blue cone monochromacy (BCM) have fewer cone photoreceptors than normal. Here we examine how this disruption at the level of the retina affects visual function and cortical organization in these individuals. Visual resolution and contrast sensitivity were measured at the preferred retinal locus of fixation, and visual resolution was tested at two eccentric locations (2.5° and 8°) with spectacle correction only. Adaptive-optics-corrected resolution acuity and cone spacing were measured simultaneously at several locations within the central fovea with adaptive optics scanning laser ophthalmoscopy (AOSLO). Fixation stability was assessed by extracting eye-motion data from AOSLO videos. Retinotopic mapping using fMRI was carried out to estimate the area of early cortical regions, including that of the foveal confluence. Without adaptive optics correction, BCM carriers appeared to have normal visual function, with normal contrast sensitivity and visual resolution, but with AO correction, visual resolution was significantly worse than normal. This resolution deficit is not explained by cone loss alone and is suggestive of an associated loss of retinal ganglion cells. However, despite this evidence for a reduction in the number of retinal ganglion cells, retinotopic mapping showed no reduction in the cortical area of the foveal confluence. These results suggest that ganglion cell density may not govern the foveal overrepresentation in the cortex. We propose that it is not the number of afferents, but rather the content of the information relayed to the cortex from the retina across the visual field, that governs cortical magnification, since under normal viewing conditions this information is similar in BCM carriers and normal controls.

20.
Animals are able to update their knowledge about their current position solely by integrating the speed and the direction of their movement, a process known as path integration. Recent discoveries suggest that grid cells in the medial entorhinal cortex might perform some of the essential computations underlying path integration. However, a major concern about path integration is that, because the measurement of speed and direction is inaccurate, the representation of position becomes increasingly unreliable. In this paper, we study how allothetic inputs can be used to continually correct the accumulating error in the path integrator system. We set up a model of a mobile agent equipped with an entorhinal representation of idiothetic (grid cell) and allothetic (visual cell) information and simulated its place learning in a virtual environment. Due to competitive learning, a robust hippocampal place code emerges rapidly in the model. At the same time, the hippocampo-entorhinal feedback connections are modified via Hebbian learning in order to allow hippocampal place cells to influence the attractor dynamics in the entorhinal cortex. We show that the continuous feedback from the integrated hippocampal place representation is able to stabilize the grid cell code. This research was supported by the EU Framework 6 ICEA project (IST-4-027819-IP).
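A toy numerical analogue of the stabilization result: pure path integration accumulates error like a random walk, while a continuous corrective nudge toward a noisy allothetic (visual) position estimate keeps the error bounded. Gains and noise levels are illustrative, and the sketch abstracts away the grid/place-cell machinery entirely.

```python
import numpy as np

def simulate(steps=1000, dt=0.1, vel_noise=0.05, vis_noise=0.2,
             gain=0.1, seed=0):
    rng = np.random.default_rng(seed)
    true_pos = np.zeros(2)
    est_pi = np.zeros(2)      # pure path integration
    est_fb = np.zeros(2)      # path integration + allothetic feedback
    err_pi, err_fb = [], []
    for _ in range(steps):
        v = rng.normal(0, 1, 2)
        true_pos = true_pos + v * dt
        v_meas = v + rng.normal(0, vel_noise, 2)        # noisy idiothetic signal
        est_pi = est_pi + v_meas * dt
        est_fb = est_fb + v_meas * dt
        visual_fix = true_pos + rng.normal(0, vis_noise, 2)
        est_fb = est_fb + gain * (visual_fix - est_fb)  # corrective feedback
        err_pi.append(np.linalg.norm(est_pi - true_pos))
        err_fb.append(np.linalg.norm(est_fb - true_pos))
    return np.array(err_pi), np.array(err_fb)

err_pi, err_fb = simulate()
# err_pi drifts upward over time; err_fb fluctuates around a fixed level.
```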
