Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Neurons in the primary visual cortex, V1, are specialized for the processing of elemental features of the visual stimulus, such as orientation and spatial frequency. Recent fMRI evidence suggests that V1 neurons are also recruited in visual perceptual memory; a number of studies using multi-voxel pattern analysis have successfully decoded stimulus-specific information from V1 activity patterns during the delay phase in memory tasks. However, consistent fMRI signal modulations reflecting the memory process have not yet been demonstrated. Here, we report evidence, from three subjects, that the low V1 BOLD activity during retention of low-level visual features is caused by competing interactions between neural populations coding for different values along the spectrum of the dimension remembered. We applied a memory masking paradigm in which the memory representation of a masker stimulus interferes with a delayed spatial frequency discrimination task when its frequency differs from the discriminanda by ±1 octave, and found that impaired behavioral performance due to masking is reflected in weaker V1 BOLD signals. This cross-channel inhibition in V1 only occurs with retinotopic overlap between the masker and the sample stimulus of the discrimination task. The results suggest that memory for spatial frequency is a local process in the retinotopically organized visual cortex.
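The multi-voxel pattern analysis mentioned above is, at its core, a classifier applied to vectors of voxel responses. A deliberately minimal sketch follows — a nearest-centroid decoder on simulated data; the voxel patterns, class labels, and dimensions below are invented for illustration and are not the study's actual pipeline:

```python
import math
import random

def nearest_centroid_decode(train, test):
    """Assign `test` (one voxel-pattern vector) to the class whose
    training-set centroid is nearest in Euclidean distance."""
    best_label, best_dist = None, float("inf")
    for label, patterns in train.items():
        n = len(patterns)
        centroid = [sum(p[i] for p in patterns) / n for i in range(len(test))]
        dist = math.dist(centroid, test)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Simulated delay-phase voxel patterns for two spatial-frequency classes
random.seed(0)
def sample(mean):
    return [m + random.gauss(0, 0.5) for m in mean]

low_mean, high_mean = [1.0, 0.2, 0.8, 0.1], [0.2, 1.0, 0.1, 0.9]
train = {"low_sf": [sample(low_mean) for _ in range(20)],
         "high_sf": [sample(high_mean) for _ in range(20)]}
print(nearest_centroid_decode(train, sample(low_mean)))
```

Real MVPA typically uses cross-validated linear classifiers over hundreds of voxels, but the decoding logic — compare a held-out pattern against class templates — is the same.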

2.
Ito M, Gilbert CD. Neuron, 1999, 22(3): 593-604
The response properties of cells in the primary visual cortex (V1) were measured while the animals directed their attention either to the position of the neuron's receptive field (RF), to a position away from the RF (focal attention), or to four locations in the visual field (distributed attention). Over the population, varying attentional state had no significant effect on the response to an isolated stimulus within the RF but had a large influence on the facilitatory effects of contextual lines. We propose that the attentional modulation of contextual effects represents a gating of long range horizontal connections within area V1 by feedback connections to V1 and that this gating provides a mechanism for shaping responses under attention to stimulus configuration.

3.
An important requirement for vision is to identify interesting and relevant regions of the environment for further processing. Some models assume that salient locations from a visual scene are encoded in a dedicated spatial saliency map [1, 2]. Then, a winner-take-all (WTA) mechanism [1, 2] is often believed to threshold the graded saliency representation and identify the most salient position in the visual field. Here we aimed to assess whether neural representations of graded saliency and the subsequent WTA mechanism can be dissociated. We presented images of natural scenes while subjects were in a scanner performing a demanding fixation task, and thus their attention was directed away. Signals in early visual cortex and posterior intraparietal sulcus (IPS) correlated with graded saliency as defined by a computational saliency model. Multivariate pattern classification [3, 4] revealed that the most salient position in the visual field was encoded in anterior IPS and frontal eye fields (FEF), thus reflecting a potential WTA stage. Our results thus confirm that graded saliency and WTA-thresholded saliency are encoded in distinct neural structures. This could provide the neural representation required for rapid and automatic orientation toward salient events in natural environments.
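The two stages contrasted in this study — a graded saliency map and a winner-take-all readout — can be made concrete with a toy sketch (illustrative only; the study relied on the computational saliency model it cites, not this code). Saliency is approximated here by a crude center-surround contrast, and the WTA stage reduces to an argmax over the map:

```python
def center_surround(image):
    """Graded saliency proxy: each pixel's absolute difference from the
    mean of its 8-neighbourhood (a crude local-contrast measure)."""
    h, w = len(image), len(image[0])
    sal = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            neigh = [image[rr][cc]
                     for rr in range(max(0, r - 1), min(h, r + 2))
                     for cc in range(max(0, c - 1), min(w, c + 2))
                     if (rr, cc) != (r, c)]
            sal[r][c] = abs(image[r][c] - sum(neigh) / len(neigh))
    return sal

def winner_take_all(saliency):
    """WTA stage: collapse the graded map to its single most salient
    location, returned as (row, col)."""
    best, loc = float("-inf"), None
    for r, row in enumerate(saliency):
        for c, v in enumerate(row):
            if v > best:
                best, loc = v, (r, c)
    return loc

image = [[0, 0, 0, 0],
         [0, 0, 9, 0],
         [0, 0, 0, 0]]
print(winner_take_all(center_surround(image)))  # → (1, 2), the odd-one-out
```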

4.
The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.

5.
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling.

6.
Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.

7.
Gregoriou GG, Gotts SJ, Desimone R. Neuron, 2012, 73(3): 581-594
Shifts of gaze and shifts of attention are closely linked and it is debated whether they result from the same neural mechanisms. Both processes involve the frontal eye fields (FEF), an area which is also a source of top-down feedback to area V4 during covert attention. To test the relative contributions of oculomotor and attention-related FEF signals to such feedback, we recorded simultaneously from both areas in a covert attention task and in a saccade task. In the attention task, only visual and visuomovement FEF neurons showed enhanced responses, whereas movement cells were unchanged. Importantly, visual, but not movement or visuomovement cells, showed enhanced gamma frequency synchronization with activity in V4 during attention. Within FEF, beta synchronization was increased for movement cells during attention but was suppressed in the saccade task. These findings support the idea that the attentional modulation of visual processing is not mediated by movement neurons.

8.
Zhou H, Desimone R. Neuron, 2011, 70(6): 1205-1217
When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target.

9.
Functional magnetic resonance imaging (fMRI) was used while normal human volunteers engaged in simple detection and discrimination tasks, revealing separable modulations of early visual cortex associated with spatial attention and task structure. Both modulations occur even when there is no change in sensory stimulation. The modulation due to spatial attention is present throughout the early visual areas V1, V2, V3, and VP, and varies with the attended location. The task structure activations are strongest in V1 and are greater in regions that represent more peripheral parts of the visual field. Control experiments demonstrate that the task structure activations cannot be attributed to visual, auditory, or somatosensory processing, the motor response for the detection/discrimination judgment, or oculomotor responses such as blinks or saccades. These findings demonstrate that early visual areas are modulated by at least two types of endogenous signals, each with distinct cortical distributions.

10.
David SV, Hayden BY, Mazer JA, Gallant JL. Neuron, 2008, 59(3): 509-521
Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.
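The labeled-line assumption at stake here — attention may rescale a neuron's responses but cannot move its tuning peak — can be expressed as a toy gain model (a sketch with invented numbers, not the study's system-identification analysis):

```python
import math

def gain_model(tuning, gain=1.0, baseline=0.0):
    """Attention as pure gain/baseline modulation: every response is
    scaled and offset, so the preferred stimulus cannot change."""
    return [gain * r + baseline for r in tuning]

# Toy orientation tuning curve peaking at 90 degrees
thetas = list(range(0, 180, 30))
tuning = [math.exp(-((t - 90) / 40) ** 2) for t in thetas]
attended = gain_model(tuning, gain=1.5, baseline=0.1)

peak = max(range(len(tuning)), key=tuning.__getitem__)
attended_peak = max(range(len(attended)), key=attended.__getitem__)
print(thetas[peak] == thetas[attended_peak])  # → True: preference preserved
```

A feature-based tuning shift of the kind the authors report would violate exactly this invariance: under attention, the argmax of the tuning curve would move toward the search target's features.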

11.
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, high and low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model in which LSFs activate contextual memories, which in turn bias attention and facilitate perception.

12.
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within, a working memory representation. In the current fMRI study, participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD activity (V1-V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuously increased activity was observed at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings.

13.
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.

14.
Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception whenever one perceptual interpretation is dominant, and the instability of perception that causes perceptual dominance to alternate between perceptual interpretations upon extended viewing. This review summarizes several ways in which contextual information can help the brain resolve visual ambiguities and construct temporarily stable perceptual experiences. Temporal context through prior stimulation or internal brain states brought about by feedback from higher cortical processing levels may alter the response characteristics of specific neurons involved in rivalry resolution. Furthermore, spatial or crossmodal context may strengthen the neuronal representation of one of the possible perceptual interpretations and consequently bias the rivalry process towards it. We suggest that contextual influences on perceptual choices with ambiguous visual stimuli can be highly informative about the neuronal mechanisms of context-driven inference in the general processes of perceptual decision-making.

15.
Hayden BY, Gallant JL. Neuron, 2005, 47(5): 637-643
Attention can facilitate visual processing, emphasizing specific locations and highlighting stimuli containing specific features. To dissociate the mechanisms of spatial and feature-based attention, we compared the time course of visually evoked responses under different attention conditions. We recorded from single neurons in area V4 during a delayed match-to-sample task that controlled both spatial and feature-based attention. Neuronal responses increased when spatial attention was directed toward the receptive field and were modulated by the identity of the target of feature-based attention. Modulation by spatial attention was weaker during the early portion of the visual response and stronger during the later portion of the response. In contrast, modulation by feature-based attention was relatively constant throughout the response. It appears that stimulus onset transients disrupt spatial attention, but not feature attention. We conclude that spatial attention reflects a combination of stimulus-driven and goal-driven processes, while feature-based attention is purely goal driven.

16.
Stimulus expectation can modulate neural responses in early sensory cortical regions, with expected stimuli often leading to a reduced neural response. However, it is unclear whether this expectation suppression is an automatic phenomenon or is instead dependent on the type of task a subject is engaged in. To investigate this, human subjects were presented with visual grating stimuli in the periphery that were either predictable or non-predictable while they performed three tasks that differently engaged cognitive resources. In two of the tasks, the predictable stimulus was task-irrelevant and spatial attention was engaged at fixation, with a high load on either perceptual or working memory resources. In the third task, the predictable stimulus was task-relevant, and therefore spatially attended. We observed that expectation suppression is dependent on the cognitive resources engaged by a subject's current task. When the grating was task-irrelevant, expectation suppression for predictable items was visible in retinotopically specific areas of early visual cortex (V1-V3) during the perceptual task, but it was abolished when working memory was loaded. When the grating was task-relevant and spatially attended, there was no significant effect of expectation in early visual cortex. These results suggest that expectation suppression is not an automatic phenomenon, but is dependent on attentional state and the type of available cognitive resources.

17.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.
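In the simplest caricature, such a coordinate transform just adds an extraretinal eye-position signal to the retinocentric location. This one-liner is offered only for intuition — real cortical implementations are thought to involve population gain fields, not a literal vector addition:

```python
def retinal_to_head_centered(retinal_pos, eye_pos):
    """Combine a retinocentric stimulus location with the current eye
    position (extraretinal signal) to recover a head-centered location.
    Both inputs are (azimuth, elevation) in degrees."""
    return (retinal_pos[0] + eye_pos[0], retinal_pos[1] + eye_pos[1])

# A stimulus 5 deg right of the fovea, while gaze is 10 deg left of
# straight ahead, sits 5 deg left of the head's midline.
print(retinal_to_head_centered((5, 0), (-10, 0)))  # → (-5, 0)
```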

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.

18.
Li W, Piëch V, Gilbert CD. Neuron, 2006, 50(6): 951-962
Contour integration is an important intermediate stage of object recognition, in which line segments belonging to an object boundary are perceptually linked and segmented from complex backgrounds. Contextual influences observed in primary visual cortex (V1) suggest the involvement of V1 in contour integration. Here, we provide direct evidence that, in monkeys performing a contour detection task, there was a close correlation between the responses of V1 neurons and the perceptual saliency of contours. Receiver operating characteristic analysis showed that single neuronal responses encode the presence or absence of a contour as reliably as the animal's behavioral responses. We also show that the same visual contours elicited significantly weaker neuronal responses when they were not detected in the detection task, or when they were unattended. Our results demonstrate that contextual interactions in V1 play a pivotal role in contour integration and saliency.
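The receiver operating characteristic analysis referred to here asks how reliably an ideal observer could report "contour present" from a single neuron's response. The area under the ROC curve equals the probability that a randomly drawn present-trial response exceeds a randomly drawn absent-trial response. A minimal computation on made-up spike counts (not the paper's data):

```python
def roc_auc(present, absent):
    """AUC via the rank-sum identity: the fraction of (present, absent)
    trial pairs in which the present-trial response is larger, with
    ties counting half. 0.5 is chance; 1.0 is perfect single-trial
    detection."""
    wins = sum(1.0 if p > a else 0.5 if p == a else 0.0
               for p in present for a in absent)
    return wins / (len(present) * len(absent))

present = [12, 15, 9, 14, 11]  # spike counts, contour embedded in background
absent = [7, 9, 6, 10, 8]      # spike counts, background elements only
print(roc_auc(present, absent))  # → 0.94
```

Comparing this neuronal AUC with the animal's behavioral hit/false-alarm performance is what licenses statements like "single neurons encode the contour as reliably as the animal's choices."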

19.

Background

Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D leads partially to occlusions of surfaces depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces.

Methodology/Principal Findings

In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and, at the same time, to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit so that they can interact to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection to demonstrate that our approach outperforms simple feedforward computation-based approaches.

Conclusions/Significance

A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency. Unlike previous proposals which treat localized junction configurations as 2D image features, we link them to mechanisms of apparent surface segregation. As a consequence, we demonstrate how junctions can change their perceptual representation depending on the scene context and the spatial configuration of boundary fragments.

20.
While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamics of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.
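The representational-similarity comparison used in this study correlates the pattern of pairwise stimulus dissimilarities in a model layer with the pattern measured from brain responses. A self-contained sketch with invented activations (the real analysis runs over large stimulus sets and full voxel/sensor patterns):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between the
    activity patterns evoked by each pair of stimuli."""
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def rsa_score(rdm_a, rdm_b):
    """Representational similarity: correlation between the upper
    triangles of two RDMs (e.g. model layer vs. brain region)."""
    tri = lambda m: [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]
    return pearson(tri(rdm_a), tri(rdm_b))

# Invented activations: three stimuli in a model layer and a brain ROI
model = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3], [0.0, 1.0, 0.8]]
brain = [[2.0, 0.1, 0.4], [1.8, 0.2, 0.5], [0.1, 2.0, 1.5]]
print(rsa_score(rdm(model), rdm(brain)) > 0.99)  # → True: geometries agree
```

Running this score across time (MEG) or across regions (fMRI) is what lets such studies localize where and when a model stage best matches the brain.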


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号