Similar articles
20 similar articles found (search time: 203 ms)
1.
Learning to link visual contours   (total citations: 1; self-citations: 0; by others: 1)
Li W, Piëch V, Gilbert CD. Neuron, 2008, 57(3): 442-451
In complex visual scenes, linking related contour elements is important for object recognition. This process, thought to be stimulus-driven and hard-wired, has substrates in primary visual cortex (V1). Here, however, we find contour integration in V1 to depend strongly on perceptual learning and top-down influences that are specific to contour detection. In naive monkeys, the information about contours embedded in complex backgrounds is absent in V1 neuronal responses and is independent of the locus of spatial attention. Training animals to find embedded contours induces strong contour-related responses specific to the trained retinotopic region. These responses are most robust when animals perform the contour detection task but disappear under anesthesia. Our findings suggest that top-down influences dynamically adapt neural circuits according to specific perceptual tasks. This may serve as a general neuronal mechanism of perceptual learning and reflect top-down-mediated changes in cortical states.

2.
The present study investigated implicit and explicit recognition processes of rapidly perceptually learned objects by means of steady-state visual evoked potentials (SSVEP). Participants were initially exposed to object pictures within an incidental learning task (living/non-living categorization). Subsequently, degraded versions of some of these learned pictures were presented together with degraded versions of unlearned pictures and participants had to judge whether they recognized an object or not. During this test phase, stimuli were presented at 15 Hz, eliciting an SSVEP at the same frequency. Source localizations of SSVEP effects revealed overlapping activations for implicit and explicit processes in orbito-frontal and temporal regions. Correlates of explicit object recognition were additionally found in the superior parietal lobe. These findings are discussed as reflecting facilitation of object-specific processing areas within the temporal lobe by an orbito-frontal top-down signal, as proposed by bi-directional accounts of object recognition.
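The frequency-tagging logic behind SSVEP analysis can be sketched numerically: a stimulus flickering at 15 Hz drives a cortical response at exactly that frequency, whose amplitude can be read out with a single-frequency Fourier projection. The sampling rate, amplitudes, and function name below are illustrative assumptions, not values taken from the study.

```python
import math

def ssvep_amplitude(signal, fs, target_hz):
    """Single-frequency DFT: project the signal onto a sine and cosine at the
    stimulation frequency and return the amplitude of that component."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * target_hz * i / fs)
             for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * target_hz * i / fs)
             for i, x in enumerate(signal))
    # Factor 2/n converts the projection into the sinusoid's amplitude.
    return 2.0 * math.hypot(re, im) / n

# Toy "EEG": a 15 Hz tagged response plus an unrelated 7 Hz component,
# sampled at 600 Hz for exactly one second.
sig = [math.sin(2 * math.pi * 15 * i / 600)
       + 0.5 * math.sin(2 * math.pi * 7 * i / 600)
       for i in range(600)]
```

With an integer number of cycles in the window, the 15 Hz readout isolates the tagged response (amplitude ~1.0) while neighboring frequencies contribute nothing.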

3.
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

4.
Inferior temporal (IT) cortex, as the final stage of the ventral visual pathway, is involved in visual object recognition. In our everyday life we need to recognize visual objects that are degraded by noise. Psychophysical studies have shown that the accuracy and speed of object recognition decrease as the amount of visual noise increases. However, the neural representation of ambiguous visual objects and the underlying neural mechanisms of such changes in behavior are not known. Here, by recording neuronal spiking activity in macaque IT, we explored the relationship between stimulus ambiguity and IT neural activity. We found smaller amplitude, later onset, earlier offset and shorter duration of the response as visual ambiguity increased. All of these modulations were gradual and correlated with the level of stimulus ambiguity. We found that while category selectivity of IT neurons decreased with noise, it was preserved across a wide range of visual ambiguity. This noise tolerance for category selectivity in IT was lost at the 60% noise level. Interestingly, while the response of IT neurons to visual stimuli at the 60% noise level was significantly larger than their baseline activity and their response to full (100%) noise, it was no longer category selective. The latter finding shows a neural representation that signals the presence of a visual stimulus without signaling what it is. Interpreted in the context of a drift-diffusion model, these findings explain the neural mechanisms of changes in perceptual accuracy and speed during the recognition of ambiguous objects.
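The drift-diffusion account invoked above can be illustrated with a toy simulation: weaker drift (standing in for higher stimulus ambiguity) should yield decisions that are both slower and less accurate. All parameter values and function names here are illustrative assumptions, not quantities fitted to the recorded data.

```python
import random

def ddm_trial(drift, noise_sd=1.0, threshold=10.0, max_steps=10000, rng=None):
    """Accumulate noisy evidence until one of two symmetric bounds is hit.
    Returns (correct, reaction_time); 'correct' means the bound matching
    the drift direction was reached."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += drift + rng.gauss(0.0, noise_sd)  # signal plus moment-to-moment noise
        t += 1
    return x >= threshold, t

def simulate(drift, n=2000, seed=1):
    """Average accuracy and reaction time over n simulated trials."""
    rng = random.Random(seed)
    results = [ddm_trial(drift, rng=rng) for _ in range(n)]
    acc = sum(c for c, _ in results) / n
    mean_rt = sum(t for _, t in results) / n
    return acc, mean_rt
```

Running `simulate` with a large versus a small drift reproduces the psychophysical pattern the abstract describes: more ambiguity, slower and less accurate recognition.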

5.
Visual saliency is a fundamental yet hard-to-define property of objects or locations in the visual world. In a context where objects and their representations compete to dominate our perception, saliency can be thought of as the "juice" that makes objects win the race. It is often assumed that saliency is extracted and represented in an explicit saliency map, which serves to determine the location of spatial attention at any given time. It is then by drawing attention to a salient object that it can be recognized or categorized. I argue against this classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition. A number of visual processing tasks are clearly performed too fast for such a costly strategy to be employed. Rather, visual attention could simply act by biasing a saliency-based object recognition system. Under natural conditions of stimulation, saliency can be represented implicitly throughout the ventral visual pathway, independent of any explicit saliency map. At any given level, the most activated cells of the neural population simply represent the most salient locations. The notion of saliency itself grows increasingly complex throughout the system, mostly based on luminance contrast until information reaches visual cortex, gradually incorporating information about features such as orientation or color in primary visual cortex and early extrastriate areas, and finally the identity and behavioral relevance of objects in temporal cortex and beyond. Under these conditions the object that dominates perception, i.e. the object yielding the strongest (or the first) selective neural response, is by definition the one whose features are most "salient"--without the need for any external saliency map. In addition, I suggest that such an implicit representation of saliency can be best encoded in the relative times of the first spikes fired in a given neuronal population. In accordance with our subjective experience that saliency and attention do not modify the appearance of objects, the feed-forward propagation of this first spike wave could serve to trigger saliency-based object recognition outside the realm of awareness, while conscious perceptions could be mediated by the remaining discharges of longer neuronal spike trains.
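The proposed first-spike latency code can be sketched in a few lines: if stronger activation means an earlier first spike, then simply sorting first-spike times recovers the saliency ranking, with no explicit saliency map required. The inverse-latency mapping and the function names are simplifying assumptions for illustration only.

```python
def first_spike_times(activations, t_max=100.0):
    """Toy latency code: latency is inversely related to activation, so the
    most strongly driven neuron fires first. Silent neurons (activation 0)
    never fire and are marked None."""
    return [t_max / a if a > 0 else None for a in activations]

def decode_saliency_ranking(latencies):
    """Recover the implicit saliency order: earliest first spikes first."""
    fired = [(t, i) for i, t in enumerate(latencies) if t is not None]
    return [i for _, i in sorted(fired)]
```

For activations `[0.2, 5.0, 1.0, 0.0]` the decoded order is `[1, 2, 0]`: the strongest input wins the race, and the silent unit never enters it.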

6.
A brain-damaged patient (D.F.) with visual form agnosia is described and discussed. D.F. has a profound inability to recognize objects, places and people, in large part because of her inability to make perceptual discriminations of size, shape or orientation, despite having good visual acuity. Yet she is able to perform skilled actions that depend on that very same size, shape and orientation information that is missing from her perceptual awareness. It is suggested that her intact vision can best be understood within the framework of a dual processing model, according to which there are two cortical processing streams operating on different coding principles, for perception and for action, respectively. These may be expected to have different degrees of dependence on top-down information. One possibility is that D.F.'s lack of explicit awareness of the visual cues that guide her behaviour may result from her having to rely on a processing system which is not knowledge-based in a broad sense. Conversely, it may be that the perceptual system can provide conscious awareness of its products in normal individuals by virtue of the fact that it does interact with a stored base of visual knowledge.

7.
Over successive stages, the ventral visual system of the primate brain develops neurons that respond selectively to particular objects or faces with translation, size and view invariance. The powerful neural representations found in Inferotemporal cortex form a remarkably rapid and robust basis for object recognition which belies the difficulties faced by the system when learning in natural visual environments. A central issue in understanding the process of biological object recognition is how these neurons learn to form separate representations of objects from complex visual scenes composed of multiple objects. We show how a one-layer competitive network comprised of ‘spiking’ neurons is able to learn separate transformation-invariant representations (exemplified by one-dimensional translations) of visual objects that are always seen together moving in lock-step, but separated in space. This is achieved by combining ‘Mexican hat’ functional lateral connectivity with cell firing-rate adaptation to temporally segment input representations of competing stimuli through anti-phase oscillations (perceptual cycles). These spiking dynamics are quickly and reliably generated, enabling selective modification of the feed-forward connections to neurons in the next layer through Spike-Time-Dependent Plasticity (STDP), resulting in separate translation-invariant representations of each stimulus. Variations in key properties of the model are investigated with respect to the network’s ability to develop appropriate input representations and subsequently output representations through STDP. Contrary to earlier rate-coded models of this learning process, this work shows how spiking neural networks may learn about more than one stimulus together without suffering from the ‘superposition catastrophe’. We take these results to suggest that spiking dynamics are key to understanding biological visual object recognition.
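The pair-based STDP rule underlying this kind of model can be sketched compactly: a presynaptic spike shortly before a postsynaptic one strengthens the connection, the reverse order weakens it, with exponentially decaying time windows. The parameter values (`a_plus`, `a_minus`, `tau`) are conventional illustrative choices, not those of the model described in the abstract.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single spike pair, dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Update one feed-forward weight over all pre/post spike pairs,
    clipping to hard bounds."""
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_dw(tq - tp)
    return max(w_min, min(w_max, w))
```

Because potentiation and depression depend only on relative spike timing, the anti-phase oscillations described above let each stimulus selectively strengthen its own feed-forward pathway.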

8.
Perception can change nonlinearly with stimulus contrast, and perceptual threshold may depend on the direction of contrast change. Such hysteresis effects in neurometric functions provide a signature of perceptual awareness. We recorded brain activity with functional neuroimaging in observers exposed to gradual contrast changes of initially hidden visual stimuli. Lateral occipital, frontal, and parietal regions all displayed both transient activations and hysteresis that correlated with change and maintenance of a percept, respectively. Medial temporal activity did not follow perception but increased during hysteresis and showed transient deactivations during perceptual transitions. These findings identify a set of brain regions sensitive to visual awareness and suggest that medial temporal structures may provide backward signals that account for neural and, thereby, perceptual hysteresis.

9.
Computational modelling of visual attention   (total citations: 3; self-citations: 0; by others: 3)
Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment. First, the perceptual saliency of stimuli critically depends on the surrounding context. Second, a unique 'saliency map' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy. Third, inhibition of return, the process by which the currently attended location is prevented from being attended again, is a crucial element of attentional deployment. Fourth, attention and eye movements tightly interplay, posing computational challenges with respect to the coordinate system used to control attention. And last, scene understanding and object recognition strongly constrain the selection of attended locations. Insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention.
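The interplay of a saliency map, winner-take-all selection, and inhibition of return can be sketched as a simple loop: attend the most conspicuous location, suppress it, and repeat. This is a deliberately schematic reduction of such models, with made-up location labels and a made-up function name, not an actual implementation.

```python
def attend_sequence(saliency, n_fixations=3, ior_decay=0.0):
    """Winner-take-all scan of a saliency map with inhibition of return:
    repeatedly select the most conspicuous location, then suppress it so
    attention can move on to the next-most-salient one."""
    s = dict(saliency)  # location -> conspicuity value
    fixations = []
    for _ in range(n_fixations):
        loc = max(s, key=s.get)          # winner-take-all selection
        fixations.append(loc)
        s[loc] = ior_decay * s[loc]      # inhibition of return: suppress winner
    return fixations
```

With `ior_decay=0.0` the winner is fully suppressed, so the scan visits locations in strict order of decreasing conspicuity.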

10.
The visual cortex is not a passive recipient of information: predictions about incoming stimuli are made based on experience, partial information and the consequences of inferences. A combination of imaging studies in the human brain has now led to the proposal that the orbitofrontal cortex is a key source of top-down predictions leading to object recognition.

11.
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

12.
Visual scene recognition is a dynamic process through which incoming sensory information is iteratively compared with predictions regarding the most likely identity of the input stimulus. In this study, we used a novel progressive unfolding task to characterize the accumulation of perceptual evidence prior to scene recognition, and its potential modulation by the emotional valence of these scenes. Our results show that emotional (pleasant and unpleasant) scenes led to slower accumulation of evidence compared to neutral scenes. In addition, when controlling for the potential contribution of non-emotional factors (i.e., familiarity and complexity of the pictures), our results confirm a reliable shift in the accumulation of evidence for pleasant relative to neutral and unpleasant scenes, suggesting a valence-specific effect. These findings indicate that proactive iterations between sensory processing and top-down predictions during scene recognition are reliably influenced by the rapidly extracted (positive) emotional valence of the visual stimuli. We interpret these findings in accordance with the notion of a genuine positivity offset during emotional scene recognition.

13.
Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale.

14.
During the formation of new episodic memories, a rich array of perceptual information is bound together for long-term storage. However, the brain mechanisms by which sensory representations (such as colors, objects, or individuals) are selected for episodic encoding are currently unknown. We describe a functional magnetic resonance imaging experiment in which participants encoded the association between two classes of visual stimuli that elicit selective responses in the extrastriate visual cortex (faces and houses). Using connectivity analyses, we show that correlation in the hemodynamic signal between face- and place-sensitive voxels and the left dorsolateral prefrontal cortex is a reliable predictor of successful face-house binding. These data support the view that during episodic encoding, "top-down" control signals originating in the prefrontal cortex help determine which perceptual information is fated to be bound into the new episodic memory trace.

15.
Ganel T, Chajut E, Algom D. Current Biology, 2008, 18(14): R599-R601
According to Weber's law, a basic perceptual principle of psychological science, sensitivity to changes along a given physical dimension decreases when stimulus intensity increases [1]. In other words, the ‘just noticeable difference’ (JND) for weaker stimuli is smaller — hence resolution power is greater — than that for stronger stimuli on the same sensory continuum. Although Weber's law characterizes human perception for virtually all sensory dimensions, including visual length [2,3], there have been no attempts to test its validity for visually guided action. For this purpose, we asked participants to either grasp or make perceptual size estimations for real objects varying in length. A striking dissociation was found between grasping and perceptual estimations: in the perceptual conditions, JND increased with physical size in accord with Weber's law; but in the grasping condition, JND was unaffected by the same variation in size of the referent objects. Therefore, Weber's law was violated for visually guided action, but not for perceptual estimations. These findings document a fundamental difference in the way that object size is computed for action and for perception and suggest that the visual coding for action is based on absolute metrics even at a very basic level of processing.
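Weber's law as stated here is easy to make concrete: the JND scales linearly with intensity, so a fixed absolute difference that is detectable against a weak baseline becomes undetectable against a strong one. The Weber fraction of 0.1 and the function names are illustrative assumptions, not values from the study.

```python
def jnd(intensity, weber_fraction=0.1):
    """Weber's law: the just-noticeable difference grows in proportion to
    stimulus intensity (delta_I = k * I)."""
    return weber_fraction * intensity

def discriminable(i1, i2, weber_fraction=0.1):
    """Two intensities are perceptually distinguishable (under this toy
    model) if they differ by more than the JND at the weaker intensity."""
    base = min(i1, i2)
    return abs(i1 - i2) > jnd(base, weber_fraction)
```

A 1.5-unit difference is detectable at a baseline of 10 (JND = 1) but a 5-unit difference is not detectable at a baseline of 100 (JND = 10), which is exactly the resolution loss the abstract describes for perception, and which grasping reportedly escapes.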

16.
17.
Can nonhuman animals attend to visual stimuli as whole, coherent objects? We investigated this question by adapting for use with pigeons a task in which human participants must report whether two visual attributes belong to the same object (one-object trial) or to different objects (two-object trial). We trained pigeons to discriminate a pair of differently colored shapes that had two targets either on a single object or on two different objects. Each target equally often appeared on the one-object and two-object stimuli; therefore, a specific target location could not serve as a discriminative cue. The pigeons learned to report whether the two target dots were located on a single object or on two different objects; follow-up tests demonstrated that this ability was not entirely based on memorization of the dot patterns and locations. Additional tests disclosed predominant stimulus control by the color, but not by the shape of the two objects. These findings suggest that human psychophysical methods are readily applicable to the study of object discrimination by nonhuman animals.

18.

Background

The ability to detect and integrate associations between unrelated items that are close in space and time is a key feature of human learning and memory. Learning sequential associations between non-adjacent visual stimuli (higher-order visuospatial dependencies) can occur either with or without awareness (explicit vs. implicit learning) of the products of learning. Existing behavioural and neurocognitive studies of explicit and implicit sequence learning, however, are based on conscious access to the sequence of target locations and, typically, on conditions where the locations for orienting, or motor, responses coincide with the locations of the target sequence.

Methodology/Principal Findings

Dichoptic stimuli were presented on a novel sequence learning task using a mirror stereoscope to mask the eye-of-origin of visual input from conscious awareness. We demonstrate that conscious access to the sequence of target locations, and responses that coincide with the structure of the target sequence, are dispensable features when learning higher-order visuospatial associations. Sequence knowledge was expressed in the ability of participants to identify the trained higher-order visuospatial sequence on a recognition test, even though the trained and untrained recognition sequences were identical when viewed at a conscious binocular level, and differed only at the level of the masked sequential associations.

Conclusions/Significance

These results demonstrate that unconscious processing can support perceptual learning of higher-order sequential associations through interocular integration of retinotopic-based codes stemming from monocular eye-of-origin information. Furthermore, unlike other forms of perceptual associative learning, visuospatial attention did not need to be directed to the locations of the target sequence. More generally, the results pose a challenge to neural models of learning to account for a previously unknown capacity of the human visual system to support the detection, learning and recognition of higher-order sequential associations under conditions where observers are unable to see the target sequence or perform responses that coincide with the structure of the target sequence.

19.
Neurons in the visual cortex are responsive to the presentation of oriented and curved line segments, which are thought to act as primitives for the visual processing of shapes and objects. Prolonged adaptation to such stimuli gives rise to two related perceptual effects: a slow change in the appearance of the adapting stimulus (perceptual drift), and the distortion of subsequently presented test stimuli (adaptational aftereffects). Here we used a psychophysical nulling technique to dissociate and quantify these two classical observations in order to examine their underlying mechanisms and their relationship to one another. In agreement with previous work, we found that during adaptation horizontal and vertical straight lines serve as attractors for perceived orientation and curvature. However, the rate of perceptual drift for different stimuli was not predictive of the corresponding aftereffect magnitudes, indicating that the two perceptual effects are governed by distinct neural processes. Finally, the rate of perceptual drift for curved line segments did not depend on the spatial scale of the stimulus, suggesting that its mechanisms lie outside strictly retinotopic processing stages. These findings provide new evidence that the visual system relies on statistically salient intrinsic reference stimuli for the processing of visual patterns, and point to perceptual drift as an experimental window for studying the mechanisms of visual perception.

20.
BACKGROUND: When we view static scenes that imply motion - such as an object dropping off a shelf - recognition memory for the position of the object is extrapolated forward. It is as if the object in our mind's eye comes alive and continues on its course. This phenomenon is known as representational momentum and results in a distortion of recognition memory in the implied direction of motion. Representational momentum is modifiable; simply labelling a drawing of a pointed object as 'rocket' will facilitate the effect, whereas the label 'steeple' will impede it. We used functional magnetic resonance imaging (fMRI) to explore the neural substrate for representational momentum. RESULTS: Subjects participated in two experiments. In the first, they were presented with video excerpts of objects in motion (versus the same objects in a resting position). This identified brain areas responsible for motion perception. In the second experiment, they were presented with still photographs of the same target items, only some of which implied motion (representational momentum stimuli). When viewing still photographs of scenes implying motion, activity was revealed in secondary visual cortical regions that overlap with areas responsible for the perception of actual motion. Additional bilateral activity was revealed within a posterior satellite of V5 for the representational momentum stimuli. Activation was also engendered in the anterior cingulate cortex. CONCLUSIONS: Considering the implicit nature of representational momentum and its modifiability, the findings suggest that higher-order semantic information can act on secondary visual cortex to alter perception without explicit awareness.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号