Similar documents (20 results)
1.
Are objects coded by a small number of neurons or cortical regions that respond preferentially to the object in question, or by more distributed patterns of responses, including neurons or regions that respond only weakly? Distributed codes can represent a larger number of alternative items than sparse codes but produce ambiguities when multiple items are represented simultaneously (the "superposition" problem). Recent studies found category information in the distributed pattern of response across the ventral visual pathway, including in regions that do not "prefer" the object in question. However, these studies measured neural responses to isolated objects, a situation atypical of real-world vision, where multiple objects are usually present simultaneously ("clutter"). We report that information in the spatial pattern of fMRI response about standard object categories is severely disrupted by clutter and eliminated when attention is diverted. However, information about preferred categories in category-specific regions is undiminished by clutter and partly preserved under diverted attention. These findings indicate that in natural conditions, the pattern of fMRI response provides robust category information only for objects coded in selective cortical regions and highlight the vulnerability of distributed representations to clutter and the advantages of sparse cortical codes in mitigating clutter costs.
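The "superposition" problem above can be made concrete with a toy simulation. The sketch below uses synthetic data (not the study's fMRI analysis; population sizes, noise level, and both readouts are hypothetical) to compare a whole-pattern correlation readout of a distributed code with a readout restricted to category-dedicated units when the patterns of two objects are summed, as in clutter.

```python
# Toy illustration of the superposition problem (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)
n_units, n_cats, noise = 200, 8, 0.3

# Distributed code: every unit responds somewhat to every category.
distributed = rng.normal(1.0, 0.5, size=(n_cats, n_units))
# Sparse code: each category drives its own small, disjoint set of units.
sparse = np.zeros((n_cats, n_units))
for c in range(n_cats):
    sparse[c, c * 25:(c + 1) * 25] = 2.0

def pattern_readout(measured, templates, cat):
    """Is `cat` the best-matching single-object template by correlation?"""
    r = [np.corrcoef(measured, t)[0, 1] for t in templates]
    return int(np.argmax(r) == cat)

def selective_readout(measured, cat):
    """Are the units dedicated to `cat` active above threshold?"""
    return int(measured[cat * 25:(cat + 1) * 25].mean() > 1.0)

pattern_hits, selective_hits = [], []
for target in range(n_cats):
    for distractor in range(n_cats):
        if distractor == target:
            continue
        clutter_d = distributed[target] + distributed[distractor] + rng.normal(0, noise, n_units)
        clutter_s = sparse[target] + sparse[distractor] + rng.normal(0, noise, n_units)
        pattern_hits.append(pattern_readout(clutter_d, distributed, target))
        selective_hits.append(selective_readout(clutter_s, target))

print("distributed-pattern readout under clutter:", round(np.mean(pattern_hits), 2))
print("selective-units readout under clutter:   ", round(np.mean(selective_hits), 2))
```

With these toy assumptions the whole-pattern readout is ambiguous about which of the two superimposed objects is the target, whereas the dedicated-unit readout still signals the target's presence.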

2.
This study has begun to test the hypothesis that aspects of hand/object shape are represented in the discharge of primary motor cortex (M1) neurons. Two monkeys were trained in a visually cued reach-to-grasp task, in which object properties and grasp forces were systematically varied. Behavioral analyses show that the reach and grasp force production were constant across the objects. The discharge of M1 neurons was highly modulated during the reach and grasp. Multiple linear regression models revealed that the M1 discharge was highly dependent on the object grasped, with object class, volume, orientation and grasp force as significant predictors. These findings are interpreted as evidence that the CNS controls the hand as a unit.
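For illustration only, the kind of regression analysis described above can be sketched as follows. The data, predictor names, and coefficients are hypothetical placeholders, not the study's recordings or analysis code.

```python
# Schematic multiple linear regression of a firing rate on object properties
# and grasp force (all values are made up for illustration).
import numpy as np

rng = np.random.default_rng(1)
n_trials = 240

# Hypothetical trial-by-trial predictors.
object_class = rng.integers(0, 3, n_trials)        # three object classes
volume       = rng.uniform(10.0, 60.0, n_trials)   # cm^3
orientation  = rng.uniform(0.0, 90.0, n_trials)    # degrees
grasp_force  = rng.uniform(0.5, 3.0, n_trials)     # N

# Simulated firing rate that genuinely depends on the predictors.
rate = (5.0 + 4.0 * (object_class == 1) + 7.0 * (object_class == 2)
        + 0.10 * volume + 0.05 * orientation + 3.0 * grasp_force
        + rng.normal(0.0, 1.0, n_trials))

# Design matrix: intercept, dummy-coded object class, continuous predictors.
X = np.column_stack([np.ones(n_trials),
                     object_class == 1, object_class == 2,
                     volume, orientation, grasp_force]).astype(float)

beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
print("fitted coefficients:", np.round(beta, 2))
print("R^2:", round(float(r2), 3))
```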

3.
Regularities are gradually represented in cortex after extensive experience [1], and yet they can influence behavior after minimal exposure [2, 3]. What kind of representations support such rapid statistical learning? The medial temporal lobe (MTL) can represent information from even a single experience [4], making it a good candidate system for assisting in initial learning about regularities. We combined anatomical segmentation of the MTL, high-resolution fMRI, and multivariate pattern analysis to identify representations of objects in cortical and hippocampal areas of human MTL, assessing how these representations were shaped by exposure to regularities. Subjects viewed a continuous visual stream containing hidden temporal relationships: pairs of objects that reliably appeared nearby in time. We compared the pattern of blood oxygen level-dependent activity evoked by each object before and after this exposure, and found that perirhinal cortex, parahippocampal cortex, subiculum, CA1, and CA2/CA3/dentate gyrus (CA2/3/DG) encoded regularities by increasing the representational similarity of their constituent objects. Most regions exhibited bidirectional associative shaping, whereas CA2/3/DG represented regularities in a forward-looking predictive manner. These findings suggest that object representations in MTL come to mirror the temporal structure of the environment, supporting rapid and incidental statistical learning.
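A minimal sketch of the before/after similarity comparison described above, using synthetic voxel patterns and an assumed "shared component" to stand in for the learned association (not the study's data or analysis pipeline):

```python
# Toy before/after pattern-similarity comparison for a learned object pair.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100

def pattern_similarity(a, b):
    """Pearson correlation between two voxel patterns."""
    return float(np.corrcoef(a, b)[0, 1])

# Pre-exposure: the two paired objects evoke unrelated patterns.
pre_a = rng.normal(0, 1, n_voxels)
pre_b = rng.normal(0, 1, n_voxels)

# Post-exposure (toy assumption): each object's pattern drifts toward a shared
# component, as if the region now encodes the learned pairing. The 0.6/0.4
# weighting is arbitrary.
shared = rng.normal(0, 1, n_voxels)
post_a = 0.6 * pre_a + 0.4 * shared
post_b = 0.6 * pre_b + 0.4 * shared

print("similarity before exposure:", round(pattern_similarity(pre_a, pre_b), 3))
print("similarity after exposure: ", round(pattern_similarity(post_a, post_b), 3))
```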

4.
Distributed coding of sound locations in the auditory cortex (total citations: 3, self-citations: 0, citations by others: 3)
Although the auditory cortex plays an important role in sound localization, that role is not well understood. In this paper, we examine the nature of spatial representation within the auditory cortex, focusing on three questions. First, are sound-source locations encoded by individual sharply tuned neurons or by activity distributed across larger neuronal populations? Second, do temporal features of neural responses carry information about sound-source location? Third, are any fields of the auditory cortex specialized for spatial processing? We present a brief review of recent work relevant to these questions along with the results of our investigations of spatial sensitivity in cat auditory cortex. Together, they strongly suggest that space is represented in a distributed manner, that response timing (notably first-spike latency) is a critical information-bearing feature of cortical responses, and that neurons in various cortical fields differ in both their degree of spatial sensitivity and their manner of spatial coding. The posterior auditory field (PAF), in particular, is well suited for the distributed coding of space and encodes sound-source locations partly by modulations of response latency. Studies of neurons recorded simultaneously from PAF and/or A1 reveal that spatial information can be decoded from the relative spike times of pairs of neurons (particularly when responses are compared between the two fields), thus partially compensating for the absence of an absolute reference to stimulus onset.
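To make the relative-timing idea concrete, the toy sketch below decodes azimuth from the first-spike latency difference of a hypothetical pair of neurons, with no access to stimulus onset time. The tuning slopes, baselines, and jitter are invented for illustration and are not the recorded data.

```python
# Toy decoding of sound-source azimuth from relative first-spike latencies.
import numpy as np

rng = np.random.default_rng(3)
azimuths = np.arange(-80, 81, 20)                  # degrees

def first_spike_latency(azimuth, slope, base, jitter=1.0):
    """Hypothetical latency tuning (ms): linear in azimuth plus jitter."""
    return base + slope * azimuth + rng.normal(0.0, jitter)

def relative_latency(azimuth):
    """Latency difference between two neurons; independent of stimulus onset."""
    lat_a = first_spike_latency(azimuth, slope=+0.05, base=20.0)
    lat_b = first_spike_latency(azimuth, slope=-0.03, base=25.0)
    return lat_a - lat_b

# Templates: expected latency difference at each azimuth (averaged over trials).
templates = np.array([np.mean([relative_latency(a) for _ in range(50)])
                      for a in azimuths])

errors = []
for true_az in azimuths:
    for _ in range(50):
        observed = relative_latency(true_az)
        decoded = azimuths[int(np.argmin(np.abs(templates - observed)))]
        errors.append(abs(int(decoded) - int(true_az)))
print("mean decoding error (deg):", round(float(np.mean(errors)), 1))
```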

5.
Visual saliency is a fundamental yet hard-to-define property of objects or locations in the visual world. In a context where objects and their representations compete to dominate our perception, saliency can be thought of as the "juice" that makes objects win the race. It is often assumed that saliency is extracted and represented in an explicit saliency map, which serves to determine the location of spatial attention at any given time. It is then by drawing attention to a salient object that it can be recognized or categorized. I argue against this classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition. A number of visual processing tasks are clearly performed too fast for such a costly strategy to be employed. Rather, visual attention could simply act by biasing a saliency-based object recognition system. Under natural conditions of stimulation, saliency can be represented implicitly throughout the ventral visual pathway, independent of any explicit saliency map. At any given level, the most activated cells of the neural population simply represent the most salient locations. The notion of saliency itself grows increasingly complex throughout the system, mostly based on luminance contrast until information reaches visual cortex, gradually incorporating information about features such as orientation or color in primary visual cortex and early extrastriate areas, and finally the identity and behavioral relevance of objects in temporal cortex and beyond. Under these conditions the object that dominates perception, i.e. the object yielding the strongest (or the first) selective neural response, is by definition the one whose features are most "salient", without the need for any external saliency map. In addition, I suggest that such an implicit representation of saliency can be best encoded in the relative times of the first spikes fired in a given neuronal population. In accordance with our subjective experience that saliency and attention do not modify the appearance of objects, the feed-forward propagation of this first spike wave could serve to trigger saliency-based object recognition outside the realm of awareness, while conscious perceptions could be mediated by the remaining discharges of longer neuronal spike trains.

6.
Rainer G  Miller EK 《Neuron》2000,27(1):179-189
The perception and recognition of objects are improved by experience. Here, we show that monkeys' ability to recognize degraded objects was improved by several days of practice with these objects. This improvement was reflected in the activity of neurons in the prefrontal (PF) cortex, a brain region critical for a wide range of visual behaviors. Familiar objects activated fewer neurons than did novel objects, but these neurons were more narrowly tuned, and the object representation was more resistant to the effects of degradation, after experience. These results demonstrate a neural correlate of visual learning in the PF cortex of adult monkeys.

7.
Fang F  He S 《Neuron》2005,45(5):793-800
Are there neurons representing specific views of objects in the human visual system? A visual selective adaptation method was used to address this question. After visual adaptation to an object viewed either 15 or 30 degrees from one side, when the same object was subsequently presented near the frontal view, the perceived viewing directions were biased in a direction opposite to that of the adapted viewpoint. This aftereffect can be obtained with spatially nonoverlapping adapting and test stimuli, and it depends on the global representation of the adapting stimuli. Viewpoint aftereffects were found within, but not across, categories of objects tested (faces, cars, wire-like objects). The magnitude of this aftereffect depends on the angular difference between the adapting and test viewing angles and grows with increasing duration of adaptation. These results support the existence of object-selective neurons tuned to specific viewing angles in the human visual system.

8.
Brincat SL  Connor CE 《Neuron》2006,49(1):17-24
How does the brain synthesize low-level neural signals for simple shape parts into coherent representations of complete objects? Here, we present evidence for a dynamic process of object part integration in macaque posterior inferotemporal cortex (IT). Immediately after stimulus onset, neural responses carried information about individual object parts (simple contour fragments) only. Subsequently, information about specific multipart configurations emerged, building gradually over the course of approximately 60 ms, producing a sparser and more explicit representation of object shape. We show that this gradual transformation can be explained by a recurrent network process that effectively compares parts signals across neurons to generate inferences about multipart shape configurations.
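A toy rate model can illustrate the proposed dynamic (illustrative only; this is not the paper's fitted recurrent network): feedforward part signals rise quickly, while a recurrent term driven by the conjunction of parts builds configuration selectivity more slowly, on the order of tens of milliseconds.

```python
# Toy dynamics: fast part signals, slower recurrent configuration signal.
import numpy as np

dt, tau_part, tau_config, t_max = 1.0, 5.0, 20.0, 150.0   # ms, hypothetical
time = np.arange(0.0, t_max, dt)

part_input = (time > 10).astype(float)      # both parts appear at 10 ms
part1 = np.zeros_like(time)                 # feedforward part signals
part2 = np.zeros_like(time)
config = np.zeros_like(time)                # recurrent configuration signal

for i in range(1, time.size):
    part1[i] = part1[i-1] + dt / tau_part * (part_input[i] - part1[i-1])
    part2[i] = part2[i-1] + dt / tau_part * (part_input[i] - part2[i-1])
    # The recurrent drive is effectively a comparison of part signals across
    # neurons: it is large only when both parts are being signalled.
    drive = part1[i] * part2[i]
    config[i] = config[i-1] + dt / tau_config * (drive - config[i-1])

for t_ms in (20, 40, 60, 80, 100):
    k = int(t_ms / dt)
    print(f"t={t_ms:3d} ms  part signal={part1[k]:.2f}  configuration signal={config[k]:.2f}")
```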

9.
In our previous studies of hand manipulation task-related neurons, we found many neurons of the parietal association cortex which responded to the sight of three-dimensional (3D) objects. Most of the task-related neurons in the AIP area (the lateral bank of the anterior intraparietal sulcus) were visually responsive and half of them responded to objects for manipulation. Most of these neurons were selective for the 3D features of the objects. More recently, we have found binocular visual neurons in the lateral bank of the caudal intraparietal sulcus (c-IPS area) that preferentially respond to a luminous bar or plate at a particular orientation in space. We studied the responses of axis-orientation selective (AOS) neurons and surface-orientation selective (SOS) neurons in this area with stimuli presented on a 3D computer graphics display. The AOS neurons showed a stronger response to elongated stimuli and showed tuning to the orientation of the longitudinal axis. Many of them preferred a tilted stimulus in depth and appeared to be sensitive to orientation disparity and/or width disparity. The SOS neurons showed a stronger response to a flat than to an elongated stimulus and showed tuning to the 3D orientation of the surface. Their responses increased with the width or length of the stimulus. A considerable number of SOS neurons responded to a square in a random dot stereogram and were tuned to orientation in depth, suggesting their sensitivity to the gradient of disparity. We also found several SOS neurons that responded to a square with tilted or slanted contours, suggesting their sensitivity to orientation disparity and/or width disparity. Area c-IPS is likely to send visual signals of the 3D features of an object to area AIP for the visual guidance of hand actions.

10.
How many neurons participate in the representation of a single visual image? Answering this question is critical for constraining biologically inspired models of object recognition, which vary greatly in their assumptions from few "grandmother cells" to numerous neurons in widely distributed networks. Functional imaging techniques, such as fMRI, provide an opportunity to explore this issue, since they allow the simultaneous detection of the entire neuronal population responding to each stimulus. Several studies have shown that fMRI BOLD signal is approximately proportional to neuronal activity. However, since it provides an indirect measure of this activity, obtaining a realistic estimate of the number of activated neurons requires several intervening steps. Here, we used the extensive knowledge of primate V1 to yield a conservative estimate of the ratio between hemodynamic response and neuronal firing. This ratio was then used, in addition to several cautious assumptions, to assess the number of neurons responding to a single-object image in the entire visual cortex and particularly in object-related areas. Our results show that at least a million neurons in object-related cortex and about two hundred million neurons in the entire visual cortex are involved in the representation of a single-object image.
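Only to show the form of such a calculation, the back-of-envelope sketch below multiplies an activated cortical volume by a neuronal density and by an active fraction inferred from a hemodynamic-to-firing ratio. Every number here is a hypothetical placeholder; the paper derives its own calibrated values from V1 data, and none of its figures are reproduced.

```python
# Back-of-envelope structure of a neuron-count estimate (placeholder values).
activated_volume_mm3   = 2_000.0    # cortex responding to the image (hypothetical)
neurons_per_mm3        = 40_000.0   # neuronal density (hypothetical placeholder)
bold_response_percent  = 1.0        # observed BOLD amplitude (hypothetical)
bold_if_all_active     = 4.0        # BOLD expected if all neurons fired strongly (hypothetical)

# Fraction of neurons assumed active, from the hemodynamic-to-firing ratio.
active_fraction = bold_response_percent / bold_if_all_active

n_active = activated_volume_mm3 * neurons_per_mm3 * active_fraction
print(f"estimated active neurons: {n_active:,.0f}")   # ~2e7 with these placeholders
```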

11.
Hung CC  Carlson ET  Connor CE 《Neuron》2012,74(6):1099-1113
The basic, still unanswered question about visual object representation is this: what specific information is encoded by neural signals? Theorists have long predicted that neurons would encode medial axis or skeletal object shape, yet recent studies reveal instead neural coding of boundary or surface shape. Here, we addressed this theoretical/experimental disconnect, using adaptive shape sampling to demonstrate explicit coding of medial axis shape in high-level object cortex (macaque monkey inferotemporal cortex or IT). Our metric shape analyses revealed a coding continuum, along which most neurons represent a configuration of both medial axis and surface components. Thus, IT response functions embody a rich basis set for simultaneously representing skeletal and external shape of complex objects. This would be especially useful for representing biological shapes, which are often characterized by both complex, articulated skeletal structure and specific surface features.
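For readers unfamiliar with the term, a medial axis (skeleton) can be computed from a binary silhouette with scikit-image. The sketch below is purely illustrative and is unrelated to the paper's adaptive shape-sampling procedure or its neural analyses.

```python
# Medial axis of a toy binary silhouette (requires scikit-image).
import numpy as np
from skimage.morphology import medial_axis

# Build a crude "L"-shaped silhouette: a horizontal bar plus a vertical limb.
silhouette = np.zeros((60, 60), dtype=bool)
silhouette[25:35, 5:55] = True      # horizontal part
silhouette[5:35, 40:50] = True      # vertical part

# The medial axis is the set of points with more than one nearest boundary
# point; the distance transform gives the local thickness along the skeleton.
skeleton, distance = medial_axis(silhouette, return_distance=True)

print("silhouette pixels:", int(silhouette.sum()))
print("skeleton pixels:  ", int(skeleton.sum()))
print("max local radius along the axis:", float(distance[skeleton].max()))
```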

12.
Haushofer J  Kanwisher N 《Neuron》2007,53(6):773-775
How does experience change representations of visual objects in the brain? Do cortical object representations reflect category membership? In this issue of Neuron, Jiang et al. show that category training leads to sharpening of neural responses in high-level visual cortex; in contrast, category boundaries may be represented only in prefrontal cortex.

13.
Ilg UJ  Schumann S  Thier P 《Neuron》2004,43(1):145-151
The motion areas of posterior parietal cortex extract information on visual motion for perception as well as for the guidance of movement. It is usually assumed that neurons in posterior parietal cortex represent visual motion relative to the retina. Current models describing action guided by moving objects work successfully based on this assumption. However, here we show that the pursuit-related responses of a distinct group of neurons in area MST of monkeys are at odds with this view. Rather than signaling object image motion on the retina, they represent object motion in world-centered coordinates. This representation may simplify the coordination of object-directed action and ego motion-invariant visual perception.
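The coordinate relationship at issue can be written down directly: to a first approximation, an object's velocity in world-centered coordinates is its retinal image velocity plus the eye (pursuit) velocity. A minimal sketch with arbitrary example values:

```python
# World-centered object motion = retinal image motion + eye velocity (to a
# first approximation; example values are arbitrary).
def world_motion(retinal_velocity_deg_s: float, eye_velocity_deg_s: float) -> float:
    """Object velocity in world coordinates, in deg/s."""
    return retinal_velocity_deg_s + eye_velocity_deg_s

# During accurate pursuit of the object, retinal motion is ~0 yet the object
# still moves in the world; a purely retinal signal would miss this.
print(world_motion(retinal_velocity_deg_s=0.0, eye_velocity_deg_s=10.0))    # 10.0
# A stationary object swept across the retina by the eye movement:
print(world_motion(retinal_velocity_deg_s=-10.0, eye_velocity_deg_s=10.0))  # 0.0
```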

14.
An important step in visual processing is the segregation of objects in a visual scene from one another and from the embedding background. According to current theories of visual neuroscience, the different features of a particular object are represented by cells which are spatially distributed across multiple visual areas in the brain. The segregation of an object therefore requires the unique identification and integration of the pertaining cells which have to be “bound” into one assembly coding for the object in question. Several authors have suggested that such a binding of cells could be achieved by the selective synchronization of temporally structured responses of the neurons activated by features of the same stimulus. This concept has recently gained support by the observation of stimulus-dependent oscillatory activity in the visual system of the cat, pigeon and monkey. Furthermore, experimental evidence has been found for the formation and segregation of synchronously active cell assemblies representing different stimuli in the visual field. In this study, we investigate temporally structured activity in networks with single and multiple feature domains. As a first step, we examine the formation and segregation of cell assemblies by synchronizing and desynchronizing connections within a single feature module. We then demonstrate that distributed assemblies can be appropriately bound in a network comprising three modules selective for stimulus disparity, orientation and colour, respectively. In this context, we address the principal problem of segregating assemblies representing spatially overlapping stimuli in a distributed architecture. Using synchronizing as well as desynchronizing mechanisms, our simulations demonstrate that the binding problem can be solved by temporally correlated responses of cells which are distributed across multiple feature modules.
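A minimal phase-oscillator sketch (not the authors' network model, which uses structured spiking units and feature modules) can illustrate binding by synchrony: coupling is synchronizing within each stimulus-defined assembly and desynchronizing between assemblies, so cells of one assembly phase-lock with each other but not with the other assembly.

```python
# Toy binding-by-synchrony demonstration with coupled phase oscillators.
import numpy as np

rng = np.random.default_rng(4)
n = 20                                        # 10 cells per assembly
assembly = np.array([0] * 10 + [1] * 10)

# Coupling: synchronizing within an assembly, desynchronizing between them.
K = np.where(assembly[:, None] == assembly[None, :], 0.3, -0.1)
np.fill_diagonal(K, 0.0)

phase = rng.uniform(0.0, 2.0 * np.pi, n)
omega = rng.normal(2.0 * np.pi * 40.0, 1.0, n)   # ~40 Hz intrinsic frequencies
dt = 0.001                                       # s

for _ in range(2000):                            # 2 s of simulated time
    coupling = (K * np.sin(phase[None, :] - phase[:, None])).sum(axis=1)
    phase = phase + dt * (omega + coupling)

def coherence(idx):
    """Magnitude of the mean phase vector (1 = perfect synchrony)."""
    return float(np.abs(np.exp(1j * phase[idx]).mean()))

print("coherence within assembly 0:", round(coherence(assembly == 0), 2))
print("coherence within assembly 1:", round(coherence(assembly == 1), 2))
print("coherence across all cells :", round(coherence(np.arange(n)), 2))
```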

15.
The part of the primate visual cortex responsible for the recognition of objects is parcelled into about a dozen areas organized somewhat hierarchically (the region is called the ventral stream). Why are there approximately this many hierarchical levels? Here I put forth a generic information-processing hierarchical model, and show how the total number of neurons required depends on the number of hierarchical levels and on the complexity of visual objects that must be recognized. Because the recognition of written words appears to occur in a similar part of inferotemporal cortex as other visual objects, the complexity of written words may be similar to that of other visual objects for humans; for this reason, I measure the complexity of written words, and use it as an approximate estimate of the complexity more generally of visual objects. I then show that the information-processing hierarchy that accommodates visual objects of that complexity possesses the minimum number of neurons when the number of hierarchical levels is approximately 15.

16.
The external world is mapped retinotopically onto the primary visual cortex (V1). We show here that objects in the world, unless they are very dissimilar, can be recognized only if they are sufficiently separated in visual cortex: specifically, in V1, at least 6 mm apart in the radial direction (increasing eccentricity) or 1 mm apart in the circumferential direction (equal eccentricity). Objects closer together than this critical spacing are perceived as an unidentifiable jumble. This is called 'crowding'. It severely limits visual processing, including speed of reading and searching. The conclusion about visual cortex rests on three findings. First, psychophysically, the necessary 'critical' spacing, in the visual field, is proportional to (roughly half) the eccentricity of the objects. Second, the critical spacing is independent of the size and kind of object. Third, anatomically, the representation of the visual field on the cortical surface is such that the position in V1 (and several other areas) is the logarithm of eccentricity in the visual field. Furthermore, we show that much of this can be accounted for by supposing that each 'combining field', defined by the critical spacing measurements, is implemented by a fixed number of cortical neurons.
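The link between a critical spacing proportional to eccentricity and a fixed cortical separation follows from the logarithmic mapping, and can be checked numerically. The magnification formula below is a commonly cited approximation for human V1 and is an assumption of this illustration, not a value taken from the paper; the factor 0.5 implements the "roughly half the eccentricity" rule.

```python
# If cortical position grows with log eccentricity, a critical spacing of about
# half the eccentricity maps onto a roughly constant distance on the cortex.
import math

def v1_position_mm(ecc_deg: float, a: float = 17.3, e2: float = 0.75) -> float:
    """Approximate cortical distance from the foveal representation, in mm."""
    return a * math.log(1.0 + ecc_deg / e2)

for ecc in (2.0, 5.0, 10.0, 20.0):
    spacing_deg = 0.5 * ecc                        # critical spacing in the visual field
    cortical_mm = v1_position_mm(ecc + spacing_deg) - v1_position_mm(ecc)
    print(f"eccentricity {ecc:4.1f} deg -> critical spacing {spacing_deg:4.1f} deg "
          f"= {cortical_mm:.1f} mm of cortex")
```

With these assumed parameters the cortical separation stays near 5 to 7 mm across eccentricities, in line with the fixed radial spacing described above.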

17.
Neurons in the inferior temporal (IT) cortex acquire position- and scale-invariant representations of shapes, so each neuron receives information from the whole visual field. Familiar shapes form only an extremely restricted subset of all possible shapes across the visual field, so they must be clustered in shape space, giving that space a mixed structure of continuity and discreteness. We demonstrate that such multiple representations can be acquired in a spike-based model of topological map formation driven by spike-timing-dependent synaptic plasticity (STDP) when it is presented with inputs lying on multiple rings, a simple example of this mixed structure. In the resulting representation, the position on each ring is represented by the center of the active neurons, while the identity of the ring is represented by the detailed pattern of active neurons; neurons in the same region show high activity for inputs on the other ring. The result is consistent with observations in IT cortex that neighboring neurons exhibit different preferences, while the region of active neurons shifts continuously for continuous changes of the object.

18.
Inferior temporal (IT) cortex, as the final stage of the ventral visual pathway, is involved in visual object recognition. In our everyday life we need to recognize visual objects that are degraded by noise. Psychophysical studies have shown that the accuracy and speed of object recognition decrease as the amount of visual noise increases. However, the neural representation of ambiguous visual objects and the underlying neural mechanisms of such changes in behavior are not known. Here, by recording neuronal spiking activity in macaque monkey IT, we explored the relationship between stimulus ambiguity and IT neural activity. We found smaller amplitude, later onset, earlier offset and shorter duration of the response as visual ambiguity increased. All of these modulations were gradual and correlated with the level of stimulus ambiguity. We found that while the category selectivity of IT neurons decreased with noise, it was preserved over a large range of visual ambiguity. This noise tolerance of category selectivity in IT was lost at the 60% noise level. Interestingly, while the response of IT neurons to visual stimuli at the 60% noise level was significantly larger than their baseline activity and their response to full (100%) noise, it was no longer category selective. The latter finding reveals a neural representation that signals the presence of a visual stimulus without signaling what it is. In general, these findings, in the context of a drift diffusion model, explain the neural mechanisms of the changes in perceptual accuracy and speed in the process of recognizing ambiguous objects.
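A toy drift-diffusion simulation can illustrate the interpretation offered above: if the sensory drift rate falls as stimulus noise rises (an assumed relationship chosen for illustration), accuracy drops and decision times lengthen. Parameter values are illustrative only.

```python
# Toy drift-diffusion model: lower drift (more stimulus noise) -> lower
# accuracy and longer decision times.
import numpy as np

rng = np.random.default_rng(5)
dt, sigma, bound, max_steps = 0.001, 1.0, 1.0, 3000

def simulate(drift, n_trials=500):
    correct, rts = 0, []
    for _ in range(n_trials):
        x = 0.0
        for step in range(1, max_steps + 1):
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            if abs(x) >= bound:
                correct += int(x > 0)            # upper bound = correct choice
                rts.append(step * dt)
                break
    return correct / n_trials, float(np.mean(rts))

for noise_level in (0.0, 0.3, 0.6):
    drift = 3.0 * (1.0 - noise_level)            # assumed noise-to-drift mapping
    acc, rt = simulate(drift)
    print(f"noise {noise_level:.1f}: drift {drift:.1f}, "
          f"accuracy {acc:.2f}, mean RT {rt * 1000:.0f} ms")
```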

19.
In a typical auditory scene, sounds from different sources and reflective surfaces summate in the ears, causing spatial cues to fluctuate. Prevailing hypotheses of how spatial locations may be encoded and represented across auditory neurons generally disregard these fluctuations and must therefore invoke additional mechanisms for detecting and representing them. Here, we consider a different hypothesis in which spatial perception corresponds to an intermediate or sub-maximal firing probability across spatially selective neurons within each hemisphere. The precedence or Haas effect presents an ideal opportunity for examining this hypothesis, since the temporal superposition of an acoustical reflection with sounds arriving directly from a source can cause otherwise stable cues to fluctuate. Our findings suggest that subjects’ experiences may simply reflect the spatial cues that momentarily arise under various acoustical conditions and how these cues are represented. We further suggest that auditory objects may acquire “edges” under conditions when interaural time differences are broadly distributed.
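A small simulation can make the cue fluctuation concrete: when a direct sound and a single delayed copy carrying a different interaural time difference (ITD) are summed at the two ears, short-time ITD estimates jump between values rather than staying fixed. All signal and geometry parameters below are hypothetical.

```python
# Fluctuating short-time ITD estimates for a direct sound plus one reflection.
import numpy as np

rng = np.random.default_rng(6)
fs = 44100
n = int(0.2 * fs)                                       # 200 ms of signal

def delayed(x, d):
    out = np.zeros_like(x)
    out[d:] = x[:x.size - d]
    return out

bursts = (np.arange(n) // int(0.020 * fs)) % 2 == 0     # 20 ms bursts, 20 ms gaps
source = rng.normal(0.0, 1.0, n) * bursts
itd_direct, itd_reflect = 10, -15                       # ITDs in samples (hypothetical)
reflection = 0.8 * delayed(source, int(0.022 * fs))     # reflection arrives 22 ms later

left  = delayed(source, itd_direct) + reflection
right = source + delayed(reflection, -itd_reflect)

# Estimate the ITD in 10 ms windows from the peak of the interaural cross-correlation.
win, max_lag = int(0.010 * fs), 30
for start in range(0, n - win, win):
    l, r = left[start:start + win], right[start:start + win]
    if l @ l + r @ r < 1.0:                             # skip silent windows
        continue
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [l[max_lag:-max_lag] @ np.roll(r, int(k))[max_lag:-max_lag] for k in lags]
    best = int(lags[int(np.argmax(xcorr))])
    print(f"{start / fs * 1000:5.1f} ms: ITD estimate = {best:+d} samples")
```

With these settings the estimate alternates between the direct sound's ITD and the reflection's ITD as their relative energy changes from window to window.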

20.
What is an auditory object? (total citations: 4, self-citations: 0, citations by others: 4)
Objects are the building blocks of experience, but what do we mean by an object? Increasingly, neuroscientists refer to 'auditory objects', yet it is not clear what properties these should possess, how they might be represented in the brain, or how they might relate to the more familiar objects of vision. The concept of an auditory object challenges our understanding of object perception. Here, we offer a critical perspective on the concept and its basis in the brain.
