Similar Documents
20 similar documents found (search time: 15 ms)
1.
Fragment-based learning of visual object categories   (Times cited: 2; self-citations: 0; citations by others: 2)
When we perceive a visual object, we implicitly or explicitly associate it with a category we know. It is known that the visual system can use local, informative image fragments of a given object, rather than the whole object, to classify it into a familiar category. How we acquire informative fragments has remained unclear. Here, we show that human observers acquire informative fragments during the initial learning of categories. We created new, but naturalistic, classes of visual objects by using a novel "virtual phylogenesis" (VP) algorithm that simulates key aspects of how biological categories evolve. Subjects were trained to distinguish two of these classes by using whole exemplar objects, not fragments. We hypothesized that if the visual system learns informative object fragments during category learning, then subjects must be able to perform the newly learned categorization by using only the fragments as opposed to whole objects. We found that subjects were able to successfully perform the classification task by using each of the informative fragments by itself, but not by using any of the comparable, but uninformative, fragments. Our results not only reveal that novel categories can be learned by discovering informative fragments but also introduce and illustrate the use of VP as a versatile tool for category-learning research.
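Fragment informativeness in this line of work is typically scored by the mutual information between a fragment's detection in an image and the image's category label. The sketch below illustrates that criterion on simulated detections; the data, the binary detection step, and the class balance are illustrative assumptions rather than the authors' procedure.

```python
import numpy as np

def fragment_mutual_information(detections, labels):
    """Mutual information I(F; C) between binary fragment detections and class labels.

    detections: 1-D array of 0/1, whether the fragment was detected in each image
    labels:     1-D array of 0/1 category labels for the same images
    """
    detections = np.asarray(detections, dtype=int)
    labels = np.asarray(labels, dtype=int)
    mi = 0.0
    for f in (0, 1):
        for c in (0, 1):
            p_fc = np.mean((detections == f) & (labels == c))
            p_f = np.mean(detections == f)
            p_c = np.mean(labels == c)
            if p_fc > 0:
                mi += p_fc * np.log2(p_fc / (p_f * p_c))
    return mi

# Toy example: a fragment that fires mostly on class-1 exemplars is informative.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
informative = np.where(labels == 1, rng.random(200) < 0.8, rng.random(200) < 0.1).astype(int)
uninformative = rng.integers(0, 2, 200)
print(fragment_mutual_information(informative, labels))    # high MI
print(fragment_mutual_information(uninformative, labels))  # near zero
```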

2.
How the brain extracts visual features for recognizing various objects has long been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) to extract informative intermediate-level visual features during the learning process, which also keeps the model stable against destruction of previously learned information while new information is being learned. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To demonstrate the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism follows performance trends similar to those of humans in a psychophysical experiment using a face versus non-face rapid categorization task.
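As a rough illustration of the stability-plasticity idea, the sketch below implements an ART-style prototype learner: an input either refines the best-matching stored prototype (when the match exceeds a vigilance threshold) or recruits a new one, so earlier prototypes are never overwritten wholesale. The fuzzy-AND match function, the vigilance value, and the random "feature patches" are assumptions for illustration, not the published model.

```python
import numpy as np

class SimpleART:
    """ART-like prototype learner for intermediate-level feature patches."""

    def __init__(self, vigilance=0.7, learning_rate=0.5):
        self.vigilance = vigilance
        self.lr = learning_rate
        self.prototypes = []  # list of 1-D feature vectors

    def _match(self, x, w):
        # degree of match between input x and prototype w (fuzzy-AND overlap)
        return np.minimum(x, w).sum() / (x.sum() + 1e-9)

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            scores = [self._match(x, w) for w in self.prototypes]
            best = int(np.argmax(scores))
            if scores[best] >= self.vigilance:           # resonance: refine the winner
                w = self.prototypes[best]
                self.prototypes[best] = self.lr * np.minimum(x, w) + (1 - self.lr) * w
                return best
        self.prototypes.append(x.copy())                 # mismatch: recruit a new category
        return len(self.prototypes) - 1

# Toy usage on random stand-ins for intermediate-level feature patches.
rng = np.random.default_rng(1)
art = SimpleART(vigilance=0.7)
for patch in rng.random((50, 16)):
    art.learn(patch)
print(f"{len(art.prototypes)} intermediate-level prototypes learned")
```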

3.
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently, it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition.
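The analysis described, correlating each trial's distance from a decoder's decision boundary with reaction time, can be sketched as below on simulated data. The LinearSVC decoder, the Spearman correlation, and the toy MEG patterns and reaction times are assumptions standing in for the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-in for single-trial sensor patterns at the peak-decodability time point:
# n_trials x n_sensors, two object categories.
n_trials, n_sensors = 200, 64
labels = rng.integers(0, 2, n_trials)
patterns = rng.normal(size=(n_trials, n_sensors)) + labels[:, None] * 0.8

# Distance of each trial from the linear decision boundary.
clf = LinearSVC(dual=False).fit(patterns, labels)
distance = np.abs(clf.decision_function(patterns))

# Simulated reaction times, generated here only to demonstrate the analysis:
# trials far from the boundary are assumed to be faster.
rt = 600 - 40 * distance + rng.normal(scale=30, size=n_trials)  # ms

rho, p = spearmanr(distance, rt)
print(f"distance-to-boundary vs RT: rho = {rho:.2f}, p = {p:.3g}")
```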

4.
Primary visual cortex (V1) has been implicated as an important candidate site of perceptual suppression in numerous psychophysical and imaging studies. However, neurophysiological results in awake monkeys have provided evidence for competition mainly between neurons in areas beyond V1. In particular, only a moderate percentage of neurons in V1 were found to modulate in parallel with perception, and with a magnitude substantially smaller than these neurons' preference for the physical stimulus. It remains unclear whether these small modulations originate in local V1 circuits or are influenced by higher cognitive states. To address this question we recorded multi-unit spiking activity and local field potentials in area V1 of awake and anesthetized macaque monkeys during the paradigm of binocular flash suppression. We found that a small but significant modulation was present in both the anesthetized and awake states during the flash suppression presentation. Furthermore, the relative amplitudes of the perceptual modulations were not significantly different in the two states. We suggest that these early effects of perceptual suppression might arise locally in V1, in prior processing stages, or within early visual cortical areas, in the absence of the top-down feedback from higher cognitive stages that is suppressed under anesthesia.

5.
In recent years, more and more laboratories have developed functional Magnetic Resonance Imaging (fMRI) for awake non-human primates. This research is essential to provide a link between non-invasive hemodynamic signals recorded in the human brain and the vast body of knowledge gained from invasive electrophysiological studies in monkeys. Given that their brain structure is so closely related to that of humans and that monkeys can be trained to perform complicated behavioral tasks, results obtained with monkey fMRI and electrophysiology can be compared to fMRI results obtained in humans, and provide information crucial to a better understanding of the mechanisms by which different cortical areas perform their functions in the human brain. However, although the first publications on fMRI in awake behaving macaques appeared ~10 years ago (Logothetis et al. (1999) [1], Stefanacci et al. (1998) [2], Dubowitz et al. (1998) [3]), relatively few laboratories perform such experiments routinely, a sign of the significant technical difficulties that must be overcome. The higher spatial resolution required because of the animal’s smaller brain results in poorer signal-to-noise ratios than in human fMRI, a problem that is further compounded by animal motion. Here, we discuss the specific challenges and benefits of fMRI in the awake monkey and review the methodologies and strategies for scanning behaving macaques.

6.
Capuchin monkeys have provided uneven evidence of matching actions they observe others perform. In accord with theories emphasizing the attentional salience of object movement and spatial relationships, we predicted that human-reared monkeys would better match events in which a human demonstrator moved an object into a new relation with another object or surface than other kinds of actions. Three human-reared capuchins were invited repeatedly by a familiar human to perform a fixed set of actions upon objects or upon their bodies, using the "Do as I do" procedure. Actions directed at the body were matched less reliably than actions involving objects, and actions were matched best when the monkey looked at the demonstration for at least 2 sec and performed its action within a few seconds after the demonstration. The most commonly matched actions were those that one monkey performed relatively often when the experiment began. One monkey partially reproduced three novel actions (out of 48 demonstrations), all three involving moving or placing objects, and two of which it also performed following other demonstrations. These findings contribute convergent evidence that capuchin monkeys display social facilitation of activity, enhanced interest in particular objects and emulation of spatial outcomes. This pattern can support the development of shared manipulative skills, as evident in traditions of foraging and tool use in natural settings. The findings do not suggest that human rearing substantively altered capuchins' ability or interest in matching the actions of a familiar human, although visual attention to the human demonstrator may have been greater in these monkeys than in normally reared monkeys.

7.
Recent studies combining psychophysical and neurophysiological experiments in behaving monkeys have provided new insights into how several cortical areas integrate efforts to solve a vibrotactile discrimination task. In particular, these studies have addressed how neural codes are related to perception, working memory and decision making in this model. The primary somatosensory cortex drives higher cortical areas where past and current sensory information are combined, such that a comparison of the two evolves into a behavioural decision. These and other observations in visual tasks indicate that decisions emerge from highly distributed processes in which the details of a scheduled motor plan are gradually specified by sensory information.

8.
Single-unit recordings from behaving monkeys and human functional magnetic resonance imaging studies have continued to provide a host of experimental data on the properties and mechanisms of object recognition in cortex. Recent advances in object recognition, spanning issues regarding invariance, selectivity, representation and levels of recognition, have allowed us to propose a putative model of object recognition in cortex.

9.
With intensive training, humans can achieve impressive behavioral improvements on various perceptual tasks. This phenomenon, termed perceptual learning, has long been considered a hallmark of the plasticity of the sensory nervous system. Not surprisingly, high-level vision, such as object perception, can also be improved by perceptual learning. Here we review recent psychophysical, electrophysiological, and neuroimaging studies investigating the effects of training on object-selective cortex, such as the monkey inferior temporal cortex and the human lateral occipital area. Evidence shows that learning leads to an increase in object selectivity at the single-neuron level and/or the neuronal population level. These findings indicate that high-level visual cortex in humans is highly plastic and that visual experience can strongly shape the neural functions of these areas. At the end of the review, we discuss several important future directions in this area.

10.
Can nonhuman animals attend to visual stimuli as whole, coherent objects? We investigated this question by adapting for use with pigeons a task in which human participants must report whether two visual attributes belong to the same object (one-object trial) or to different objects (two-object trial). We trained pigeons to discriminate a pair of differently colored shapes that had two targets either on a single object or on two different objects. Each target equally often appeared on the one-object and two-object stimuli; therefore, a specific target location could not serve as a discriminative cue. The pigeons learned to report whether the two target dots were located on a single object or on two different objects; follow-up tests demonstrated that this ability was not entirely based on memorization of the dot patterns and locations. Additional tests disclosed predominant stimulus control by the color, but not the shape, of the two objects. These findings suggest that human psychophysical methods are readily applicable to the study of object discrimination by nonhuman animals.

11.
Transmission of neural signals in the brain takes time due to the slow biological mechanisms that mediate it. During such delays, the position of moving objects can change substantially. The brain could use statistical regularities in the natural world to compensate for neural delays and represent moving stimuli closer to real time. This possibility has been explored in the context of the flash lag illusion, where a briefly flashed stimulus in alignment with a moving one appears to lag behind the moving stimulus. Despite numerous psychophysical studies, the neural mechanisms underlying the flash lag illusion remain poorly understood, partly because it has never been studied electrophysiologically in behaving animals. Macaques are a prime model for such studies, but it is unknown if they perceive the illusion. By training monkeys to report their percepts unbiased by reward, we show that they indeed perceive the illusion in a manner qualitatively similar to humans. Importantly, the magnitude of the illusion is smaller in monkeys than in humans, but it increases linearly with the speed of the moving stimulus in both species. These results provide further evidence for the similarity of sensory information processing in macaques and humans and pave the way for detailed neurophysiological investigations of the flash lag illusion in behaving macaques.

12.
Altruism is an evolutionary puzzle. To date, much debate has focused on whether helping others without regard to oneself is a uniquely human behaviour, with a variety of empirical studies demonstrating a lack of altruistic behaviour in chimpanzees even when the demands of behaving altruistically seem minimal. By contrast, a recent experiment has demonstrated that chimpanzees will help a human experimenter to obtain an out-of-reach object, irrespective of whether or not they are offered a reward for doing so, suggesting that the cognitions underlying altruistic behaviour may be highly sensitive to situational demands. Here, we examine the cognitive demands of other-regarding behaviour by testing the conditions under which primates more distantly related to humans--capuchin monkeys--help an experimenter to obtain an out-of-reach object. Like chimpanzees, capuchin monkeys helped human experimenters even in the absence of a reward, but capuchins systematically failed to take into account the perspective of others when they stood to obtain food for themselves. These results suggest an important role for perspective taking and inhibition in altruistic behaviour and seem to reflect a significant evolutionary development in the roots of altruism, and specifically in other-regarding behaviour, between the divergence of New World monkeys and apes.

13.
In recent years, recording neuronal activity in the awake, behaving primate brain has become established as one of the major tools available to study the neuronal specificity of the initiation and control of various behaviors. Primates have traditionally been used in these studies because of their ability to perform more complex behaviors closely akin to those of humans, a desirable prerequisite since our ultimate aim is to elucidate the neuronal correlates of human behaviors. A wealth of knowledge has accumulated on the sensory and motor systems such as vision, audition, and eye movements. For more demanding behaviors where the main focus has been on attention, recordings in awake primates have begun to yield valuable data on the centers of the brain that are reactive to different attributes of this behavior. As a result, various hypotheses of the origin and distribution of attentional effects have evolved. For instance, visual attentional effects have been described not only in a higher cortical area (V4) but also in areas earlier in the visual pathway, which presumably involves a feedback mechanism to these earlier regions. Here we outline the ways in which we have successfully used these methods to make single-cell recordings in awake macaques to show how certain behavioral paradigms affect neurons of the thalamus (with emphasis on the lateral geniculate nucleus). As with established techniques, these methods can be readily adapted to incorporate most behaviors that need to be tested, and they allow recordings to be made in virtually any part of the brain.

14.
Leiser SC, Moxon KA. Neuron. 2007;53(1):117-133
Rats use their whiskers to locate and discriminate tactile features of their environment. Mechanoreceptors surrounding each whisker encode and transmit sensory information from the environment to the brain via afferents whose cell bodies lie in the trigeminal ganglion (Vg). These afferents are classified as rapidly (RA) or slowly (SA) adapting by their response to stimulation. The activity of these cells in the awake, behaving rat was previously unknown. Therefore, we developed a method to chronically record Vg neurons during natural whisking behaviors and found that all cells exhibited (1) no neuronal activity when the whiskers were not in motion, (2) increased activity when the rat whisked, with activity correlated to whisk frequency, and (3) robust increases in activity when the whiskers contacted an object. Moreover, we observed distinct differences in the firing rates between RA and SA cells, suggesting that they encode distinct aspects of stimuli in the awake rat.

15.
BACKGROUND: The perceptual ability of humans and monkeys to identify objects in the presence of noise varies systematically and monotonically as a function of how much noise is introduced to the visual display. That is, it becomes more and more difficult to identify an object with increasing noise. Here we examine whether the blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI) signal in anesthetized monkeys also shows such monotonic tuning. We employed parametric stimulus sets containing natural images and noise patterns matched for spatial frequency and intensity as well as intermediate images generated by interpolation between natural images and noise patterns. Anesthetized monkeys provide us with the unique opportunity to examine visual processing largely in the absence of top-down cognitive modulations and can thus provide an important baseline against which work with awake monkeys and humans can be compared. RESULTS: We measured BOLD activity in occipital visual cortical areas as natural images and noise patterns, as well as intermediate interpolated patterns at three interpolation levels (25%, 50%, and 75%) were presented to anesthetized monkeys in a block paradigm. We observed reliable visual activity in occipital visual areas including V1, V2, V3, V3A, and V4 as well as the fundus and anterior bank of the superior temporal sulcus (STS). Natural images consistently elicited higher BOLD levels than noise patterns. For intermediate images, however, we did not observe monotonic tuning. Instead, we observed a characteristic V-shaped noise-tuning function in primary and extrastriate visual areas. BOLD signals initially decreased as noise was added to the stimulus but then increased again as the pure noise pattern was approached. We present a simple model based on the number of activated neurons and the strength of activation per neuron that can account for these results. CONCLUSIONS: We show that, for our parametric stimulus set, BOLD activity varied nonmonotonically as a function of how much noise was added to the visual stimuli, unlike the perceptual ability of humans and monkeys to identify such stimuli. This raises important caveats for interpreting fMRI data and demonstrates the importance of assessing not only which neural populations are activated by contrasting conditions during an fMRI study, but also the strength of this activation. This becomes particularly important when using the BOLD signal to make inferences about the relationship between neural activity and behavior.
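A toy version of the population account mentioned in the results (BOLD approximated as the number of activated neurons times the mean activation per activated neuron) is sketched below. The specific functional forms and constants are assumptions chosen only to reproduce the qualitative V-shape with natural images above noise, not the authors' fitted model.

```python
import numpy as np

# noise_level: 0 = natural image, 1 = pure noise pattern
noise_level = np.linspace(0.0, 1.0, 5)

# Assumed forms (illustrative only): image structure drives a small, selective
# population strongly; broadband noise weakly recruits a much larger population.
strength = 1.0 / (1.0 + 5.0 * noise_level)       # mean activation per activated neuron
n_active = 0.3 + 0.7 * noise_level ** 2          # fraction of the population activated

bold = n_active * strength                       # predicted BOLD amplitude (arbitrary units)
for nl, b in zip(noise_level, bold):
    print(f"noise {nl:4.2f}: predicted BOLD {b:.3f}")
# Output falls from 0% to ~50% noise and rises again toward 100% noise (a V-shape),
# while the pure natural image still yields the highest value.
```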

16.
The human visual system uses texture information to automatically, or pre-attentively, segregate parts of the visual scene. We investigate the neural substrate underlying human texture processing using a computational model that consists of a hierarchy of bi-directionally linked model areas. The model builds upon two key hypotheses, namely that (i) texture segregation is based on boundary detection--rather than clustering of homogeneous items--and (ii) texture boundaries are detected mainly on the basis of a large scenic context that is analyzed by higher cortical areas within the ventral visual pathway, such as area V4. Here, we focus on the interpretation of key results from psychophysical studies on human texture segmentation. In psychophysical studies, texture patterns were varied along several feature dimensions to systematically characterize human performance. We use simulations to demonstrate that the activation patterns of our model directly correlate with the psychophysical results. This allows us to identify the putative neural mechanisms and cortical key areas which underlie human behavior. In particular, we investigate (i) the effects of varying texture density on target saliency, and the impact of (ii) element alignment and (iii) orientation noise on the detectability of a pop-out bar. As a result, we demonstrate that the dependency of target saliency on texture density is linked to a putative receptive field organization of orientation-selective neurons in V4. The effect of texture element alignment is related to grouping mechanisms in early visual areas. Finally, the modulation of cell activity by feedback activation from higher model areas, interacting with mechanisms of intra-areal center-surround competition, is shown to result in the specific suppression of noise-related cell activities and to improve the overall model capabilities in texture segmentation. In particular, feedback interaction is crucial to raise the model performance to the level of human observers.
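The role of feedback and intra-areal center-surround competition in suppressing noise-driven activity can be caricatured in one dimension as below. The boundary signal, the surround kernel, the number of feedforward/feedback cycles, and all constants are illustrative assumptions, not the published model's circuitry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feedforward "boundary" responses along one dimension: weak noise-driven
# activity everywhere, plus a true texture boundary around positions 18-21.
feedforward = 0.2 + 0.3 * rng.random(40)
feedforward[18:22] += 1.0

def surround_inhibition(r, k=0.4, width=3):
    # center-surround competition: subtract a fraction of the local average
    kernel = np.ones(2 * width + 1) / (2 * width + 1)
    surround = np.convolve(r, kernel, mode="same")
    return np.clip(r - k * surround, 0, None)

response = feedforward.copy()
for _ in range(5):                                # a few feedforward/feedback cycles
    feedback = response / (response.max() + 1e-9)  # top-down emphasis on salient positions
    response = surround_inhibition(feedforward * (1 + feedback))

before = feedforward[18:22].mean() / feedforward[:18].mean()
after = response[18:22].mean() / (response[:18].mean() + 1e-9)
print(f"boundary/background ratio before feedback: {before:.1f}")
print(f"boundary/background ratio after feedback:  {after:.1f}")
```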

17.
We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

18.
Cuttlefish rapidly change their appearance in order to camouflage themselves against a given background in response to visual parameters, giving us access to their visual perception. Recently, it was shown that isolated edge information is sufficient to elicit a body pattern very similar to that used when a whole object is present. Here, we examined contour completion in cuttlefish by assaying body pattern responses to artificial backgrounds of 'objects' formed from fragmented circles, from the same fragments rotated on their axes, and from the fragments scattered over the background, as well as to positive (full circles) and negative (homogeneous background) controls. The animals displayed similar responses to the full and fragmented circles, but used a different body pattern in response to the rotated and scattered fragments. This suggests that they completed the broken circles and recognized them as whole objects, whereas rotated and scattered fragments were instead interpreted as small, individual objects in their own right. We discuss our findings in the context of achieving accurate camouflage in the benthic shallow-water environment.

19.
Liu J, Newsome WT. Current Biology. 2000;10(16):R598-R600
Whether mental operations can be reduced to the biological properties of the brain has intrigued scientists and philosophers alike for millennia. New microstimulation experiments on awake, behaving monkeys establish causality between activity of specialized cortical neurons and a controlled behavior.

20.
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
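The three-component architecture described (a modality-independent representation, sensory-specific forward models, and Bayesian inversion of those models) can be sketched in miniature as below, with a one-dimensional "shape" parameter standing in for the probabilistic grammar. The forward models, noise levels, and grid-based inference are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

shapes = np.linspace(0, 1, 101)                 # hypothesis space over the shape parameter

def visual_forward(s):  return 2.0 * s          # toy forward model: shape -> visual feature
def haptic_forward(s):  return 1.0 + s          # toy forward model: shape -> haptic feature

def posterior(visual_obs=None, haptic_obs=None, sigma_v=0.1, sigma_h=0.1):
    """Grid-based Bayesian inversion of the forward models under a flat prior."""
    log_p = np.zeros_like(shapes)
    if visual_obs is not None:
        log_p += -0.5 * ((visual_obs - visual_forward(shapes)) / sigma_v) ** 2
    if haptic_obs is not None:
        log_p += -0.5 * ((haptic_obs - haptic_forward(shapes)) / sigma_h) ** 2
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

true_shape = 0.6
v_obs = visual_forward(true_shape) + rng.normal(scale=0.1)
h_obs = haptic_forward(true_shape) + rng.normal(scale=0.1)

for name, post in [("vision only", posterior(visual_obs=v_obs)),
                   ("haptics only", posterior(haptic_obs=h_obs)),
                   ("both", posterior(visual_obs=v_obs, haptic_obs=h_obs))]:
    print(f"{name:12s}: MAP shape = {shapes[np.argmax(post)]:.2f}")
# The inferred shape is essentially the same whether the "object" is seen,
# grasped, or both, illustrating the modality-invariance idea in the abstract.
```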
