Similar Articles
 A total of 20 similar articles were found (search time: 31 ms)
1.
Marois R  Leung HC  Gore JC 《Neuron》2000,25(3):717-728
The primate visual system is considered to be segregated into ventral and dorsal streams specialized for processing object identity and location, respectively. We reexamined the dorsal/ventral model using a stimulus-driven approach to object identity and location processing. While looking at repeated presentations of a standard object at a standard location, subjects monitored for any infrequent "oddball" changes in object identity, location, or identity and location (conjunction). While the identity and location oddballs preferentially activated ventral and dorsal brain regions respectively, each oddball type activated both pathways. Furthermore, all oddball types recruited the lateral temporal cortex and the temporo-parietal junction. These findings suggest that a strict dorsal/ventral dual-stream model does not fully account for the perception of novel objects in space.

2.
Cortical analysis of visual context
Bar M  Aminoff E 《Neuron》2003,38(2):347-358
Objects in our environment tend to be grouped in typical contexts. How does the human brain analyze such associations between visual objects and their specific context? We addressed this question in four functional neuroimaging experiments and revealed the cortical mechanisms that are uniquely activated when people recognize highly contextual objects (e.g., a traffic light). Our findings indicate that a region in the parahippocampal cortex and a region in the retrosplenial cortex together comprise a system that mediates both spatial and nonspatial contextual processing. Interestingly, each of these regions has previously been associated with two functions: the processing of spatial information and episodic memory. Attributing contextual analysis to these two areas instead provides a framework for bridging previous reports.

3.
Learning the functional properties of objects is a core mechanism in the development of conceptual, cognitive and linguistic knowledge in children. The cerebral processes underlying these learning mechanisms remain unclear in adults and unexplored in children. Here, we investigated the neurophysiological patterns underpinning the learning of functions for novel objects in 10-year-old healthy children. Event-related fields (ERFs) were recorded using magnetoencephalography (MEG) during a picture-definition task. Two MEG sessions were administered, separated by a behavioral verbal learning session during which children learned short definitions about the “magical” function of 50 unknown non-objects. Additionally, 50 familiar real objects and 50 other unknown non-objects for which no functions were taught were presented at both MEG sessions. Children learned at least 75% of the 50 proposed definitions in less than one hour, illustrating children's powerful ability to rapidly map new functional meanings to novel objects. Pre- and post-learning ERF differences were analyzed first in sensor space and then in source space. Results in sensor space disclosed a learning-dependent modulation of ERFs for newly learned non-objects, developing 500–800 msec after stimulus onset. Analyses in source space windowed over this late temporal component of interest disclosed underlying activity in right parietal, bilateral orbito-frontal and right temporal regions. Altogether, our results suggest that learning-related evolution in late ERF components over those regions may support the challenging task of rapidly creating new semantic representations supporting the processing of the meaning and functions of novel objects in children.

4.
Conceptual knowledge reflects our multi-modal ‘semantic database’. As such, it brings meaning to all verbal and non-verbal stimuli, is the foundation for verbal and non-verbal expression and provides the basis for computing appropriate semantic generalizations. Multiple disciplines (e.g. philosophy, cognitive science, cognitive neuroscience and behavioural neurology) have striven to answer the questions of how concepts are formed, how they are represented in the brain and how they break down differentially in various neurological patient groups. A long-standing and prominent hypothesis is that concepts are distilled from our multi-modal verbal and non-verbal experience such that sensation in one modality (e.g. the smell of an apple) not only activates the intramodality long-term knowledge, but also reactivates the relevant intermodality information about that item (i.e. all the things you know about and can do with an apple). This multi-modal view of conceptualization fits with contemporary functional neuroimaging studies that observe systematic variation of activation across different modality-specific association regions dependent on the conceptual category or type of information. A second vein of interdisciplinary work argues, however, that even a smorgasbord of multi-modal features is insufficient to build coherent, generalizable concepts. Instead, an additional process or intermediate representation is required. Recent multidisciplinary work, which combines neuropsychology, neuroscience and computational models, offers evidence that conceptualization follows from a combination of modality-specific sources of information plus a transmodal ‘hub’ representational system that is supported primarily by regions within the anterior temporal lobe, bilaterally.

5.
Many studies have demonstrated that the sensory and motor systems are activated during conceptual processing. Such results have been interpreted as indicating that concepts, and important aspects of cognition more broadly, are embodied. That conclusion does not follow from the empirical evidence, because the evidence can equally be accommodated by a 'disembodied' view of conceptual representation that makes explicit assumptions about spreading activation between the conceptual and sensory and motor systems. At the same time, the strong form of the embodied cognition hypothesis is at variance with currently available neuropsychological evidence. We suggest a middle ground between the embodied and disembodied cognition hypotheses: grounding by interaction. This hypothesis combines the view that concepts are, at some level, 'abstract' and 'symbolic', with the idea that sensory and motor information may 'instantiate' online conceptual processing.

6.
Implicit multisensory associations influence voice recognition
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e., voices and faces, or arbitrary multimodal combinations, i.e., voices and written names, or ring tones and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

7.
Gottfried JA  Smith AP  Rugg MD  Dolan RJ 《Neuron》2004,42(4):687-695
Episodic memory is often imbued with multisensory richness, such that the recall of an event can be endowed with the sights, sounds, and smells of its prior occurrence. While hippocampus and related medial temporal structures are implicated in episodic memory retrieval, the participation of sensory-specific cortex in representing the qualities of an episode is less well established. We combined functional magnetic resonance imaging (fMRI) with a cross-modal paradigm, where objects were presented with odors during memory encoding. We then examined the effect of odor context on neural responses at retrieval when these same objects were presented alone. Primary olfactory (piriform) cortex, as well as anterior hippocampus, was activated during the successful retrieval of old (compared to new) objects. Our findings indicate that sensory features of the original engram are preserved in unimodal olfactory cortex. We suggest that reactivation of memory traces distributed across modality-specific brain areas underpins the sensory qualities of episodic memories.

8.
Semantic memory and the brain: structure and processes
Recent functional brain imaging studies suggest that object concepts may be represented, in part, by distributed networks of discrete cortical regions that parallel the organization of sensory and motor systems. In addition, different regions of the left lateral prefrontal cortex, and perhaps anterior temporal cortex, may have distinct roles in retrieving, maintaining and selecting semantic information.

9.
Previous studies have demonstrated task-related changes in brain activation and inter-regional connectivity, but the temporal dynamics of the functional properties of the brain during task execution are still unclear. In the present study, we investigated task-related changes in functional properties of the human brain network by applying graph-theoretical analysis to magnetoencephalography (MEG). Subjects performed a cue-target attention task in which a visual cue informed them of the direction of focus for incoming auditory or tactile target stimuli, but not the sensory modality. We analyzed the MEG signal in the cue-target interval to examine network properties during attentional control. Cluster-based non-parametric permutation tests with the Monte-Carlo method showed that in the cue-target interval, beta activity was desynchronized in the sensori-motor region, including premotor and posterior parietal regions, in the hemisphere contralateral to the attended side. Graph-theoretical analysis revealed that, in the beta frequency, global hubs were found around the sensori-motor and prefrontal regions, and functional segregation over the entire network was decreased during attentional control compared to baseline. Network measures thus revealed task-related temporal changes in the functional properties of the human brain network, advancing our understanding of how the brain dynamically responds to task execution as a network.
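The graph-theoretical measures this abstract relies on can be illustrated on a toy network. The sketch below is purely hypothetical (the adjacency matrix is invented, not MEG connectivity data): node degree stands in for "hubness", and the mean local clustering coefficient serves as one common index of functional segregation.

```python
# Hypothetical sketch: hub and segregation measures on a small binary
# connectivity graph, as used in graph-theoretical network analyses.
# The adjacency matrix is illustrative, not real MEG data.

def degree(adj, i):
    """Number of edges attached to node i (high degree -> hub candidate)."""
    return sum(adj[i])

def clustering(adj, i):
    """Local clustering coefficient: fraction of node i's neighbour
    pairs that are themselves connected."""
    nbrs = [j for j, e in enumerate(adj[i]) if e]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(adj[a][b] for ai, a in enumerate(nbrs) for b in nbrs[ai + 1:])
    return 2.0 * links / (k * (k - 1))

def segregation(adj):
    """Mean clustering over all nodes: one simple index of the
    functional segregation of the whole network."""
    n = len(adj)
    return sum(clustering(adj, i) for i in range(n)) / n

# Toy 5-node network in which node 0 acts as a global hub.
adj = [
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 1, 0],
]

degrees = [degree(adj, i) for i in range(5)]
print(degrees)            # -> [4, 2, 2, 2, 2]: node 0 is the hub
print(segregation(adj))
```

A decrease in this segregation index during attentional control, relative to baseline, is the kind of effect the study reports.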

10.
We propose the hypothesis that the human brain has a sensory system (the ecoceptive sensory system) that responds to changes in the Earth's electromagnetic fields (EEFs) and meteorological factors (MFs). Acupuncture points, which are easily activated by adequate somatosensory stimuli (mechanical, temperature) and by electromagnetic fields (electropuncture, magnetopuncture), may serve as polymodal receptors of the ecoceptive sensory system. It is supposed that the sensory endings of acupuncture points are excited by sharp changes of EEFs and MFs. Through the neuronal brain stem structures, especially the hypothalamus, excitation of acupuncture points starts the adaptive mechanisms intended to compensate for deviations of the brain's functional systems provoked by prolonged EEF and unsettled-weather environmental influences.

11.
Tinsley CJ 《Bio Systems》2008,92(2):159-167
This article explores the theoretical basis of coding within topographic representations, where neurons encoding specific features, such as locations, are arranged into maps. A novel type of representation, termed non-specific, in which individual neurons do not encode specific features, is also postulated. In common with the previously described distributed representations [Rolls, E.T., Treves, A., 1998. Neural Networks and Brain Function. Oxford University Press, Oxford], topographic representations display an exponential relationship between the stimuli encoded and both the number of neurons and the maximum firing rate of those neurons. The non-specific representations described here display a binomial relationship between the number of stimuli encoded and the sum of the number of neurons and the maximum firing rate; therefore, groups of non-specific neurons usually encode fewer stimuli than equivalent topographic layers of neurons. Lower and higher order sensory regions of the brain use either topographic or distributed representations to encode information. It is proposed that non-specific representations may occur in regions of the brain where different types of information may be represented by the same neurons, as occurs in the prefrontal cortex.
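The capacity contrast claimed above can be made concrete with a small sketch. The two formulas below are one plausible reading of the abstract's claims, not equations taken from the paper: if neuron identity carries information, n neurons with r + 1 firing levels distinguish (r + 1)^n stimuli (exponential); if identity is uninformative, only the multiset of firing levels matters, giving a binomial count.

```python
# Illustrative capacity comparison under assumed coding rules
# (an interpretation of the abstract, not the paper's own derivation).
from math import comb

def topographic_capacity(n, r):
    # Each of n neurons takes one of r+1 distinguishable firing levels,
    # and *which* neuron fires carries information -> exponential growth.
    return (r + 1) ** n

def nonspecific_capacity(n, r):
    # If neuron identity carries no information, only the multiset of
    # firing levels is distinguishable -> binomial count C(n+r, r).
    return comb(n + r, r)

for n in (2, 4, 8):
    print(n, topographic_capacity(n, 3), nonspecific_capacity(n, 3))
```

Running this shows the non-specific count growing far more slowly than the topographic one, consistent with the claim that non-specific groups usually encode fewer stimuli.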

12.
How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor across scene-selective regions when processing associations and scenes, providing a better understanding of the functional roles of each region within the cortical network that mediates scene processing.

13.
Recordings from single cells in human medial temporal cortex confirm that sensory processing forms explicit neural representations of the objects and concepts needed for a causal model of the world.

14.
Serences JT  Boynton GM 《Neuron》2007,55(2):301-312
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
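The kind of pattern classification used in such decoding studies can be sketched with a nearest-centroid classifier. Everything below is a toy stand-in: the voxel vectors and condition labels are invented, and the study's actual algorithm and preprocessing are not reproduced.

```python
# Minimal sketch of decoding an attentional state from voxel patterns
# with a nearest-centroid classifier (toy data, hypothetical labels).

def centroid(patterns):
    """Mean voxel pattern across training trials."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labeled):
    """labeled: dict mapping condition label -> list of voxel vectors."""
    return {lab: centroid(pats) for lab, pats in labeled.items()}

def predict(model, pattern):
    """Assign the label whose centroid is nearest to the pattern."""
    return min(model, key=lambda lab: dist2(model[lab], pattern))

train_data = {
    "attend_left_motion":  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "attend_right_motion": [[0.1, 0.9, 1.0], [0.2, 1.0, 0.8]],
}
model = train(train_data)
print(predict(model, [0.95, 0.25, 0.05]))  # -> attend_left_motion
```

Above-chance prediction of the attended direction from held-out patterns is what licenses the claim that the attentional state is encoded in the measured responses.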

15.
Neural information flow (NIF) provides a novel approach for system identification in neuroscience. It models the neural computations in multiple brain regions and can be trained end-to-end via stochastic gradient descent from noninvasive data. NIF models represent neural information processing via a network of coupled tensors, each encoding the representation of the sensory input contained in a brain region. The elements of these tensors can be interpreted as cortical columns whose activity encodes the presence of a specific feature in a spatiotemporal location. Each tensor is coupled to the measured data specific to a brain region via low-rank observation models that can be decomposed into the spatial, temporal and feature receptive fields of a localized neuronal population. Both these observation models and the convolutional weights defining the information processing within regions are learned end-to-end by predicting the neural signal during sensory stimulation. We trained a NIF model on the activity of early visual areas using a large-scale fMRI dataset recorded in a single participant. We show that we can recover plausible visual representations and population receptive fields that are consistent with empirical findings.
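The low-rank observation model described above can be sketched as a separable mapping from a region's latent activity tensor to a measured signal. The rank-1, purely spatial-by-feature form below is a simplifying assumption for illustration (the fitted NIF parameterization also includes a temporal receptive field), and all numbers are toy values.

```python
# Hedged sketch of a rank-1 observation model: latent activity
# Z[location][time][feature] is projected onto a measured signal
# through separable spatial and feature receptive fields.
# Toy values throughout; not fitted NIF parameters.

def observe(Z, spatial_rf, feature_rf):
    n_loc, n_t, n_f = len(Z), len(Z[0]), len(Z[0][0])
    signal = []
    for t in range(n_t):
        s = 0.0
        for x in range(n_loc):
            for f in range(n_f):
                s += spatial_rf[x] * feature_rf[f] * Z[x][t][f]
        signal.append(s)
    return signal

# 2 locations x 3 time points x 2 feature channels
Z = [[[1, 0], [0, 1], [1, 1]],
     [[0, 1], [1, 0], [1, 1]]]
spatial_rf = [1.0, 0.5]   # weight of each cortical location in the sensor
feature_rf = [0.8, 0.2]   # sensitivity to each feature channel
print(observe(Z, spatial_rf, feature_rf))  # -> [0.9, 0.6, 1.5]
```

In the full model, both receptive fields and the within-region weights would be learned jointly by minimizing the prediction error on the measured signal.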

16.
Differences in the neural processing of six categories of pictorial stimuli (maps, body parts, objects, animals, famous faces and colours) were investigated using positron emission tomography. Stimuli were presented either with or without the written name of the picture, thereby creating a naming condition and a reading condition. As predicted, naming increased the demands on lexical processes. This was demonstrated by activation of the left temporal lobe in a posterior region associated with name retrieval in several previous studies. This lexical effect was common to all meaningful stimuli and no category-specific effects were observed for naming relative to reading. Nevertheless, category differences were found when naming and reading were considered together. Stimuli with greater visual complexity (animals, faces and maps) enhanced activation in the left extrastriate cortex. Furthermore, map recognition, which requires greater spatio-topographical processing, also activated the right occipito-parietal and parahippocampal cortices. These effects in the visuo-spatial regions emphasize inevitable differences in the perceptual properties of pictorial stimuli. In the semantic temporal regions, famous faces and objects enhanced activation in the left antero-lateral and postero-lateral cortices, respectively. In addition, we showed that the same posterior left temporal region is also activated by body parts. We conclude that category-specific brain activations depend more on differential processing at the perceptual and semantic levels rather than at the lexical retrieval level.

17.
Honda T  Hirashima M  Nozaki D 《PloS one》2012,7(5):e37900
Computational theory of motor control suggests that the brain continuously monitors motor commands, to predict their sensory consequences before actual sensory feedback becomes available. Such prediction error is a driving force of motor learning, and therefore appropriate associations between motor commands and delayed sensory feedback signals are crucial. Indeed, artificially introduced delays in visual feedback have been reported to degrade motor learning. However, considering our perceptual ability to causally bind our own actions with sensory feedback, demonstrated by the decrease in the perceived time delay following repeated exposure to an artificial delay, we hypothesized that such perceptual binding might alleviate deficits of motor learning associated with delayed visual feedback. Here, we evaluated this hypothesis by investigating the ability of human participants to adapt their reaching movements in response to a novel visuomotor environment with 3 visual feedback conditions: no-delay, sudden-delay, and adapted-delay. To introduce novelty into the trials, the cursor position, which originally indicated the hand position in baseline trials, was rotated around the starting position. In contrast to the no-delay condition, a 200-ms delay was artificially introduced between the cursor and hand positions during the presence of visual rotation (sudden-delay condition), or before the application of visual rotation (adapted-delay condition). We compared the learning rate (representing how the movement error modifies the movement direction in the subsequent trial) between the 3 conditions. In comparison with the no-delay condition, the learning rate was significantly degraded for the sudden-delay condition. However, this degradation was significantly alleviated by prior exposure to the delay (adapted-delay condition). Our data indicate the importance of appropriate temporal associations between motor commands and sensory feedback in visuomotor learning. Moreover, they suggest that the brain is able to account for such temporal associations in a flexible manner.
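The "learning rate" measure can be made concrete with a standard error-driven trial-by-trial update, in which each trial's movement direction is corrected by a fraction B of the previous trial's error. This is a generic textbook adaptation model with assumed parameter values, not the authors' fitting procedure; the rotation size and rates below are illustrative.

```python
# Generic error-driven visuomotor adaptation model (illustrative only):
# on each trial, compensation x is updated by a fraction B of the error.

def simulate(rotation_deg, learning_rate, n_trials):
    x = 0.0           # current compensation of the visual rotation (deg)
    errors = []
    for _ in range(n_trials):
        error = rotation_deg - x     # cursor error on this trial
        errors.append(error)
        x += learning_rate * error   # correction applied on the next trial
    return errors

fast = simulate(30.0, 0.30, 20)   # e.g. an intact (no-delay) learner
slow = simulate(30.0, 0.10, 20)   # e.g. a degraded (sudden-delay) learner
print(fast[-1] < slow[-1])        # faster learner ends with smaller error
```

A degraded learning rate, as in the sudden-delay condition, shows up in this model as a slower exponential decay of the trial-by-trial error.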

18.
Beauchamp MS  Lee KE  Argall BD  Martin A 《Neuron》2004,41(5):809-823
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

19.
Motor learning     
Bilateral damage of the medial temporal lobe system prevents the formation of new declarative memories but leaves intact knowledge that was acquired before the damage. For motor learning, no structure has been identified that plays a comparable role in the consolidation of motor memories. The deficits of motor learning are focal and show a similar allocation to the various sensorimotor subsystems, as do the corresponding non-mnemonic functions. The involvement of sensorimotor circuitries changes during motor learning, so that association areas are preferentially activated in the early stages, and cerebello- and striato-motor-cortical loops are preferentially activated in the late stages of motor learning. Recent neuroanatomical and neurophysiological findings on the effects of brain lesions in human and non-human primates are discussed.

20.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by nonsynesthetes through reading in color. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would, and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号