Similar Articles
20 similar articles found.
1.
Li Y, Wang G, Long J, Yu Z, Huang B, Li X, Yu T, Liang C, Li Z, Sun P. PLoS ONE. 2011;6(6):e20801
One of the central questions in cognitive neuroscience concerns the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category, and we decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of the brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, whereas the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, thereby facilitating the neural representation of semantic categories or concepts. Furthermore, we analyzed brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG): both the strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement fMRI signal amplitude when evaluating multimodal integration.
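As a rough illustration of the two pattern metrics described above, a within-class reproducibility index can be computed as the mean pairwise correlation between single-trial patterns of one category, and between-class discriminability as cross-validated classifier accuracy. The sketch below assumes trial-by-voxel matrices and a logistic-regression decoder; these are illustrative choices, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): within-class reproducibility
# and between-class decoding accuracy for single-trial fMRI patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation between trials of one category
    (one assumed way to quantify within-class pattern similarity)."""
    corr = np.corrcoef(patterns)             # trials x trials correlation matrix
    upper = np.triu_indices_from(corr, k=1)  # each trial pair counted once
    return corr[upper].mean()

def decoding_accuracy(patterns, labels, folds=5):
    """Cross-validated accuracy of a linear decoder separating the two categories
    (logistic regression is an illustrative choice of classifier)."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, patterns, labels, cv=folds).mean()

# Toy usage: random numbers stand in for real trial-by-voxel brain patterns.
rng = np.random.default_rng(0)
old_people = rng.normal(0.3, 1.0, size=(40, 500))    # 40 trials x 500 voxels
young_people = rng.normal(-0.3, 1.0, size=(40, 500))
X = np.vstack([old_people, young_people])
y = np.array([0] * 40 + [1] * 40)
print(reproducibility_index(old_people), decoding_accuracy(X, y))
```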

2.
Rigoulot S, Pell MD. PLoS ONE. 2012;7(1):e30740
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

3.
New episodic memories are retained better if learning is followed by a few minutes of wakeful rest than by the encoding of novel external information. Novel encoding is said to interfere with the consolidation of recently acquired episodic memories. Here we report four experiments in which we examined whether autobiographical thinking, i.e. an ‘internal’ memory activity, also interferes with episodic memory consolidation. Participants were presented with three wordlists consisting of common nouns; one list was followed by wakeful rest, one by novel picture encoding and one by autobiographical retrieval/future imagination, cued by concrete sounds. Both novel encoding and autobiographical retrieval/future imagination lowered wordlist retention significantly. Follow-up experiments demonstrated that the interference by our cued autobiographical retrieval/future imagination delay condition could not be accounted for by the sound cues alone or by executive retrieval processes. Moreover, our results demonstrated evidence of a temporal gradient of interference across experiments. Thus, we propose that rich autobiographical retrieval/future imagination hampers the consolidation of recently acquired episodic memories and that such interference is particularly likely in the presence of external concrete cues.

4.
In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to what extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues that were either synchronized or desynchronized with the color change of the target (a horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.

5.
Age-related changes in autobiographical memory (AM) recall are characterized by a decline in episodic details, while semantic aspects are spared. This deleterious effect is thought to be mediated by an inefficient recruitment of executive processes during AM retrieval. To date, contrasting evidence has been reported on the neural underpinnings of this decline, and none of the previous studies has directly compared the episodic and semantic aspects of AM in the elderly. We asked 20 young and 17 older participants to recall specific and general autobiographical events (i.e., episodic and semantic AM) elicited by personalized cues while recording their brain activity by means of fMRI. At the behavioral level, we confirmed that the richness of episodic AM retrieval is specifically impoverished in aging and that this decline is related to a reduction in executive functions. At the neural level, both age groups recruited a large network during episodic AM retrieval encompassing prefrontal, cortical midline and posterior regions, and medial temporal structures, including the hippocampus. This network was very similar, but less extended, during semantic AM retrieval. Nevertheless, greater activity was observed in the dorsal anterior cingulate cortex (dACC) during episodic compared to semantic AM retrieval in young participants, with a reversed pattern in the elderly. Moreover, activity in the dACC during episodic AM retrieval was correlated with inhibition and with the richness of memories in both groups. Our findings shed light on the direct link between episodic AM retrieval, executive control, and their decline in aging, and propose a possible neuronal signature. They also suggest that the increased activity in the dACC during semantic AM retrieval in the elderly could be seen as a compensatory mechanism underpinning the successful AM performance observed in aging. These results are discussed in the framework of recently proposed models of neural reorganization in aging.

6.
Multimodal integration, which mainly refers to multisensory facilitation and multisensory inhibition, is the process of merging multisensory information in the human brain. However, the neural mechanisms underlying the dynamic characteristics of multimodal integration are not fully understood. The objective of this study was to investigate the basic mechanisms of multimodal integration by assessing the intermodal influences of vision, audition, and somatosensation (the influence of multisensory background events on the target event). We used a timed target detection task and measured both behavioral and electroencephalographic responses to visual target events (a green solid circle), auditory target events (a 2 kHz pure tone) and somatosensory target events (a 1.5 ± 0.1 mA square wave pulse) from 20 normal participants. There were significant differences in both behavioral performance and ERP components when comparing the unimodal target stimuli with the multimodal (bimodal and trimodal) target stimuli for all target groups. A significant correlation between reaction time and P3 latency was observed across all target conditions. The perceptual processing of auditory target events (A) was inhibited by the background events, whereas the perceptual processing of somatosensory target events (S) was facilitated by the background events. In contrast, the perceptual processing of visual target events (V) remained impervious to multisensory background events.

7.
Diverse animal species use multimodal communication signals to coordinate reproductive behavior. Despite active research in this field, the brain mechanisms underlying multimodal communication remain poorly understood. Similar to humans and many mammalian species, anurans often produce auditory signals accompanied by conspicuous visual cues (e.g., vocal sac inflation). In this study, we used video playbacks to determine the role of vocal-sac inflation in little torrent frogs (Amolops torrentis). We then exposed females to blank, visual, auditory, and audiovisual stimuli and analyzed gene expression changes in whole-brain tissue using RNA-seq. The results showed that both auditory cues (i.e., male advertisement calls) and visual cues were attractive to female frogs, although auditory cues were more attractive than visual cues. Females preferred simultaneous bimodal cues to unimodal cues. Hierarchical clustering of differentially expressed genes showed a close relationship between neurogenomic states and momentarily expressed sexual signals. We also found that Gene Ontology terms and KEGG pathways involved in energy metabolism were mostly increased in the blank condition relative to the visual, acoustic, or audiovisual conditions, indicating that brain energy use may play an important role in the response to these stimuli. In sum, behavioral and neurogenomic responses to acoustic and visual cues are correlated in female little torrent frogs.
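For readers unfamiliar with the analysis, hierarchical clustering of differentially expressed genes across the four stimulus conditions can be sketched roughly as below; the gene-by-condition matrix, linkage settings, and cluster count are assumptions for illustration, not values taken from the paper.

```python
# Illustrative sketch: hierarchical clustering of differentially expressed genes
# (DEGs) across the four stimulus conditions; toy data, assumed settings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

conditions = ["blank", "visual", "acoustic", "audiovisual"]
rng = np.random.default_rng(1)
expr = rng.lognormal(mean=2.0, sigma=0.5, size=(200, 4))   # 200 DEGs x 4 conditions

expr_z = zscore(np.log2(expr + 1), axis=1)                 # per-gene standardized profiles
tree = linkage(expr_z, method="average", metric="correlation")
clusters = fcluster(tree, t=4, criterion="maxclust")       # cut the tree into 4 gene clusters

print("condition order:", conditions)
for c in np.unique(clusters):
    members = expr_z[clusters == c]
    print(f"cluster {c}: {len(members)} genes, mean profile {np.round(members.mean(axis=0), 2)}")
```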

8.
Autobiographical memory refers to events and information about one's personal life and the self. Within autobiographical memory, many authors distinguish episodic and semantic components. The study of retrograde amnesia provides information about memory consolidation: according to the "standard model" of consolidation, the medial temporal lobe plays a time-limited role in memory retrieval. Functional neuroanatomy studies of autobiographical memory are few, and most are recent. These studies address which brain regions are involved in autobiographical retrieval, in episodic versus semantic autobiographical memory, and in the consolidation process. Results show that autobiographical retrieval depends on specific brain regions such as the frontal cortex. Concerning memory consolidation, the findings are most consistent with the idea that the hippocampal complex is involved in both recent and remote memories.

9.
Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

10.
Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation into a semantic memory in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features belonging to different objects are segmented via γ-band synchronization of neural oscillators. The feature areas are further connected with a lexical area devoted to the representation of words. Synapses among the feature areas, and between the lexical area and the feature areas, are trained via a time-dependent Hebbian rule during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In the future, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).
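A toy sketch of the kind of time-dependent Hebbian rule described above: a synapse from a feature unit to a lexical unit grows whenever the word and a low-pass-filtered trace of the feature activity are active together. The rate-coded units, trace time constant, and learning rate are illustrative assumptions, not the model's actual equations.

```python
# Hypothetical sketch of a time-dependent Hebbian rule linking feature areas
# to a lexical unit: synapses grow when pre- and post-synaptic activity overlap in time.
import numpy as np

n_features, n_steps = 20, 200
dt, tau = 1.0, 20.0          # time step and activity-trace time constant (assumed)
eta = 0.01                   # learning rate (assumed)

W = np.zeros(n_features)     # weights from feature units to one lexical unit
trace = np.zeros(n_features) # low-pass filtered ("time-dependent") feature activity

rng = np.random.default_rng(2)
object_features = (rng.random(n_features) < 0.3).astype(float)  # which features the object has

for t in range(n_steps):
    feat_act = object_features * (0.5 + 0.5 * rng.random(n_features))  # noisy feature activity
    word_act = 1.0                                 # the word is presented throughout training
    trace += dt / tau * (feat_act - trace)         # trace decays between presentations
    W += eta * word_act * trace                    # Hebbian growth where word and trace co-occur
    W = np.clip(W, 0.0, 1.0)                       # keep synapses bounded

print(np.round(W, 2))  # strong weights develop only for the object's own features
```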

11.
Gold BT, Buckner RL. Neuron. 2002;35(4):803-812
One of the most ubiquitous findings in functional neuroimaging research is activation of the left inferior prefrontal cortex (LIPC) during tasks requiring controlled semantic retrieval. Here we show that LIPC participates in the controlled retrieval of nonsemantic representations as well as semantic representations. Results also demonstrate that LIPC coactivates with dissociable posterior regions depending on the information retrieved: it activates with left temporal cortex during the controlled retrieval of semantics and with left posterior frontal and parietal cortex during the controlled retrieval of phonology. Correlation of performance with LIPC activation suggests a processing role associated with mapping relatively ambiguous stimulus-to-representation relationships during both semantic and phonological tasks. These findings suggest that LIPC participates in controlled processing across multiple information domains, collaborating with dissociable posterior regions depending upon the kind of information retrieved.

12.
Memory for events and their spatial context: models and experiments
The computational role of the hippocampus in memory has been characterized as: (i) an index to disparate neocortical storage sites; (ii) a time-limited store supporting neocortical long-term memory; and (iii) a content-addressable associative memory. These ideas are reviewed and related to several general aspects of episodic memory, including the differences between episodic, recognition and semantic memory, and whether hippocampal lesions differentially affect recent or remote memories. Some outstanding questions remain, such as: what characterizes episodic retrieval as opposed to other forms of read-out from memory; what triggers the storage of an event memory; and what are the neural mechanisms involved? To address these questions, a neural-level model of the medial temporal and parietal roles in retrieval of the spatial context of an event is presented. This model combines the idea that retrieval of the rich context of real-life events is a central characteristic of episodic memory, and the idea that medial temporal allocentric representations are used in long-term storage while parietal egocentric representations are used to imagine, manipulate and re-experience the products of retrieval. The model is consistent with the known neural representation of spatial information in the brain, and provides an explanation for the involvement of Papez's circuit in both the representation of heading direction and the recollection of episodic information. Two experiments relating to the model are briefly described. A functional neuroimaging study of memory for the spatial context of life-like events in virtual reality provides support for the model's functional localization. A neuropsychological experiment suggests that the hippocampus does store an allocentric representation of spatial locations.
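The allocentric-to-egocentric mapping at the core of the model can be illustrated with a small worked example; the model itself is neural-level, so the explicit formula below is only a sketch of the computation it is assumed to approximate. An object's body-centred position is its world-centred offset from the observer projected onto the forward and rightward axes defined by the current heading direction, which is one reason a head-direction signal is needed to imagine a retrieved scene from a particular viewpoint.

```python
# Illustrative (not the paper's model): converting a stored allocentric location
# into the egocentric coordinates needed to "imagine" it from a given viewpoint.
import numpy as np

def allocentric_to_egocentric(landmark_xy, observer_xy, heading_rad):
    """Project the world-centred offset onto the body's forward and rightward axes.
    Heading is measured counterclockwise from the world x-axis (assumed convention)."""
    dx, dy = np.asarray(landmark_xy) - np.asarray(observer_xy)
    forward = dx * np.cos(heading_rad) + dy * np.sin(heading_rad)  # ahead of the body
    right = dx * np.sin(heading_rad) - dy * np.cos(heading_rad)    # to the body's right
    return np.array([right, forward])

# A landmark 3 m north of an observer who is facing north lies straight ahead: [0. 3.]
print(allocentric_to_egocentric((0.0, 3.0), (0.0, 0.0), heading_rad=np.pi / 2))
```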

13.
Neural basis of the ventriloquist illusion
The ventriloquist creates the illusion that his or her voice emerges from the visibly moving mouth of the puppet [1]. This well-known illusion exemplifies a basic principle of how auditory and visual information is integrated in the brain to form a unified multimodal percept. When auditory and visual stimuli occur simultaneously at different locations, the more spatially precise visual information dominates the perceived location of the multimodal event. Previous studies have examined neural interactions between spatially disparate auditory and visual stimuli [2-5], but none has found evidence for a visual influence on the auditory cortex that could be directly linked to the illusion of a shifted auditory percept. Here we utilized event-related brain potentials combined with event-related functional magnetic resonance imaging to demonstrate on a trial-by-trial basis that a precisely timed biasing of the left-right balance of auditory cortex activity by the discrepant visual input underlies the ventriloquist illusion. This cortical biasing may reflect a fundamental mechanism for integrating the auditory and visual components of environmental events, which ensures that the sounds are adaptively localized to the more reliable position provided by the visual input.

14.
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.

15.
In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4–6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.
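One standard way to quantify this kind of long-range theta coupling is the phase-locking value between two channels; the sketch below, with an assumed sampling rate and toy data, is illustrative only and is not the study's analysis code.

```python
# Hypothetical sketch: theta-band (4-6 Hz) phase-locking value (PLV) between two
# EEG channels, e.g. a left-ATL electrode and a distant site, across trials.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                            # sampling rate in Hz (assumed)
b, a = butter(4, [4, 6], btype="bandpass", fs=fs)   # theta band-pass filter

def plv(epochs_a, epochs_b):
    """Phase-locking value across trials; epochs are (n_trials, n_samples) arrays."""
    pa = np.angle(hilbert(filtfilt(b, a, epochs_a, axis=1), axis=1))
    pb = np.angle(hilbert(filtfilt(b, a, epochs_b, axis=1), axis=1))
    return np.abs(np.mean(np.exp(1j * (pa - pb)), axis=0))   # PLV per time sample

rng = np.random.default_rng(3)
trials, samples = 60, 2 * fs                        # 60 trials of 2 s (toy data)
atl = rng.standard_normal((trials, samples))
other_site = rng.standard_normal((trials, samples))
print(plv(atl, other_site).mean())                  # near 0 for unrelated noise
```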

16.
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability, so less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
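The two simpler candidate rules can be written down compactly; the sketch below (with assumed parameter values) illustrates the reliability-based and fixed-ratio models, while the causal-inference model would additionally weight each update by the inferred probability that the two cues share a common source.

```python
# Illustrative sketch of two of the candidate recalibration rules (assumed parameters).
def reliability_based_shift(discrepancy, rel_a, rel_v, rate=0.5):
    """Each modality is recalibrated in proportion to the OTHER cue's relative
    reliability, so the less reliable cue moves more."""
    shift_a = rate * discrepancy * rel_v / (rel_a + rel_v)   # auditory shift toward vision
    shift_v = -rate * discrepancy * rel_a / (rel_a + rel_v)  # visual shift toward audition
    return shift_a, shift_v

def fixed_ratio_shift(discrepancy, ratio_a=0.8, rate=0.5):
    """The split between modalities is fixed, regardless of current cue reliability."""
    return rate * discrepancy * ratio_a, -rate * discrepancy * (1 - ratio_a)

# 10 deg audiovisual discrepancy; vision much more reliable than audition.
print(reliability_based_shift(10.0, rel_a=1.0, rel_v=9.0))   # audition moves most
print(fixed_ratio_shift(10.0))                               # same split at any reliability
```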

17.
Temporal information is often contained in multi-sensory stimuli, but it is currently unknown how the brain combines, for example, visual and auditory cues into a coherent percept of time. The existing studies of cross-modal time perception mainly support the "modality appropriateness hypothesis", i.e. the domination of auditory temporal cues over visual ones because of the higher precision of audition for time perception. However, these studies suffer from methodological problems and conflicting results. We introduce a novel experimental paradigm to examine cross-modal time perception by combining an auditory time perception task with a visually guided motor task, requiring participants to follow an elliptic movement on a screen with a robotic manipulandum. We find that subjective duration is distorted according to the speed of visually observed movement: the faster the visual motion, the longer the perceived duration. In contrast, the actual execution of the arm movement does not contribute to this effect, but impairs discrimination performance by dual-task interference. We also show that additional training of the motor task attenuates the interference, but does not affect the distortion of subjective duration. The study demonstrates direct influence of visual motion on auditory temporal representations, which is independent of attentional modulation. At the same time, it provides causal support for the notion that time perception and continuous motor timing rely on separate mechanisms, a proposal that was formerly supported by correlational evidence only. The results constitute a counterexample to the modality appropriateness hypothesis and are best explained by Bayesian integration of modality-specific temporal information into a centralized "temporal hub".
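A minimal sketch of the reliability-weighted (Bayesian) fusion of modality-specific duration estimates implied by a centralized "temporal hub"; the Gaussian noise assumption and the numerical values are illustrative, not fitted to the study's data.

```python
# Illustrative sketch: reliability-weighted (Bayesian) fusion of auditory and visual
# duration estimates, assuming independent Gaussian noise on each cue.
def fuse_durations(dur_a, sigma_a, dur_v, sigma_v):
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)   # weight = relative reliability
    fused = w_a * dur_a + (1 - w_a) * dur_v
    fused_sigma = (1 / (1 / sigma_a**2 + 1 / sigma_v**2)) ** 0.5
    return fused, fused_sigma

# Audition is the more precise timer, so it dominates, but a fast visual motion that
# lengthens the visual estimate still pulls the fused duration upward.
print(fuse_durations(dur_a=1.00, sigma_a=0.05, dur_v=1.20, sigma_v=0.15))
```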

18.
Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for the number of items to be remembered, the retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
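A toy sketch of a summed-similarity recognition model of the kind described: probe familiarity grows with the probe's summed similarity to the studied items and is discounted by inter-item homogeneity. The exponential kernel, the homogeneity penalty, and all parameter values are assumptions for illustration, not the fitted model.

```python
# Toy sketch of a summed-similarity recognition model with an inter-item
# homogeneity term (kernel and parameter values are assumed, not fitted).
import numpy as np
from itertools import combinations

def similarity(x, y, c=2.0):
    """Exponential similarity along a single stimulus feature axis
    (e.g. grating orientation or ripple velocity); c is a sensitivity parameter."""
    return np.exp(-c * abs(x - y))

def p_old(probe, study_list, c=2.0, gamma=0.7, criterion=1.0):
    """P('old') rises with the probe's summed similarity to the studied items and
    falls as the studied items become more similar to one another."""
    summed_sim = sum(similarity(probe, item, c) for item in study_list)
    homogeneity = np.mean([similarity(a, b, c) for a, b in combinations(study_list, 2)])
    evidence = summed_sim - gamma * homogeneity * len(study_list)
    return 1.0 / (1.0 + np.exp(-(evidence - criterion)))      # logistic decision rule

study = [0.2, 0.5, 0.9]           # three studied items on a normalized feature axis
print(p_old(0.5, study))          # probe matching a studied item: higher P('old')
print(p_old(1.8, study))          # distant lure: lower P('old')
```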

19.
The perception of emotions is often suggested to be multimodal in nature, and bimodal as compared to unimodal (auditory or visual) presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalizations when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations have not yet been demonstrated. In three experiments we investigated the influence of emotional voice as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed on an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions, in that test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female participants. Our results (1) support the idea of contrastive processing of emotions, (2) show for the first time crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.

20.
Spatiotemporal dynamics of modality-specific and supramodal word processing
The ability of written and spoken words to access the same semantic meaning provides a test case for the multimodal convergence of information from sensory to associative areas. Using anatomically constrained magnetoencephalography (aMEG), the present study investigated the stages of word comprehension in real time in the auditory and visual modalities, as subjects participated in a semantic judgment task. Activity spread from the primary sensory areas along the respective ventral processing streams and converged in anterior temporal and inferior prefrontal regions, primarily on the left, at around 400 ms. Comparison of response patterns during repetition priming between the two modalities suggests that they are initiated by modality-specific memory systems, but that they are eventually elaborated mainly in supramodal areas.
