Similar Articles
20 similar articles found (search time: 15 ms)
1.
Perceptual aftereffects following adaptation to simple stimulus attributes (e.g., motion, color) have been studied for hundreds of years. A striking recent discovery was that adaptation also elicits contrastive aftereffects in visual perception of complex stimuli and faces [1-6]. Here, we show for the first time that adaptation to nonlinguistic information in voices elicits systematic auditory aftereffects. Prior adaptation to male voices causes a voice to be perceived as more female (and vice versa), and these auditory aftereffects were measurable even minutes after adaptation. By contrast, crossmodal adaptation effects were absent, both when male or female first names and when silently articulating male or female faces were used as adaptors. When sinusoidal tones (with frequencies matched to male and female voice fundamental frequencies) were used as adaptors, no aftereffects on voice perception were observed. This excludes explanations for the voice aftereffect in terms of both pitch adaptation and postperceptual adaptation to gender concepts and suggests that contrastive voice-coding mechanisms may routinely influence voice perception. The role of adaptation in calibrating properties of high-level voice representations indicates that adaptation is not confined to vision but is a ubiquitous mechanism in the perception of nonlinguistic social information from both faces and voices.

2.
Theoretical considerations and early empirical findings suggested facial width-to-height ratio (fWHR) may be relevant to person perception because it is associated with behavioral dispositions. More recent evidence failing to find fWHR-behavior links suggests that mismatch or byproduct hypotheses may be necessary to explain fWHR-based trait inferences; however, these explanations may not be needed because it is not clear that fWHR is reliably associated with trait inferences. To investigate the robustness of fWHR-inference links, we conducted secondary analyses of a cross-national dataset consisting of ratings by 11,481 participants across 11 world regions who judged 60 male and 60 female faces on one of 13 social traits (ns per trait range from 760 to 975). In preregistered analyses—and exploratory analyses of a subset of traits in the larger sample of 597 faces from which the 120 faces were drawn—we found mixed evidence for fWHR-based social judgments. In multilevel models, fWHR was not reliably linked to raters' judgments of male faces for any of the 13 trait inferences but was negatively associated with ratings of female faces' dominance, trustworthiness, sociability, emotional stability, responsibility, confidence, attractiveness, and intelligence. In exploratory analyses of a subset of traits using the larger sample of faces, fWHR was associated positively with perceptions of meanness and aggressiveness in male but not female faces, negatively with attractiveness and dominance in female but not male faces, and negatively with trustworthiness in male but not female faces. We interpret these mixed findings to suggest that (1) fWHR-inference links are likely to be smaller and less reliable than expected from prior research; (2) fWHR may play a larger role in perceptions of female faces than would be predicted from the theory underpinning fWHR hypotheses; and (3) future research should more closely examine the extent to which robust fWHR-inferences reflect mismatch in the reliability of fWHR-behavior links between ancestral and modern environments versus byproducts of other person perception mechanisms.

3.
Face perception is modulated by sexual preference
Face perception is mediated by a distributed neural system in the human brain. The response to faces is modulated by cognitive factors such as attention, visual imagery, and emotion; however, the effects of gender and sexual orientation are currently unknown. We used fMRI to test whether subjects would respond more to their sexually preferred faces and predicted such modulation in the reward circuitry. Forty heterosexual and homosexual men and women viewed photographs of male and female faces and assessed facial attractiveness. Regardless of their gender and sexual orientation, all subjects similarly rated the attractiveness of both male and female faces. Within multiple, bilateral face-selective regions in the visual cortex, limbic system, and prefrontal cortex, similar patterns of activation were found in all subjects in response to both male and female faces. Consistent with our hypothesis, we found a significant interaction between stimulus gender and the sexual preference of the subject in the thalamus and medial orbitofrontal cortex, where heterosexual men and homosexual women responded more to female faces and heterosexual women and homosexual men responded more to male faces. Our findings suggest that sexual preference modulates face-evoked activation in the reward circuitry.

4.
Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g., high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted participants to a series of objects highly associated with one gender and subsequently asked them to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females, and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces, respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces, and (b) adaptation to gender, which is a high-level concept, can modulate brain areas involved in facial gender perception through top-down processes.

5.
The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions: the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domain. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.

6.
The perception of emotions is often suggested to be multimodal in nature, and bimodal presentation of emotional stimuli, as compared to unimodal (auditory or visual) presentation, can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalizations, when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations have not been demonstrated yet. In three experiments we investigated the influence of emotional voice as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed on an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions, in that test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female participants. Our results (1) support the idea of contrastive processing of emotions, (2) show for the first time crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.

7.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by auditory information, and visual events were never perceived as shorter than their actual durations.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual, or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

8.
Using conventional extracellular electrophysiological recording, we studied auditory-visual bimodal neurons and their audiovisual integration properties in the cortex of 3-week-old juvenile rats, with adult animals as controls. In the region dorsal to the auditory cortex, at the border between auditory and visual cortex (the temporo-parieto-occipital association cortex), 324 neurons were recorded, of which 45 (13.9%) were auditory-visual bimodal neurons, a proportion far below that in adult animals (42.8%). These bimodal neurons fell into three types: A-V, v-A, and a-V. According to their integration of auditory-visual information, they could be classified as enhancing, suppressive, or modulatory. The integration effect depended on the interval between the paired sound and light stimuli; the range of intervals over which an integration effect was obtained defines the integration time window, which averaged 11.9 ms in juvenile animals, far shorter than in adults (mean 23.2 ms). These results suggest that, like the modality-specific response properties of unimodal sensory neurons, cortical auditory-visual bimodal neurons undergo a process of postnatal development and maturation. The findings provide important experimental data for further study of the mechanisms of multisensory integration in central neurons.

9.
Mosquitoes hear with their antennae, which in most species are sexually dimorphic. Johnston, who discovered the mosquito auditory organ at the base of the antenna 150 years ago, speculated that audition was involved in mating behavior. Indeed, male mosquitoes are attracted to female flight tones. The male auditory organ has been proposed to act as an acoustic filter for female flight tones, but female auditory behavior is unknown. We show, for the first time, interactive auditory behavior between males and females that leads to sexual recognition. Individual males and females both respond to pure tones by altering wing-beat frequency. Behavioral auditory tuning curves, based on minimum threshold sound levels that elicit a change in wing-beat frequency to pure tones, are sharper than the mechanical tuning of the antennae, with males being more sensitive than females. We flew opposite-sex pairs of tethered Toxorhynchites brevipalpis and found that each mosquito alters its wing-beat frequency in response to the flight tone of the other, so that within seconds their flight-tone frequencies are closely matched, if not completely synchronized. The flight tones of same-sex pairs may converge in frequency but eventually diverge dramatically.

10.
Implicit multisensory associations influence voice recognition
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e., voices and faces, or arbitrary multimodal combinations, i.e., voices and written names, or ring tones and cell phones or the brand names of those phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations thereafter become available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

11.
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
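As a rough illustration of the model class described in this abstract (a sketch under our own assumptions, not the authors' implementation), the Python snippet below treats words as points in a D-dimensional feature space, draws independent Gaussian-noise auditory and visual observations of a target word, and recognizes by maximum posterior (with a uniform prior, the maximum-likelihood word). All parameter names and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def recognize(words, target, sigma_a, sigma_v, use_visual=True):
    """Return the index of the word with the highest posterior given
    noisy auditory (and optionally visual) observations of the target."""
    obs_a = target + rng.normal(0.0, sigma_a, size=target.shape)
    # Log-likelihood of each candidate word under the auditory observation
    logp = -np.sum((words - obs_a) ** 2, axis=1) / (2 * sigma_a ** 2)
    if use_visual:
        # Independent visual observation; optimal integration under
        # independent Gaussian noise simply adds log-likelihoods.
        obs_v = target + rng.normal(0.0, sigma_v, size=target.shape)
        logp -= np.sum((words - obs_v) ** 2, axis=1) / (2 * sigma_v ** 2)
    return int(np.argmax(logp))

def accuracy(dim, sigma_a, sigma_v=1.0, n_words=50, n_trials=2000,
             audio_only=False):
    words = rng.normal(0.0, 1.0, size=(n_words, dim))  # words as points
    hits = 0
    for _ in range(n_trials):
        k = int(rng.integers(n_words))
        hits += recognize(words, words[k], sigma_a, sigma_v,
                          use_visual=not audio_only) == k
    return hits / n_trials

# Visual enhancement = accuracy(AV) - accuracy(A), across auditory noise
# levels, for a low- and a high-dimensional word space.
for dim in (2, 20):
    gains = [accuracy(dim, s) - accuracy(dim, s, audio_only=True)
             for s in (0.5, 1.5, 3.0, 6.0)]
    print(f"dim={dim}: AV enhancement per noise level = {np.round(gains, 3)}")
```

Varying `dim` probes the contrast the abstract draws: in this model class, the shape of the enhancement curve across noise levels depends on the dimensionality of the word space.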

12.
The identity of an object is a fixed property, independent of where it appears, and an effective visual system should capture this invariance [1-3]. However, we now report that the perceived gender of a face is strongly biased toward male or female at different locations in the visual field. The spatial pattern of these biases was distinctive and stable for each individual. Identical neutral faces looked different when they were presented simultaneously at locations maximally biased to opposite genders. A similar effect was observed for perceived age of faces. We measured the magnitude of this perceptual heterogeneity for four other visual judgments: perceived aspect ratio, orientation discrimination, spatial-frequency discrimination, and color discrimination. The effect was sizeable for the aspect ratio task but substantially smaller for the other three tasks. We also evaluated perceptual heterogeneity for facial gender and orientation tasks at different spatial scales. Strong heterogeneity was observed even for the orientation task when tested at small scales. We suggest that perceptual heterogeneity is a general property of visual perception and results from undersampling of the visual signal at spatial scales that are small relative to the size of the receptive fields associated with each visual attribute.

13.
14.
When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report ongoing switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of the human ability to analyze an auditory scene, i.e., to attribute portions of the incoming sound sequence to distinct sound-generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and the positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
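The following toy simulation (an illustrative sketch of the evidence-accumulation idea, not the authors' published model; all parameters are invented) shows how such a process can produce both stochastic switching and positively correlated successive phase durations: evidence accumulated during a long phase carries over as support for the next percept, which then also tends to persist longer.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_phases(n_phases=2000, gain=0.05, noise=0.03,
                    p_switch=0.05, carry=0.8, floor=0.5):
    """Toy evidence-accumulation account of bistable streaming.

    While one percept dominates, evidence for the opposite percept
    accumulates noisily. Once that evidence exceeds the support for the
    current percept, a switch fires stochastically (hazard p_switch per
    step). The evidence present at switch time becomes the support for
    the newly dominant percept, so a long phase (much accumulated
    evidence) tends to be followed by another long phase.
    """
    durations = []
    support = floor + 1.0               # support for the current percept
    for _ in range(n_phases):
        evidence, t = 0.0, 0
        while True:
            t += 1
            evidence += max(0.0, gain + rng.normal(0.0, noise))
            if evidence >= support and rng.random() < p_switch:
                break
        durations.append(t)
        support = floor + carry * evidence  # carry evidence into next phase
    return np.array(durations)

d = simulate_phases()
r = np.corrcoef(d[:-1], d[1:])[0, 1]
print(f"lag-1 correlation between successive phase durations: {r:.2f}")
```

Because the carried-over support evolves slowly across phases, the simulated lag-1 correlation between successive durations comes out positive, qualitatively matching the finding reported above.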

15.
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

16.
An analysis of airplane accidents reveals that pilots sometimes simply fail to react to critical auditory alerts. This inability of an auditory stimulus to reach consciousness has been termed inattentional deafness. Recent data from the literature suggest that tasks involving high cognitive load consume most of the available attentional capacity, leaving little or none for processing unexpected information. In addition, there is a growing body of evidence for a shared attentional capacity between vision and hearing. In this context, the abundant information in modern cockpits is likely to produce inattentional deafness. We investigated this hypothesis by combining electroencephalographic (EEG) measurements with an ecological aviation task performed under contextual variation of cognitive load (high or low), including an alarm detection task. Two different audio tones were played: standard tones and deviant tones. Participants were instructed to ignore standard tones and to report deviant tones using a response pad. More than 31% of the deviant tones were not detected in the high load condition. Analysis of the EEG measurements showed a drastic diminution of the auditory P300 amplitude concomitant with this behavioral effect, whereas the N100 component was not affected. We suggest that these behavioral and electrophysiological results provide new insights into why pilots fail to react to critical auditory information. Relevant applications concern the prevention of alarm omissions, mental workload measurement, and enhanced warning designs.

17.
Auditory processing in primate cerebral cortex.
Auditory information is relayed from the ventral nucleus of the medial geniculate complex to a core of three primary or primary-like areas of auditory cortex that are cochleotopically organized and highly responsive to pure tones. Auditory information is then distributed from the core areas to a surrounding belt of about seven areas that are less precisely cochleotopic and generally more responsive to complex stimuli than tones. Recent studies indicate that the belt areas relay to the rostral and caudal divisions of a parabelt region at a third level of processing in the cortex lateral to the belt. The parabelt and belt regions have additional inputs from dorsal and magnocellular divisions of the medial geniculate complex and other parts of the thalamus. The belt and parabelt regions appear to be concerned with integrative and associative functions involved in pattern perception and object recognition. The parabelt fields connect with regions of temporal, parietal, and frontal cortex that mediate additional auditory functions, including space perception and auditory memory.

18.
Beauchamp MS, Lee KE, Argall BD, Martin A. Neuron. 2004;41(5):809-823.
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

19.
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (the conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

20.
The influence of stimulus duration on auditory evoked potentials (AEPs) was examined for tones varying randomly in duration, location, and frequency in an auditory selective attention task. Stimulus duration effects were isolated as duration difference waves by subtracting AEPs to short duration tones from AEPs to longer duration tones of identical location, frequency and rise time. This analysis revealed that AEP components generally increased in amplitude and decreased in latency with increments in signal duration, with evidence of longer temporal integration times for lower frequency tones. Different temporal integration functions were seen for different N1 subcomponents. The results suggest that different auditory cortical areas have different temporal integration times, and that these functions vary as a function of tone frequency.
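The difference-wave analysis described above amounts to a subtraction of condition-averaged epochs that match on every factor except duration. A minimal numpy sketch, with hypothetical array shapes and condition labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epoched AEP data: trials x time samples, keyed by
# (duration, location, frequency); rise time is matched across conditions.
epochs = {
    ("short", "left", 250): rng.standard_normal((120, 512)),
    ("long",  "left", 250): rng.standard_normal((120, 512)),
}

def duration_difference_wave(epochs, location, frequency):
    """AEP averaged over long-duration trials minus the average over
    short-duration trials of identical location and frequency."""
    long_avg = epochs[("long", location, frequency)].mean(axis=0)
    short_avg = epochs[("short", location, frequency)].mean(axis=0)
    return long_avg - short_avg

diff_wave = duration_difference_wave(epochs, "left", 250)
print(diff_wave.shape)  # one value per time sample: (512,)
```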

