Auditory-visual crossmodal integration in perception of face gender
Authors:Smith Eric L  Grabowecky Marcia  Suzuki Satoru
Institution:Department of Psychology and Institute for Neuroscience, Northwestern University, Evanston, Illinois 60208, USA.
Abstract: Whereas extensive neuroscientific and behavioral evidence has confirmed a role of auditory-visual integration in representing space [1-6], little is known about the role of auditory-visual integration in object perception. Although recent neuroimaging results suggest integrated auditory-visual object representations [7-11], substantiating behavioral evidence has been lacking. We demonstrated auditory-visual integration in the perception of face gender by using pure tones that are processed in low-level auditory brain areas and that lack the spectral components that characterize human vocalization. When androgynous faces were presented together with pure tones in the male fundamental-speaking-frequency range, faces were more likely to be judged as male, whereas when faces were presented with pure tones in the female fundamental-speaking-frequency range, they were more likely to be judged as female. Importantly, when participants were explicitly asked to attribute gender to these pure tones, their judgments were primarily based on relative pitch and were uncorrelated with the male and female fundamental-speaking-frequency ranges. This perceptual dissociation of absolute-frequency-based crossmodal-integration effects from relative-pitch-based explicit perception of the tones provides evidence for a sensory integration of auditory and visual signals in representing human gender. This integration probably develops because of concurrent neural processing of visual and auditory features of gender.
Indexed in: ScienceDirect, PubMed