Similar Documents

20 similar documents found.
1.
When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations of auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled each other out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.
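As a rough sketch of how such duration-comparison data are commonly analyzed (this is not the authors' code; the data and the 600 ms standard below are fabricated for illustration): fitting a cumulative Gaussian to the proportion of "comparison longer" responses yields the point of subjective equality (PSE), and a PSE below the standard duration indicates perceptual dilation of the flickering comparison.

```python
# Sketch: estimating the PSE for a duration-comparison task. Hypothetical data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

standard = 600.0                                   # hypothetical standard duration (ms)
comparison = np.array([400, 450, 500, 550, 600, 650, 700], dtype=float)
p_longer = np.array([0.08, 0.20, 0.42, 0.61, 0.78, 0.90, 0.96])  # P("comparison longer")

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: probability of judging the comparison as longer."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, comparison, p_longer, p0=[standard, 50.0])
# A PSE below the standard means the flickering comparison was perceptually
# dilated: it needed less physical duration to match the continuous standard.
print(f"PSE = {pse:.0f} ms vs. standard = {standard:.0f} ms")
```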

2.
A two-process probabilistic theory of emotion perception based on a non-linear combination of facial features is presented. Assuming that the upper and the lower part of the face function as the building blocks of emotion perception, an empirical test is provided with fear and happiness as target emotions. Subjects were presented with prototypical fearful and happy faces and with computer-generated chimerical expressions that combined happy and fearful halves. Subjects were asked to indicate the emotions they perceived using an extensive list of emotions. We show that some emotions require a conjunction of the two halves of a face to be perceived, whereas for other emotions one half is sufficient. We demonstrate that chimerical faces give rise to the perception of genuine emotions. The findings provide evidence that different combinations of the two halves of a fearful and a happy face, whether congruent or not, generate the perception of emotions other than fear and happiness.
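As a toy illustration of the conjunctive versus single-half logic (the actual two-process model is more elaborate, and the probabilities here are invented):

```python
# Toy rule: a "conjunctive" emotion needs evidence from both face halves,
# a "disjunctive" one from either half. Probabilities are hypothetical.
def p_conjunctive(p_upper, p_lower):
    """Perceived only when both halves carry the emotion's features."""
    return p_upper * p_lower

def p_disjunctive(p_upper, p_lower):
    """Perceived when either half carries the emotion's features."""
    return 1 - (1 - p_upper) * (1 - p_lower)

# Chimera: fearful upper half (strong brow cue), happy lower half (smile).
print(p_conjunctive(0.9, 0.1))   # 0.09 -> fear unlikely if it needs both halves
print(p_disjunctive(0.1, 0.9))   # 0.91 -> happiness likely from the smile alone
```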

3.
We investigated whether the perception of the crispness and staleness of potato chips can be affected by modifying the sounds produced during the biting action. Participants in our study bit into potato chips with their front teeth while rating either their crispness or freshness using a computer-based visual analog scale. The results demonstrate that the perception of both crispness and staleness was systematically altered by varying the loudness and/or frequency composition of the auditory feedback elicited during the biting action. The potato chips were perceived as being both crisper and fresher when either the overall sound level was increased or when just the high-frequency sounds (in the range of 2 kHz–20 kHz) were selectively amplified. These results highlight the significant role that auditory cues can play in modulating the perception and evaluation of foodstuffs (despite the fact that consumers are often unaware of the influence of such auditory cues). The paradigm reported here also provides a novel empirical methodology for assessing such multisensory contributions to food perception.
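To illustrate the kind of manipulation described, here is a sketch of selectively boosting the 2–20 kHz band of a recorded sound. The filter design, gain, and stand-in signal are assumptions, not the authors' processing chain.

```python
# Sketch: amplify the 2-20 kHz band of a "crunch" sound and mix it back in.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100                          # sample rate (Hz); Nyquist 22.05 kHz covers 20 kHz
crunch = np.random.randn(fs)        # stand-in for one second of a recorded biting sound

# Band-pass 2-20 kHz, then add the boosted band back to the original signal.
sos = butter(4, [2000, 20000], btype="bandpass", fs=fs, output="sos")
high_band = sosfilt(sos, crunch)
gain_db = 12.0                      # hypothetical boost level
boosted = crunch + (10 ** (gain_db / 20) - 1) * high_band
```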

4.
The deteriorating state of the environment and global warming pose a serious and unprecedented threat to humanity. Yet, public response and personal behavior do not reflect the proportions of such a threat. In the present research we explored possible reasons for this discrepancy. Past research has shown that people perceive events as more threatening based on their immediacy, certainty, or personal implications. Liberman and Trope (2008) developed the concept of “psychological distance” (PD), according to which more immediate events are seen as “closer in time,” more certain events as “closer in probability,” and events with greater potential for personal harm as “socially closer.” Adopting this concept, we examined how distant, in terms of PD, people perceive environmental threats to be. Using a structural equations model, we measured how PD influences environmental threat perception. In a sample of 305 Israeli students who completed a computerized questionnaire, we found that environmental threats were perceived as psychologically distant in all of the PD dimensions, and that PD strongly affected perceived severity of environmental threats and willingness to engage in pro-environmental behavior. The reasons for the psychological remoteness of environmental threats and possible approaches to cope with its implications are discussed.
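As a sketch of how such a structural model might be specified in code, using the semopy package: the variable names, data file, and paths below are hypothetical, not the authors' model specification.

```python
# Sketch: latent psychological distance (PD) predicting threat severity and
# willingness to act. Hypothetical variables; requires the semopy package.
import pandas as pd
import semopy

desc = """
PD =~ temporal_dist + prob_dist + social_dist
severity ~ PD
willingness ~ severity + PD
"""

data = pd.read_csv("pd_survey.csv")   # hypothetical questionnaire data
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())                # path coefficients and p-values
```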

5.
In temporal ventriloquism, auditory events can illusorily attract the perceived timing of a visual onset [1-3]. We investigated whether the timing of a static sound can also influence spatio-temporal processing of visual apparent motion, induced here by visual bars alternating between opposite hemifields. Perceived direction typically depends on the relative interval in timing between visual left-right and right-left flashes (e.g., rightwards motion dominating when left-to-right interflash intervals are shortest [4]). In our new multisensory condition, interflash intervals were equal, but auditory beeps could slightly lag the right flash yet slightly lead the left flash, or vice versa. This auditory timing strongly influenced perceived visual motion direction, despite providing no spatial auditory motion signal whatsoever. Moreover, prolonged adaptation to such auditorily driven apparent motion produced a robust visual motion aftereffect in the opposite direction, when measured in subsequent silence. Control experiments argued against accounts in terms of possible auditory grouping or possible attention capture. We suggest that the motion arises because the sounds change perceived visual timing, as we separately confirmed. Our results provide a new demonstration of multisensory influences on sensory-specific perception [5], with the timing of a static sound influencing spatio-temporal processing of visual motion direction.
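To make the paradigm's logic concrete, here is a toy calculation (the numbers and the capture fraction are hypothetical, not taken from the paper): if each flash's perceived timing is pulled partway toward its paired beep, equal physical interflash intervals become unequal perceived ones, exactly the asymmetry known to bias apparent-motion direction.

```python
# Toy illustration of temporal ventriloquism biasing apparent motion.
flash_left, flash_right = 0.0, 250.0   # physical onsets (ms): equal 250 ms spacing
beep_shift = 50.0                      # beep lags the right flash, leads the left one
capture = 0.5                          # hypothetical fraction of shift toward the beep

perceived_right = flash_right + capture * beep_shift  # pulled later, toward its beep
perceived_left = flash_left - capture * beep_shift    # pulled earlier, toward its beep

# The left-to-right interval lengthens to 300 ms, so the right-to-left interval
# of the alternation shortens to 200 ms, biasing perceived direction even though
# the physical intervals were equal.
print(perceived_right - perceived_left)   # 300.0
```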

6.

Background

Research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to report only the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer, whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws, presented without sound or paired with either looming or receding sounds, to plws without sound but with perspective cues making them objectively face either towards or away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually more looming becomes harder (easier) when the orthographic plw is paired with looming sounds.
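For concreteness, a sketch of how looming and receding footstep sounds of this kind can be synthesized as noise bursts with a rising or falling level ramp (all parameters are hypothetical, not those of the study):

```python
# Sketch: footstep-like noise bursts whose overall level ramps up (looming)
# or down (receding) across the train. Illustrative parameters only.
import numpy as np

fs, step_interval, n_steps = 44100, 0.5, 8

def footstep_train(ramp_db):
    """Noise bursts whose level ramps by ramp_db over the whole train."""
    out = np.zeros(int(n_steps * step_interval * fs))
    n_burst = int(0.05 * fs)                        # 50 ms burst
    burst = np.random.randn(n_burst) * np.hanning(n_burst)
    for i in range(n_steps):
        gain = 10 ** ((ramp_db * i / (n_steps - 1)) / 20)
        start = int(i * step_interval * fs)
        out[start:start + n_burst] += gain * burst
    return out

looming = footstep_train(+12)    # intensity rises -> footsteps seem to approach
receding = footstep_train(-12)   # intensity falls -> footsteps seem to recede
```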

Conclusions/Significance

The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth perception of plws.

7.

Background

The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or affects other non-temporal qualities of visual perception.

Methodology/Principal Findings

Here, we tested the hypothesis that visual sensitivity - rather than only the perceived duration of visual stimuli - can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d′) for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60–96 ms), visual sensitivity was increased relative to the performance for ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60–96 ms. Moreover, the amplitude of this effect correlated across participants with the visual sensitivity enhancement found for longer-lasting visual stimuli.
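For reference, d′ is computed from hit and false-alarm rates as d′ = z(H) − z(F), where z is the inverse standard-normal CDF. A minimal sketch with illustrative rates (not data from the study):

```python
# Sketch: standard signal-detection sensitivity measure d'.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """d' = z(H) - z(F), with z the inverse standard-normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.80, 0.20))   # ~1.68 for these hypothetical rates
```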

Conclusions/Significance

Our findings show that the duration of co-occurring sounds affects visual perception; it changes visual sensitivity much as altering the (actual) duration of the visual stimuli does.

8.
Performing actions with sensory consequences modifies physiological and behavioral responses relative to otherwise identical sensory input perceived in a passive manner. It is assumed that such modifications occur through an efference copy sent from motor cortex to sensory regions during performance of voluntary actions. In the auditory domain, most behavioral studies report attenuated perceived loudness of self-generated auditory action-consequences. However, several recent behavioral and physiological studies report enhanced responses to such consequences. Here we manipulated the intensity of self-generated and externally generated sounds and examined the type of perceptual modification (enhancement vs. attenuation) reported by healthy human subjects. We found that when the intensity of self-generated sounds was low, perceived loudness was enhanced. Conversely, when the intensity of self-generated sounds was high, perceived loudness was attenuated. These results might reconcile some of the apparent discrepancies in the reported literature and suggest that efference copies can adapt perception according to the differential sensory context of voluntary actions.

9.
An ability to accurately perceive and evaluate out-group members' emotions plays a critical role in intergroup interactions. Here we showed that Chinese participants' implicit attitudes toward White people bias their perception and judgment of the emotional intensity of White people's facial expressions such as anger, fear and sadness. We found that Chinese participants held pro-Chinese/anti-White implicit biases, as assessed with an evaluative implicit association test (IAT). Moreover, their implicit biases positively predicted the perceived intensity of White people's angry, fearful and sad facial expressions, but not of happy expressions. This study demonstrates that implicit racial attitudes can influence the perception and judgment of a range of emotional expressions. Implications for intergroup interactions are discussed.
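For orientation, here is a minimal sketch of the core of an IAT D-score; the full Greenwald et al. (2003) scoring algorithm adds trial exclusions, error penalties, and block-wise computation, and the response times below are fabricated.

```python
# Sketch: simplified IAT D-score = mean latency difference between incompatible
# and compatible blocks, divided by the pooled standard deviation.
import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """Mean latency difference divided by the pooled standard deviation."""
    rts = np.concatenate([rt_compatible, rt_incompatible])
    pooled_sd = rts.std(ddof=1)
    return (rt_incompatible.mean() - rt_compatible.mean()) / pooled_sd

rt_comp = np.array([650., 700., 720., 680., 690.])     # ms, compatible block
rt_incomp = np.array([780., 820., 760., 800., 790.])   # ms, incompatible block
print(iat_d_score(rt_comp, rt_incomp))   # positive => bias favoring compatible pairing
```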

10.
Neural basis of the ventriloquist illusion
The ventriloquist creates the illusion that his or her voice emerges from the visibly moving mouth of the puppet [1]. This well-known illusion exemplifies a basic principle of how auditory and visual information is integrated in the brain to form a unified multimodal percept. When auditory and visual stimuli occur simultaneously at different locations, the more spatially precise visual information dominates the perceived location of the multimodal event. Previous studies have examined neural interactions between spatially disparate auditory and visual stimuli [2-5], but none has found evidence for a visual influence on the auditory cortex that could be directly linked to the illusion of a shifted auditory percept. Here we utilized event-related brain potentials combined with event-related functional magnetic resonance imaging to demonstrate on a trial-by-trial basis that a precisely timed biasing of the left-right balance of auditory cortex activity by the discrepant visual input underlies the ventriloquist illusion. This cortical biasing may reflect a fundamental mechanism for integrating the auditory and visual components of environmental events, which ensures that the sounds are adaptively localized to the more reliable position provided by the visual input.
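The dominance of the more precise visual estimate is usually formalized as inverse-variance-weighted cue combination; a minimal sketch with illustrative numbers follows (this is the standard textbook model, not the paper's analysis):

```python
# Sketch: reliability-weighted audiovisual fusion, the standard account of why
# the precise visual location dominates the ventriloquist percept.
def fuse(x_aud, var_aud, x_vis, var_vis):
    """Minimum-variance (inverse-variance-weighted) estimate of location."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    return w_vis * x_vis + (1 - w_vis) * x_aud

# Auditory estimate at +10 deg (noisy), visual at 0 deg (precise):
print(fuse(x_aud=10.0, var_aud=16.0, x_vis=0.0, var_vis=1.0))  # ~0.6 deg, near visual
```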

11.
Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli by presenting them with playbacks of a thunderstorm and their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere, since the right hemisphere is used for processing vocalizations when they elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, and so is the specialisation of the right hemisphere for intense emotions.

12.
Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues is different for perception of vertical or horizontal motion, with spectrotemporal analysis likely to be more important for perceiving vertical shifts. In humans, functional imaging studies have shown that sound movement in the horizontal plane activates brain areas distinct from the primary auditory cortex, in parietal and frontal lobes and in the planum temporale. However, no previous work has examined activations for vertical sound movement. It is therefore difficult to generalize previous imaging studies, based on horizontal movement only, to multidimensional auditory space perception. Using externalized virtual-space sounds in a functional magnetic resonance imaging (fMRI) paradigm to investigate this, we compared vertical and horizontal shifts in sound location. A common bilateral network of brain areas was activated in response to both horizontal and vertical sound movement. This included the planum temporale, superior parietal cortex, and premotor cortex. Sounds perceived laterally in virtual space were associated with contralateral activation of the auditory cortex. These results demonstrate that sound movement in vertical and horizontal dimensions engages a common processing network in the human cerebral cortex and show that multidimensional spatial properties of sounds are processed at this level.
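A worked example of one such binaural cue: the classic Woodworth spherical-head approximation of the interaural time difference (ITD). The head radius and formula are textbook values, not parameters from this study.

```python
# Sketch: Woodworth ITD approximation for a rigid spherical head.
import numpy as np

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """ITD in seconds for a source at the given azimuth (front hemifield),
    with head radius in meters and speed of sound c in m/s."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

print(itd_woodworth(90) * 1e6)   # ~656 microseconds for a source at 90 degrees
```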

13.
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (the conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifield. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.
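A minimal sketch of how a near-threshold Gabor patch like those used here can be generated; the parameters are hypothetical, not the study's.

```python
# Sketch: a Gabor patch = sinusoidal grating windowed by a Gaussian envelope.
import numpy as np

def gabor(size=128, wavelength=16, sigma=20, theta=0.0, contrast=0.05):
    """Return a size x size Gabor; low contrast approximates a near-threshold target."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    x_t = x * np.cos(theta) + y * np.sin(theta)       # rotate the grating axis
    grating = np.cos(2 * np.pi * x_t / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return 0.5 + 0.5 * contrast * grating * envelope  # values around mid-gray

patch = gabor()
```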

14.
Facial expressions are important social communicators. In addition to communicating social information, the specific muscular movements of expressions may serve additional functional roles. For example, recalibration theory hypothesizes that the anger expression exaggerates facial cues of strength, an indicator of human fighting ability, to increase bargaining power in conflicts. Supporting this theory is evidence that faces displaying one element of an angry expression (e.g. lowered eyebrows) are perceived to be stronger than faces with opposite expression features (e.g. raised eyebrows for fear). The present study sought stronger evidence that more natural manipulations of facial anger also enhance perceived strength. We used expression aftereffects to bias perception of a neutral face towards anger and observed the effects on perceptions of strength. In addition, we tested the specificity of the strength-cue enhancing effect by examining whether two other expressions, fear and happy, also affected perceptions of strength. We found that, as predicted, a face biased to be perceived as angrier was rated as stronger compared to a baseline rating, whereas a face biased to be more fearful was rated as weaker, consistent with the purported function of fear as an act of submission. Interestingly, faces biased towards a happy expression were also perceived as stronger, though the effect was smaller than that for anger. Overall, the results supported the recalibration theory hypothesis that the anger expression enhances cues of strength to increase bargaining power in conflicts, but with some limitations regarding the specificity of the function to anger.

15.
For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
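The physical basis of the cue is simple: sound travels at roughly 343 m/s while light arrives effectively instantly, so the audio component of an event lags its visual component by about 2.9 ms per meter of distance. A small worked example:

```python
# Sketch: the physical audiovisual lag imposed by event distance.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def av_delay_ms(distance_m):
    """Audio lag behind the visual onset for an event at distance_m meters."""
    return distance_m / SPEED_OF_SOUND * 1000

for d in (1, 10, 34.3):
    print(f"{d:5.1f} m -> {av_delay_ms(d):6.1f} ms lag")   # 34.3 m -> 100 ms
```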

16.
Tinnitus is the perception of sound in the absence of an external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. In experiment 1, we used a go/no-go paradigm to evaluate target detection speed and inhibitory control in tinnitus participants (TP) and control subjects (CS), both in unimodal and bimodal conditions in the auditory and visual modalities. We also tested whether the sound frequency used for targets and distractors affected performance. We observed that TP were slower and made more false alarms than CS in all unimodal auditory conditions. TP were also slower than CS in the bimodal conditions. In addition, when comparing the response times in bimodal and auditory unimodal conditions, the expected gain in bimodal conditions was present in CS, but not in TP when tinnitus-matched frequency sounds were used as targets. In experiment 2, we tested the sensitivity to cross-modal interference in TP during auditory and visual go/no-go tasks in which each stimulus was preceded by an irrelevant pre-stimulus in the untested modality (e.g. a high-frequency auditory pre-stimulus in the visual go/no-go condition). We observed that TP had longer response times than CS and made more false alarms in all conditions. In addition, the highest false alarm rate occurred in TP when tinnitus-matched/high-frequency sounds were used as pre-stimuli. We conclude that inhibitory control is altered in TP and that TP are abnormally sensitive to cross-modal interference, reflecting difficulty ignoring irrelevant stimuli. The fact that the strongest interference effect was caused by tinnitus-like auditory stimulation is consistent with the hypothesis that such stimulation generates emotional responses that affect cognitive processing in TP. We postulate that executive function deficits play a key role in the perception and maintenance of tinnitus.

17.
Our ability to detect target sounds in complex acoustic backgrounds is often limited not by the ear's resolution, but by the brain's information-processing capacity. The neural mechanisms and loci of this “informational masking” are unknown. We combined magnetoencephalography with simultaneous behavioral measures in humans to investigate neural correlates of informational masking and auditory perceptual awareness in the auditory cortex. Cortical responses were sorted according to whether or not target sounds were detected by the listener in a complex, randomly varying multi-tone background known to produce informational masking. Detected target sounds elicited a prominent, long-latency response (50–250 ms), whereas undetected targets did not. In contrast, both detected and undetected targets produced equally robust auditory middle-latency, steady-state responses, presumably from the primary auditory cortex. These findings indicate that neural correlates of auditory awareness in informational masking emerge between early and late stages of processing within the auditory cortex.
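For concreteness, here is a sketch of how a randomly varying multi-tone masker with a "protected region" around the target is typically synthesized; the parameters are illustrative and do not reproduce this study's stimulus specification.

```python
# Sketch: random multi-tone masker plus a fixed-frequency target tone.
import numpy as np

fs, dur, target_f = 44100, 0.3, 1000.0
t = np.arange(int(fs * dur)) / fs

def masker(n_tones=8, protect_hz=200.0, seed=0):
    """Sum of random tones, none within +/- protect_hz of the target frequency."""
    rng = np.random.default_rng(seed)
    tones = []
    while len(tones) < n_tones:
        f = rng.uniform(200, 8000)
        if abs(f - target_f) > protect_hz:
            tones.append(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)))
    return np.sum(tones, axis=0) / n_tones

stimulus = masker() + 0.3 * np.sin(2 * np.pi * target_f * t)   # masker + target
```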

18.
Listeners consistently perceive approaching sounds to be closer than they actually are and perceptually underestimate the time to arrival of looming sound sources. In a natural environment, this underestimation results in more time than expected to evade or engage the source and affords a “margin of safety” that may provide a selective advantage. However, a key component in the proposed evolutionary origins of the perceptual bias is the appropriate timing of anticipatory motor behaviors. Here we show that listeners with poorer physical fitness respond sooner to looming sounds and with a larger margin of safety than listeners with better physical fitness. The anticipatory perceptual bias for looming sounds is negatively correlated with physical strength and positively correlated with recovery heart rate (a measure of aerobic fitness). The results suggest that the auditory perception of looming sounds may be modulated by the response capacity of the motor system.
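One way to see how intensity alone can, in principle, specify time to arrival: under the simplifying assumptions of inverse-square spreading and constant approach speed, the true time to arrival equals 2I/(dI/dt). This is a simplified physical derivation, not the authors' model.

```python
# Sketch: acoustic-tau estimate of time-to-arrival from intensity alone,
# assuming I ~ 1/r^2 and constant approach speed (then tau = 2*I / (dI/dt)).
import numpy as np

v, r0 = 5.0, 20.0                      # approach speed (m/s), starting distance (m)
t = np.linspace(0, 3, 1000)
r = r0 - v * t                         # distance over time
I = 1.0 / r**2                         # intensity under the inverse-square law

dI_dt = np.gradient(I, t)
tau_est = 2 * I / dI_dt                # intensity-based time-to-arrival estimate (s)
tau_true = r / v
print(tau_est[500], tau_true[500])     # the two agree under these assumptions
```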

19.
This paper reviews the basic aspects of auditory processing that play a role in the perception of speech. The frequency selectivity of the auditory system, as measured using masking experiments, is described and used to derive the internal representation of the spectrum (the excitation pattern) of speech sounds. The perception of timbre and distinctions in quality between vowels are related to both static and dynamic aspects of the spectra of sounds. The perception of pitch and its role in speech perception are described. Measures of the temporal resolution of the auditory system are described and a model of temporal resolution based on a sliding temporal integrator is outlined. The combined effects of frequency and temporal resolution can be modelled by calculation of the spectro-temporal excitation pattern, which gives good insight into the internal representation of speech sounds. For speech presented in quiet, the resolution of the auditory system in frequency and time usually markedly exceeds the resolution necessary for the identification or discrimination of speech sounds, which partly accounts for the robust nature of speech perception. However, for people with impaired hearing, speech perception is often much less robust.
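A minimal sketch of a sliding temporal integrator of the kind outlined above, realized here as a leaky (exponential) integrator applied to the squared waveform; the time constant is an assumed textbook-style value, and this simplifies the windowed model the review describes.

```python
# Sketch: leaky temporal integration of signal intensity (signal**2).
import numpy as np

def leaky_integrate(signal, fs, tau_ms=8.0):
    """Exponentially weighted running average of signal**2 (simple smoothing)."""
    alpha = 1.0 - np.exp(-1.0 / (fs * tau_ms / 1000.0))
    out = np.zeros_like(signal)
    acc = 0.0
    for i, s in enumerate(signal ** 2):
        acc += alpha * (s - acc)       # first-order low-pass update
        out[i] = acc
    return out

fs = 16000
envelope = leaky_integrate(np.random.randn(fs), fs)   # internal intensity trace
```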
