Similar documents
20 similar documents found.
1.
Temporal information is often contained in multi-sensory stimuli, but it is currently unknown how the brain combines e.g., visual and auditory cues into a coherent percept of time. The existing studies of cross-modal time perception mainly support the "modality appropriateness hypothesis", i.e., the domination of auditory temporal cues over visual ones because of the higher precision of audition for time perception. However, these studies suffer from methodological problems and conflicting results. We introduce a novel experimental paradigm to examine cross-modal time perception by combining an auditory time perception task with a visually guided motor task, requiring participants to follow an elliptical movement on a screen with a robotic manipulandum. We find that subjective duration is distorted according to the speed of visually observed movement: the faster the visual motion, the longer the perceived duration. In contrast, the actual execution of the arm movement does not contribute to this effect, but impairs discrimination performance through dual-task interference. We also show that additional training of the motor task attenuates the interference, but does not affect the distortion of subjective duration. The study demonstrates a direct influence of visual motion on auditory temporal representations that is independent of attentional modulation. At the same time, it provides causal support for the notion that time perception and continuous motor timing rely on separate mechanisms, a proposal that was formerly supported by correlational evidence only. The results constitute a counterexample to the modality appropriateness hypothesis and are best explained by Bayesian integration of modality-specific temporal information into a centralized "temporal hub".

2.
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

A combination of psychophysics, computational modelling and fMRI reveals novel insights into how the brain controls the binding of information across the senses, such as the voice and lip movements of a speaker.

3.
The purpose of this study was to determine whether rhythmic movements or cues enhance the anticipatory postural adjustment (APA) of gait initiation. Healthy humans initiated gait in response to an auditory start cue (third cue). A first auditory cue was given 8 s before the start cue, and a second auditory cue was given 3 s before the start cue. The participants performed a rhythmic medio-lateral weight shift (ML-WS session), a rhythmic anterior-posterior weight shift (AP-WS session), or a rhythmic arm swing (arm swing session) in the interval between the first and second cues. In the rhythmic cues session, rhythmic auditory cues with a frequency of 1 Hz were given in this interval. In the stationary session, the participants maintained a stationary stance in this interval. The APA and initial step movement preceded by these rhythmic movements or cues were compared with those in the stationary session. The temporal characteristics of the initial step movement of gait initiation were not changed by the rhythmic movements or cues. The medio-lateral displacement of the APA in the ML-WS and arm swing sessions was significantly greater than that in the stationary session. The anterior-posterior displacement of the APA in the rhythmic cues and arm swing sessions was significantly greater than that in the stationary session. Taken together, rhythmic movements and cues enhance the APA of gait initiation. The present finding may motivate future investigation of rhythmic movements or cues as a preparatory activity to enlarge the small APA of gait initiation in patients with Parkinson's disease.

4.
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we tested how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the direction of a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data with a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration versus separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.

5.
When correlation implies causation in multisensory integration
Inferring which signals have a common underlying cause, and hence should be integrated, represents a primary challenge for a perceptual system dealing with multiple sensory inputs [1-3]. This challenge is often referred to as the correspondence problem or causal inference. Previous research has demonstrated that spatiotemporal cues, along with prior knowledge, are exploited by the human brain to solve this problem [4-9]. Here we explore the role of correlation between the fine temporal structure of auditory and visual signals in causal inference. Specifically, we investigated whether correlated signals are inferred to originate from the same distal event and hence are integrated optimally [10]. In a localization task with visual, auditory, and combined audiovisual targets, the improvement in precision for combined relative to unimodal targets was statistically optimal only when audiovisual signals were correlated. This result demonstrates that humans use the similarity in the temporal structure of multisensory signals to solve the correspondence problem, hence inferring causation from correlation.
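The "statistically optimal" improvement in precision referred to here is usually benchmarked against reliability-weighted (maximum-likelihood) cue fusion, in which each cue is weighted by its inverse variance. A minimal sketch in Python; the locations and variances below are illustrative values, not data from the study:

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) fusion of two Gaussian cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # weight of the auditory cue
    mu_av = w_a * mu_a + (1 - w_a) * mu_v          # fused location estimate
    var_av = 1 / (1 / var_a + 1 / var_v)           # fused variance: never larger than either cue's
    return mu_av, var_av

# Illustrative numbers: a noisy auditory cue at +2 deg and a precise visual cue at 0 deg.
print(fuse(mu_a=2.0, var_a=4.0, mu_v=0.0, var_v=1.0))  # (0.4, 0.8): pulled toward the reliable cue
```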

6.
Complex self-motion stimulations in the dark can be powerfully disorienting and can create illusory motion percepts. In the absence of visual cues, the brain has to use angular and linear acceleration information provided by the vestibular canals and the otoliths, respectively. However, these sensors are inaccurate and ambiguous. We propose that the brain processes these signals in a statistically optimal fashion, reproducing the rules of Bayesian inference. We also suggest that this processing is related to the statistics of natural head movements, which would create a perceptual bias in favour of low velocities and accelerations. We have constructed a Bayesian model of self-motion perception based on these assumptions. Using this model, we have simulated perceptual responses to centrifugation and off-vertical axis rotation and obtained close agreement with experimental findings. This demonstrates how Bayesian inference provides a quantitative link between sensor noise and ambiguity, the statistics of head movement, and the perception of self-motion.
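The proposed bias toward low velocities follows from combining a noisy sensory likelihood with a prior centred on zero. The sketch below assumes, for illustration only, that both are one-dimensional Gaussians; the full model in the paper is considerably richer (canal dynamics, otolith ambiguity), and all numbers are made up:

```python
def map_velocity(sensed_v, sensor_var, prior_var):
    """Posterior mean of velocity under a zero-mean Gaussian prior: the estimate
    shrinks toward zero, and shrinks more when the sensor is noisy."""
    gain = prior_var / (prior_var + sensor_var)
    return gain * sensed_v

# A 30 deg/s rotation sensed by a noisy canal-like sensor is perceived as much slower.
print(map_velocity(sensed_v=30.0, sensor_var=100.0, prior_var=25.0))  # 6.0 deg/s
```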

7.
Sensory cues in the environment can predict the availability of reward. Through experience, humans and animals learn these predictions and use them to guide their actions. For example, we can learn to discriminate chanterelles from ordinary champignons through experience. Assuming the development of a taste for the complex and lingering flavors of chanterelles, we therefore learn to value the same action, picking mushrooms, differentially depending upon the appearance of a mushroom. One major goal of cognitive neuroscience is to understand the neural mechanisms that underlie this sort of learning. Because the acquisition of rewards motivates much behavior, recent efforts have focused on describing the neural signals related to learning the value of stimuli and actions. Neurons in the basal ganglia, in midbrain dopamine areas, in frontal and parietal cortices and in other brain areas, all modulate their activity in relation to aspects of learning. By training monkeys on various behavioral tasks, recent studies have begun to characterize how neural signals represent distinct processes, such as the timing of events, motivation, absolute (objective) and relative (subjective) valuation, and the formation of associative links between stimuli and potential actions. In addition, a number of studies have either further characterized dopamine signals or sought to determine how such signaling might interact with target structures, such as the striatum and rhinal cortex, to underlie learning.

8.
We often need to learn how to move based on a single performance measure that reflects the overall success of our movements. However, movements have many properties, such as their trajectories, speeds, and the timing of end-points; the brain therefore needs to decide which properties of movements should be improved, i.e., it needs to solve the credit assignment problem. Currently, little is known about how humans solve credit assignment problems in the context of reinforcement learning. Here we tested how human participants solve such problems during a trajectory-learning task. Without an explicitly defined target movement, participants made hand reaches and received monetary rewards as feedback on a trial-by-trial basis. The curvature and direction of the attempted reach trajectories determined the monetary rewards received, in a manner that could be manipulated experimentally. Based on the history of action-reward pairs, participants quickly solved the credit assignment problem and learned the implicit payoff function. A Bayesian credit-assignment model with built-in forgetting accurately predicts their trial-by-trial learning.
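One common way to formalize trial-by-trial credit assignment with forgetting is a recursive (Kalman-style) Bayesian regression of reward onto movement properties, in which the weight given to old trials decays over time. The sketch below is a generic illustration of that idea, not the authors' fitted model; the feature choice (curvature, direction), noise terms, and forgetting factor are all assumptions:

```python
import numpy as np

def credit_update(w, P, features, reward, obs_var=1.0, forget=1.05):
    """One Kalman-filter step for a linear payoff model reward ≈ w @ features.
    Inflating the weight covariance P each trial (forget > 1) discounts old evidence."""
    P = P * forget                              # forgetting: older trials count less
    err = reward - w @ features                 # reward prediction error on this trial
    S = features @ P @ features + obs_var       # predicted variance of that error
    K = P @ features / S                        # Kalman gain: how much credit each feature gets
    w = w + K * err                             # assign credit to curvature and direction
    P = P - np.outer(K, features @ P)           # tighten the posterior over the weights
    return w, P

w, P = np.zeros(2), np.eye(2)                   # features: [curvature, direction]
trials = [(np.array([0.2, 1.0]), 3.0), (np.array([0.8, -0.5]), -1.0)]
for feats, r in trials:
    w, P = credit_update(w, P, feats, r)
print(w)                                        # weights of the learned implicit payoff function
```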

9.
In recent years, a great deal of research within the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signals encoded by each ear. Parameters for the optimal Bayesian model are determined based on psychoacoustic discrimination experiments on interaural time difference and sound intensity. Without regard to how the human auditory system actually processes information, we examine the best possible localization performance that could be achieved based only on analysis of the input information, given the constraints of the normal auditory system. We show that the model performance is generally in good agreement with the actual human localization performance, as assessed in a meta-analysis of many localization experiments (Best et al. in Principles and applications of spatial hearing, pp 14–23. World Scientific Publishing, Singapore, 2011). We believe this approach can shed new light on the optimality (or otherwise) of human sound localization, especially with regard to the level of uncertainty in the input information. Moreover, the proposed model allows one to study the relative importance of various (combinations of) acoustic cues for spatial localization and enables a prediction of which cues are most informative and therefore likely to be used by humans in various circumstances.

10.
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.
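The three levels described here map onto three estimators: pure segregation (use the auditory signal alone), forced fusion (reliability-weighted averaging), and Bayesian causal inference, which mixes the two according to the posterior probability of a common cause. The sketch below is a simplified, one-dimensional illustration that scores the causal structures from the audiovisual disparity only; the prior over common causes, the noise levels, and the spatial-prior width are all assumed values, not the study's fitted parameters:

```python
import numpy as np
from scipy.stats import norm

def bci_auditory_estimate(x_a, x_v, var_a, var_v, prior_var=100.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian causal inference."""
    # Forced fusion: reliability-weighted combination, assuming a common source.
    fused = (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v)
    # Segregation: the auditory signal on its own, assuming independent sources.
    segregated = x_a
    # How well each causal structure explains the audiovisual disparity
    # (sources drawn from a zero-mean Gaussian spatial prior with variance prior_var).
    like_common = norm.pdf(x_a - x_v, 0, np.sqrt(var_a + var_v))
    like_indep = norm.pdf(x_a - x_v, 0, np.sqrt(var_a + var_v + 2 * prior_var))
    post_common = like_common * p_common / (like_common * p_common + like_indep * (1 - p_common))
    # Model averaging: mix the two strategies by the posterior over causal structures.
    return post_common * fused + (1 - post_common) * segregated

print(bci_auditory_estimate(x_a=5.0, x_v=0.0, var_a=9.0, var_v=1.0))   # small disparity: pulled toward the fused estimate
print(bci_auditory_estimate(x_a=30.0, x_v=0.0, var_a=9.0, var_v=1.0))  # large disparity: essentially segregated
```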

11.
Uncertainty, neuromodulation, and attention
Yu AJ, Dayan P. Neuron. 2005;46(4):681-692.
Uncertainty in various forms plagues our interactions with the environment. In a Bayesian statistical framework, optimal inference and prediction, based on unreliable observations in changing contexts, require the representation and manipulation of different forms of uncertainty. We propose that the neuromodulators acetylcholine and norepinephrine play a major role in the brain's implementation of these uncertainty computations. Acetylcholine signals expected uncertainty, coming from known unreliability of predictive cues within a context. Norepinephrine signals unexpected uncertainty, as when unsignaled context switches produce strongly unexpected observations. These uncertainty signals interact to enable optimal inference and learning in noisy and changeable environments. This formulation is consistent with a wealth of physiological, pharmacological, and behavioral data implicating acetylcholine and norepinephrine in specific aspects of a range of cognitive processes. Moreover, the model suggests a class of attentional cueing tasks that involve both neuromodulators and shows how their interactions may be part-antagonistic, part-synergistic.

12.
There has been much interest in understanding the evolution of social learning. Investigators have tried to understand when natural selection will favor individuals who imitate others, how imitators should deal with the fact that available models may exhibit different behaviors, and how social and individual learning should interact. In all of this work, social learning and individual learning have been treated as alternative, conceptually distinct processes. Here we present a Bayesian model in which both individual and social learning arise from a single inferential process. Individuals use Bayesian inference to combine social and nonsocial cues about the current state of the environment. This model indicates that natural selection favors individuals who place heavy weight on social cues when the environment changes slowly or when its state cannot be well predicted using nonsocial cues. It also indicates that a conformist bias should be a universal aspect of social learning.

13.
We often perform movements and actions on the basis of internal motivations and without any explicit instructions or cues. One common example of such behaviors is our ability to initiate movements solely on the basis of an internally generated sense of the passage of time. In order to isolate the neuronal signals responsible for such timed behaviors, we devised a task that requires nonhuman primates to move their eyes consistently at regular time intervals in the absence of any external stimulus events and without an immediate expectation of reward. Despite the lack of sensory information, we found that animals were remarkably precise and consistent in timed behaviors, with standard deviations on the order of 100 ms. To examine the potential neural basis of this precision, we recorded from single neurons in the lateral intraparietal area (LIP), which has been implicated in the planning and execution of eye movements. In contrast to previous studies that observed a build-up of activity associated with the passage of time, we found that LIP activity decreased at a constant rate between timed movements. Moreover, the magnitude of activity was predictive of the timing of the impending movement. Interestingly, this relationship depended on eye movement direction: activity was negatively correlated with timing when the upcoming saccade was toward the neuron's response field and positively correlated when the upcoming saccade was directed away from the response field. This suggests that LIP activity encodes timed movements in a push-pull manner by signaling for both saccade initiation towards one target and prolonged fixation for the other target. Thus timed movements in this task appear to reflect the competition between local populations of task relevant neurons rather than a global timing signal.

14.
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”.

15.
Diverse animal species use multimodal communication signals to coordinate reproductive behavior. Despite active research in this field, the brain mechanisms underlying multimodal communication remain poorly understood. Similar to humans and many mammalian species, anurans often produce auditory signals accompanied by conspicuous visual cues (e.g., vocal sac inflation). In this study, we used video playbacks to determine the role of vocal-sac inflation in little torrent frogs (Amolops torrentis). Then we exposed females to blank, visual, auditory, and audiovisual stimuli and analyzed whole-brain tissue gene expression changes using RNA-seq. The results showed that both auditory cues (i.e., male advertisement calls) and visual cues were attractive to female frogs, although auditory cues were more attractive than visual cues. Females preferred simultaneous bimodal cues to unimodal cues. The hierarchical clustering of differentially expressed genes showed a close relationship between neurogenomic states and momentarily expressed sexual signals. We also found that the Gene Ontology terms and KEGG pathways involved in energy metabolism were mostly increased in the blank contrast versus the visual, acoustic, or audiovisual stimuli, indicating that brain energy use may play an important role in the response to these stimuli. In sum, behavioral and neurogenomic responses to acoustic and visual cues are correlated in female little torrent frogs.

16.
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
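To make model (a) concrete: under a purely reliability-based account, the amount by which each modality's map shifts after exposure to a fixed audiovisual discrepancy is proportional to the other cue's relative reliability. A small sketch with an assumed learning rate and illustrative variances (the fixed-ratio model would replace the weights with constants, and the causal-inference model would additionally scale the shift by the posterior probability of a common source):

```python
def reliability_based_recalibration(bias_a, bias_v, discrepancy, var_a, var_v, rate=0.1):
    """Shift each modality's spatial mapping toward the other cue:
    the less reliable cue (larger variance) is recalibrated more."""
    w_a = var_a / (var_a + var_v)                 # auditory share of the total unreliability
    bias_a += rate * w_a * discrepancy            # audition is pulled toward vision
    bias_v -= rate * (1 - w_a) * discrepancy      # vision is pulled (slightly) toward audition
    return bias_a, bias_v

# Noisy audition (var_a = 9) and precise vision (var_v = 1): most of a 10-deg
# audiovisual discrepancy is absorbed by the auditory map.
print(reliability_based_recalibration(0.0, 0.0, discrepancy=10.0, var_a=9.0, var_v=1.0))  # (0.9, -0.1)
```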

17.
The auditory systems of humans and many other species use the difference in the time of arrival of acoustic signals at the two ears to compute the lateral position of sound sources. This computation is assumed to initially occur in an assembly of neurons organized along a frequency-by-delay surface. Mathematically, the computations are equivalent to a two-dimensional cross-correlation of the input signals at the two ears, with the position of the peak activity along this surface designating the position of the source in space. In this study, partially correlated signals to the two ears are used to probe the mechanisms for encoding spatial cues in stationary or dynamic (moving) signals. It is demonstrated that a cross-correlation model of the auditory periphery coupled with statistical decision theory can predict the patterns of performance by human subjects for both stationary and motion stimuli as a function of stimulus decorrelation. Implications of these findings for the existence of a unique cortical motion system are discussed.
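The core computation, taking the lag at which the two ear signals are maximally correlated as the interaural time difference (ITD), can be sketched in a few lines. The signal, delay, and noise values below are illustrative assumptions, and the sketch omits the frequency-by-delay decomposition and the decision-theoretic stage described in the abstract:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the ITD as the lag (right relative to left) that maximizes
    the interaural cross-correlation; positive means the right ear lags."""
    xcorr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(xcorr)] / fs

fs = 44100
n = int(0.05 * fs)                                   # 50 ms broadband noise burst
source = np.random.randn(n)
delay = 20                                           # samples ≈ 0.45 ms: source nearer the left ear
left = source + 0.1 * np.random.randn(n)
right = np.r_[np.zeros(delay), source[:-delay]] + 0.1 * np.random.randn(n)
print(estimate_itd(left, right, fs))                 # ≈ 4.5e-4 s
```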

18.
The aim of this study was to verify the contribution of haptic and auditory cues to the quick discrimination of an object's mass. Ten subjects had to use the right hand to brake the movement of a cup caused by the impact of a falling object that could have one of two different masses. They were asked to perform a quick left-hand movement if the object was of the prescribed mass, judging from the proprioceptive and auditory cues produced by the object's contact with the cup, and not to react to the other object. Three conditions were established: with both proprioceptive and auditory cues, with only the proprioceptive cue, or with only the auditory cue. When proprioceptive information was available, subjects advanced their response times for the impact of the heavy object compared with that of the light object. The addition of an auditory cue did not improve this advancement for the heavy object. We conclude that when a motor response has to be chosen according to different combinations of auditory and proprioceptive load-related information, subjects rely mainly on haptic information to respond quickly, and that auditory cues do not add relevant information that could improve the speed of a correct response.

19.
Rigoulot S, Pell MD. PLoS ONE. 2012;7(1):e30740.
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance ("Someone migged the pazing") uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

20.
Capturing nature’s statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question of how neural coding supports Bayesian inference includes how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl’s midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
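The read-out described here, each neuron voting for its preferred location in proportion to its firing rate, is easy to write down. A minimal sketch with Gaussian tuning curves; shifting the receptive fields against the direction of target motion, as the model proposes, simply offsets the tuning centres, so the same decoder reads out a predicted rather than the current position. Tuning widths, spacing, and the shift are illustrative assumptions:

```python
import numpy as np

preferred = np.linspace(-90, 90, 37)                 # labelled preferred azimuths (deg), one per neuron

def rates(stimulus_deg, shift_deg=0.0, width=15.0):
    """Gaussian tuning curves; a nonzero shift_deg displaces the effective
    receptive fields, mimicking fields that anticipate a moving source."""
    return np.exp(-0.5 * ((stimulus_deg - (preferred + shift_deg)) / width) ** 2)

def population_vector(r):
    """Decode location as the rate-weighted average of the labelled preferred azimuths."""
    return np.sum(r * preferred) / np.sum(r)

print(population_vector(rates(20.0)))                  # ≈ 20 deg: veridical read-out for a static source
print(population_vector(rates(20.0, shift_deg=-5.0)))  # ≈ 25 deg: the read-out leads the stimulus
```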
