Similar documents
20 similar documents found (search time: 500 ms)
1.
This article addresses the intersection between perceptual estimates of head motion based on purely vestibular and purely visual sensation, by considering how nonvisual (e.g. vestibular and proprioceptive) sensory signals for head and eye motion can be combined with visual signals available from a single landmark to generate a complete perception of self-motion. In order to do this, mathematical dimensions of sensory signals and perceptual parameterizations of self-motion are evaluated, and equations for the sensory-to-perceptual transition are derived. With constant velocity translation and vision of a single point, it is shown that visual sensation allows only for the externalization, to the frame of reference given by the landmark, of an inertial self-motion estimate from nonvisual signals. However, it is also shown that, with nonzero translational acceleration, use of simple visual signals provides a biologically plausible strategy for integration of inertial acceleration sensation, to recover translational velocity. A dimension argument proves similar results for horizontal flow of any number of discrete visible points. The results provide insight into the convergence of visual and vestibular sensory signals for self-motion and indicate perceptual algorithms by which primitive visual and vestibular signals may be integrated for self-motion perception.

2.
A key goal for the perceptual system is to optimally combine information from all the senses that may be available in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models which use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm – oddity detection – and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity detection experiments – involving cues across and within modalities – for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure inference models to the new ‘oddity detection’ paradigm, and the resultant unified explanation of across- and within-modality cases, provide further evidence to suggest that structure inference may be a commonly evolved principle for combining perceptual information in the brain.
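As a point of reference for where MLI succeeds, the standard ideal-observer prediction for two fully corresponding cues is inverse-variance weighting. A minimal sketch (function and variable names are our own, not from the paper):

```python
from math import sqrt

def mli_fuse(m1, s1, m2, s2):
    """Ideal-observer (MLI) fusion of two cue estimates with standard
    deviations s1, s2: inverse-variance weighting of the two means."""
    w1, w2 = 1 / s1**2, 1 / s2**2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_sd = sqrt(1 / (w1 + w2))  # always below the smaller single-cue sd
    return fused_mean, fused_sd
```

Causal-inference models replace this mandatory fusion with a mixture over "common source" and "independent sources" hypotheses, which is what rescues the account when correspondence is uncertain.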

3.
Subliminal perception studies have shown that one can objectively discriminate a stimulus without subjectively perceiving it. We show how a minimalist framework based on Signal Detection Theory and Bayesian inference can account for this dissociation, by describing subjective and objective tasks with similar decision-theoretic mechanisms. Each of these tasks relies on distinct response classes, and therefore distinct priors and decision boundaries. As a result, they may reach different conclusions. By formalizing, within the same framework, forced-choice discrimination responses, subjective visibility reports and confidence ratings, we show that this decision model suffices to account for several classical characteristics of conscious and unconscious perception. Furthermore, the model provides a set of original predictions on the nonlinear profiles of discrimination performance obtained at various levels of visibility. We successfully test one such prediction in a novel experiment: when the degree of perceptual ambiguity between two visual symbols presented at perceptual threshold is varied continuously, identification performance varies quasi-linearly when the stimulus is unseen but in an ‘all-or-none’ manner when it is seen. The present model highlights how conscious and non-conscious decisions may correspond to distinct categorizations of the same stimulus encoded by a high-dimensional neuronal population vector.
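The dissociation described above can be illustrated with a toy Signal Detection Theory observer in which objective discrimination and subjective visibility read out the same internal evidence through different decision boundaries (the boundary values below are illustrative assumptions, not the paper's fitted parameters):

```python
from math import erf, sqrt

def p_exceeds(evidence_mean, boundary, noise_sd=1.0):
    """Probability that a Gaussian internal sample exceeds a decision boundary."""
    z = (evidence_mean - boundary) / noise_sd
    return 0.5 * (1 + erf(z / sqrt(2)))

# Weak stimulus: objective discrimination (neutral boundary at 0) is above
# chance, while "seen" reports (conservative boundary at 2) remain rare.
stimulus = 0.5
p_correct_side = p_exceeds(stimulus, 0.0)  # above chance: objectively discriminable
p_report_seen = p_exceeds(stimulus, 2.0)   # low: rarely reported as seen
```

Same evidence, two boundaries, two conclusions: discrimination above chance with visibility near floor, which is the subliminal-perception signature.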

4.
There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of the Bayesian inference. We propose a biologically plausible model, in which decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations, reflected in their failure to fully adapt to novel environments.
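The proposed comparison rule can be sketched as follows; the decay value and the equal weighting of the current tone are our own illustrative choices, not the fitted parameters from the study:

```python
def two_tone_responses(trials, decay=0.5):
    """For each (f1, f2) trial, report "second tone higher" by comparing f2
    with an exponentially decaying average of f1 and past first tones."""
    trace = None
    out = []
    for f1, f2 in trials:
        # blend the current first tone into the running history trace
        trace = f1 if trace is None else decay * trace + (1 - decay) * f1
        out.append(f2 > trace)
    return out

# Contraction bias: after a low-frequency history (100 Hz), a large f1 (300 Hz)
# is effectively underestimated, so f2 = 290 is judged "higher" than f1 = 300.
responses = two_tone_responses([(100, 110), (300, 290)])
```

The same trace mechanism also reproduces the overweighting of recent trials, since history enters only through the exponentially decaying average.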

5.
Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates—slow (45°/s) and fast (180°/s)—and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one’s overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing “standard” model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model.

6.
Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input.

7.
By formulating Helmholtz's ideas about perception, in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts: using constructs from statistical physics, the problems of inferring the causes of sensory input and learning the causal structure of their generation can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organisation and responses. In this paper, we show these perceptual processes are just one aspect of emergent behaviours of systems that conform to a free energy principle. The free energy considered here measures the difference between the probability distribution of environmental quantities that act on the system and an arbitrary distribution encoded by its configuration. The system can minimise free energy by changing its configuration to affect the way it samples the environment or change the distribution it encodes. These changes correspond to action and perception respectively and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment assumes that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at the models entailed by the brain and how minimisation of its free energy can explain its dynamics and structure.

8.
Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel account of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks.

9.
We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.

10.
Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception whenever one perceptual interpretation is dominant, and the instability of perception that causes perceptual dominance to alternate between perceptual interpretations upon extended viewing. This review summarizes several ways in which contextual information can help the brain resolve visual ambiguities and construct temporarily stable perceptual experiences. Temporal context through prior stimulation or internal brain states brought about by feedback from higher cortical processing levels may alter the response characteristics of specific neurons involved in rivalry resolution. Furthermore, spatial or crossmodal context may strengthen the neuronal representation of one of the possible perceptual interpretations and consequently bias the rivalry process towards it. We suggest that contextual influences on perceptual choices with ambiguous visual stimuli can be highly informative about the neuronal mechanisms of context-driven inference in the general processes of perceptual decision-making.

11.
We have previously tried to explain perceptual inference and learning under a free-energy principle that pursues Helmholtz’s agenda to understand the brain in terms of energy minimization. It is fairly easy to show that making inferences about the causes of sensory data can be cast as the minimization of a free-energy bound on the likelihood of sensory inputs, given an internal model of how they were caused. In this article, we consider what would happen if the data themselves were sampled to minimize this bound. It transpires that the ensuing active sampling or inference is mandated by ergodic arguments based on the very existence of adaptive agents. Furthermore, it accounts for many aspects of motor behavior, from retinal stabilization to goal-seeking. In particular, it suggests that motor control can be understood as fulfilling prior expectations about proprioceptive sensations. This formulation can explain why adaptive behavior emerges in biological agents and suggests a simple alternative to optimal control theory. We illustrate these points using simulations of oculomotor control and then apply the same principles to cued and goal-directed movements. In short, the free-energy formulation may provide an alternative perspective on motor control that places it in an intimate relationship with perception.

12.
Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can be potentially generalized to build models of hierarchical perceptual organization that include top-down and bottom-up interactions, for example, in other sensory modalities.

13.
A theory of cortical responses   Total citations: 22 (self-citations: 0, citations by others: 22)
This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain's free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain's attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. The aim of this article is to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective. In terms of cortical architectures, the theoretical treatment predicts that sensory cortex should be arranged hierarchically, that connections should be reciprocal and that forward and backward connections should show a functional asymmetry (forward connections are driving, whereas backward connections are both driving and modulatory). In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology, it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena such as repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, for example, priming and global precedence. The final focus of this article is on perceptual learning as measured with the MMN and the implications for empirical studies of coupling among cortical areas using evoked sensory responses.

14.
It is still an enigma how human subjects combine visual and vestibular inputs for their self-motion perception. Visual cues have the benefit of high spatial resolution but entail the danger of self-motion illusions. We performed psychophysical experiments (verbal estimates as well as pointer indications of perceived self-motion in space) in normal subjects (Ns) and patients with loss of vestibular function (Ps). Subjects were presented with horizontal sinusoidal rotations of an optokinetic pattern (OKP) alone (visual stimulus; 0.025-3.2 Hz; displacement amplitude, 8 degrees) or in combinations with rotations of a Bárány chair (vestibular stimulus; 0.025-0.4 Hz; +/- 8 degrees). We found that specific instructions to the subjects created different perceptual states in which their self-motion perception essentially reflected three processing steps during pure visual stimulation: i) When Ns were primed by a procedure based on induced motion and then estimated perceived self-rotation upon pure optokinetic stimulation (circular vection, CV), the CV has a gain close to unity up to frequencies of almost 0.8 Hz, followed by a sharp decrease at higher frequencies (i.e., characteristics resembling those of the optokinetic reflex, OKR, and of smooth pursuit, SP). ii) When Ns were instructed to "stare through" the optokinetic pattern, CV was absent at high frequency, but increasingly developed as frequency was decreased below 0.1 Hz. iii) When Ns "looked at" the optokinetic pattern (accurately tracked it with their eyes) CV was usually absent, even at low frequency. CV in Ps showed dynamics similar to those in Ns in condition i), independently of the instruction. During vestibular stimulation, self-motion perception in Ns fell from a maximum at 0.4 Hz to zero at 0.025 Hz. When vestibular stimulation was combined with visual stimulation while Ns "stared through" OKP, perception at low frequencies became modulated in magnitude. When Ns "looked" at OKP, this modulation was reduced, apart from the synergistic stimulus combination (OKP stationary) where magnitude was similar to that during "staring". The obtained gain and phase curves of the perception were incompatible with linear systems prediction. We therefore describe the present findings by a non-linear dynamic model in which the visual input is processed in three steps: i) It shows dynamics similar to those of OKR and SP; ii) it is shaped to complement the vestibular dynamics and is fused with a vestibular signal by linear summation; and iii) it can be suppressed by a visual-vestibular conflict mechanism when the visual scene is moving in space. Finally, an important element of the model is a velocity threshold of about 1.2 degrees/s which is instrumental in maintaining perceptual stability and in explaining the observed dynamics of perception. We conclude from the experimental and theoretical evidence that self-motion perception normally is related to the visual scene as a reference, while the vestibular input is used to check the kinematic state of the scene; if the scene appears to move, the visual signal becomes suppressed and perception is based on the vestibular cue.

15.
Recent studies have shown that human perception of body ownership is highly malleable. A well-known example is the rubber hand illusion (RHI) wherein ownership over a dummy hand is experienced, and is generally believed to require synchronized stroking of real and dummy hands. Our goal was to elucidate the computational principles governing this phenomenon. We adopted the Bayesian causal inference model of multisensory perception and applied it to visual, proprioceptive, and tactile stimuli. The model reproduced the RHI, predicted that it can occur without tactile stimulation, and that synchronous stroking would enhance it. Various measures of ownership across two experiments confirmed the predictions: a large percentage of individuals experienced the illusion in the absence of any tactile stimulation, and synchronous stroking strengthened the illusion. Altogether, these findings suggest that perception of body ownership is governed by Bayesian causal inference—i.e., the same rule that appears to govern the perception of the outside world.
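A minimal version of the causal-inference computation behind this account weighs a "common cause" hypothesis against "independent causes" for two noisy position signals; the uniform source prior and numerical integration below are simplifying assumptions, not the paper's exact model:

```python
from math import exp, pi, sqrt

def p_common_cause(x1, x2, sd1, sd2, p_common=0.5, lo=-50.0, hi=50.0, n=2001):
    """Posterior probability that two noisy samples (e.g. visual and
    proprioceptive hand positions) share one source, integrating a uniform
    source prior on [lo, hi] numerically."""
    ds = (hi - lo) / (n - 1)
    grid = [lo + i * ds for i in range(n)]
    prior = 1.0 / (hi - lo)
    def npdf(x, mu, sd):
        return exp(-(x - mu) ** 2 / (2 * sd ** 2)) / (sd * sqrt(2 * pi))
    # C = 1: a single source s generates both samples
    like_one = sum(npdf(x1, s, sd1) * npdf(x2, s, sd2) * prior for s in grid) * ds
    # C = 2: each sample comes from its own independent source
    like_two = (sum(npdf(x1, s, sd1) * prior for s in grid) * ds *
                sum(npdf(x2, s, sd2) * prior for s in grid) * ds)
    num = like_one * p_common
    return num / (num + like_two * (1 - p_common))
```

Nearby visual and proprioceptive hand positions yield a high common-cause posterior (ownership of the seen hand), which falls off as spatial disparity grows; stroking synchrony contributes a temporal cue of the same kind.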

16.
Gilet E, Diard J, Bessière P. PLoS ONE 2011;6(6):e20387
In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception-action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action-Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments.

17.
Perception is often characterized computationally as an inference process in which uncertain or ambiguous sensory inputs are combined with prior expectations. Although behavioral studies have shown that observers can change their prior expectations in the context of a task, robust neural signatures of task-specific priors have been elusive. Here, we analytically derive such signatures under the general assumption that the responses of sensory neurons encode posterior beliefs that combine sensory inputs with task-specific expectations. Specifically, we derive predictions for the task-dependence of correlated neural variability and decision-related signals in sensory neurons. The qualitative aspects of our results are parameter-free and specific to the statistics of each task. The predictions for correlated variability also differ from predictions of classic feedforward models of sensory processing and are therefore a strong test of theories of hierarchical Bayesian inference in the brain. Importantly, we find that Bayesian learning predicts an increase in so-called “differential correlations” as the observer’s internal model learns the stimulus distribution, and the observer’s behavioral performance improves. This stands in contrast to classic feedforward encoding/decoding models of sensory processing, since such correlations are fundamentally information-limiting. We find support for our predictions in data from existing neurophysiological studies across a variety of tasks and brain areas. Finally, we show in simulation how measurements of sensory neural responses can reveal information about a subject’s internal beliefs about the task. Taken together, our results reinterpret task-dependent sources of neural covariability as signatures of Bayesian inference and provide new insights into their cause and their function.

18.
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

A combination of psychophysics, computational modelling and fMRI reveals novel insights into how the brain controls the binding of information across the senses, such as the voice and lip movements of a speaker.

19.
Friston K. Neuron 2011;72(3):488-498
This article poses a controversial question: is optimal control theory useful for understanding motor behavior or is it a misdirection? This question is becoming acute as people start to conflate internal models in motor control and perception (Poeppel et al., 2008; Hickok et al., 2011). However, the forward models in motor control are not the generative models used in perceptual inference. This Perspective tries to highlight the differences between internal models in motor control and perception and asks whether optimal control is the right way to think about things. The issues considered here may have broader implications for optimal decision theory and Bayesian approaches to learning and behavior in general.

20.
Approximate Bayesian computation (ABC) substitutes simulation for analytic models in Bayesian inference. Simulating evolutionary scenarios under Kimura’s stepping stone model (KSS) might therefore allow inference over spatial genetic process where analytical results are difficult to obtain. ABC first creates a reference set of simulations and would proceed by comparing summary statistics over KSS simulations to summary statistics from localities sampled in the field, but: comparison of which localities and stepping stones? Identical stepping stones can be arranged so two localities fall in the same stepping stone, nearest or diagonal neighbours, or without contact. None is intrinsically correct, yet some choice must be made and this affects inference. We explore a Bayesian strategy for mapping field observations onto discrete stepping stones. We make Sundial, for projecting field data onto the plane, available. We generalize KSS over regular tilings of the plane. We show Bayesian averaging over the mapping between a continuous field area and discrete stepping stones improves the fit between KSS and isolation by distance expectations. We make Tiler Durden available for carrying out this Bayesian averaging. We describe a novel parameterization of KSS based on Wright’s neighbourhood size, placing an upper bound on the geographic area represented by a stepping stone and make it available as mVector. We generalize spatial coalescence recursions to continuous and discrete space cases and use these to numerically solve for KSS coalescence previously examined only using simulation. We thus provide applied and analytical resources for comparison of stepping stone simulations with field observations.
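The ABC step itself is conceptually simple; a generic rejection-sampler sketch (the toy simulator, prior, and tolerance below are illustrative and unrelated to the KSS machinery):

```python
import random

def abc_rejection(observed, simulate, sample_prior, tol, n_draws=5000):
    """Approximate Bayesian computation by rejection: keep parameter draws
    whose simulated summary statistic lands within tol of the observed one."""
    return [theta for theta in (sample_prior() for _ in range(n_draws))
            if abs(simulate(theta) - observed) <= tol]

# Toy use: infer a location parameter whose "summary statistic" is itself,
# with a uniform prior on [0, 10] and an observed statistic of 3.0.
random.seed(0)
posterior = abc_rejection(3.0, lambda t: t, lambda: random.uniform(0, 10), 0.5)
```

The paper's point is that `simulate` and `observed` must live in comparable spaces, which is exactly where the mapping of field localities onto stepping stones matters.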
