Similar documents (20 results)
1.
When dealing with natural scenes, sensory systems have to process an often messy and ambiguous flow of information. A stable perceptual organization nevertheless has to be achieved in order to guide behavior. The neural mechanisms involved can be highlighted by intrinsically ambiguous situations. In such cases, bistable perception occurs: distinct interpretations of the unchanging stimulus alternate spontaneously in the mind of the observer. Bistable stimuli have been used extensively for more than two centuries to study visual perception. Here we demonstrate that bistable perception also occurs in the auditory modality. We compared the temporal dynamics of percept alternations observed during auditory streaming with those observed for visual plaids and the susceptibilities of both modalities to volitional control. Strong similarities indicate that auditory and visual alternations share common principles of perceptual bistability. The absence of correlation across modalities for subject-specific biases, however, suggests that these common principles are implemented at least partly independently across sensory modalities. We propose that visual and auditory perceptual organization could rely on distributed but functionally similar neural competition mechanisms aimed at resolving sensory ambiguities.
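The "temporal dynamics of percept alternations" compared here are usually summarized by the distribution of dominance durations between perceptual switches, which in both vision and audition is commonly described by a gamma distribution. The sketch below illustrates that kind of analysis on hypothetical duration data; the variable names and the gamma fit are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical dominance durations (seconds) between reported perceptual switches,
# e.g. "one stream" vs. "two streams" in auditory streaming.
durations = np.array([2.1, 5.3, 3.8, 1.9, 7.2, 4.4, 2.7, 6.1, 3.3, 4.9])

# Fit a gamma distribution with location fixed at zero, a common description
# of dominance-duration histograms in bistable perception.
shape, loc, scale = stats.gamma.fit(durations, floc=0)
print(f"gamma shape={shape:.2f}, scale={scale:.2f} s, mean={shape * scale:.2f} s")

# The coefficient of variation is often compared across modalities
# (auditory streaming vs. visual plaids) to test for shared dynamics.
cv = durations.std(ddof=1) / durations.mean()
print(f"coefficient of variation = {cv:.2f}")
```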

2.
This special issue presents research concerning multistable perception in different sensory modalities. Multistability occurs when a single physical stimulus produces alternations between different subjective percepts. Multistability was first described for vision, where it occurs, for example, when different stimuli are presented to the two eyes or for certain ambiguous figures. It has since been described for other sensory modalities, including audition, touch and olfaction. The key features of multistability are: (i) stimuli have more than one plausible perceptual organization; (ii) these organizations are not compatible with each other. We argue here that most if not all cases of multistability are based on competition in selecting and binding stimulus information. Binding refers to the process whereby the different attributes of objects in the environment, as represented in the sensory array, are bound together within our perceptual systems, to provide a coherent interpretation of the world around us. We argue that multistability can be used as a method for studying binding processes within and across sensory modalities. We emphasize this theme while presenting an outline of the papers in this issue. We end with some thoughts about open directions and avenues for further research.

3.
Integrating information across sensory domains to construct a unified representation of multi-sensory signals is a fundamental characteristic of perception in ecological contexts. One provocative hypothesis deriving from neurophysiology suggests that there exists early and direct cross-modal phase modulation. We provide evidence, based on magnetoencephalography (MEG) recordings from participants viewing audiovisual movies, that low-frequency neuronal information lies at the basis of the synergistic coordination of information across auditory and visual streams. In particular, the phase of the 2–7 Hz delta and theta band responses carries robust (in single trials) and usable information (for parsing the temporal structure) about stimulus dynamics in both sensory modalities concurrently. These experiments are the first to show in humans that a particular cortical mechanism, delta-theta phase modulation across early sensory areas, plays an important “active” role in continuously tracking naturalistic audio-visual streams, carrying dynamic multi-sensory information, and reflecting cross-sensory interaction in real time.
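The phase information described here is typically extracted by band-pass filtering a sensor trace in the 2–7 Hz range and taking the phase of the analytic signal. A minimal sketch of that step is given below; the filter order, function names and the simulated trace are illustrative assumptions, not the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_theta_phase(signal, fs, low=2.0, high=7.0):
    """Return the instantaneous phase of the 2-7 Hz component of a signal."""
    b, a = butter(N=3, Wn=[low, high], btype="bandpass", fs=fs)
    narrowband = filtfilt(b, a, signal)       # zero-phase band-pass filter
    return np.angle(hilbert(narrowband))      # phase of the analytic signal

# Example: phase of a simulated 1 kHz sensor trace dominated by a 4 Hz rhythm.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(t.size)
phase = delta_theta_phase(trace, fs)
```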

4.
Neuronal responses to ongoing stimulation in many systems change over time, or “adapt.” Despite the ubiquity of adaptation, its effects on the stimulus information carried by neurons are often unknown. Here we examine how adaptation affects sensory coding in barrel cortex. We used spike-triggered covariance analysis of single-neuron responses to continuous, rapidly varying vibrissa motion stimuli, recorded in anesthetized rats. Changes in stimulus statistics induced spike rate adaptation over hundreds of milliseconds. Vibrissa motion encoding changed with adaptation as follows. In every neuron that showed rate adaptation, the input–output tuning function scaled with the changes in stimulus distribution, allowing the neurons to maintain the quantity of information conveyed about stimulus features. A single neuron that did not show rate adaptation also lacked input–output rescaling and did not maintain information across changes in stimulus statistics. Therefore, in barrel cortex, rate adaptation occurs on a slow timescale relative to the features driving spikes and is associated with gain rescaling matched to the stimulus distribution. Our results suggest that adaptation enhances tactile representations in primary somatosensory cortex, where they could directly influence perceptual decisions.
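Spike-triggered covariance (STC), the analysis used above, identifies the stimulus features that drive spiking by comparing the covariance of stimulus segments preceding spikes with that of the raw stimulus ensemble. The sketch below shows the core computation on a one-dimensional stimulus; the array names and window length are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def spike_triggered_covariance(stimulus, spike_indices, window=30):
    """Return the spike-triggered average and STC eigen-features.

    stimulus      : 1-D array of the (e.g. vibrissa motion) stimulus
    spike_indices : sample indices at which spikes occurred
    window        : number of samples preceding each spike to analyse
    """
    # Stimulus segment preceding each spike.
    segments = np.array([stimulus[i - window:i]
                         for i in spike_indices if i >= window])
    sta = segments.mean(axis=0)                       # spike-triggered average

    # Prior (stimulus-wide) covariance for comparison.
    prior = np.array([stimulus[i - window:i]
                      for i in range(window, len(stimulus))])
    stc = np.cov(segments, rowvar=False) - np.cov(prior, rowvar=False)

    # Eigenvectors with large-magnitude eigenvalues are candidate spike-driving features.
    eigvals, eigvecs = np.linalg.eigh(stc)
    return sta, eigvals, eigvecs
```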

5.
Implicit multisensory associations influence voice recognition
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e., voices and faces, or arbitrary multimodal combinations, i.e., voices and written names, and ring tones with cell phones or the brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations thereafter become available for unimodal perception and facilitate object recognition. These data suggest that for natural objects, effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

6.
Animals must continuously evaluate sensory information to select the preferable among possible actions in a given context, including the option to wait for more information before committing to another course of action. In experimental sensory decision tasks that replicate these features, reaction time distributions can be informative about the implicit rules by which animals determine when to commit and what to do. We measured reaction times of Long-Evans rats discriminating the direction of motion in a coherent random dot motion stimulus, using a self-paced two-alternative forced-choice (2-AFC) reaction time task. Our main findings are: (1) When motion strength was constant across trials, the error trials had shorter reaction times than correct trials; in other words, accuracy increased with response latency. (2) When motion strength was varied in randomly interleaved trials, accuracy increased with motion strength, whereas reaction time decreased. (3) Accuracy increased with reaction time for each motion strength considered separately, and in the interleaved motion strength experiment overall. (4) When stimulus duration was limited, accuracy improved with stimulus duration, whereas reaction time decreased. (5) Accuracy decreased with response latency after stimulus offset. This was the case for each stimulus duration considered separately, and in the interleaved duration experiment overall. We conclude that rats integrate visual evidence over time, but in this task the time of their response is governed more by elapsed time than by a criterion for sufficient evidence.
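The conclusion above is usually framed against accumulation-to-bound (drift-diffusion) models, in which a response is triggered once accumulated evidence reaches a criterion, so that stronger motion yields both faster and more accurate choices. A minimal simulation of that alternative model is sketched below; the parameter values and function names are illustrative assumptions, not fits to the rat data.

```python
import numpy as np

def drift_diffusion_trial(drift, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one 2-AFC trial; return (correct_choice, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * np.random.randn()
        t += dt
    return x >= bound, t

# Larger drift (stronger motion coherence) -> faster and more accurate decisions.
n_trials = 2000
for drift in (0.5, 1.0, 2.0):
    results = [drift_diffusion_trial(drift) for _ in range(n_trials)]
    accuracy = np.mean([correct for correct, _ in results])
    mean_rt = np.mean([rt for _, rt in results])
    print(f"drift={drift}: accuracy={accuracy:.2f}, mean RT={mean_rt:.2f} s")
```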

7.
In 1935 Edwin Boring proposed that each attribute of sensation reflects the activity of a different neural circuit. If this idea is valid, it could facilitate both psychophysical and neurophysiological research on sensory systems. We think it likely that Boring's formulation is correct for three reasons: 1) Different sensory attributes reflect conscious information about different parameters of a stimulus. To be measured by any device, each of these parameters must be individually computed. Different neural circuits would appear to be necessary for the nervous system to carry out these different computations. 2) Perceived information about different sensory attributes can be made to diverge by appropriate manipulations of the stimuli. If there is a rigorous relationship between conscious sensory experience and neural activity, such a divergence implies that different sensory attributes are served by different neural circuits. 3) Accurate information about a sensory attribute requires that a human observer's attention be focused on that attribute. Changes in direction of attention are thought to involve a process of switching from one neural circuit to another, and provide another way to cause perceived information about different sensory attributes to diverge.

8.
The motion aftereffect may be considered as a consequence of visual illusions of self-motion (vection) and the persistence of sensory information processing. There is ample experimental evidence indicating a uniformity of mechanisms that underlie motion aftereffects in different modalities based on the principle of motion detectors. Currently, there is firm ground to believe that the motion aftereffect is intrinsic to all sensory systems involved in spatial orientation, that motion adaptation in one sensory system elicits changes in another one, and that such adaptation is of great adaptive importance for spatial orientation and motion of an organism. This review seeks to substantiate these ideas.

9.
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual-auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.

10.
The electrosensory and mechanosensory lateral line systems of fish exhibit many common features in their structural and functional organization, both at the sensory periphery as well as in central processing pathways. These two sensory systems also appear to play similar roles in many behavioral tasks such as prey capture, orientation with respect to external environmental cues, navigation in low-light conditions, and mediation of interactions with nearby animals. In this paper, we briefly review key morphological, physiological, and behavioral aspects of these two closely related sensory systems. We present arguments that the information processing demands associated with spatial processing are likely to be quite similar, due largely to the spatial organization of both systems and the predominantly dipolar nature of many electrosensory and mechanosensory stimulus fields. Demands associated with temporal processing may be quite different, however, due primarily to differences in the physical bases of electrosensory and mechanosensory stimuli (e.g. speed of transmission). With a better sense of the information processing requirements, we turn our attention to an analysis of the functional organization of the associated first-order sensory nuclei in the hindbrain, including the medial octavolateral nucleus (MON), dorsal octavolateral nucleus (DON), and electrosensory lateral line lobe (ELL). One common feature of these systems is a set of neural mechanisms for improving signal-to-noise ratios, including mechanisms for adaptive suppression of reafferent signals. This comparative analysis provides new insights into how the nervous system extracts biologically significant information from dipolar stimulus fields in order to solve a variety of behaviorally relevant problems faced by aquatic animals.

11.
Perception relies on the response of populations of neurons in sensory cortex. How the response profile of a neuronal population gives rise to perception and perceptual discrimination has been conceptualized in various ways. Here we suggest that neuronal population responses represent information about our environment explicitly as Fisher information (FI), which is a local measure of the variance estimate of the sensory input. We show how this sensory information can be read out and combined to infer from the available information profile which stimulus value is perceived during a fine discrimination task. In particular, we propose that the perceived stimulus corresponds to the stimulus value that leads to the same information for each of the alternative directions, and compare the model prediction to standard models considered in the literature (population vector, maximum likelihood, maximum-a-posteriori Bayesian inference). The models are applied to human performance in a motion discrimination task that induces perceptual misjudgements of a target direction of motion by task irrelevant motion in the spatial surround of the target stimulus (motion repulsion). By using the neurophysiological insight that surround motion suppresses neuronal responses to the target motion in the center, all models predicted the pattern of perceptual misjudgements. The variation of discrimination thresholds (error on the perceived value) was also explained through the changes of the total FI content with varying surround motion directions. The proposed FI decoding scheme incorporates recent neurophysiological evidence from macaque visual cortex showing that perceptual decisions do not rely on the most active neurons, but rather on the most informative neuronal responses. We statistically compare the prediction capability of the FI decoding approach and the standard decoding models. Notably, all models reproduced the variation of the perceived stimulus values for different surrounds, but with different neuronal tuning characteristics underlying perception. Compared to the FI approach the prediction power of the standard models was based on neurons with far wider tuning width and stronger surround suppression. Our study demonstrates that perceptual misjudgements can be based on neuronal populations encoding explicitly the available sensory information, and provides testable neurophysiological predictions on neuronal tuning characteristics underlying human perceptual decisions.
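For orientation, one standard way to write the population Fisher information assumes independent Poisson neurons with tuning curves f_i(θ); this textbook form (an assumption for illustration, not necessarily the paper's exact noise model) links FI to the discrimination thresholds mentioned above via the Cramér–Rao bound.

```latex
% Fisher information of a population of independent Poisson neurons
% with tuning curves f_i(\theta) -- illustrative sketch.
I_F(\theta) \;=\; \sum_i \frac{\left[f_i'(\theta)\right]^2}{f_i(\theta)},
\qquad
\sigma^2_{\hat{\theta}} \;\ge\; \frac{1}{I_F(\theta)}
\quad\text{(Cramér–Rao bound)},
\qquad
\Delta\theta_{\mathrm{th}} \;\propto\; \frac{1}{\sqrt{I_F(\theta)}}.
```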

12.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
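The "statistically optimal integration" referred to here is the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (inverse variance). The notation below is generic and given only as a sketch, not the paper's own symbols.

```latex
% Reliability-weighted (maximum-likelihood) combination of visual and vestibular estimates
\hat{s}_{\mathrm{comb}} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
\qquad
w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\qquad
\sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}
{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}}.
```

The combined variance is never larger than either single-cue variance, which is why varying the visual coherence from trial to trial provides a direct test of whether the weights track cue reliability.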

13.
Kropp M, Gabbiani F, Prank K. Systems Biology, 2005, 152(4): 263-268
The ubiquitous Ca2+-phosphoinositide pathway transduces extracellular signals to cellular effectors. Using a mathematical model, we simulated intracellular Ca2+ fluctuations in hepatocytes upon humoral stimulation. We estimated the information encoded about random humoral stimuli in these Ca2+ spike trains using an information-theoretic approach based on stimulus estimation methods. We demonstrate accurate transfer of information about random humoral signals with low temporal cutoff frequencies. In contrast, our results suggest that high-frequency stimuli are poorly transduced by the transmembrane machinery. We found that humoral signals are encoded in both the timing and amplitude of intracellular Ca2+ spikes. The information transmitted per spike is similar to that of sensory neuronal systems, in spite of several orders of magnitude difference in firing rate.
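Stimulus-estimation approaches of the kind mentioned here typically bound the transmitted information from below using the coherence between the stimulus and its optimal linear reconstruction from the spike train. One common formulation is reproduced below as a sketch; it is the generic coherence-based bound, not necessarily the exact estimator used in this paper.

```latex
% Coherence-based lower bound on the information rate carried by a spike train
% about a time-varying stimulus s(t); \gamma^2(f) is the stimulus-reconstruction
% coherence at frequency f and f_c the stimulus cutoff frequency.
R_{\mathrm{info}} \;\ge\; -\int_{0}^{f_c} \log_{2}\!\left[\,1 - \gamma^{2}(f)\,\right]\,df .
```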

14.
Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.
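A toy version of the mechanism described above is a leaky integrate-and-fire neuron whose spike threshold tracks the recent subthreshold membrane potential in addition to jumping after each spike. The update rule and parameter values below are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

def adaptive_threshold_lif(current, dt=0.1, tau_m=10.0, tau_theta=30.0,
                           v_rest=-70.0, theta0=-50.0, alpha=0.3, jump=5.0):
    """Leaky integrate-and-fire neuron with a subthreshold-dependent threshold.

    current : input trace (arbitrary units), one value per time step of dt (ms)
    Returns the list of spike times in ms.
    """
    v, theta = v_rest, theta0
    spikes = []
    for i, I in enumerate(current):
        v += dt * (-(v - v_rest) + I) / tau_m            # membrane integration
        # Threshold relaxes toward a target that depends on the membrane potential,
        # so sustained depolarization raises the effective threshold.
        theta += dt * (theta0 + alpha * (v - v_rest) - theta) / tau_theta
        if v >= theta:
            spikes.append(i * dt)
            v = v_rest                                    # reset after spike
            theta += jump                                 # post-spike threshold jump
    return spikes

# Example: noisy step input driving the neuron above threshold.
rng = np.random.default_rng(0)
I = 25.0 + 5.0 * rng.standard_normal(5000)
print(len(adaptive_threshold_lif(I)), "spikes")
```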

15.
16.
In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. In order to exploit the information provided by these sensory systems, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems they have at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that allows one to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. In order to illustrate the use of this theoretical framework, we provide a step-by-step tutorial on how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction, which underline the advantages of our approach. That is, given a known physical signal transmission and rudimentary knowledge of the detection process, our approach allows one to estimate the achievable performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve by alignment of monosensory maps.
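A concrete instance of "optimal stimulus reconstruction" in the temporal domain is the optimal linear (Wiener) decoder, which estimates a stimulus from a response by a filter chosen to minimize mean squared error. The sketch below, with made-up signal names and a toy rectified response, illustrates that general idea rather than the specific stochastic-optimality framework of the paper.

```python
import numpy as np

def optimal_linear_decoder(stimulus, response):
    """Return the acausal Wiener filter that best predicts `stimulus` from `response`.

    The filter is the ratio of the stimulus-response cross-spectrum
    to the response power spectrum, computed here from a single trial.
    """
    S = np.fft.rfft(stimulus - stimulus.mean())
    R = np.fft.rfft(response - response.mean())
    cross = S * np.conj(R)
    power = np.abs(R) ** 2 + 1e-12        # small constant avoids division by zero
    return np.fft.irfft(cross / power, n=len(stimulus))

# Example: reconstruct a band-limited stimulus from a noisy, rectified "response".
rng = np.random.default_rng(1)
stim = np.convolve(rng.standard_normal(4096), np.ones(20) / 20, mode="same")
resp = np.maximum(stim, 0) + 0.1 * rng.standard_normal(stim.size)
h = optimal_linear_decoder(stim, resp)
estimate = np.fft.irfft(np.fft.rfft(h) * np.fft.rfft(resp - resp.mean()), n=stim.size)
```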

17.
Knutsen PM, Biess A, Ahissar E. Neuron, 2008, 59(1): 35-42
Perception is usually an active process by which action selects and affects sensory information. During rodent active touch, whisker kinematics influences how objects activate sensory receptors. In order to fully characterize whisker motion, we reconstructed whisker position in 3D and decomposed whisker motion to all its degrees of freedom. We found that, across behavioral modes, in both head-fixed and freely moving rats, whisker motion is characterized by translational movements and three rotary components: azimuth, elevation, and torsion. Whisker torsion, which has not previously been described, was large (up to 100 degrees), and torsional angles were highly correlated with whisker azimuths. The coupling of azimuth and torsion was consistent across whisking epochs and rats and was similar along rows but systematically varied across rows such that rows A and E counterrotated. Torsional rotation of the whiskers enables contact information to be mapped onto the circumference of the whisker follicles in a predictable manner across protraction-retraction cycles.

18.
Scene analysis, the process of converting sensory information from peripheral receptors into a representation of objects in the external world, is central to our human experience of perception. Through our efforts to design systems for object recognition and for robot navigation, we have come to appreciate that a number of common themes apply across the sensory modalities of vision, audition, and olfaction; and many apply across species ranging from invertebrates to mammals. These themes include the need for adaptation in the periphery and trade-offs between selectivity for frequency or molecular structure with resolution in time or space. In addition, neural mechanisms involving coincidence detection are found in many different subsystems that appear to implement cross-correlation or autocorrelation computations.

19.
Multimodal neuronal maps, combining input from two or more sensory systems, play a key role in the processing of sensory and motor information. For such maps to be of any use, the input from all participating modalities must be calibrated so that a stimulus at a specific spatial location is represented at an unambiguous position in the multimodal map. Here we discuss two methods based on supervised spike-timing-dependent plasticity (STDP) to gauge input from different sensory modalities so as to ensure a proper map alignment. The first uses an excitatory teacher input. It is therefore called excitation-mediated learning. The second method is based on an inhibitory teacher signal, as found in the barn owl, and is called inhibition-mediated learning. Using detailed analytical calculations and numerical simulations, we demonstrate that inhibitory teacher input is essential if high-quality multimodal integration is to be learned rapidly. Furthermore, we show that the quality of the resulting map is not so much limited by the quality of the teacher signal but rather by the accuracy of the input from other sensory modalities.
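The supervised STDP rules discussed here build on the standard pair-based STDP window, in which the sign and size of the weight change depend on the interval between pre- and postsynaptic spikes. The conventional form is reproduced below for orientation (parameters are illustrative; the paper adds excitatory or inhibitory teacher signals on top of such a rule).

```latex
% Pair-based STDP window, with \Delta t = t_{post} - t_{pre}
\Delta w(\Delta t) =
\begin{cases}
 A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad \text{(pre before post: potentiation)}\\[4pt]
 -A_{-}\, e^{\,\Delta t/\tau_{-}}, & \Delta t < 0 \quad \text{(post before pre: depression)}
\end{cases}
```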

20.
There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation.
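The divisive operation described above is conventionally written as each neuron's driving input scaled by the pooled activity of a population plus a semi-saturation constant; the canonical form, reproduced here as a sketch of the standard formulation, is:

```latex
% Canonical divisive normalization: D_j is the driving input to neuron j,
% \sigma the semi-saturation constant, n the exponent, \gamma a gain factor.
R_i \;=\; \frac{\gamma \, D_i^{\,n}}{\sigma^{n} + \sum_{j} D_j^{\,n}} .
```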
