Similar Articles
20 similar articles found; search time 62 ms.
1.
Responses of multisensory neurons to combinations of sensory cues are generally enhanced or depressed relative to single cues presented alone, but the rules that govern these interactions have remained unclear. We examined integration of visual and vestibular self-motion cues in macaque area MSTd in response to unimodal as well as congruent and conflicting bimodal stimuli in order to evaluate hypothetical combination rules employed by multisensory neurons. Bimodal responses were well fit by weighted linear sums of unimodal responses, with weights typically less than one (subadditive). Surprisingly, our results indicate that weights change with the relative reliabilities of the two cues: visual weights decrease and vestibular weights increase when visual stimuli are degraded. Moreover, both modulation depth and neuronal discrimination thresholds improve for matched bimodal compared to unimodal stimuli, which might allow for increased neural sensitivity during multisensory stimulation. These findings establish important new constraints for neural models of cue integration.
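The weighted-linear-sum rule described in this abstract lends itself to a compact illustration. The Python sketch below fits a synthetic bimodal tuning curve as a weighted sum of synthetic unimodal curves by least squares and recovers subadditive weights; the tuning shapes, noise level, and weight values are assumptions for illustration only, not the authors' data or analysis code.

```python
import numpy as np

# Illustrative sketch (not the authors' code): model bimodal responses as
# R_bi ~ w_vis*R_vis + w_vest*R_vest + c and recover the cue weights by least squares.
rng = np.random.default_rng(0)
headings = np.linspace(-180, 180, 24)                      # heading directions (deg)
r_vis = 20 * np.exp(-0.5 * ((headings - 30) / 40) ** 2)    # hypothetical visual tuning
r_vest = 15 * np.exp(-0.5 * ((headings - 10) / 60) ** 2)   # hypothetical vestibular tuning
r_bi = 0.7 * r_vis + 0.5 * r_vest + 2 + rng.normal(0, 1, headings.size)

X = np.column_stack([r_vis, r_vest, np.ones_like(headings)])
w_vis, w_vest, offset = np.linalg.lstsq(X, r_bi, rcond=None)[0]
print(f"visual weight ~ {w_vis:.2f}, vestibular weight ~ {w_vest:.2f} (both < 1: subadditive)")
```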

2.
Adaptation in sensory and neuronal systems usually leads to reduced responses to persistent or frequently presented stimuli. In contrast to simple fatigue, adapted neurons often retain their ability to encode changes in stimulus intensity and to respond when novel stimuli appear. We investigated how the level of adaptation of a fly visual motion-sensitive neuron affects its responses to discontinuities in the stimulus, i.e. sudden brief changes in one of the stimulus parameters (velocity, contrast, grating orientation and spatial frequency). Although the neuron's overall response decreased gradually during ongoing motion stimulation, the response transients elicited by stimulus discontinuities were preserved or even enhanced with adaptation. Moreover, the adaptation-induced enhancement of sensitivity to velocity changes was not restricted to a certain velocity range, but was present regardless of whether the neuron was adapted to a baseline velocity below or above its steady-state velocity optimum. Our results suggest that motion adaptation helps motion-sensitive neurons to preserve their sensitivity to novel stimuli even in the presence of strong tonic stimulation, for example during self-motion.

3.
The attentional modulation of sensory information processing in the visual system is the result of top-down influences, which can cause a multiplicative modulation of the firing rate of sensory neurons in extrastriate visual cortex, an effect reminiscent of the bottom-up effect of changes in stimulus contrast. This similarity could simply reflect the multiplicative nature of both effects. Here, however, we show that in direction-selective neurons in monkey visual cortical area MT, stimulus and attentional effects share a nonlinearity. These neurons show higher response gain for both contrast and attentional changes for intermediate-contrast stimuli and smaller gain for low- and high-contrast stimuli. This finding suggests a close relationship between the neural encoding of stimulus contrast and the modulating effect of the behavioral relevance of stimuli.
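As a rough illustration of why gain can peak at intermediate contrast, the sketch below uses a standard sigmoidal (Naka-Rushton) contrast-response function; this specific function and its parameters are assumptions for illustration, the abstract itself does not name them.

```python
import numpy as np

# Hedged sketch: with an assumed Naka-Rushton contrast-response function, the slope
# dR/dc (response gain) is largest at intermediate contrast and small at low and
# high contrast, mirroring the gain profile described above.
def naka_rushton(c, r_max=60.0, c50=0.2, n=2.0):
    return r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 200)
response = naka_rushton(contrast)
gain = np.gradient(response, contrast)        # numerical dR/dc
print(f"gain peaks near contrast {contrast[np.argmax(gain)]:.2f}")
```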

4.
As we move through the world, information can be combined from multiple sources in order to allow us to perceive our self-motion. The vestibular system detects and encodes the motion of the head in space. In addition, extra-vestibular cues such as retinal-image motion (optic flow), proprioception, and motor efference signals provide valuable motion cues. Here I focus on the coding strategies that are used by the brain to create neural representations of self-motion. I review recent studies comparing the thresholds of single versus populations of vestibular afferent and central neurons. I then consider recent advances in understanding the brain's strategy for combining information from the vestibular sensors with extra-vestibular cues to estimate self-motion. These studies emphasize the need to consider not only the rules by which multiple inputs are combined, but also how differences in the behavioral context govern the nature of what defines the optimal computation.

5.
Although many studies have shown that attention to a stimulus can enhance the responses of individual cortical sensory neurons, little is known about how attention accomplishes this change in response. Here, we propose that attention-based changes in neuronal responses depend on the same response normalization mechanism that adjusts sensory responses whenever multiple stimuli are present. We have implemented a model of attention that assumes that attention works only through this normalization mechanism, and show that it can replicate key effects of attention. The model successfully explains how attention changes the gain of responses to individual stimuli and also why modulation by attention is more robust and not a simple gain change when multiple stimuli are present inside a neuron's receptive field. Additionally, the model accounts well for physiological data that measure separately attentional modulation and sensory normalization of the responses of individual neurons in area MT in visual cortex. The proposal that attention works through a normalization mechanism sheds new light on a broad range of observations on how attention alters the representation of sensory information in cerebral cortex.
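A minimal sketch in the spirit of a divisive-normalization account of attention is given below. The specific form (an attentional gain multiplying each stimulus drive, divided by the summed attended drive plus a semi-saturation constant) and all parameter values are generic assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal divisive-normalization sketch (generic form, not the authors' model):
# each stimulus contributes an excitatory drive scaled by an attentional gain;
# the response to stimulus i is its attended drive divided by the summed attended
# drive of all stimuli plus a semi-saturation constant sigma.
def normalized_responses(drives, attn_gain, sigma=10.0):
    excitation = np.asarray(attn_gain, float) * np.asarray(drives, float)
    return excitation / (sigma + excitation.sum())

# One stimulus: attention scales the response (~0.50 -> ~0.67).
print(normalized_responses([10.0], [1.0]), normalized_responses([10.0], [2.0]))
# Two stimuli inside the receptive field: attention reweights the pair
# rather than scaling both responses by a single factor.
print(normalized_responses([10.0, 10.0], [1.0, 1.0]))
print(normalized_responses([10.0, 10.0], [2.0, 1.0]))
```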

6.
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.

7.
An in vitro preparation consisting of the siphon, mantle, gill, and abdominal ganglion undergoes classical conditioning when a weak tactile stimulus (CS) applied to the siphon is paired with a strong tactile stimulus to the gill (UCS). When the stimuli are paired, the CS comes to evoke a gill withdrawal reflex (GWR) that increases in amplitude with training; only when the stimuli are paired in a classical conditioning paradigm does the CS come to evoke a GWR. With classical conditioning training there is an alteration in the synaptic efficacy between central sensory neurons and central gill motor neurons. Moreover, these changes can be observed in sensory neurons not activated by the CS. The changes observed, as evidenced by the number of action potentials evoked in the gill motor neuron, do not completely parallel the observed behavioral changes. It is suggested that, in addition to changes in the synaptic efficacy at the sensory-motor neuron synapse, changes in neuronal activity occur at other loci which lead to the observed behavioral changes.

8.
The object of this study is to mathematically specify important characteristics of visual flow during translation of the eye for the perception of depth and self-motion. We address various strategies by which the central nervous system may estimate self-motion and depth from motion parallax, using equations for the visual velocity field generated by translation of the eye through space. Our results focus on information provided by the movement and deformation of three-dimensional objects and on local flow behavior around a fixated point. All of these issues are addressed mathematically in terms of definite equations for the optic flow. This formal characterization of the visual information presented to the observer is then considered in parallel with other sensory cues to self-motion in order to see how these contribute to the effective use of visual motion parallax, and how parallactic flow can, conversely, contribute to the sense of self-motion. This article focuses on a case central to the understanding of motion parallax in spacious real-world environments: monocular visual cues observable during pure horizontal translation of the eye through a stationary environment. We suggest that the global optokinetic stimulus associated with visual motion parallax must converge in significant fashion with vestibular and proprioceptive pathways that carry signals related to self-motion. Suggestions of experiments to test some of the predictions of this study are made.
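One of the basic relations underlying such an analysis is the standard motion-parallax equation: for an eye translating at speed v through a stationary scene, a point at distance r seen at angle alpha from the translation direction moves across the visual field at angular rate v*sin(alpha)/r. The snippet below is only a worked illustration of that textbook relation with arbitrary numbers, not the article's full flow-field derivation.

```python
import numpy as np

# Worked illustration of the standard parallax relation for pure horizontal
# translation through a stationary scene: angular image velocity = v*sin(alpha)/r,
# so nearer points sweep faster across the retina. Values are arbitrary examples.
def angular_velocity_deg(v, r, alpha_deg):
    return np.degrees(v * np.sin(np.radians(alpha_deg)) / r)

v = 1.0                              # horizontal eye translation speed (m/s)
for r in (0.5, 1.0, 2.0, 4.0):       # distance of the viewed point (m)
    print(f"r = {r} m -> {angular_velocity_deg(v, r, 90.0):5.1f} deg/s at 90 deg eccentricity")
```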

9.
The development and evolution of the inner ear sensory patches and their innervation are reviewed. Recent molecular developmental data suggest that development of these sensory patches recapitulates their evolutionary history. These data suggest that the ear generates multiple, functionally diverse sensory epithelia by dividing a single sensory primordium. Those epithelia establish distinct identities through the overlapping expression of genes, of which only a few are currently known. One of these distinctions is the unique pattern of hair cell polarity. A hypothesis is presented on how hair cell polarity may relate to the progressive segregation of the six sensory epithelia. Besides being markers for sensory epithelia development, neurotrophins are also expressed in delaminating cells that migrate toward the developing vestibular and cochlear ganglia. These delaminating cells originate from multiple sites at or near the developing sensory epithelia, and some also express neuronal markers such as NeuroD. The differential origin of precursors raises the possibility that some sensory neurons acquire positional information before they delaminate from the ear. Such an identity of these delaminating sensory neurons may be used both to navigate their dendrites to the area they delaminated from and to help them navigate to their central target. The navigational properties of sensory neurons, as well as the acquisition of discrete sensory patch phenotypes, imply a much more sophisticated subdivision of the developing otocyst than the few available gene expression studies suggest.

10.
Our anatomical and behavioral studies of embryonic rats that developed in microgravity suggest that the vestibular sensory system, like the visual system, has genetically mediated processes of development that establish crude connections between the periphery and the brain. Environmental stimuli also regulate connection formation, including terminal branch formation and the fine-tuning of synaptic contacts. Axons of vestibular sensory neurons from gravistatic as well as linear acceleration receptors reach their targets in both microgravity and normal gravity, suggesting that this is a genetically regulated component of development. However, microgravity exposure delays the development of terminal branches and synapses in gravistatic, but not linear acceleration-sensitive, neurons and also produces behavioral changes. These latter changes reflect environmentally controlled processes of development.

11.
Sensory neurons encode natural stimuli by changes in firing rate or by generating specific firing patterns, such as bursts. Many neural computations rely on the fact that neurons can be tuned to specific stimulus frequencies. It is thus important to understand the mechanisms underlying frequency tuning. In the electrosensory system of the weakly electric fish, Apteronotus leptorhynchus, the primary processing of behaviourally relevant sensory signals occurs in pyramidal neurons of the electrosensory lateral line lobe (ELL). These cells encode low-frequency prey stimuli with bursts of spikes and high-frequency communication signals with single spikes. We describe here how bursting in pyramidal neurons can be regulated by intrinsic conductances in a cell-subtype-specific fashion across the sensory maps found within the ELL, thereby regulating their frequency tuning. Further, the neuromodulatory regulation of such conductances within individual cells and the consequences for frequency tuning are highlighted. Such alterations in the tuning of the pyramidal neurons may allow weakly electric fish to preferentially select certain stimuli under various behaviourally relevant circumstances.

12.
It is much debated on what time scale information is encoded by neuronal spike activity. With a phenomenological model that transforms time-dependent membrane potential fluctuations into spike trains, we investigate constraints for the timing of spikes and for synchronous activity of neurons with common input. The model of spike generation has a variable threshold that depends on the time elapsed since the previous action potential and on the preceding membrane potential changes. To ensure that the model operates in a biologically meaningful range, it was adjusted to fit the responses of a fly visual interneuron to motion stimuli. The dependence of spike timing on the membrane potential dynamics was analyzed. Fast membrane potential fluctuations are needed to trigger spikes with high temporal precision. Slow fluctuations lead to spike activity with a rate roughly proportional to the membrane potential. Thus, for a given level of stochastic input, the frequency range of membrane potential fluctuations induced by a stimulus determines whether a neuron can use a rate code or a temporal code. The relationship between the steepness of membrane potential fluctuations and the timing of spikes also has implications for synchronous activity in neurons with common input: fast membrane potential changes must be shared by the neurons to produce synchronous activity.
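A toy version of such a variable-threshold spike generator is sketched below; the functional form of the threshold and every parameter value are assumptions made for illustration, not the fitted fly-interneuron model.

```python
import numpy as np

# Toy variable-threshold spike generator (illustrative assumptions only): the
# threshold is elevated just after the previous spike and lowered by fast
# depolarizing membrane-potential changes, so rapid fluctuations trigger precisely
# timed spikes while slow fluctuations mainly modulate the firing rate.
def generate_spikes(v, dt=0.001, theta0=1.0, a_ref=2.0, tau_ref=0.005, a_dv=5.0):
    spike_times, t_last = [], -np.inf
    for i in range(1, len(v)):
        t = i * dt
        dv = v[i] - v[i - 1]                                       # recent change in potential
        theta = theta0 + a_ref * np.exp(-(t - t_last) / tau_ref) - a_dv * max(dv, 0.0)
        if v[i] >= theta:
            spike_times.append(t)
            t_last = t
    return np.array(spike_times)

rng = np.random.default_rng(1)
t = np.arange(0.0, 1.0, 0.001)
v = 0.8 + 0.3 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
print(f"{generate_spikes(v).size} spikes in 1 s of simulated membrane potential")
```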

13.
It is still an enigma how human subjects combine visual and vestibular inputs for their self-motion perception. Visual cues have the benefit of high spatial resolution but entail the danger of self-motion illusions. We performed psychophysical experiments (verbal estimates as well as pointer indications of perceived self-motion in space) in normal subjects (Ns) and patients with loss of vestibular function (Ps). Subjects were presented with horizontal sinusoidal rotations of an optokinetic pattern (OKP) alone (visual stimulus; 0.025-3.2 Hz; displacement amplitude, 8 degrees) or in combination with rotations of a Bárány chair (vestibular stimulus; 0.025-0.4 Hz; +/- 8 degrees). We found that specific instructions to the subjects created different perceptual states in which their self-motion perception essentially reflected three processing steps during pure visual stimulation: i) When Ns were primed by a procedure based on induced motion and then estimated perceived self-rotation upon pure optokinetic stimulation (circular vection, CV), the CV had a gain close to unity up to frequencies of almost 0.8 Hz, followed by a sharp decrease at higher frequencies (i.e., characteristics resembling those of the optokinetic reflex, OKR, and of smooth pursuit, SP). ii) When Ns were instructed to "stare through" the optokinetic pattern, CV was absent at high frequency, but increasingly developed as frequency was decreased below 0.1 Hz. iii) When Ns "looked at" the optokinetic pattern (accurately tracked it with their eyes), CV was usually absent, even at low frequency. CV in Ps showed dynamics similar to those of Ns in condition i), independently of the instruction. During vestibular stimulation, self-motion perception in Ns fell from a maximum at 0.4 Hz to zero at 0.025 Hz. When vestibular stimulation was combined with visual stimulation while Ns "stared through" the OKP, perception at low frequencies became modulated in magnitude. When Ns "looked at" the OKP, this modulation was reduced, apart from the synergistic stimulus combination (OKP stationary), where magnitude was similar to that during "staring". The obtained gain and phase curves of the perception were incompatible with linear systems predictions. We therefore describe the present findings with a non-linear dynamic model in which the visual input is processed in three steps: i) it shows dynamics similar to those of the OKR and SP; ii) it is shaped to complement the vestibular dynamics and is fused with a vestibular signal by linear summation; and iii) it can be suppressed by a visual-vestibular conflict mechanism when the visual scene is moving in space. Finally, an important element of the model is a velocity threshold of about 1.2 degrees/s, which is instrumental in maintaining perceptual stability and in explaining the observed dynamics of perception. We conclude from the experimental and theoretical evidence that self-motion perception is normally related to the visual scene as a reference, while the vestibular input is used to check the kinematic state of the scene; if the scene appears to move, the visual signal becomes suppressed and perception is based on the vestibular cue.
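A drastically simplified rendering of this fusion scheme is sketched below, only to make the conflict-threshold idea concrete. The cue weights and the purely scalar, static treatment of velocity are assumptions; the published model is dynamic and frequency-dependent in ways this sketch ignores.

```python
# Drastically simplified sketch of the visual-vestibular fusion scheme (weights and
# the scalar treatment of velocity are assumptions, not the published model). The
# visual estimate treats the scene as a stationary reference; a conflict detector
# suppresses it when the scene itself appears to move in space faster than the
# ~1.2 deg/s threshold mentioned above.
VELOCITY_THRESHOLD = 1.2   # deg/s

def perceived_self_velocity(scene_vel_re_head, vestibular_vel, w_vis=0.5, w_vest=0.5):
    visual_estimate = -scene_vel_re_head                 # self-motion if the scene were stationary
    scene_in_space = scene_vel_re_head + vestibular_vel  # apparent scene motion in space
    if abs(scene_in_space) > VELOCITY_THRESHOLD:
        return vestibular_vel                            # conflict: rely on the vestibular cue
    return w_vis * visual_estimate + w_vest * vestibular_vel

# Chair and scene rotate together (scene stationary in space): the cues fuse.
print(perceived_self_velocity(scene_vel_re_head=-5.0, vestibular_vel=5.0))   # -> 5.0
# Scene rotates while the head is still: the visual signal is suppressed.
print(perceived_self_velocity(scene_vel_re_head=5.0, vestibular_vel=0.0))    # -> 0.0
```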

14.
This article addresses the intersection between perceptual estimates of head motion based on purely vestibular and purely visual sensation, by considering how nonvisual (e.g. vestibular and proprioceptive) sensory signals for head and eye motion can be combined with visual signals available from a single landmark to generate a complete perception of self-motion. In order to do this, mathematical dimensions of sensory signals and perceptual parameterizations of self-motion are evaluated, and equations for the sensory-to-perceptual transition are derived. With constant velocity translation and vision of a single point, it is shown that visual sensation allows only for the externalization, to the frame of reference given by the landmark, of an inertial self-motion estimate from nonvisual signals. However, it is also shown that, with nonzero translational acceleration, use of simple visual signals provides a biologically plausible strategy for integration of inertial acceleration sensation, to recover translational velocity. A dimension argument proves similar results for horizontal flow of any number of discrete visible points. The results provide insight into the convergence of visual and vestibular sensory signals for self-motion and indicate perceptual algorithms by which primitive visual and vestibular signals may be integrated for self-motion perception.

15.
Recent work on the coding of spatial information in central otolith neurons has significantly advanced our knowledge of signal transformation from head-fixed otolith coordinates to space-centered coordinates during motion. In this review, emphasis is placed on the neural mechanisms by which signals generated at the bilateral labyrinths are recognized as gravity-dependent spatial information and in turn as substrate for otolithic reflexes. We first focus on the spatiotemporal neuronal response patterns (i.e. one- and two-dimensional neurons) to pure otolith stimulation, as assessed by single unit recording from the vestibular nucleus in labyrinth-intact animals. These spatiotemporal features are also analyzed in association with other electrophysiological properties to evaluate their role in the central construction of a spatial frame of reference in the otolith system. Data derived from animals with elimination of inputs from one labyrinth then provide evidence that during vestibular stimulation signals arising from a single utricle are operative at the level of both the ipsilateral and contralateral vestibular nuclei. Hemilabyrinthectomy also revealed neural asymmetries in spontaneous activity, response dynamics and spatial coding behavior between neuronal subpopulations on the two sides and as a result suggested a segregation of otolith signals reaching the ipsilateral and contralateral vestibular nuclei. Recent studies have confirmed and extended previous observations that the recovery of resting activity within the vestibular nuclear complex during vestibular compensation is related to changes in both intrinsic membrane properties and capacities to respond to extracellular factors. The bilateral imbalance provides the basis for deranged spatial coding and motor deficits accompanying hemilabyrinthectomy. Taken together, these experimental findings indicate that in the normal state converging inputs from bilateral vestibular labyrinths are essential to spatiotemporal signal transformation at the central otolith neurons during low-frequency head movements.

16.
To analyze the information provided about individual visual stimuli in the responses of single neurons in the primate temporal lobe visual cortex, neuronal responses to a set of 65 visual stimuli were recorded in macaques performing a visual fixation task and analyzed using information theoretical measures. The population of neurons analyzed responded primarily to faces. The stimuli included 23 faces and 42 nonface images of real-world scenes, so that the function of this brain region could be analyzed when it was processing relatively natural scenes. It was found that for the majority of the neurons significant amounts of information were reflected about which of several of the 23 faces had been seen. Thus the representation was not local, for in a local representation almost all the information available can be obtained when the single stimulus to which the neuron responds best is shown. It is shown that the information available about any one stimulus depended on how different (for example, how many standard deviations) the response to that stimulus was from the average response to all stimuli. This was the case for responses below the average response as well as above. It is shown that the fraction of information carried by the low firing rates of a cell was large, much larger than that carried by the high firing rates. Part of the reason for this is that the probability distribution of different firing rates is biased toward low values (though with fewer very low values than would be predicted by an exponential distribution). Another factor is that the variability of the response is large at intermediate and high firing rates. Another finding is that at short sampling intervals (such as 20 ms) the neurons code information efficiently, by effectively acting as binary variables and behaving less noisily than would be expected of a Poisson process.
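The kind of measure referred to here can be sketched as a Shannon mutual-information estimate between stimulus identity and a binned response, computed from a joint count table. The toy counts below are invented for illustration, and no limited-sampling bias correction is applied, unlike a full analysis.

```python
import numpy as np

# Sketch of a Shannon mutual-information estimate I(S;R) between stimulus identity
# and binned firing-rate response, from a joint count table. The counts are invented
# and no bias correction is applied.
def mutual_information_bits(counts):
    p_sr = counts / counts.sum()
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)
    nz = p_sr > 0
    return float(np.sum(p_sr[nz] * np.log2(p_sr[nz] / (p_s * p_r)[nz])))

counts = np.array([[8, 2, 0],    # stimulus 1: mostly low firing rates
                   [1, 7, 2],    # stimulus 2: mostly intermediate rates
                   [0, 2, 8]])   # stimulus 3: mostly high rates
print(f"I(S;R) ~ {mutual_information_bits(counts):.2f} bits")
```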

17.
Noise-induced complete synchronization and frequency synchronization in coupled spiking and bursting neurons are studied first, and the effects of noise and coupling are discussed. It is found that bursting neurons achieve firing synchronization more easily than spiking ones, which suggests that bursting activity is particularly important for information transfer in neuronal networks. Second, the effects of noise on firing synchronization in a noisy map neuronal network are presented. Noise-induced synchronization and temporal order are investigated by means of the firing rate function and the order index. Firing synchronization and temporal order of excitatory neurons can be greatly enhanced by subthreshold stimuli with resonance frequency. Finally, it is concluded that random perturbations play an important role in firing activities and temporal order in neuronal networks.
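As a generic stand-in for such synchronization measures (not the paper's specific firing-rate function or order index), firing synchronization between two spike trains can be quantified as the correlation of their binned spike counts; the spike times below are synthetic.

```python
import numpy as np

# Generic stand-in for a firing-synchronization measure (not the paper's order
# index): correlate the binned spike counts of two neurons. Times in ms, synthetic.
def synchrony_index(spikes_a, spikes_b, t_max_ms, bin_ms=5.0):
    edges = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    counts_a, _ = np.histogram(spikes_a, bins=edges)
    counts_b, _ = np.histogram(spikes_b, bins=edges)
    return float(np.corrcoef(counts_a, counts_b)[0, 1])

rng = np.random.default_rng(2)
shared = rng.uniform(0.0, 1000.0, 80)                        # common (noise-driven) events
a = np.concatenate([shared, rng.uniform(0.0, 1000.0, 20)])
b = np.concatenate([shared + rng.normal(0.0, 1.0, 80), rng.uniform(0.0, 1000.0, 20)])
print(f"synchrony index ~ {synchrony_index(a, b, 1000.0):.2f}")
```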

18.
The effects of waking and sleep on the response properties of auditory units in the ventral cochlear nucleus (CN) were explored using extracellular recordings in chronically implanted guinea pigs. Significant increases and decreases in firing rate were detected in two neuronal groups: a) "sound-responding" units and b) "spontaneous" units (units that did not respond to any acoustic stimuli controlled by the experimenter). The "spontaneous" units may be considered part of the auditory system because their discharge was suppressed when the receptor was destroyed. The auditory CN units were characterized by their PSTH in response to tones at their characteristic frequency and also by the changes in firing rate and probability of discharge evaluated during periods of waking, slow-wave sleep and paradoxical sleep. The CNS performs functions dependent on sensory inputs during wakefulness and sleep phases. By studying the auditory input at the level of the ventral CN with constant sound stimuli, it was shown that, in addition to the firing-rate shifts, some units presented changes in the temporal probability of discharge, implying central actions on the corresponding neurons. The mean latency of the responses, however, did not show significant changes throughout the sleep-waking cycle. The auditory efferent pathways are postulated to modulate the auditory input at the CN level during different animal states. The probability of firing and the changes in the temporal pattern, as shown by the PSTH, are thus dependent on both the auditory input and the functional brain state related to the sleep-waking cycle.

19.
Experimental evidence suggests that spontaneous neuronal activity may shape and be shaped by sensory experience. However, we lack information on how sensory experience modulates the underlying synaptic dynamics and how such modulation influences the response of the network to future events. Here we study whether spike-timing-dependent plasticity (STDP) can mediate sensory-induced modifications in the spontaneous dynamics of a new large-scale model of layers II, III and IV of the rodent barrel cortex. Our model incorporates significant physiological detail, including the types of neurons present, the probabilities and delays of connections, and the STDP profiles at each excitatory synapse. We stimulated the neuronal network with a protocol of repeated sensory inputs resembling those generated by the protraction-retraction motion of whiskers when rodents explore their environment, and studied the changes in network dynamics. By applying dimensionality reduction techniques to the synaptic weight space, we show that the initial spontaneous state is modified by each repetition of the stimulus and that this reverberation of the sensory experience induces long-term, structured modifications in the synaptic weight space. The post-stimulus spontaneous state encodes a memory of the stimulus presented, since a different dynamical response is observed when the network is presented with shuffled stimuli. These results suggest that repeated exposure to the same sensory experience could induce long-term circuitry modifications via 'Hebbian' STDP.
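A pair-based STDP window of the general kind used in such models is sketched below; the amplitudes and time constants are generic textbook values, not the parameters of this particular barrel-cortex model.

```python
import numpy as np

# Pair-based STDP window (generic textbook parameters, not this model's values):
# a presynaptic spike shortly before a postsynaptic spike potentiates the synapse,
# the reverse order depresses it, with exponential dependence on the spike-time lag.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """dt_ms = t_post - t_pre; returns the fractional weight change."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)      # pre before post: potentiation
    return -a_minus * np.exp(dt_ms / tau_minus)        # post before pre: depression

for lag in (5.0, 20.0, -5.0, -20.0):
    print(f"t_post - t_pre = {lag:+5.1f} ms -> dw = {stdp_dw(lag):+.4f}")
```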

20.
The ability to orient and navigate through the terrestrial environment represents a computational challenge common to all vertebrates. It arises because the motion sensors in the inner ear (the otolith organs and the semicircular canals) transduce self-motion in an egocentric reference frame. As a result, vestibular afferent information reaching the brain is inappropriate for coding our own motion and orientation relative to the outside world. Here we show that cerebellar cortical neuron activity in vermal lobules 9 and 10 reflects the critical computations of transforming head-centered vestibular afferent information into earth-referenced self-motion and spatial orientation signals. Unlike vestibular and deep cerebellar nuclei neurons, where a mixture of responses was observed, Purkinje cells represent a homogeneous population that encodes inertial motion. They carry the earth-horizontal component of a spatially transformed and temporally integrated rotation signal from the semicircular canals, which is critical for computing head attitude, thus isolating inertial linear accelerations during navigation.
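The computation alluded to here, integrating canal signals to track gravity so that inertial linear acceleration can be isolated from the otolith signal, can be written in its standard textbook form. The snippet below is that generic form with arbitrary numbers, not the authors' model of Purkinje-cell responses.

```python
import numpy as np

# Standard textbook form of the computation referred to above (not the authors'
# model): integrate the canal angular-velocity signal to track the gravity vector
# in head coordinates (dg/dt = -omega x g), then subtract that estimate from the
# otolith (gravito-inertial) signal to isolate inertial linear acceleration.
def track_gravity(omega_series, g0, dt):
    g = np.array(g0, dtype=float)
    for omega in omega_series:
        g = g + dt * (-np.cross(omega, g))       # rotate the gravity estimate with the head
        g *= 9.81 / np.linalg.norm(g)            # renormalize to 1 g
    return g

dt = 0.01
omega_body = np.array([0.0, np.deg2rad(10.0), 0.0])          # 10 deg/s pitch for 1 s
g_est = track_gravity([omega_body] * 100, g0=[0.0, 0.0, -9.81], dt=dt)

# "True" gravity after a 10-degree pitch, plus a 0.5 m/s^2 forward translation,
# gives the gravito-inertial signal an otolith afferent would report.
pitch = np.deg2rad(10.0)
g_true = np.array([9.81 * np.sin(pitch), 0.0, -9.81 * np.cos(pitch)])
gia = g_true + np.array([0.5, 0.0, 0.0])
print("estimated inertial acceleration:", np.round(gia - g_est, 3))   # ~[0.5, 0, 0]
```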
