Similar Articles
 20 similar articles found (search time: 531 ms)
1.
It is well known that some neurons tend to fire packets of action potentials followed by periods of quiescence (bursts) while others within the same stage of sensory processing fire in a tonic manner. However, the respective computational advantages of bursting and tonic neurons for encoding time-varying signals largely remain a mystery. Weakly electric fish use cutaneous electroreceptors to convey information about sensory stimuli, and it has been shown that some electroreceptors exhibit bursting dynamics while others do not. In this study, we compare the neural coding capabilities of tonically firing and bursting electroreceptor model neurons using information-theoretic measures. We find that both bursting and tonically firing model neurons efficiently transmit information about the stimulus. However, the decoding mechanisms that must be used for each differ greatly: a non-linear decoder would be required to extract all the available information transmitted by the bursting model neuron, whereas a linear one might suffice for the tonically firing model neuron. Further investigations using stimulus reconstruction techniques reveal that, unlike the tonically firing model neuron, the bursting model neuron does not encode the detailed time course of the stimulus. A novel measure of feature detection reveals that the bursting neuron instead signals the occurrence of certain stimulus features. Finally, we show that feature extraction and stimulus estimation are mutually exclusive computations occurring in bursting and tonically firing model neurons, respectively. Our results therefore suggest that stimulus estimation and feature extraction might be parallel computations in certain sensory systems, rather than being sequential as has been previously proposed.
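
The linear-decoder baseline mentioned above can be made concrete. Below is a minimal sketch, not the paper's electroreceptor model: a toy tonically firing encoder driven by white noise is decoded with a frequency-domain Wiener filter whose spectra are averaged over segments; the encoder kernel, rates, and noise levels are all illustrative assumptions.

```python
# Hedged sketch: optimal linear stimulus reconstruction from a spike
# train via a Wiener filter. The encoder below is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)
dt, n, seg = 0.001, 2 ** 18, 4096            # 1 ms bins, ~260 s of data
stim = rng.normal(0.0, 1.0, n)               # white-noise stimulus

# Toy tonic encoder: rate is a smoothed, rectified copy of the stimulus.
kernel = np.exp(-np.arange(0, 0.05, dt) / 0.01)
drive = np.convolve(stim, kernel / kernel.sum(), mode="same")
rate = np.maximum(50.0 + 200.0 * drive, 0.0)          # spikes/s
r = (rng.random(n) < rate * dt).astype(float)
r -= r.mean()

# Wiener filter H(w) = S_sr(w) / S_rr(w), spectra averaged over segments.
nseg = n // seg
Ssr = np.zeros(seg // 2 + 1, complex)
Srr = np.zeros(seg // 2 + 1)
for k in range(nseg):
    Sf = np.fft.rfft(stim[k * seg:(k + 1) * seg])
    Rf = np.fft.rfft(r[k * seg:(k + 1) * seg])
    Ssr += Sf * np.conj(Rf)
    Srr += np.abs(Rf) ** 2
H = Ssr / (Srr + 1e-12)

# Apply the filter and measure the coding fraction of the reconstruction.
s_hat = np.concatenate(
    [np.fft.irfft(H * np.fft.rfft(r[k * seg:(k + 1) * seg]), seg)
     for k in range(nseg)])
err = stim[:nseg * seg] - s_hat
cf = 1.0 - err.std() / stim[:nseg * seg].std()
print(f"linear coding fraction ~ {cf:.2f}")           # 1 = perfect
```

For a bursting encoder, the paper's point is that this same linear read-out would miss information that only a non-linear decoder could recover.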

2.
Since the world consists of objects that stimulate multiple senses, it is advantageous for a vertebrate to integrate all the sensory information available. However, the precise mechanisms governing the temporal dynamics of multisensory processing are not well understood. We develop a computational modeling approach to investigate these mechanisms. We present an oscillatory neural network model for multisensory learning based on sparse spatio-temporal encoding. Recently published results in cognitive science show that multisensory integration produces greater and more efficient learning. We apply our computational model to qualitatively replicate these results. We vary learning protocols and system dynamics, and measure the rate at which our model learns to distinguish superposed presentations of multisensory objects. We show that the use of multiple channels accelerates learning and recall by up to 80%. When a sensory channel becomes disabled, the performance degradation is less than that experienced during the presentation of non-congruent stimuli. This research furthers our understanding of fundamental brain processes, paving the way for multiple advances including the building of machines with more human-like capabilities.

3.
Bayesian multisensory integration and cross-modal spatial links.
Our perception of the world is the result of combining information from several senses, such as vision, audition and proprioception. These sensory modalities use widely different frames of reference to represent the properties and locations of objects. Moreover, multisensory cues come with different degrees of reliability, and the reliability of a given cue can change in different contexts. The Bayesian framework, which we describe in this review, provides an optimal solution for combining cues that are not equally reliable. However, this approach does not address the issue of frames of reference. We show that this problem can be solved by creating cross-modal spatial links in basis function networks. Finally, we show how the basis function approach can be combined with the Bayesian framework to yield networks that can perform optimal multisensory combination. On the basis of this theory, we argue that multisensory integration is a dialogue between sensory modalities rather than the convergence of all sensory information onto a supra-modal area.
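
The core Bayesian computation the review builds on is reliability-weighted cue fusion. A minimal sketch for two independent Gaussian cues (the numbers are illustrative, not from the review):

```python
# Hedged sketch: Bayes-optimal fusion of two Gaussian cues, where each
# cue is weighted by its precision (inverse variance).
import numpy as np

def fuse(mu_v, var_v, mu_a, var_a):
    """Posterior mean and variance for two independent Gaussian cues."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # precision weight
    mu = w_v * mu_v + (1 - w_v) * mu_a
    var = 1 / (1 / var_v + 1 / var_a)             # <= either cue alone
    return mu, var

# Visual cue: 10 deg, reliable; auditory cue: 14 deg, noisier.
print(fuse(10.0, 1.0, 14.0, 4.0))   # -> (10.8, 0.8): pulled toward vision
```

Note that the fused variance is always smaller than either single-cue variance, which is the signature of optimal combination; the basis-function networks in the review implement this while also remapping cues across frames of reference.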

4.
We show how simulated robots evolved for the ability to display a context-dependent periodic behavior can spontaneously develop an internal model and rely on it to fulfill their task when sensory stimulation is temporarily unavailable. The analysis of some of the best evolved agents indicates that their internal model operates by anticipating sensory stimuli. More precisely, it anticipates functional properties of the next sensory state rather than the exact state that the sensors will assume. The characteristics of the anticipated states, and of the sensorimotor rules that determine how the agents react to the experienced states, nevertheless ensure that the agents produce very similar behaviour during normal and blind phases, in which sensory stimulation is available or self-generated, respectively. The agents’ internal models also ensure an effective transition during the phases in which their internal dynamics is decoupled from and re-coupled with the sensorimotor flow. Our results suggest that internal models might have arisen for behavioral reasons and subsequently been exapted for other cognitive functions. Moreover, they suggest that self-generated internal states need not match the corresponding sensory states in detail and might instead encode more abstract and motor-oriented information.

5.
A model is presented to study and quantify the contribution of all available sensory information to human standing, based on optimal estimation theory. In the model, delayed sensory information is integrated in such a way that a best estimate of body orientation is obtained. This approach agrees with current theory on the goal of human balance control. The model is not based on pure inverted-pendulum body dynamics, but rather on a three-link segment model of a standing human on a movable support base. In addition, the model is non-linear and explicitly addresses the problem of multisensory integration and neural time delays. A predictive element is included in the controller to compensate for time delays, which is necessary to maintain erect body orientation. Model results of sensory perturbations on total body sway closely resemble experimental results. Despite internal and external perturbations, the controller is able to stabilise the model of an inherently unstable standing human with neural time delays of 100 ms. It is concluded that the model is capable of studying and quantifying multisensory integration in human stance control. We aim to apply the model in (1) the design and development of prostheses and orthoses and (2) the diagnosis of neurological balance disorders.
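
The combination of delayed estimation and model-based prediction can be sketched in one dimension. This is a hedged toy, not the paper's three-link model: a Kalman filter processes 100 ms delayed angle readings, and a predictor rolls the estimate forward through the stored motor commands so that feedback acts on the current state. All gains and noise levels are invented.

```python
# Hedged sketch: stabilizing a 1-D inverted pendulum despite a 100 ms
# sensory delay, via a Kalman filter plus a model-based predictor.
import numpy as np

dt, d = 0.01, 10                              # 10 steps = 100 ms delay
A = np.array([[1.0, dt], [9.81 * dt, 1.0]])   # unstable linearized lean
B = np.array([0.0, dt])
H = np.array([[1.0, 0.0]])                    # sensor reads angle only
Q, Rn = np.eye(2) * 1e-6, np.array([[1e-4]])
K_fb = np.array([30.0, 8.0])                  # assumed stabilizing gains

rng = np.random.default_rng(1)
x = np.array([0.05, 0.0])                     # true state: 0.05 rad lean
xe, P = np.zeros(2), np.eye(2) * 0.01         # estimate of DELAYED state
x_hist = [x.copy()] * d                       # states awaiting readout
u_hist = [0.0] * (d + 1)                      # controls since that state
for k in range(1500):
    y = x_hist[0][0] + rng.normal(0.0, 0.01)  # delayed, noisy angle
    # Kalman predict/update on the delayed state:
    xe = A @ xe + B * u_hist[0]
    P = A @ P @ A.T + Q
    Kg = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    xe = xe + Kg.ravel() * (y - xe[0])
    P = (np.eye(2) - Kg @ H) @ P
    # Predictor: roll the estimate forward over the delay interval.
    xp = xe.copy()
    for u_old in u_hist[1:]:
        xp = A @ xp + B * u_old
    u = float(-K_fb @ xp)                     # act on the predicted state
    x = A @ x + B * u                         # advance the real body
    x_hist = x_hist[1:] + [x.copy()]
    u_hist = u_hist[1:] + [u]
print(f"final lean angle: {x[0]:+.5f} rad")   # ~0 when stabilized
```

Without the predictor loop (i.e., feeding back the delayed estimate directly), the same gains destabilize this system, which is the role the paper assigns to its predictive element.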

6.
Prior research has shown that representations of retinal surfaces can be learned from the intrinsic structure of visual sensory data in neural simulations, in robots, as well as by animals. Furthermore, representations of cochlear (frequency) surfaces can be learned from auditory data in neural simulations. Advances in hardware technology have allowed the development of artificial skin for robots, realising a new sensory modality which differs in important respects from vision and audition in its sensorimotor characteristics. This provides an opportunity to further investigate ordered sensory map formation using computational tools. We show that it is possible to learn representations of non-trivial tactile surfaces, which require topologically and geometrically involved three-dimensional embeddings. Our method automatically constructs a somatotopic map corresponding to the configuration of tactile sensors on a rigid body, using only intrinsic properties of the tactile data. The additional complexities involved in processing the tactile modality require the development of a novel multi-dimensional scaling algorithm. This algorithm, ANISOMAP, extends previous methods and outperforms them, producing high-quality reconstructions of tactile surfaces in both simulation and hardware tests. In addition, the reconstruction turns out to be robust to unanticipated hardware failure.
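
ANISOMAP itself is the paper's novel contribution and is not reproduced here; as a baseline sketch of the family of methods it extends, classical (Torgerson) multi-dimensional scaling recovers sensor coordinates from pairwise distances, the same kind of intrinsic input the paper starts from. The cylinder "skin" below is a toy assumption.

```python
# Hedged sketch: classical MDS embedding of tactile sensors from a
# pairwise-distance matrix (a baseline, not ANISOMAP).
import numpy as np

def classical_mds(D, dim=3):
    """Embed points given a matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    G = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]           # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy "skin": 200 sensors scattered on a cylinder.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
z = rng.uniform(0, 2, 200)
pts = np.c_[np.cos(theta), np.sin(theta), z]
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, dim=3)                   # recovered embedding
# Up to rotation/reflection, X matches pts; check pairwise distances:
err = np.abs(np.linalg.norm(X[:, None] - X[None, :], axis=-1) - D).mean()
print(f"mean pairwise-distance error: {err:.2e}")
```

The paper's harder setting replaces these exact Euclidean distances with noisy, anisotropic similarity statistics estimated from raw tactile data, which is what motivates the ANISOMAP extension.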

7.
The precise timing of action potentials of sensory neurons relative to the time of stimulus presentation carries substantial sensory information that is lost or degraded when these responses are summed over longer time windows. However, it is unclear whether and how downstream networks can access information in precise time-varying neural responses. Here, we review approaches to test the hypothesis that the activity of neural populations provides the temporal reference frames needed to decode temporal spike patterns. These approaches are based on comparing the single-trial stimulus discriminability obtained from neural codes defined with respect to network-intrinsic reference frames to the discriminability obtained from codes defined relative to the experimenter's computer clock. Application of this formalism to auditory, visual and somatosensory data shows that information carried by millisecond-scale spike times can be decoded robustly even with little or no independent external knowledge of stimulus time. In cortex, key components of such intrinsic temporal reference frames include dedicated neural populations that signal stimulus onset with reliable and precise latencies, and low-frequency oscillations that can serve as reference for partitioning extended neuronal responses into informative spike patterns.
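
The comparison the review describes can be sketched on toy data: decode which of two stimuli occurred from first-spike latencies, referencing spike times either to the raw trial clock (with stimulus time unknown) or to an intrinsic reference estimated from the population itself. All latency patterns and noise values below are invented for illustration.

```python
# Hedged sketch: discriminability of a latency code with an external
# versus a network-intrinsic temporal reference frame.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_trials = 30, 200

def latencies(stim):
    base = 0.05 + 0.002 * np.arange(n_cells)          # cell-specific lags
    shift = 0.02 * (np.arange(n_cells) % 2) * stim    # stimulus pattern
    return base + shift + rng.normal(0, 0.003, n_cells)

X_clock, X_intr, y = [], [], []
for stim in (0, 1):
    for _ in range(n_trials):
        t0 = rng.uniform(0.0, 0.05)       # unknown stimulus onset time
        lat = latencies(stim) + t0        # spike times on the trial clock
        ref = np.sort(lat)[2]             # intrinsic onset: 3rd spike
        X_clock.append(lat)
        X_intr.append(lat - ref)
        y.append(stim)

def nearest_centroid_acc(X):
    X, lab = np.array(X), np.array(y)
    idx = rng.permutation(len(lab))
    tr, te = idx[:len(lab) // 2], idx[len(lab) // 2:]
    c0 = X[tr][lab[tr] == 0].mean(0)
    c1 = X[tr][lab[tr] == 1].mean(0)
    pred = (np.linalg.norm(X[te] - c1, axis=1)
            < np.linalg.norm(X[te] - c0, axis=1)).astype(int)
    return (pred == lab[te]).mean()

print("trial-clock code    :", nearest_centroid_acc(X_clock))
print("intrinsic-frame code:", nearest_centroid_acc(X_intr))
```

When stimulus time is uncertain, re-referencing to the population's own onset recovers most of the lost discriminability, which is the qualitative result the reviewed studies report.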

8.
Most conventional robots maintain balance by controlling the location of the center of pressure, relying mainly on foot pressure sensors for information. By contrast, humans rely on sensory data from multiple sources, including proprioceptive, visual, and vestibular sources. Several models have been developed to explain how humans reconcile information from disparate sources to form a stable sense of balance. These models may be useful for developing robots that are able to maintain dynamic balance more readily using multiple sensory sources. Since these information sources may conflict, reliance by the nervous system on any one channel can lead to ambiguity in the system state. In humans, experiments that create conflicts between different sensory channels by moving the visual field or the support surface indicate that sensory information is adaptively reweighted. Unreliable information is rapidly down-weighted, then gradually up-weighted when it becomes valid again. Human balance can also be studied by building robots that model features of human bodies and testing them under similar experimental conditions. We implement a sensory reweighting model based on an adaptive Kalman filter in a bipedal robot, and subject it to sensory tests similar to those used on human subjects. Unlike other implementations of sensory reweighting in robots, our implementation includes vision, by using optic flow to calculate forward rotation using a camera (visual modality), as well as a three-axis gyro to represent the vestibular system (non-visual modality), and foot pressure sensors (proprioceptive modality). Our model estimates measurement noise in real time, which is then used to recompute the Kalman gain on each iteration, improving the ability of the robot to dynamically balance. We observe that we can duplicate many important features of postural sway in humans, including automatic sensory reweighting, constant phase with respect to amplitude, and a temporal asymmetry in the reweighting gains.
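
The adaptive-gain idea can be sketched with a scalar analogue (a hedged toy, not the robot's full controller): two sensors measure a drifting tilt, each sensor's noise is estimated online from its innovations, and the Kalman gain is recomputed every step, so a corrupted channel is automatically down-weighted.

```python
# Hedged sketch: adaptive Kalman reweighting of two sensory channels.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
tilt = np.cumsum(rng.normal(0.0, 0.01, n))    # slowly drifting true tilt
R = np.array([0.05, 0.05])                    # running noise estimates
x, P, q = 0.0, 1.0, 1e-4                      # state, variance, drift var
gains = np.zeros((n, 2))
for k in range(n):
    # Sensor 1 stays clean; sensor 2 becomes unreliable halfway through.
    sig = [0.05, 0.05 if k < n // 2 else 0.5]
    y = tilt[k] + rng.normal(0.0, sig)
    P += q                                    # predict
    for i in (0, 1):                          # sequential per-sensor update
        innov = y[i] - x
        # innovation variance is P + R[i]; track R[i] with a slow EMA
        R[i] = 0.99 * R[i] + 0.01 * max(innov ** 2 - P, 1e-6)
        K = P / (P + R[i])
        x += K * innov
        P *= 1.0 - K
        gains[k, i] = K
print("mean gains, 1st half:", gains[: n // 2].mean(0).round(3))
print("mean gains, 2nd half:", gains[n // 2:].mean(0).round(3))
```

The printed gains show the reweighting signature the paper targets: after the corruption, sensor 2's gain collapses while sensor 1's gain rises to compensate.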

9.
Biological organisms continuously select and sample information used by their neural structures for perception and action, and for creating coherent cognitive states guiding their autonomous behavior. Information processing, however, is not solely an internal function of the nervous system. Here we show, instead, how sensorimotor interaction and body morphology can induce statistical regularities and information structure in sensory inputs and within the neural control architecture, and how the flow of information between sensors, neural units, and effectors is actively shaped by the interaction with the environment. We analyze sensory and motor data collected from real and simulated robots and reveal the presence of information structure and directed information flow induced by dynamically coupled sensorimotor activity, including effects of motor outputs on sensory inputs. We find that information structure and information flow in sensorimotor networks (a) are spatially and temporally specific; (b) can be affected by learning; and (c) can be affected by changes in body morphology. Our results suggest a fundamental link between physical embeddedness and information, highlighting the effects of embodied interactions on internal (neural) information processing, and illuminating the role of various system components on the generation of behavior.
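
Directed information flow of this kind is commonly quantified with transfer entropy. A hedged sketch on toy binarized data (the sensorimotor loop below, where the sensor reflects the motor command one step later, is an invented example):

```python
# Hedged sketch: transfer entropy TE(src -> dst) for binary sequences,
# history length 1, estimated with plug-in histogram probabilities.
import numpy as np

def transfer_entropy(src, dst):
    """Directed information from src to dst, in bits."""
    s_past, d_past, d_now = src[:-1], dst[:-1], dst[1:]
    te = 0.0
    for a in (0, 1):            # dst(t)
        for b in (0, 1):        # dst(t-1)
            for c in (0, 1):    # src(t-1)
                p_abc = ((d_now == a) & (d_past == b) & (s_past == c)).mean()
                if p_abc == 0:
                    continue
                p_bc = ((d_past == b) & (s_past == c)).mean()
                p_ab = ((d_now == a) & (d_past == b)).mean()
                p_b = (d_past == b).mean()
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# Toy sensorimotor loop: sensor echoes the motor command one step later.
rng = np.random.default_rng(5)
motor = (rng.random(20000) < 0.5).astype(int)
sensor = np.roll(motor, 1)
sensor[rng.random(20000) < 0.1] ^= 1          # 10% sensory noise
print("TE motor->sensor:", round(transfer_entropy(motor, sensor), 3))
print("TE sensor->motor:", round(transfer_entropy(sensor, motor), 3))
```

The asymmetry of the two printed values (large in the motor-to-sensor direction, near zero in the reverse) is the kind of directed structure the paper measures in real and simulated robots.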

10.
Neurons in sensory systems can represent information not only by their firing rate, but also by the precise timing of individual spikes. For example, certain retinal ganglion cells, first identified in the salamander, encode the spatial structure of a new image by their first-spike latencies. Here we explore how this temporal code can be used by downstream neural circuits for computing complex features of the image that are not available from the signals of individual ganglion cells. To this end, we feed the experimentally observed spike trains from a population of retinal ganglion cells to an integrate-and-fire model of post-synaptic integration. The synaptic weights of this integration are tuned according to the recently introduced tempotron learning rule. We find that this model neuron can perform complex visual detection tasks in a single synaptic stage that would require multiple stages for neurons operating instead on neural spike counts. Furthermore, the model computes rapidly, using only a single spike per afferent, and can signal its decision in turn by just a single spike. Extending these analyses to large ensembles of simulated retinal signals, we show that the model can detect the orientation of a visual pattern independent of its phase, an operation thought to be one of the primitives in early visual processing. We analyze how these computations work and compare the performance of this model to other schemes for reading out spike-timing information. These results demonstrate that the retina formats spatial information into temporal spike sequences in a way that favors computation in the time domain. Moreover, complex image analysis can be achieved already by a simple integrate-and-fire model neuron, emphasizing the power and plausibility of rapid neural computing with spike times.
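
A minimal tempotron (after Gütig and Sompolinsky) can be sketched as follows; this is a hedged toy, not the paper's retinal pipeline: a leaky integrator sums weighted PSP kernels from afferent spike times and fires if its peak voltage crosses threshold, and errors nudge the weights evaluated at the time of the voltage peak. The latency patterns and parameters are invented.

```python
# Hedged sketch: tempotron learning on one-spike-per-afferent patterns.
import numpy as np

rng = np.random.default_rng(6)
n_aff, T, dt = 50, 0.5, 0.001
t = np.arange(0, T, dt)
tau_m, tau_s = 0.015, 0.00375

def psp(t_rel):
    """Double-exponential postsynaptic kernel, zero before the spike."""
    k = np.exp(-t_rel / tau_m) - np.exp(-t_rel / tau_s)
    return np.where(t_rel >= 0, k, 0.0)

def voltage(pattern, w):
    """pattern: one spike time per afferent (a common simplification)."""
    return (w[:, None] * psp(t[None, :] - pattern[:, None])).sum(0)

# Two classes of latency patterns, 20 jittered examples each.
proto = [rng.uniform(0, T, n_aff) for _ in range(2)]
data = [(np.clip(proto[c] + rng.normal(0, 0.01, n_aff), 0, T), c)
        for c in (0, 1) for _ in range(20)]

w, thresh, lr = rng.normal(0, 0.01, n_aff), 1.0, 0.1
for epoch in range(100):
    errors = 0
    for pattern, label in data:
        v = voltage(pattern, w)
        fired = v.max() > thresh
        if fired != bool(label):
            errors += 1
            t_peak = t[v.argmax()]          # update at the voltage peak
            dw = psp(t_peak - pattern)      # each afferent's PSP there
            w += lr * dw if label else -lr * dw
    if errors == 0:
        break
print(f"epochs run: {epoch + 1}, final training errors: {errors}")
```

The key property the paper exploits is visible in the update rule: the weight change depends on *when* each afferent spiked relative to the voltage peak, so the neuron learns spatio-temporal patterns that a spike-count read-out cannot separate in one stage.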

11.
Dyslexic children, besides difficulties in mastering literacy, also show poor postural control, which might be related to how sensory cues coming from different sensory channels are integrated into proper motor activity. Therefore, the aim of this study was to examine the relationship between sensory information and body sway in dyslexic children, with visual and somatosensory information manipulated independently and concurrently. Thirty dyslexic and 30 non-dyslexic children were asked to stand as still as possible inside a moving room, either with eyes closed or open and either lightly touching a moveable surface or not, for 60 seconds under five experimental conditions: (1) no vision and no touch; (2) moving room; (3) moving bar; (4) moving room and stationary touch; and (5) stationary room and moving bar. Body sway magnitude and the relationship between room/bar movement and body sway were examined. Results showed that dyslexic children swayed more than non-dyslexic children in all sensory conditions. Moreover, in trials with conflicting vision and touch manipulation, dyslexic children's sway was less coherent with the stimulus manipulation than that of non-dyslexic children. Finally, dyslexic children showed higher body sway variability and applied higher force while touching the bar compared to non-dyslexic children. Based upon these results, we suggest that dyslexic children are able to use visual and somatosensory information to control their posture and use the same underlying neural control processes as non-dyslexic children. However, they perform more poorly and more variably when relating visual and somatosensory information to motor action, even during a task that does not require active cognitive and motor involvement. Further, in sensory conflict conditions, dyslexic children showed less coherent and more variable body sway. These results suggest that dyslexic children have difficulties in multisensory integration, perhaps because they struggle to integrate sensory cues coming from multiple sources.

12.
Electrophysiological recordings in the mammalian olfactory bulb (OB) have aimed at deciphering the neural rules supporting the neural representation of odors. In spite of a fairly large body of available data, no clear picture has yet emerged for the mammalian OB. This paper summarizes some important findings and underlines the fact that differences in experimental conditions still represent a major limitation to the emergence of a synthetic view. More specifically, we examine to what extent the absence or presence of anaesthetic influences OB neuronal responsiveness. In addition, we will see that recordings of single-cell activity and of population activity provide quite different pictures. As a result, some experimental approaches provide data emphasizing the sensory properties of OB neurons, while others emphasize their capability to integrate incoming sensory information with attention, motivation and previous experience.

13.
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
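
The structure of an SDME-style model can be sketched for a small population where codeword probabilities are computable by exact enumeration (feasible only up to roughly 15 cells; the paper's 100-cell case requires approximate inference). The filters and couplings below are toy values, not fitted parameters.

```python
# Hedged sketch: a stimulus-dependent pairwise maximum entropy model,
# P(sigma | s) ∝ exp(h(s)·sigma + sigma·J·sigma), evaluated exactly.
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
N, L = 8, 20                                 # cells, stimulus dimension
F = rng.normal(0, 1 / np.sqrt(L), (N, L))    # linear stimulus filters
J = np.triu(rng.normal(0, 0.1, (N, N)), 1)   # pairwise couplings

codewords = np.array(list(product([0, 1], repeat=N)))   # all 2^N patterns

def p_codewords(stim):
    """Conditional codeword distribution given one stimulus frame."""
    h = F @ stim - 1.0                        # stimulus-dependent fields
    E = codewords @ h + np.einsum('ki,ij,kj->k', codewords, J, codewords)
    p = np.exp(E)
    return p / p.sum()

stim = rng.normal(0, 1, L)
p = p_codewords(stim)
# Average surprise = entropy of the codeword distribution, in bits:
H = -(p * np.log2(p + 1e-300)).sum()
print(f"codeword entropy given this stimulus: {H:.2f} bits")
```

Setting J to zero recovers the uncoupled (conditionally independent) model the paper compares against; the couplings are exactly what lets the SDME model capture the correlated codeword statistics.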

14.
The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent from a specific sensory modality or sensory experience. In the present study, we wished to determine to what extent this distributed and ‘more abstract’ representation of action is truly supramodal, i.e. shares a common coding across sensory modalities. To this aim, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-made actions. Multivoxel pattern analysis (MVPA)-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as ‘action’ the pattern of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with an MVPA-based classifier in both sighted and blind individuals, independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
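
The cross-modal decoding logic can be sketched on synthetic "voxel" data; this is a hedged illustration of the analysis structure only (real analyses add cross-validation, searchlights, and permutation testing), and the shared "supramodal signature" is an assumption built into the toy data.

```python
# Hedged sketch: train a linear classifier on visual trials, test on
# auditory trials; above-chance transfer indicates shared coding.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
n_vox, n_trials = 200, 100
# Assumed supramodal action signature shared across modalities:
signature = rng.normal(0, 1, n_vox)

def trials(is_action, modality_offset):
    base = signature * (1.0 if is_action else -1.0)
    return base + modality_offset + rng.normal(0, 3.0, (n_trials, n_vox))

vis_off, aud_off = rng.normal(0, 1, n_vox), rng.normal(0, 1, n_vox)
X_train = np.vstack([trials(True, vis_off), trials(False, vis_off)])
X_test = np.vstack([trials(True, aud_off), trials(False, aud_off)])
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]

clf = LinearSVC(C=0.1, max_iter=5000).fit(X_train, y)
print("train-visual / test-auditory accuracy:", clf.score(X_test, y))
```

The transfer accuracy stays high despite the modality-specific offsets because the discriminative direction is the shared signature, which is the formal counterpart of the paper's supramodality claim.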

15.
In the field of the neurobiology of learning, significant emphasis has been placed on understanding neural plasticity within a single structure (or synapse type) as it relates to a particular type of learning mediated by a particular brain area. To appreciate fully the breadth of the plasticity responsible for complex learning phenomena, it is imperative that we also examine the neural mechanisms of the behavioral instantiation of learned information, how motivational systems interact, and how past memories affect the learning process. To address this issue, we describe a model of complex learning (rodent adaptive navigation) that could be used to study dynamically interactive neural systems. Adaptive navigation depends on the efficient integration of external and internal sensory information with motivational systems to arrive at the most effective cognitive and/or behavioral strategies. We present evidence consistent with the view that during navigation: 1) the limbic thalamus and limbic cortex are primarily responsible for the integration of current and expected sensory information, 2) the hippocampal-septal-hypothalamic system provides a mechanism whereby motivational perspectives bias sensory processing, and 3) the amygdala-prefrontal-striatal circuit allows animals to evaluate the expected reinforcement consequences of context-dependent behavioral responses. Although much remains to be determined regarding the nature of the interactions among neural systems, new insights have emerged regarding the mechanisms that underlie flexible and adaptive behavioral responses.

16.
Animals rely on sensory feedback to generate accurate, reliable movements. In many flying insects, strain-sensitive neurons on the wings provide rapid feedback that is critical for stable flight control. While the impacts of wing structure on aerodynamic performance have been widely studied, the impacts of wing structure on sensing are largely unexplored. In this paper, we show how the structural properties of the wing and encoding by mechanosensory neurons interact to jointly determine optimal sensing strategies and performance. Specifically, we examine how neural sensors can be placed effectively on a flapping wing to detect body rotation about different axes, using a computational wing model with varying flexural stiffness. A small set of mechanosensors, conveying strain information at key locations with a single action potential per wingbeat, enables accurate detection of body rotation. Optimal sensor locations are concentrated at either the wing base or the wing tip, and they transition sharply as a function of both wing stiffness and neural threshold. Moreover, the sensing strategy and performance are robust to both external disturbances and sensor loss. Typically, only five sensors are needed to achieve near-peak accuracy, with a single sensor often providing accuracy well above chance. Our results show that small-amplitude, dynamic signals can be extracted efficiently with spatially and temporally sparse sensors in the context of flight. The demonstrated interaction of wing structure and neural encoding properties points to the importance of understanding each in the context of their joint evolution.
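The sparse-sensor-placement idea can be sketched with a greedy forward selection; this is a hedged toy, not the paper's wing model: strain profiles for two body rotations are invented, and candidate sensor locations are added one at a time by whichever most improves a nearest-centroid decoder.

```python
# Hedged sketch: greedy selection of strain-sensor locations that best
# discriminate two rotation conditions on toy wing strains.
import numpy as np

rng = np.random.default_rng(9)
n_loc, n_trials = 20, 300
xw = np.linspace(0, 1, n_loc)                 # wing base (0) to tip (1)
# Assumed strain profiles: rotation B adds a tip-loaded component.
profile_a = np.sin(np.pi * xw)
profile_b = profile_a + 0.4 * xw ** 2
S_a = profile_a + rng.normal(0, 0.3, (n_trials, n_loc))
S_b = profile_b + rng.normal(0, 0.3, (n_trials, n_loc))
X = np.vstack([S_a, S_b])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

def accuracy(feats):
    tr = rng.random(len(y)) < 0.5             # random train/test split
    Xtr, Xte = X[np.ix_(tr, feats)], X[np.ix_(~tr, feats)]
    c0 = Xtr[y[tr] == 0].mean(0)
    c1 = Xtr[y[tr] == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return (pred == y[~tr].astype(bool)).mean()

chosen = []
for _ in range(5):                            # select up to five sensors
    best = max(set(range(n_loc)) - set(chosen),
               key=lambda j: accuracy(chosen + [j]))
    chosen.append(best)
    print(f"sensors {chosen} -> accuracy {accuracy(chosen):.2f}")
```

In this toy the informative locations cluster near the tip, where the two strain profiles differ most, loosely mirroring the base-or-tip concentration the paper reports.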

17.
In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.

18.
Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas can also have persistent representations of stimuli, which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by a much smaller steady-state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.
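
A toy caricature of the proposed mechanism, not the paper's network, can be simulated in a few lines: negative-derivative feedback is folded into an enlarged effective time constant (tau + beta), and short-term depression weakens that feedback while input is strong, so responses rise quickly but decay slowly. All parameters are invented purely to illustrate the asymmetry.

```python
# Hedged sketch: derivative feedback with activity-dependent depression
# yields a fast-onset, slow-decay response.
import numpy as np

dt = 0.001
t = np.arange(0.0, 4.0, dt)
I = np.where((t > 0.5) & (t < 1.5), 1.0, 0.0)  # 1 s stimulus

tau, beta_max = 0.02, 1.0     # 20 ms intrinsic; feedback adds up to 1 s
x, dres = 0.0, 1.0            # activity; depression resource in [0, 1]
xs = np.zeros_like(t)
for k in range(len(t)):
    beta = beta_max * dres                     # depressed deriv. feedback
    # tau dx/dt = -x - beta dx/dt + I  =>  (tau + beta) dx/dt = -x + I
    x += dt * (-x + I[k]) / (tau + beta)
    # depression: strong drive depletes the resource, which then recovers
    dres += dt * ((1.0 - dres) / 0.1 - 100.0 * I[k] * dres)
    dres = min(max(dres, 0.0), 1.0)
    xs[k] = x

on, off = int(0.5 / dt), int(1.5 / dt)
rise = t[on + np.argmax(xs[on:] > 0.5)] - 0.5
decay = t[off + np.argmax(xs[off:] < 0.5 * xs[off])] - 1.5
print(f"rise to half-max: {rise*1000:.0f} ms; "
      f"decay to half: {decay*1000:.0f} ms")
```

During the stimulus the feedback is depressed and the effective time constant is short (fast rise); after offset the resource recovers, the derivative feedback re-engages, and decay is an order of magnitude slower, without any large input transient.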

19.
Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations.
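
The "programmable dynamical object" at the artificial end of this loop can be sketched on its own (the neural decoding and stimulation stages are the experiment's hardware and are not modeled here): a damped point mass in a force field whose equilibrium is chosen by the experimenter, so that trajectories from any start converge on that point.

```python
# Hedged sketch: a programmable force field generating a family of
# trajectories that converge on a selected equilibrium point.
import numpy as np

def simulate(start, x_eq, k=4.0, c=2.0, dt=0.01, steps=1000):
    x, v = np.array(start, float), np.zeros(2)
    path = [x.copy()]
    for _ in range(steps):
        f = -k * (x - x_eq) - c * v      # programmable force field
        v += dt * f
        x += dt * v
        path.append(x.copy())
    return np.array(path)

x_eq = np.array([0.5, 0.5])
for start in ([0, 0], [1, 0], [0, 1], [1, 1]):
    end = simulate(start, x_eq)[-1]
    print(start, "->", end.round(3))     # all ends near (0.5, 0.5)
```

In the actual interface the force on the object is decoded from motor cortical activity and the object's state is fed back as somatosensory microstimulation; the convergent field above is the target behavior that closed loop is shaped to produce.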

20.
Capturing nature’s statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. We show that a population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl’s midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
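
The population vector read-out at the heart of this proposal can be sketched with toy Gaussian tuning (not the owl data): each neuron's firing rate weights its preferred azimuth, and the normalized weighted sum is the decoded location. The paper's Bayesian prediction then corresponds to tuning curves whose centers shift with target velocity; only the static read-out is shown here.

```python
# Hedged sketch: population vector decoding of source azimuth from
# Poisson spike counts with Gaussian tuning curves (toy parameters).
import numpy as np

rng = np.random.default_rng(10)
prefs = np.linspace(-60, 60, 25)             # preferred azimuths (deg)
sigma, r_max, dt = 15.0, 50.0, 0.05          # tuning width, peak rate, window

def rates(azimuth):
    return r_max * np.exp(-0.5 * ((azimuth - prefs) / sigma) ** 2)

def population_vector(counts):
    # Center-of-mass read-out over preferred azimuths; for a moving
    # source, shifting each pref by the expected displacement would
    # turn this into the predictive decoder described in the paper.
    return (counts * prefs).sum() / counts.sum()

true_az = 20.0
counts = rng.poisson(rates(true_az) * dt, size=(1000, prefs.size))
est = np.array([population_vector(c) for c in counts if c.sum() > 0])
print(f"true {true_az} deg, decoded {est.mean():.1f} +/- {est.std():.1f} deg")
```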
