Similar Documents
1.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
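The reliability-proportional weighting described above can be sketched as inverse-variance combination of two Gaussian estimates. This is a minimal illustration under standard cue-combination assumptions; the function name and numbers are invented, not taken from the study:

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Combine two Gaussian estimates by inverse-variance (reliability) weighting."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)  # never larger than either cue alone
    return mu, var

# A low-coherence visual cue (high variance) paired with a reliable vestibular cue:
mu, var = integrate_cues(mu_vis=1.2, var_vis=0.04, mu_vest=1.0, var_vest=0.01)
# The combined estimate sits closer to the more reliable vestibular cue.
```

Lowering visual coherence raises `var_vis`, which automatically shifts weight toward the vestibular cue, mirroring the trial-to-trial reweighting reported in the abstract.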

2.
3.
In every motor task, our brain must handle external forces acting on the body. For example, riding a bike on cobblestones or skating on irregular surface requires us to appropriately respond to external perturbations. In these situations, motor predictions cannot help anticipate the motion of the body induced by external factors, and direct use of delayed sensory feedback will tend to generate instability. Here, we show that to solve this problem the motor system uses a rapid sensory prediction to correct the estimated state of the limb. We used a postural task with mechanical perturbations to address whether sensory predictions were engaged in upper-limb corrective movements. Subjects altered their initial motor response in ∼60 ms, depending on the expected perturbation profile, suggesting the use of an internal model, or prior, in this corrective process. Further, we found trial-to-trial changes in corrective responses indicating a rapid update of these perturbation priors. We used a computational model based on Kalman filtering to show that the response modulation was compatible with a rapid correction of the estimated state engaged in the feedback response. Such a process may allow us to handle external disturbances encountered in virtually every physical activity, which is likely an important feature of skilled motor behaviour.
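The predict-then-correct cycle at the heart of Kalman filtering can be sketched in one dimension. The dynamics and noise values below are illustrative placeholders, not parameters from the study:

```python
def kalman_step(x_est, p_est, u, z, a=1.0, b=1.0, q=0.01, r=0.1):
    """One predict/correct cycle of a scalar Kalman filter:
    predict the next state from motor command u, then correct the
    prediction with sensory measurement z, weighted by the Kalman gain."""
    x_pred = a * x_est + b * u          # internal-model prediction
    p_pred = a * p_est * a + q          # predicted uncertainty
    k = p_pred / (p_pred + r)           # Kalman gain: how much to trust the sensors
    x_new = x_pred + k * (z - x_pred)   # rapid correction of the estimated state
    p_new = (1 - k) * p_pred
    return x_new, p_new

# An unexpected perturbation shows up as a measurement far from the prediction:
x, p = kalman_step(x_est=0.0, p_est=1.0, u=0.0, z=1.0)
```

The correction term `k * (z - x_pred)` is the mechanism the abstract invokes: sensory input adjusts the estimated limb state rather than driving the response directly, and updating `q` or the prior from trial to trial changes how strongly subsequent perturbations are corrected.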

4.
The hypothesis is proposed that the central dynamics of the action–perception cycle has six steps: emergence from an existing macroscopic brain state of a pattern that predicts a future goal state; selection of a mesoscopic frame for action control; execution of a limb trajectory by microscopic spike activity; modification of microscopic cortical spike activity by sensory inputs; construction of mesoscopic perceptual patterns; and integration of a new macroscopic brain state. The basis is the circular causality between microscopic entities (neurons) and the mesoscopic and macroscopic entities (populations) self-organized by axosynaptic interactions. Self-organization of neural activity is bidirectional in all cortices. Upwardly the organization of mesoscopic percepts from microscopic spike input predominates in primary sensory areas. Downwardly the organization of spike outputs that direct specific limb movements is by mesoscopic fields constituting plans to achieve predicted goals. The mesoscopic fields in sensory and motor cortices emerge as frames within macroscopic activity. Part 1 describes the action–perception cycle and its derivative reflex arc qualitatively. Part 2 describes the perceptual limb of the arc from microscopic multiple spike activity (MSA) to mesoscopic wave packets, and from these to macroscopic EEG and global ECoG fields that express experience-dependent knowledge in successive states. These macroscopic states are conceived to embed and control mesoscopic frames in premotor and motor cortices that are observed in local ECoG and LFP of frontoparietal areas. The fields sampled by ECoG and LFP are conceived as local patterns of neural activity in which trajectories of MSA emerge that control limb movements. Mesoscopic frames are located by use of the analytic signal from the Hilbert transform after band pass filtering. The state variables in frames are measured to construct feature vectors by which to describe and classify frame patterns.
Evidence is cited to justify use of linear analysis. The aim of the review is to enable researchers to conceive and identify goal-oriented states in brain activity for use as commands, in order to relegate the details of execution to adaptive control devices outside the brain. http://sulcus.berkeley.edu
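The frame-locating step (band-pass filtering followed by the analytic signal from the Hilbert transform) can be sketched with a frequency-domain implementation. The band edges and sampling rate below are placeholders; a real analysis would use the bands and windowing of the original studies:

```python
import numpy as np

def analytic_signal(x, fs, low, high):
    """Band-pass filter in the frequency domain, then form the analytic
    signal (equivalent to a Hilbert transform of the filtered signal),
    yielding instantaneous amplitude and phase."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    spec = np.fft.fft(x)
    # Keep only positive frequencies inside the band, doubled (analytic signal):
    mask = (freqs >= low) & (freqs <= high)
    z = np.fft.ifft(np.where(mask, 2.0 * spec, 0.0))
    return np.abs(z), np.unwrap(np.angle(z))

# Example: a 10 Hz sinusoid sampled at 500 Hz for 2 s
fs = 500
t = np.arange(0, 2, 1 / fs)
amp, phase = analytic_signal(np.sin(2 * np.pi * 10 * t), fs, low=8, high=12)
```

The instantaneous amplitude and phase returned here are the state variables from which feature vectors describing frame patterns would be built.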

5.
An information-based technique is described for applications in mechanical property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force–displacement measurements that are appropriately applied during training to establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model.

6.
Orthopteroid insects (cockroaches, crickets, locusts and related species) allow examination of active sensory processing in a comparative framework. Some orthopteroids possess long, mobile antennae endowed with many chemo- and mechanoreceptors. When the antennae are touched, an animal's response depends upon the identity of the stimulus. For example, contact with a predator may lead to escape, but contact with a conspecific usually does not. Active touch of an approaching object influences the likelihood that a discrimination of identity will be made. Using cockroaches, we have identified specific descending mechanosensory interneurons that trigger antennal-mediated escape. Crucial sensory input to these cells comes from chordotonal organs within the antennal base. However, information from other receptors on the base or the long antennal flagellum allows active touch to modulate escape probability based on stimulus identity. This is conveyed, at least to some extent, by textural information. Guidance of the antennae in active exploration depends on visual information. Some of the visual interneurons and the motor neurons necessary for visuomotor control have been identified. Comparisons across Orthoptera suggest an evolutionary model where subtle changes in the architecture of interneurons, and of sensorimotor control loops, may explain differing levels of vision-touch interaction in the active guidance of behaviour.

7.
In a recent experiment for determining the mechanical response of brain in vivo, a probe, inserted through scalp, skull and dura, is placed in contact with and normal to the brain, given a prescribed motion, and the time variation of the corresponding force is measured. In the corresponding continuum mechanical model, brain is idealized as a linear isotropic viscoelastic solid constrained by a rigid skull. At the mating surface, the shear stress and normal displacement vanish everywhere except under the probe, which exerts a local radial displacement. This model introduces an effective viscoelastic modulus in shear, which is unknown, and one in dilatation, which is considered known from other sources. Part II of this study is concerned with stress relaxation induced by a small step displacement of the probe. From the solution of the corresponding quasi-static boundary value problem, a nonlinear Volterra integral equation is established from which the shear stress relaxation function can be solved in terms of measured probe displacement and force. A numerical method of solution is developed.
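The final numerical step can be illustrated with a generic Volterra integral equation of the second kind, f(t) = g(t) + ∫₀ᵗ K(t,s) f(s) ds, solved by a marching trapezoidal rule. This is a textbook sketch; the paper's actual kernel and nonlinearity are not reproduced here:

```python
def solve_volterra(g, K, t_max, n):
    """March a Volterra equation of the second kind forward in time with the
    trapezoidal rule: each f_i depends only on the already-computed f_0..f_{i-1}."""
    h = t_max / n
    t = [i * h for i in range(n + 1)]
    f = [g(t[0])]
    for i in range(1, n + 1):
        # Trapezoid over the known values f_0..f_{i-1}:
        s = 0.5 * K(t[i], t[0]) * f[0]
        for j in range(1, i):
            s += K(t[i], t[j]) * f[j]
        # Solve f_i = g_i + h*(s + 0.5*K(t_i, t_i)*f_i) for f_i:
        f.append((g(t[i]) + h * s) / (1 - 0.5 * h * K(t[i], t[i])))
    return t, f

# Check against a known solution: f(t) = 1 + int_0^t f(s) ds has f(t) = e^t.
t, f = solve_volterra(g=lambda t: 1.0, K=lambda t, s: 1.0, t_max=1.0, n=200)
```

The marching structure is what makes Volterra equations tractable: the unknown at each time step appears only in the last trapezoid term, so no global system needs to be solved.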

8.
Adaptability of reaching movements depends on a computation in the brain that transforms sensory cues, such as those that indicate the position and velocity of the arm, into motor commands. Theoretical consideration shows that the encoding properties of neural elements implementing this transformation dictate how errors should generalize from one limb position and velocity to another. To estimate how sensory cues are encoded by these neural elements, we designed experiments that quantified spatial generalization in environments where forces depended on both position and velocity of the limb. The patterns of error generalization suggest that the neural elements that compute the transformation encode limb position and velocity in intrinsic coordinates via a gain-field; i.e., the elements have directionally dependent tuning that is modulated monotonically with limb position. The gain-field encoding makes the counterintuitive prediction of hypergeneralization: there should be growing extrapolation beyond the trained workspace. Furthermore, nonmonotonic force patterns should be more difficult to learn than monotonic ones. We confirmed these predictions experimentally.
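A gain-field unit can be sketched as directional tuning multiplied by a monotonic function of limb position. All tuning parameters below are invented for illustration:

```python
import math

def gain_field_rate(vel_angle, pos, pref_angle=0.0, gain_slope=0.5, baseline=1.0):
    """Response of a hypothetical gain-field unit: cosine tuning to movement
    direction, scaled monotonically by limb position (the 'gain field')."""
    directional = math.cos(vel_angle - pref_angle)  # direction-dependent tuning
    gain = baseline + gain_slope * pos              # monotonic position gain
    return gain * directional

# Moving the limb to a larger position scales the whole tuning curve:
r_near = gain_field_rate(vel_angle=0.0, pos=0.0)
r_far = gain_field_rate(vel_angle=0.0, pos=2.0)
```

Because the position gain is monotonic and unbounded, the unit's output keeps growing outside the trained workspace, which is exactly the hypergeneralization the abstract predicts.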

9.
E. Evans, K. Ritchie, R. Merkel. Biophysical Journal 1995, 68(6):2580–2587
Adhesion and cytoskeletal structure are intimately related in biological cell function. Even with the vast amount of biological and biochemical data that exist, little is known at the molecular level about physical mechanisms involved in attachments between cells or about consequences of adhesion on the material structure. To expose physical actions at soft biological interfaces, we have combined an ultrasensitive transducer and reflection interference microscopy to image submicroscopic displacements of probe contact with a test surface under minuscule forces. The transducer is a cell-size membrane capsule pressurized by micropipette suction where displacement normal to the membrane under tension is proportional to the applied force. Pressure control of the tension tunes the sensitivity in operation over four orders of magnitude through a range of force from 0.01 pN up to the strength of covalent bonds (approximately 1000 pN)! As the surface probe, a microscopic bead is biochemically glued to the transducer with a densely-bound ligand that is indifferent to the test surface. Movements of the probe under applied force are resolved down to an accuracy of approximately 5 nm from the interference fringe pattern created by light reflected from the bead. With this arrangement, we show that local mechanical compliance of a cell surface can be measured at a displacement resolution set by structural fluctuations. When desired, a second ligand is bound sparsely to the probe for focal adhesion to specific receptors in the test surface. We demonstrate that monitoring fluctuations in probe position at low transducer stiffness enhances detection of molecular adhesion and activation of cytoskeletal structure.

10.
Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations.
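The notion of a programmable external object whose trajectories converge on an equilibrium point can be sketched as a damped point mass attracted to a target. The dynamics and parameters are invented for illustration, not taken from the interface described above:

```python
def simulate(x0, v0, x_eq, k=4.0, c=2.0, dt=0.01, steps=2000):
    """Euler integration of a damped spring toward an equilibrium point:
    a = -k*(x - x_eq) - c*v, with unit mass."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * (x - x_eq) - c * v
        v += a * dt
        x += v * dt
    return x

# Trajectories from different starting points all settle near the equilibrium:
finals = [simulate(x0, 0.0, x_eq=1.0) for x0 in (-2.0, 0.0, 3.0)]
```

This is the key property of specifying a force field rather than a single trajectory: one choice of `k`, `c`, and `x_eq` prescribes an entire family of motions, including the reaction to a perturbation that displaces the state mid-movement.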

11.
Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or “entrained”) to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research—including neural entrainment and tACS—and reveal endogenous neural oscillations as a key underlying principle for speech perception.

Just as a child on a swing continues to move after the pushing stops, this study reveals similar entrained rhythmic echoes in brain activity after hearing speech and electrical brain stimulation; perturbation with tACS shows that these brain oscillations help listeners to understand speech.

12.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.
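The geometric rule in the opening sentences can be made concrete for a uniform rectangular block: it topples once tilted past the angle at which its centre of mass passes over the pivoting support edge. This is a plain physics sketch, independent of the experiment:

```python
import math

def critical_angle_deg(base_width, height):
    """Tilt angle at which a uniform rectangular block's centre of mass lies
    directly above the support edge: atan((base_width/2) / (height/2))."""
    return math.degrees(math.atan((base_width / 2) / (height / 2)))

# A tall, narrow object topples at a smaller tilt than a squat one:
tall = critical_angle_deg(base_width=1.0, height=4.0)   # roughly 14 degrees
squat = critical_angle_deg(base_width=1.0, height=1.0)  # 45 degrees
```

The experiment's perceptual version of this quantity is the critical angle at which an object looks equally likely to fall or right itself; the bias reported below is a shift of that perceived angle away from the geometric value.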

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

13.
When saccading to a silent clock, observers sometimes think that the second hand has paused momentarily. This effect has been termed chronostasis and occurs because observers overestimate the time that they have seen the object of an eye movement. They seem to extrapolate its appearance back to just prior to the onset of the saccade rather than the time that it is actually fixated on the retina. Here, we describe a similar effect following an arm movement: subjects overestimate the time that their hand has been in contact with a newly touched object. The illusion's magnitude suggests backward extrapolation of tactile perception to a moment during the preceding reach. The illusion does not occur if the arm movement triggers a change in a continuously visible visual target: the time of onset of the change is estimated correctly. We hypothesize that chronostasis-like effects occur when movement produces uncertainty about the onset of a sensory event. Under these circumstances, the time at which neurons with receptive fields that shift in the temporal vicinity of a movement change their mappings may be used as a time marker for the onset of perceptual properties that are only established later.

14.
The limb used to perform seven common activities was recorded during weekly observations of an infant orang-utan. For five of these behaviors, preferences were found that remained consistent in their direction, although there were week to week fluctuations in magnitude. Most notably, a right hand preference was found for nonfood reaching and a right hindlimb preference appeared for initiating locomotion. Although initially, food reaching was predominantly with the right hand, a shift toward preferential left hand use occurred during the final weeks of the study. Additionally, a left hand preference appeared when the infant touched either its body or head.

15.

Background

It has been reported that participants judge an object to be closer after a stick has been used to touch it than after touching it with the hand. In this study we try to find out why this is so.

Methodology

We showed six participants a cylindrical object on a table. On separate trials (randomly intermixed) participants either estimated verbally how far the object was from their body or they touched a remembered location. Touching was done either with the hand or with a stick (in separate blocks). In three different sessions, participants touched either the object location or the location halfway to the object location. Verbal judgments were given either in centimeters or in terms of whether the object would be reachable with the hand. No differences in verbal distance judgments or touching responses were found between the blocks in which the stick or the hand was used.

Conclusion

Instead of finding out why the judged distance changes when using a tool, we found that using a stick does not necessarily alter judged distances or judgments about the reachability of objects.

16.
Frisking the whiskers: patterned sensory input in the rat vibrissa system
Mehta SB, Kleinfeld D. Neuron 2004, 41(2):181–184
How are two prominent environmental features, surface texture and object location, transduced and encoded as rats whisk? Recent papers show that textures may excite intrinsic mechanical vibrations of the vibrissae. Although these vibrations are too rapid to be directly followed by cortical neurons, there is evidence that their speed is encoded by contact-dependent sensory signals. In addition to contact, sensory signals exist that report the angular position of the vibrissae. The combination of contact and reference signals may be used to decode spatial variations in the environment, particularly the location of objects in head-centered coordinates.

17.
Cooperative object transport in distributed multi-robot systems requires the coordination and synchronisation of pushing/pulling forces by a group of autonomous robots in order to transport items that cannot be transported by a single agent. The results of this study show that fairly robust and scalable collective transport strategies can be generated by robots equipped with a relatively simple sensory apparatus (i.e. no force sensors and no devices for direct communication). In the experiments described in this paper, homogeneous groups of physical e-puck robots are required to coordinate and synchronise their actions in order to transport a heavy rectangular cuboid object as far as possible from its starting position in an arbitrary direction. The robots are controlled by dynamic neural networks synthesised using evolutionary computation techniques. The best evolved controller demonstrates an effective group transport strategy that is robust to variability in the physical characteristics of the object (i.e. object mass and size of the object's longest side) and scalable to different group sizes. To run these experiments, we designed, built, and mounted on the robots a new sensor that returns the agents' displacement on a 2D plane. The study shows that the feedback generated by the robots' sensors relative to the object's movement is sufficient to allow the robots to coordinate their efforts and to sustain the transport for an extended period of time. By extensively analysing successful behavioural strategies, we illustrate the nature of the operational mechanisms underpinning the coordination and synchronisation of actions during group transport.

18.
The ability to integrate multisensory information is a fundamental characteristic of the brain serving to enhance the detection and identification of external stimuli. Weakly electric fish employ multiple senses in their interactions with one another and with their inanimate environment (electric, visual, acoustic, mechanical, chemical, thermal, and hydrostatic pressure) and also generate signals using some of the same stimulus energies (electric, acoustic, visual, mechanical). A brief overview provides background on the sensory and motor channels available to the fish followed by an examination of how weakly electric fish 'benefit' from integrating various stimulus modalities that assist in prey detection, schooling, foraging, courtship, and object location. Depending on environmental conditions, multiple sensory inputs can act synergistically and improve the task at hand, can be redundant or contradictory, and can substitute for one another. Over time, in repeated encounters with familiar surrounds, loss of one modality can be compensated for through learning. Studies of neuronal substrates and an understanding of the computational algorithms that underlie multisensory integration ought to expose the physiological corollaries to widely published concepts such as internal representation, sensory expectation, sensory generalization, and sensory transfer.

19.
Tissues are shaped and patterned by mechanical and chemical processes. A key mechanical process is the positioning of the mitotic spindle, which determines the size and location of the daughter cells within the tissue. Recent force and position-fluctuation measurements indicate that pushing forces, mediated by the polymerization of astral microtubules against the cell cortex, maintain the mitotic spindle at the cell center in Caenorhabditis elegans embryos. The magnitude of the centering forces suggests that the physical limit on the accuracy and precision of this centering mechanism is determined by the number of pushing microtubules rather than by thermally driven fluctuations. In cells that divide asymmetrically, anti-centering, pulling forces generated by cortically located dyneins, in conjunction with microtubule depolymerization, oppose the pushing forces to drive spindle displacements away from the center. Thus, a balance of centering pushing forces and anti-centering pulling forces localizes the mitotic spindles within dividing C. elegans cells.

20.
In this paper we discuss a new perspective on how the central nervous system (CNS) represents and solves some of the most fundamental computational problems of motor control. In particular, we consider the task of transforming a planned limb movement into an adequate set of motor commands. To carry out this task the CNS must solve a complex inverse dynamic problem. This problem involves the transformation from a desired motion to the forces that are needed to drive the limb. The inverse dynamic problem is a hard computational challenge because of the need to coordinate multiple limb segments and because of the continuous changes in the mechanical properties of the limbs and of the environment with which they come in contact. A number of studies of motor learning have provided support for the idea that the CNS creates, updates and exploits internal representations of limb dynamics in order to deal with the complexity of inverse dynamics. Here we discuss how such internal representations are likely to be built by combining the modular primitives in the spinal cord as well as other building blocks found in higher brain structures. Experimental studies on spinalized frogs and rats have led to the conclusion that the premotor circuits within the spinal cord are organized into a set of discrete modules. Each module, when activated, induces a specific force field and the simultaneous activation of multiple modules leads to the vectorial combination of the corresponding fields. We regard these force fields as computational primitives that are used by the CNS for generating a rich grammar of motor behaviours.
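The vectorial combination of force-field primitives can be sketched as a weighted sum of basis fields. The Gaussian-attractor primitives below are invented purely for illustration, not the fields measured in spinalized frogs:

```python
import math

def primitive_field(x, y, cx, cy):
    """A hypothetical primitive: a force vector pointing toward an
    equilibrium point (cx, cy), with Gaussian falloff in distance."""
    dx, dy = cx - x, cy - y
    w = math.exp(-(dx * dx + dy * dy) / 2.0)
    return w * dx, w * dy

def combined_field(x, y, activations, centres):
    """Simultaneous activation of modules: vector sum of the scaled fields."""
    fx = fy = 0.0
    for a, (cx, cy) in zip(activations, centres):
        px, py = primitive_field(x, y, cx, cy)
        fx += a * px
        fy += a * py
    return fx, fy

# Co-activating two primitives yields an intermediate resultant force:
centres = [(1.0, 0.0), (0.0, 1.0)]
fx, fy = combined_field(0.0, 0.0, activations=[1.0, 1.0], centres=centres)
```

A small set of such primitives, combined with graded activations, spans a large space of resultant fields, which is the sense in which the modules form a "grammar" of motor behaviours.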
