Similar documents
Found 20 similar documents (search time: 31 ms)
1.
The roles of different afferent systems in the organization of an internal reference frame were studied. A visual comparison task was performed by subjects under different experimental conditions: in the upright standing position and with the body or head inclined in the frontal plane, with visual information about the external environment either available or not. It was shown that the dominant orientation of a referent stimulus (the minimum value of the mean error in the reproduction of the stimulus and the minimal variability of that error) correlated with the body position, mainly the position of the head, more than with the gravitational or visual vertical, even when visual information was available. This means that proprioceptive information about the longitudinal axis of the body, rather than gravity, is mainly used by the central nervous system for creating the internal representation of the vertical during standing.

2.
The subjective visual vertical (SVV) and the subjective haptic vertical (SHV) both claim to probe the underlying perception of gravity. However, when the body is roll tilted, these two measures evoke different patterns of errors, with the SVV generally becoming biased towards the body (A-effect, named for its discoverer, Hermann Rudolph Aubert) and the SHV remaining accurate or becoming biased away from the body (E-effect, short for Entgegengesetzt-effect, meaning “opposite”, i.e., opposite to the A-effect). We compared the two methods in a series of five experiments and provide evidence that the two measures access two different but related estimates of the gravitational vertical. Experiment 1 compared SVV and SHV across three levels of whole-body tilt and found that SVV showed an A-effect at larger tilts while SHV was accurate. Experiment 2 found that tilting either the head or the trunk independently produced an A-effect in SVV, while SHV remained accurate when the head was tilted on an upright body but showed an A-effect when the body was tilted below an upright head. Experiment 3 repeated these head/body configurations in the presence of vestibular noise induced by disruptive galvanic vestibular stimulation (dGVS). dGVS abolished both SVV and SHV A-effects while evoking a massive E-effect in the SHV head-tilt condition. Experiments 4 and 5 show that SVV and SHV do not combine in a statistically optimal fashion, but when vibration is applied to the dorsal neck muscles, integration becomes optimal. Overall, our results suggest that SVV and SHV access distinct underlying gravity percepts based primarily on head and body position information respectively, consistent with a model proposed by Clemens and colleagues.

3.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
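The toppling criterion invoked in this abstract (an object falls once the gravity-projected centre of mass passes outside its support base) can be illustrated with a minimal 2-D rigid-body sketch. The uniform-block geometry, the function name, and the example dimensions are assumptions for illustration, not the stimuli used in the study.

```python
import math

def critical_angle_deg(base_width: float, com_height: float) -> float:
    """Tilt angle (degrees) at which a uniform 2-D block tips over:
    it falls once the gravity-projected centre of mass moves past
    the pivoting edge of its support base."""
    return math.degrees(math.atan((base_width / 2) / com_height))

# A tall, narrow object tips at a much smaller angle than a squat one.
tall = critical_angle_deg(base_width=2.0, com_height=8.0)   # ~7.1 degrees
squat = critical_angle_deg(base_width=8.0, com_height=2.0)  # ~63.4 degrees
```

The study's finding amounts to this geometric criterion being evaluated against a multisensory, body-biased estimate of the gravity direction rather than the true vertical.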

4.
Determining where another person is attending is an important skill for social interaction that relies on various visual cues, including the turning direction of the head and body. This study reports a novel high-level visual aftereffect that addresses the important question of how these sources of information are combined in gauging social attention. We show that adapting to images of heads turned 25° to the right or left produces a perceptual bias in judging the turning direction of subsequently presented bodies. In contrast, little to no change in the judgment of head orientation occurs after adapting to extremely oriented bodies. The unidirectional nature of the aftereffect suggests that cues from the human body signaling social attention are combined in a hierarchical fashion and is consistent with evidence from single-cell recording studies in nonhuman primates showing that information about head orientation can override information about body posture when both are visible.

5.
Birds gather information about their environment mainly through vision by scanning their surroundings. Many prevalent models of social foraging assume that foraging and scanning are mutually exclusive. Although this assumption is valid for birds with narrow visual fields, these models have also been applied to species with wide fields. In fact, available models do not make precise predictions for birds with large visual fields, in which the head-up, head-down dichotomy is not accurate and, moreover, do not consider the effects of detection distance and limited attention. Studies of how different types of visual information are acquired as a function of body posture and of how information flows within flocks offer new insights into the costs and benefits of living in groups.

6.

Background

Relatively little is known about the degree of inter-specific variability in visual scanning strategies in species with laterally placed eyes (e.g., birds). This is relevant because many species detect prey while perching; therefore, head movement behavior may be an indicator of prey detection rate, a central parameter in foraging models. We studied head movement strategies in three diurnal raptors belonging to the Accipitridae and Falconidae families.

Methodology/Principal Findings

We used behavioral recording of individuals under field and captive conditions to calculate the rate of two types of head movements and the interval between consecutive head movements. Cooper's Hawks had the highest rate of regular head movements, which can facilitate tracking prey items in the visually cluttered environments they inhabit (e.g., forested habitats). On the other hand, Red-tailed Hawks showed long intervals between consecutive head movements, which is consistent with searching for prey in less visually obstructed environments (e.g., open habitats) and with detecting prey movement from a distance with their central foveae. Finally, American Kestrels had the highest rates of translational head movements (vertical or frontal displacements of the head that keep the bill pointing in the same direction), which have been associated with depth perception through motion parallax. Higher translational head-movement rates may be a strategy to compensate for the reduced degree of eye movement in this species.

Conclusions

Cooper's Hawks, Red-tailed Hawks, and American Kestrels use both regular and translational head movements, but to different extents. We conclude that these diurnal raptors have species-specific strategies to gather visual information while perching. These strategies may optimize prey search and detection with different visual systems in habitat types with different degrees of visual obstruction.

7.
The central program of a targeted movement includes a component intended to compensate for the weight of the arm; this is why the accuracy of pointing to a memorized position of a visual target in darkness depends on the orientation of the moving limb relative to the vertical axis. Transition from the vertical to the horizontal body position is accompanied by a shift of the final hand position along the body axis towards the head. We studied how pointing errors and visual localization of the target are modified by adaptation to the horizontal body position; targeted movements to a real target were repeatedly performed during the adaptation period. Three types of experiments were performed: a basic experiment and two adaptation experiments realized under somewhat dissimilar conditions. In the first adaptation experiment, subjects received no visual information about the hand's position in space, and targeted movements of the arm to a luminous target could be corrected using proprioceptive information only. With this paradigm, the accuracy of pointing to memorized visual targets showed no adaptation-related changes. In the second adaptation experiment, subjects were allowed to continuously view a marker (a light-emitting diode taped to the fingertip). After such adaptation practice, the accuracy of pointing movements to memorized targets increased: both constant and variational errors, as well as both components of constant error (i.e., X and Y errors), dropped significantly. Testing the accuracy of visual localization of the targets by visual/verbal adjustment, performed after this adaptation experiment, showed that the pattern of errors did not change compared with that in the basic experiment.
Therefore, we can conclude that sensorimotor adaptation to the horizontal position develops much more successfully when the subject obtains visual information about the working-point position; such adaptation is not related to modifications in the system of visual localization of the target.

8.
Sensing is often implicitly assumed to be the passive acquisition of information. However, part of the sensory information is generated actively when animals move. For instance, humans shift their gaze actively in a sequence of saccades towards interesting locations in a scene. Likewise, many insects shift their gaze by saccadic turns of body and head, keeping their gaze fixed between saccades. Here we employ a novel panoramic virtual reality stimulator and show that motion computation in a blowfly visual interneuron is tuned to make efficient use of the characteristic dynamics of retinal image flow. The neuron is able to extract information about the spatial layout of the environment by utilizing intervals of stable vision resulting from the saccadic viewing strategy. The extraction is possible because the retinal image flow evoked by translation, containing information about object distances, is confined to low frequencies. This flow component can be derived from the total optic flow between saccades because the residual intersaccadic head rotations are small and encoded at higher frequencies. Information about the spatial layout of the environment can thus be extracted by the neuron in a computationally parsimonious way. These results on neuronal function based on naturalistic, behaviourally generated optic flow are in stark contrast to conclusions based on conventional visual stimuli that the neuron primarily represents a detector for yaw rotations of the animal.
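The frequency separation this abstract describes (slow translational flow versus higher-frequency residual rotations between saccades) can be sketched with a simple first-order low-pass filter. The sampling rate, cutoff, and signal components below are illustrative assumptions, not the neuron's actual computation.

```python
import math

def low_pass(signal, dt, cutoff_hz):
    """First-order low-pass (exponential) filter: attenuates the
    high-frequency rotational jitter while passing the slow
    translational component of intersaccadic optic flow."""
    alpha = dt / (dt + 1.0 / (2 * math.pi * cutoff_hz))
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

dt = 0.001  # 1 kHz sampling (assumed)
t = [i * dt for i in range(1000)]
translation = [1.0 for _ in t]  # slow (near-DC) translational flow
jitter = [0.5 * math.sin(2 * math.pi * 150 * ti) for ti in t]  # fast residual rotations
flow = [a + b for a, b in zip(translation, jitter)]
recovered = low_pass(flow, dt, cutoff_hz=5.0)
# after settling, `recovered` stays close to the 1.0 translational level
```

The point of the sketch is only that a cutoff between the two frequency bands suffices to recover the distance-bearing translational component from the mixed intersaccadic flow.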

9.
Alignment of the body to the gravitational vertical is considered to be the key to human bipedalism. However, changes to the semicircular canals during human evolution suggest that the sense of head rotation that they provide is important for modern human bipedal locomotion. When walking, the canals signal a mix of head rotations associated with path turns, balance perturbations, and other body movements. It is uncertain how the brain uses this information. Here, we show dual roles for the semicircular canals in balance control and navigation control. We electrically evoke a head-fixed virtual rotation signal from semicircular canal nerves as subjects walk in the dark with their head held in different orientations. Depending on head orientation, we can either steer walking by "remote control" or produce balance disturbances. This shows that the brain resolves the canal signal according to head posture into Earth-referenced orthogonal components and uses rotations in vertical planes to control balance and rotations in the horizontal plane to navigate. Because the semicircular canals are concerned with movement rather than detecting vertical alignment, this result shows the importance of movement control and agility rather than precise vertical alignment of the body for human bipedalism.
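The decomposition described above, in which a head-fixed canal signal is resolved by head posture into Earth-referenced components, can be sketched with basic trigonometry. The single-axis simplification and the function name are assumptions for illustration.

```python
import math

def resolve_canal_signal(rate, head_pitch_deg):
    """Split a head-fixed yaw-axis rotation signal into Earth-referenced
    components, given head pitch (0 deg = upright). The Earth-vertical
    component would drive steering; the component in a vertical plane
    would act as a balance disturbance."""
    p = math.radians(head_pitch_deg)
    steer = rate * math.cos(p)    # rotation about the gravity axis
    balance = rate * math.sin(p)  # rotation in a vertical plane
    return steer, balance

# Upright head: the virtual rotation reads entirely as a horizontal turn.
steer_up, balance_up = resolve_canal_signal(rate=10.0, head_pitch_deg=0.0)
# Head pitched 90 degrees: the same signal reads as a balance perturbation.
steer_fwd, balance_fwd = resolve_canal_signal(rate=10.0, head_pitch_deg=90.0)
```

This mirrors the study's manipulation: the same evoked canal signal steers walking or disturbs balance depending solely on head orientation.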

10.
In healthy subjects in a relaxed upright stance perceiving a virtual visual environment (VVE), we recorded postural reactions to isolated visual and vestibular stimulations or their combinations. Lateral displacements of the visualized virtual scene were used as visual stimuli. The vestibular apparatus was stimulated by applying near-threshold galvanic current pulses to the proc. mastoidei of the temporal bones. Isolated VVE shifts evoked mild but clear body tilts readily distinguished in separate trials; in contrast, postural effects of isolated vestibular stimulation could be detected only after averaging several trials synchronized with respect to the beginning of stimulation. Under simultaneous combined presentation of visual and vestibular stimuli, the direction of the resulting postural responses always corresponded to the direction of the responses induced by VVE shifts. The contribution of the afferent volley from the vestibular organ depended on the coincidence or mismatch of the direction of the motor response evoked by that volley with the direction of the response to visual stimulation. When both types of stimulation evoked unidirectional body tilts, postural responses were facilitated, and the resulting effect was greater than a simple summation of the reactions to the isolated stimuli. When isolated galvanic stimulation evoked a response opposite to that induced by visual stimulation, the combined action of these stimuli of different modalities evoked postural responses identical in magnitude, direction, and shape to those evoked by isolated visual stimulation. These findings allow us to conclude that, under the conditions of our experiments, the effects of visual afferent input on the vertical posture clearly dominate.
In general, these results confirm the statement that neuronal structures involved in the integrative processing of different afferent volleys preferentially select the type of afferentation that carries more significant or more detailed information on displacements (including oscillations) of the body in space.

11.
In this study, we analysed the eye movements of flatfish during body tilting and compared them with those of goldfish. The fish were fixed on a computer-controlled tilting table, and eye movements during tilting about different body axes were video-recorded. The vertical and torsional eye rotations were analysed frame by frame. In normal flatfish, the vertical movement of the left eye during leftward tilting was larger than that during rightward tilting. For head-up or head-down tilting, clear vertical eye movements were observed. Torsional eye movements, on the other hand, showed characteristics similar to those of goldfish. These results suggest that the sacculus and lagena are important for otolith-ocular eye movements in flatfish.

12.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
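The reliability-proportional weighting this abstract refers to is the standard maximum-likelihood cue-integration rule: each cue is weighted by its inverse variance, so a noisier cue contributes less on that trial. A minimal sketch, with illustrative numbers rather than data from the study:

```python
def combine_cues(x_vis, sigma_vis, x_vest, sigma_vest):
    """Statistically optimal (maximum-likelihood) fusion of two
    displacement estimates: inverse-variance weighting, with a
    combined estimate more precise than either cue alone."""
    w_vis = 1.0 / sigma_vis**2
    w_vest = 1.0 / sigma_vest**2
    x = (w_vis * x_vis + w_vest * x_vest) / (w_vis + w_vest)
    sigma = (1.0 / (w_vis + w_vest)) ** 0.5
    return x, sigma

# Low visual coherence (noisy vision, sigma_vis = 0.4) pulls the
# combined estimate toward the more reliable vestibular cue.
x, sigma = combine_cues(x_vis=1.2, sigma_vis=0.4, x_vest=1.0, sigma_vest=0.1)
```

Trial-to-trial reweighting, as reported in the study, corresponds to re-evaluating these weights whenever visual coherence changes.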

13.
Hystrix javanica is a species endemic to Indonesia. Studies of the fetal development of Hystrix javanica are very rare because of sample limitations. This study was carried out to describe the morphometrics and X-ray analysis of three fetuses at different stages, to provide basic information about the fetal development of Hystrix javanica. Three fetal samples fixed in Bouin's solution were used. Observations were carried out to identify the characteristics of the three samples, including hair pattern, body measurements, body volume, and body weight. X-ray analysis was performed to assess the ossification process during fetal development. Statistical analysis was carried out using the Microsoft 365® Excel program. The three fetal samples had distinct hair patterns: hairless, smooth hairs, and smooth hairs with a dense/non-dense pattern. Body volumes of the 1st, 2nd, and 3rd fetuses were 23 mL, 90 mL, and 170 mL, respectively; body weights were 19.5 g, 79.22 g, and 153.18 g, respectively. Pearson's correlation analysis showed strong relationships of total body length, front body length, back body length, horizontal body diameter, vertical body diameter, head length, and head diameter with body volume and body weight across the three fetuses. Significant positive correlations (P < 0.05) were found between horizontal body diameter, vertical body diameter, and head diameter on the one hand and body volume and body length on the other. Faint radiopaque images were seen in the 2nd fetal sample and strong radiopaque images in the 3rd. Radiopaque images were identified in the teeth, cranium, vertebrae, and limb bones. We conclude that each fetal stage has a specific hair pattern, that all body measurements correlate positively with body volume and body weight, and that X-ray analysis showed ossification of the bones beginning while the smooth hair was growing.

14.
Gull-billed terns (Gelochelidon nilotica) were video-filmed while searching for and capturing fiddler crabs. Search consists of a vertical head nystagmus, with fast upward flicks and downward slow phases made at the angular speed of the substrate in the approximate direction of the bill. The bill points down at about 60° during hunting, but is brought up to 15° from time to time, which brings the visual streak into line with the horizon; 45° roll movements of the head are consistent with alternation between the use of the temporal and central foveas to view the same object. When a crab has been detected the nystagmus is suspended, and the tern tracks the crab continuously as it manoeuvres into a catching position. This may involve tucking the head under the body so that the bill is 45° behind the vertical, and flying up and backwards for some metres, straightening up the head at the same time. Accepted: 7 November 1998

15.
ABSTRACT. Freely walking crickets were filmed from above during their visual orientation towards a black stripe. A frame-by-frame analysis enabled head and body movements to be recorded. The animals walk in 200-ms bouts (runs) separated by pauses of similar duration. During each run, rotations of the body axis are observed, and some corrections of the course direction occur between successive runs. Generally, the crickets do not walk straight ahead but slightly sideways. Because no lateral head movements were observed during visually orientated locomotion, retinal scanning results from both rotations of the body axis and translation of the head. While walking, the cricket maintains one of the target edges on a relatively limited area of the retina, generally between 10° and 25° laterally. Thus, the cricket often records three pieces of information about each edge: one in the monocular visual field and two in the binocular visual field. Nevertheless, between two pauses, the images of each edge shift asymmetrically on the retinae. Such movement could prevent receptor adaptation by modulating ommatidial excitation or by stimulating neighbouring ommatidia. It is also suggested that antennal movements are influenced by the positions of the visually fixated target edges.

16.
ABSTRACT. Males of Gomphocerus rufus L. perform a courtship song consisting of repetitive units, each of which is composed of three subunits (S1, S2, S3). S1 is characterized mainly by slow and fast head rolling; S2 and S3 are distinguished by different types of leg stridulation. These movements and the associated sounds were recorded during presentation of visual stimuli, either linear displacement of a living female or optomotor stimuli generated by a striped drum. Females moved artificially through the binocular visual field of a courting male with a velocity of 1 cm/s or more are mounted by the male from any subunit S1, S2, or S3, although under natural conditions mounting occurs only from S2. Thus, above a critical velocity the courtship programme can be modified. Rotation of a striped drum about the yaw axis of the male during the slow S1 induces asymmetrical leg position, following movements of the head, and prolongation of S1. During S2 the male is especially sensitive to optomotor stimuli and responds with marked changes in body position. In S3 the intensity of the song is reduced, and its duration shortened. Fast drum movements interrupt the courtship programme. Rotation of the drum about the roll axis elicits optomotor head turning that interferes with the head rolling of S1. The fast phase of S1 and the frequency of head rolling during S1 cannot be modified by optomotor stimulation. The results can be interpreted by assuming certain interactions between three central nervous elements: a calling-song generator, a head-rolling generator, and an optomotor centre.

17.
Infant-directed (ID) speech provides exaggerated auditory and visual prosodic cues. Here we investigated if infants were sensitive to the match between the auditory and visual correlates of ID speech prosody. We presented 8-month-old infants with two silent line-joined point-light displays of faces speaking different ID sentences, and a single vocal-only sentence matched to one of the displays. Infants looked longer to the matched than mismatched visual signal when full-spectrum speech was presented; and when the vocal signals contained speech low-pass filtered at 400 Hz. When the visual display was separated into rigid (head only) and non-rigid (face only) motion, the infants looked longer to the visual match in the rigid condition; and to the visual mismatch in the non-rigid condition. Overall, the results suggest 8-month-olds can extract information about the prosodic structure of speech from voice and head kinematics, and are sensitive to their match; and that they are less sensitive to the match between lip and voice information in connected speech.

18.
19.
On the ground, the otolith organs behave as detectors of both gravity and linear acceleration and play an important role in controlling posture and eye movement during head tilt or translational motion. In a microgravity environment, by contrast, the gravitational component of the otolith input disappears, while linear acceleration can still be sensed, producing a sensation different from that on Earth. It has been suggested that in microgravity the otolith signal may cause abnormalities of postural control and eye movement, and that the central nervous system may therefore re-interpret all otolith output as indicating linear motion. Eye movements have been studied extensively as a reflex related to the otolith system. In this study, we examined the function of the otolith organs in goldfish through analysis of eye movements induced by linear acceleration or body tilt. We analysed both torsional and vertical eye movements from video images frame by frame. For tilting stimulation, torsional eye movements induced by head-down tilt were larger than those induced by head-up tilt at tilt angles greater than 30°. For linear acceleration below 0.4 G, however, no clear differences were observed in either torsional or vertical eye movements. These results suggest that, on the ground, body tilt and linear acceleration may not be equivalent stimuli for evoking eye movements.
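The tilt/translation comparison in this abstract rests on the classic otolith ambiguity: a linear acceleration of a (in units of g) produces the same shear along the sensory macula as a static tilt of asin(a), so the otoliths alone cannot distinguish the two. A minimal sketch of that equivalence (the function is hypothetical, for illustration):

```python
import math

def equivalent_tilt_deg(accel_g: float) -> float:
    """Static tilt angle whose gravity component along the head matches
    a given linear acceleration (in units of g) -- the tilt-translation
    ambiguity faced by the otolith organs."""
    return math.degrees(math.asin(accel_g))

# 0.4 G of linear acceleration shears the otoliths like a ~23.6 degree
# tilt, below the >30 degree tilts at which the head-up/head-down
# response asymmetry appeared in this study.
print(round(equivalent_tilt_deg(0.4), 1))  # 23.6
```

On this reading, the accelerations tested may simply not have reached the equivalent-tilt range where the asymmetry emerges, which is consistent with, though not proof of, non-equivalent central processing.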

20.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.
