Similar Articles
 20 similar articles retrieved.
1.
A brain-damaged patient (D.F.) with visual form agnosia is described and discussed. D.F. has a profound inability to recognize objects, places and people, in large part because of her inability to make perceptual discriminations of size, shape or orientation, despite having good visual acuity. Yet she is able to perform skilled actions that depend on that very same size, shape and orientation information that is missing from her perceptual awareness. It is suggested that her intact vision can best be understood within the framework of a dual processing model, according to which there are two cortical processing streams operating on different coding principles, for perception and for action, respectively. These may be expected to have different degrees of dependence on top-down information. One possibility is that D.F.'s lack of explicit awareness of the visual cues that guide her behaviour may result from her having to rely on a processing system which is not knowledge-based in a broad sense. Conversely, it may be that the perceptual system can provide conscious awareness of its products in normal individuals by virtue of the fact that it does interact with a stored base of visual knowledge.

2.
This review identifies a number of exciting new developments in the understanding of vision in cartilaginous fishes that have been made since the turn of the century. These include the results of studies on various aspects of the visual system including eye size, visual fields, eye design and the optical system, retinal topography and spatial resolving power, visual pigments, spectral sensitivity and the potential for colour vision. A number of these studies have covered a broad range of species, thereby providing valuable information on how the visual systems of these fishes are adapted to different environmental conditions. For example, oceanic and deep-sea sharks have the largest eyes amongst elasmobranchs and presumably rely more heavily on vision than coastal and benthic species, while interspecific variation in the ratio of rod and cone photoreceptors, the topographic distribution of the photoreceptors and retinal ganglion cells in the retina and the spatial resolving power of the eye all appear to be closely related to differences in habitat and lifestyle. Multiple, spectrally distinct cone photoreceptor visual pigments have been found in some batoid species, raising the possibility that at least some elasmobranchs are capable of seeing colour, and there is some evidence that multiple cone visual pigments may also be present in holocephalans. In contrast, sharks appear to have only one cone visual pigment. There is evidence that ontogenetic changes in the visual system, such as changes in the spectral transmission properties of the lens, lens shape, focal ratio, visual pigments and spatial resolving power, allow elasmobranchs to adapt to environmental changes imposed by habitat shifts and niche expansion. There are, however, many aspects of vision in these fishes that are not well understood, particularly in the holocephalans. Therefore, this review also serves to highlight and stimulate new research in areas that still require significant attention.

3.
While the different sensory modalities are sensitive to different stimulus energies, they are often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems.
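The abstract does not commit to a specific motion model, but a concrete way to see how motion can be read out from a spatiotemporal pattern on a sensory sheet (retina or skin) is a delay-and-correlate detector. The Python sketch below is purely illustrative, not the computation described in the essay; the delay, sensor spacing and stimulus are arbitrary assumptions.

```python
import numpy as np

def delay_and_correlate(s_a, s_b, delay):
    """Direction-selective output of a minimal delay-and-correlate detector.

    A stimulus moving from sensor A towards sensor B activates A before B,
    so correlating a delayed copy of A with B (minus the mirror-image term)
    yields a positive response for A->B motion and a negative one for B->A.
    """
    a_delayed = np.roll(s_a, delay)
    b_delayed = np.roll(s_b, delay)
    return np.mean(a_delayed * s_b - b_delayed * s_a)

# Toy spatiotemporal pattern: a bump of activation that reaches sensor B
# `lag` samples after sensor A, i.e. motion in the A->B direction.
t = np.arange(500)
lag = 20
s_a = np.exp(-0.5 * ((t - 200) / 15) ** 2)
s_b = np.exp(-0.5 * ((t - 200 - lag) / 15) ** 2)

print(delay_and_correlate(s_a, s_b, delay=lag))   # positive: A->B motion
print(delay_and_correlate(s_b, s_a, delay=lag))   # negative: B->A motion
```

The same correlation could in principle be computed between adjacent photoreceptors in the retina or adjacent mechanoreceptive afferents in the skin, which is the sense in which such a computation is modality-independent.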

4.
Cupiennius salei (Ctenidae) has been extensively studied for many years and is probably the only spider that presently can be considered a model organism for neuro-ethology. The night-active spiders have been shown to predominantly rely on their excellent mechano-sensory systems for courtship and prey capture, whereas vision was assumed to play a minor role, if any, in these behavioral contexts. Using slowly moving discs presented on a computer screen, it could be shown for the first time that visual stimuli alone can elicit attack behavior (abrupt approaching reactions) in these spiders as well. These observations suggest that visual information could be used by the spiders to elicit and guide predatory behavior. Attack behavior in Cupiennius salei can thus be triggered independently by three sensory modalities—substrate vibrations, airflow stimuli, and visual cues—and offers an interesting model system to study the interactions of multimodal sensory channels in complex behavior.

5.
The visual system is the most studied sensory pathway, partly because visual stimuli have rather intuitive properties. There are reasons to think, however, that the underlying principle governing coding is the same for vision as for any other type of sensory signal: the code has to satisfy some notion of optimality, understood as minimum redundancy or as maximum transmitted information. Given the huge variability of natural stimuli, it would seem that attaining an optimal code is almost impossible; however, regularities and symmetries in the stimuli can be used to simplify the task: symmetries allow predicting one part of a stimulus from another, that is, they imply a structured type of redundancy. Optimal coding can only be achieved once the intrinsic symmetries of natural scenes are understood and exploited by the neural encoder. In this paper, we review the concepts of optimal coding and discuss the known redundancies and symmetries of visual scenes. We discuss in depth the only approach known so far that implements all three of them: translational invariance, scale invariance and multiscaling. Not surprisingly, the resulting code possesses features observed in real mammalian visual systems.
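For reference, the two notions of optimality invoked above have standard information-theoretic definitions (textbook quantities, not formulas taken from the reviewed work). For a neural code with outputs $x_1,\dots,x_N$ driven by a stimulus $S$:

$$ R \;=\; \sum_{i=1}^{N} H(x_i) \;-\; H(x_1,\dots,x_N), \qquad I(S;X) \;=\; H(X) - H(X \mid S), $$

where $H$ denotes entropy. Redundancy $R$ vanishes when the outputs are statistically independent, and transmitted information $I(S;X)$ is maximal when the code preserves as much stimulus structure as the channel allows; the symmetries discussed in the paper (translational invariance, scale invariance, multiscaling) describe exactly the predictable structure that such a code must remove or exploit.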

6.
Although the visual system is known to provide relevant information to guide stair locomotion, less is understood about the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary task (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); a visual reaction time task (VRT); and an auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend the stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL on most stair steps. Navigating the transition steps did not require more gaze fixations than navigating the middle steps. However, reaction time tended to increase during locomotion on transitions, suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information about stair features to guide stair walking, despite the unique control challenges at transition phases highlighted by the phase-specific dual-task costs. Instead, the tendency to look at the steps under usual conditions likely provides a stable reference frame for extracting visual information about step features from the entire visual field.

7.
The deep sea is the largest habitat on earth. Its three great faunal environments (the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat) are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth, from extended source to point source, has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes, particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans, reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bioluminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the detection and localisation of point-source bioluminescence at ecologically meaningful distances. At all depths, the eyes of animals active on and over the nutrient-rich sea floor are generally larger than the eyes of pelagic species. In fishes, the retinal ganglion cells are also frequently arranged in a horizontal visual streak, an adaptation for viewing the wide flat horizon of the sea floor, and all animals living there. These and many other aspects of light and vision in the deep sea are reviewed in support of the following conclusion: it is not only the intensity of light at different depths, but also its distribution in space, which has been a major force in the evolution of deep-sea vision.

8.
We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ±1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a “fixed” reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.
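The gains (weights) referred to above are measured at the known stimulus frequencies; a common way to estimate such a gain, assumed here since the abstract does not spell out the analysis pipeline, is to take the ratio of the Fourier amplitudes of sway and stimulus at the stimulus frequency. A minimal Python sketch with made-up signals:

```python
import numpy as np

def gain_at_frequency(stimulus, sway, fs, f0):
    """Gain of the sway response to a periodic stimulus at frequency f0.

    Both signals are sampled at fs (Hz). The gain is the ratio of the
    Fourier amplitudes of sway and stimulus at the stimulus frequency,
    a simple frequency-response estimate.
    """
    freqs = np.fft.rfftfreq(len(stimulus), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))            # FFT bin closest to f0
    stim_f = np.fft.rfft(stimulus)[k]
    sway_f = np.fft.rfft(sway)[k]
    return np.abs(sway_f) / np.abs(stim_f)

# Toy example: 0.2 Hz visual stimulus, sway responding at half the amplitude.
fs, f0, dur = 100.0, 0.2, 120.0
t = np.arange(0, dur, 1.0 / fs)
stimulus = 0.8 * np.sin(2 * np.pi * f0 * t)                  # deg
sway = 0.4 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.05 * np.random.randn(t.size)
print(gain_at_frequency(stimulus, sway, fs, f0))             # ~0.5
```

Repeating this at 0.2 Hz (vision), 0.28 Hz (vibration) and 0.36 Hz (galvanic stimulation) gives one gain per modality, so intramodal and intermodal reweighting appear as changes in these gains across conditions.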

9.
Sensory reweighting is a characteristic of postural control adopted to accommodate environmental changes. The use of monocular or binocular cues reduces or increases the influence of a moving room on postural sway, suggesting a visual reweighting based on the quality of the available sensory cues. Because visual conditions in our previous study were set before each trial, participants could adjust the weighting of the different sensory systems in an anticipatory manner, based upon the reduction in quality of the visual information. In daily situations, however, this adjustment is a dynamical process that occurs during ongoing movement. The purpose of this study was to examine the effect of visual transitions on the coupling between visual information and body sway at two different distances from the front wall of a moving room. Eleven young adults stood upright inside a moving room at two distances (75 and 150 cm), wearing liquid crystal goggles whose lenses can switch individually from opaque to transparent and vice versa. Participants stood still for five minutes in each trial, and the lens state changed every minute (no vision to binocular vision, no vision to monocular vision, binocular vision to monocular vision, and vice versa). Results showed that the farther distance and monocular vision reduced the effect of the visual manipulation on postural sway. The effect of the visual transition was condition dependent, with a stronger effect when transitions involved binocular vision than monocular vision. Based upon these results, we conclude that the increased distance from the front wall of the room reduced the effect of visual manipulation on postural sway and that sensory reweighting is stimulus-quality dependent, with binocular vision producing a much stronger down/up-weighting than monocular vision.

10.
Flying animals need to accurately detect, identify and track fast-moving objects and these behavioral requirements are likely to strongly select for abilities to resolve visual detail in time. However, evidence of highly elevated temporal acuity relative to non-flying animals has so far been confined to insects while it has been missing in birds. With behavioral experiments on three wild passerine species, blue tits, collared and pied flycatchers, we demonstrate temporal acuities of vision far exceeding predictions based on the sizes and metabolic rates of these birds. This implies a history of strong natural selection on temporal resolution. These birds can resolve alternating light-dark cycles at up to 145 Hz (average: 129, 127 and 137, respectively), which is ca. 50 Hz over the highest frequency shown in any other vertebrate. We argue that rapid vision should confer a selective advantage in many bird species that are ecologically similar to the three species examined in our study. Thus, rapid vision may be a more typical avian trait than the famously sharp vision found in birds of prey.

11.
Binocular computer vision mimics biological stereopsis: once the two camera heads have been calibrated and their image capture synchronized, three-dimensional depth information can be computed from corresponding pixels in the two two-dimensional images. In this paper, a fast and robust stereo vision algorithm is described to perform in-vehicle obstacle detection and characterization. The stereo algorithm, which provides a suitable representation of the geometric content of the road scene, is described, and an in-vehicle embedded system is presented. We present the way in which the algorithm is used, and then report experiments on real situations which show that our solution is accurate, reliable and efficient. In particular, both processes are fast, generic, robust to noise and poor conditions, and work even with partial occlusion.
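The abstract describes the stereo algorithm only at a high level; the geometric core of any calibrated, rectified stereo rig is the conversion of disparity to depth, Z = fB/d. The sketch below is a generic illustration, with focal length, baseline and disparity values chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of points from their disparity in a rectified stereo pair.

    For a calibrated, rectified rig, Z = f * B / d, where f is the focal
    length in pixels, B the baseline in metres, and d the disparity in pixels.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)

# Illustrative rig: 800 px focal length, 0.30 m baseline.
disparities = [40.0, 20.0, 8.0, 0.0]          # pixels
print(depth_from_disparity(disparities, focal_px=800.0, baseline_m=0.30))
# -> [ 6. 12. 30. inf] metres: smaller disparity means a farther obstacle
```

In a full system, such per-pixel depths feed the geometric representation of the road scene on which obstacle detection is then performed.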

12.
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
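As a simplified illustration of the kind of optimal cue integration the model formalises (the paper's actual feature space and likelihoods are richer; everything below is an assumption made only for illustration), the following Python sketch treats candidate words as points in a feature space and combines independent Gaussian auditory and visual likelihoods into a posterior; raising the auditory noise simply broadens that cue's likelihood and shifts weight toward vision.

```python
import numpy as np

def word_posterior(words, audio_obs, visual_obs, sigma_a, sigma_v, prior=None):
    """Posterior over candidate words from independent auditory and visual cues.

    Each word is a point in a feature space; each modality provides a noisy
    observation of that point with its own Gaussian noise level. The posterior
    is proportional to prior * p(audio | word) * p(visual | word).
    """
    words = np.asarray(words, dtype=float)             # (n_words, n_features)
    prior = np.ones(len(words)) if prior is None else np.asarray(prior, float)
    log_post = np.log(prior)
    log_post += -0.5 * np.sum((audio_obs - words) ** 2, axis=1) / sigma_a ** 2
    log_post += -0.5 * np.sum((visual_obs - words) ** 2, axis=1) / sigma_v ** 2
    log_post -= log_post.max()                         # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Two candidate "words" in a 2-D feature space; audio is very noisy, vision clean.
words = [[0.0, 0.0], [1.0, 1.0]]
print(word_posterior(words, audio_obs=[0.9, 0.9], visual_obs=[0.1, 0.0],
                     sigma_a=2.0, sigma_v=0.2))
# High auditory noise -> the visually supported word (index 0) dominates.
```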

13.
A flexible calibration approach for a line structured light vision system is proposed in this paper. First, a camera model is established by transforming points from the 2D image plane to the world coordinate frame, from which the intrinsic parameters of the camera can be obtained accurately. Then a novel calibration method for the structured light projector is presented, based on randomly moving a planar target with a square pattern. The method mainly involves three steps: first, a simple linear model is proposed by which the plane equation of the target at any orientation can be determined from the square's geometry; second, the pixel coordinates of the light stripe center on the target images are extracted as control points; finally, these points are projected into the camera coordinate frame using the intrinsic parameters and the plane equations of the target, and the structured light plane is determined by fitting the resulting three-dimensional points. The experimental data show that the method has good repeatability and accuracy.
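The last step above, fitting a plane to the reconstructed three-dimensional control points, is a standard total-least-squares problem. The sketch below shows one generic way to do it (an SVD-based fit on synthetic points, not the paper's exact formulation):

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit to an (N, 3) array of 3D points.

    Returns (normal, d) such that normal . x + d = 0, with |normal| = 1.
    The normal is the right singular vector of the centred points with the
    smallest singular value (the direction of least variance).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

# Noisy synthetic points on the plane z = 0.5 x + 0.2 y + 3 (camera frame, metres).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 3 + 0.001 * rng.standard_normal(200)
normal, d = fit_plane(np.column_stack([xy, z]))
print(normal / -normal[2], d / -normal[2])   # ~ [0.5, 0.2, -1] and ~3
```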

14.

Background

Vision loss causes major changes in lifestyle and habits that may result in psychological distress and a further reduction in quality of life. Little is known about the magnitude of psychological distress in patients with vision loss and how it compares with that of people with normal vision. The aim of this study was, therefore, to investigate the psychological effects of vision loss and its determinants among Ethiopians.

Methods

A comparative cross-sectional study was conducted on adults attending the Eye Clinic of Jimma University Hospital. One hundred fifteen consecutive adults with vision loss in at least one eye and 115 age- and sex-matched controls with normal vision were studied. Psychological distress was measured using the standardized Self-Reporting Questionnaire (SRQ-20). Chi-square tests and logistic regression were carried out, and associations were considered significant at P<0.05 (a generic sketch of the adjusted-odds-ratio computation follows this abstract).

Results

The overall prevalence of psychological distress was 33.4%. While psychological distress was found in 49.8% of patients with vision loss in at least one eye, only 18.3% of the controls had it. In the adjusted analysis, patients with vision loss had a 4.6-fold higher risk of psychological distress compared to people with normal vision (AOR 4.56; 95% CI 2.16-9.62). Moreover, patients with vision loss in both eyes (AOR 4.00; 95% CI 1.453-11.015) and those with worse visual acuity in the better eye (AOR 3.66; 95% CI 1.27-10.54) were significantly more likely to have psychological distress than patients with vision loss in one eye only and patients with good visual acuity in the better eye, respectively. The cause, pattern and duration of vision loss and sociodemographic variables did not influence the likelihood of psychological distress.

Conclusion

The prevalence of psychological distress was significantly higher in patients with vision loss than in people with normal vision. Psychosocial care should be integrated into the current medical and surgical treatment of patients with vision loss.
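As noted in the Methods, the adjusted odds ratios reported above come from a logistic regression. The Python sketch below shows a generic way to obtain such an AOR and its 95% CI with the statsmodels library; the data, covariates and coefficients are hypothetical, generated only to make the example runnable, and are not the study's data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: outcome = psychological distress (1/0),
# exposure = vision loss (1/0), plus age and sex as covariates to adjust for.
rng = np.random.default_rng(1)
n = 230
vision_loss = np.repeat([1, 0], n // 2)
age = rng.normal(45, 12, n)
sex = rng.integers(0, 2, n)
logit_p = -1.5 + 1.5 * vision_loss + 0.01 * (age - 45) + 0.1 * sex
distress = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with an intercept; column 1 is the vision-loss exposure.
X = sm.add_constant(np.column_stack([vision_loss, age, sex]))
fit = sm.Logit(distress, X).fit(disp=0)

aor = np.exp(fit.params[1])                  # adjusted OR for vision loss
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the odds-ratio scale
print(f"AOR {aor:.2f}; 95% CI {ci_low:.2f}-{ci_high:.2f}")
```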

15.
Among terrestrial animals, only vertebrates and arthropods possess wavelength-discrimination ability, so-called “color vision”. For color vision to exist, multiple opsins encoding visual pigments sensitive to different wavelengths of light are required. While the molecular evolution of opsins in vertebrates has been well investigated, that in arthropods remains to be elucidated, mainly because of the scarcity of information on the opsin genes of non-insect arthropods. To obtain an overview of the evolution of color vision in Arthropoda, we isolated three kinds of opsins, Rh1, Rh2, and Rh3, from two jumping spider species, Hasarius adansoni and Plexippus paykulli. These spiders belong to Chelicerata, one of the groups most distant from Hexapoda (insects), and, like insects, have color vision. Phylogenetic analyses of jumping spider opsins revealed a birth-and-death process of color vision evolution in the arthropod lineage. The phylogenetic positions of jumping spider opsins revealed that at least three opsins had already existed before the Chelicerata-Pancrustacea split. In addition, sequence comparison between jumping spider Rh3 and the shorter wavelength-sensitive opsins of insects predicted that an opsin of the ancestral arthropod had the lysine residue responsible for UV sensitivity. These results strongly suggest that the ancestral arthropod had at least trichromatic vision with one UV pigment and two visible-light pigments. Thereafter, in each pancrustacean and chelicerate lineage, the opsin repertoire was reconstructed by gene losses, gene duplications, and function-altering amino acid substitutions, leading to the evolution of color vision. Mitsumasa Koyanagi and Takashi Nagata contributed equally to this work. Sequence data from this article have been deposited with the DDBJ under accession nos. AB251846–AB251851.

16.
Vision and haptics have different limitations and advantages because they obtain information by different methods. If the brain combined information from the two senses optimally, it would rely more on the one providing more precise information for the current task. In this study, human observers judged the distance between two parallel surfaces in two within-modality experiments (vision-alone and haptics-alone) and in an intermodality experiment (vision and haptics together). In the within-modality experiments, the precision of visual estimates varied with surface orientation, as expected from geometric considerations; the precision of haptic estimates did not. An ideal observer that combines visual and haptic information weights them differently as a function of orientation. In the intermodality experiment, humans adjusted visual and haptic weights in a fashion quite similar to that of the ideal observer. As a result, combined size estimates are finer than is possible with either vision or haptics alone; indeed, they approach statistical optimality.
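The ideal-observer combination rule referred to above is the standard reliability-weighted (maximum-likelihood) estimate; written out explicitly (these are the textbook expressions consistent with the abstract, not copied from the paper):

$$\hat S_{VH} = w_V\,\hat S_V + w_H\,\hat S_H,\qquad w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2}+1/\sigma_H^{2}},\qquad w_H = 1 - w_V,$$

$$\sigma_{VH}^{2} = \frac{\sigma_V^{2}\,\sigma_H^{2}}{\sigma_V^{2}+\sigma_H^{2}} \;\le\; \min\!\big(\sigma_V^{2},\,\sigma_H^{2}\big).$$

Because visual variance $\sigma_V^{2}$ changes with surface orientation while haptic variance does not, the visual weight $w_V$ should vary with orientation, and the combined variance is never worse than that of the better single cue, which is the sense in which the human estimates approach statistical optimality.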

17.
The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

18.
Both dorsal and ventral cortical visual streams contain neurons sensitive to binocular disparities, but the two streams may underlie different aspects of stereoscopic vision. Here we investigate stereopsis in the neurological patient D.F., whose ventral stream, specifically lateral occipital cortex, has been damaged bilaterally, causing profound visual form agnosia. Despite her severe damage to cortical visual areas, we report that D.F.'s stereo vision is strikingly unimpaired. She is better than many control observers at using binocular disparity to judge whether an isolated object appears near or far, and to resolve ambiguous structure-from-motion. D.F. is, however, poor at using relative disparity between features at different locations across the visual field. This may stem from a difficulty in identifying the surface boundaries where relative disparity is available. We suggest that the ventral processing stream may play a critical role in enabling healthy observers to extract fine depth information from relative disparities within one surface or between surfaces located in different parts of the visual field.
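For readers unfamiliar with the distinction the abstract turns on, the standard geometric definitions are as follows (generic definitions, not taken from the paper): the absolute disparity of a point is the difference between the angular positions of its two retinal images, and relative disparity is the difference between two absolute disparities,

$$\delta_{\mathrm{abs}}(P) = \alpha_L(P) - \alpha_R(P),\qquad \delta_{\mathrm{rel}}(P,Q) = \delta_{\mathrm{abs}}(P) - \delta_{\mathrm{abs}}(Q).$$

Judging whether a single isolated object is near or far can be done from absolute disparity (together with vergence), whereas comparing depth across separated features requires relative disparity, the quantity D.F. has difficulty using.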

19.
Vision screening was performed in over 11 000 16-year-olds who were taking part in the National Child Development Study. For distance vision 75% had normal acuity, 9% a minor defect, and 16% a more severe unilateral or bilateral defect. For near vision 85% had normal vision, 8% a minor defect, and 7% a unilateral or bilateral defect. Few children (62) with normal distant vision had defects in near vision, though many more (607) had both poor distant vision and poor near vision. Vision defects were more common in girls than in boys and occurred more often in adolescents from non-manual than manual families. Although 18% of children had been prescribed glasses for current use, a third did not have their glasses available at the examination: 27% of the children prescribed glasses had normal unaided distant visual acuity or only a minor defect, and they constituted 42% of those who were not wearing their glasses. Further investigation is needed into the criteria on which glasses are prescribed for children and into the reasons for which they are not worn.

20.
Most conventional robots maintain balance by controlling the location of the center of pressure, relying mainly on foot pressure sensors for information. By contrast, humans rely on sensory data from multiple sources, including proprioceptive, visual, and vestibular sources. Several models have been developed to explain how humans reconcile information from these disparate sources to form a stable sense of balance. Such models may be useful for developing robots that can maintain dynamic balance more readily using multiple sensory sources. Since these information sources may conflict, reliance by the nervous system on any one channel can lead to ambiguity in the system state. In humans, experiments that create conflicts between different sensory channels by moving the visual field or the support surface indicate that sensory information is adaptively reweighted. Unreliable information is rapidly down-weighted, then gradually up-weighted when it becomes valid again. Human balance can also be studied by building robots that model features of human bodies and testing them under similar experimental conditions. We implement a sensory reweighting model based on an adaptive Kalman filter in a bipedal robot and subject it to sensory tests similar to those used on human subjects. Unlike other implementations of sensory reweighting in robots, ours includes vision, using optic flow from a camera to estimate forward rotation (visual modality), together with a three-axis gyro representing the vestibular system (non-visual modality) and foot pressure sensors (proprioceptive modality). Our model estimates measurement noise in real time, which is then used to recompute the Kalman gain on each iteration, improving the robot's ability to balance dynamically. We observe that we can reproduce many important features of postural sway in humans, including automatic sensory reweighting effects, constant phase with respect to amplitude, and a temporal asymmetry in the reweighting gains.
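A minimal sketch of the core mechanism, an adaptive Kalman filter that re-estimates each channel's measurement noise online so that an unreliable channel is automatically down-weighted, is given below in Python. The one-dimensional tilt model, noise levels and window length are simplifying assumptions for illustration, not the robot's actual controller.

```python
import numpy as np

def adaptive_kalman(measurements, q=1e-4, window=50):
    """1-D adaptive Kalman filter fusing several sensors that all measure
    the same quantity (here: body tilt).

    Each channel's measurement-noise variance is re-estimated from a running
    window of its innovations (E[innovation^2] = P + R), so a channel that
    becomes unreliable gets a larger R and a smaller Kalman gain, i.e. it is
    automatically down-weighted.
    """
    n_steps, n_sensors = measurements.shape
    x, p = 0.0, 1.0                              # state estimate and its variance
    r = np.ones(n_sensors)                       # per-channel noise estimates
    innovations = [[] for _ in range(n_sensors)]
    estimates = np.empty(n_steps)
    for t in range(n_steps):
        p += q                                   # predict step (random-walk state)
        for s in range(n_sensors):               # sequential per-sensor updates
            nu = measurements[t, s] - x          # innovation for channel s
            innovations[s].append(nu)
            recent = innovations[s][-window:]
            if len(recent) >= 10:                # wait for a few samples
                r[s] = max(np.var(recent) - p, 1e-6)
            k = p / (p + r[s])                   # Kalman gain, recomputed each step
            x += k * nu
            p *= (1.0 - k)
        estimates[t] = x
    return estimates, r

# Toy run: true tilt is 0; the "visual" channel becomes very noisy halfway
# through (e.g. the visual field starts moving) and should be down-weighted.
rng = np.random.default_rng(2)
T = 400
sensor_noise = np.array([0.05, 0.10, 0.05])      # vision, vestibular, proprioceptive
z = rng.standard_normal((T, 3)) * sensor_noise
z[T // 2:, 0] += rng.standard_normal(T // 2)     # degrade the visual channel
est, r_final = adaptive_kalman(z)
print(r_final)    # the visual channel's estimated noise is now much larger
```

In the robot described above, the three measurement channels correspond to the camera-based optic flow, the gyro and the foot pressure sensors, and the re-estimated noise variances are what drive the changes in Kalman gain, i.e. the reweighting.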
