Similar Literature
20 similar documents found.
1.
2.
Designing hardware for miniaturized robotics that mimics the capabilities of flying insects is of interest because the two share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is central to flying insects' impressive flight capabilities, but embodiment of insect-like visual systems is currently limited by the available hardware: suitable systems are either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and the camera and insect vision models are then evaluated. We analyse the system's potential for embodying higher-level visual processes (e.g. motion detection) and for developing vision-based navigation for robotics in general. Optic flow calculated from sample camera data is compared to a perfect, simulated bee world, showing an excellent resemblance.
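The optic-flow computation this abstract refers to can be sketched in one dimension with a generic gradient-based (Lucas-Kanade-style) least-squares estimator. This is an illustrative sketch on a synthetic signal, not the authors' actual pipeline; the function name and test signal are made up.

```python
import math

def flow_1d(frame1, frame2):
    """Gradient-based 1-D flow estimate: v = -sum(Ix*It) / sum(Ix^2)."""
    num = den = 0.0
    for i in range(1, len(frame1) - 1):
        ix = (frame1[i + 1] - frame1[i - 1]) / 2.0  # spatial gradient (central difference)
        it = frame2[i] - frame1[i]                   # temporal gradient between frames
        num += ix * it
        den += ix * ix
    return -num / den

# Two "frames" of a smooth 1-D luminance profile, the second shifted
# rightward by 0.5 pixels; the estimator should recover roughly 0.5.
frame1 = [math.sin(0.1 * i) for i in range(200)]
frame2 = [math.sin(0.1 * (i - 0.5)) for i in range(200)]
v = flow_1d(frame1, frame2)
```

Real camera pipelines solve the same least-squares problem per local window in two dimensions, but the underlying brightness-constancy reasoning is the one shown here.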

3.
Research on the scope and limits of non-conscious vision can advance our understanding of the functional and neural underpinnings of visual awareness. Here we investigated whether distributed local features can be bound, outside of awareness, into coherent patterns. We used continuous flash suppression (CFS) to create interocular suppression, and thus lack of awareness, for a moving dot stimulus that varied in terms of coherence with an overall pattern (radial flow). Our results demonstrate that for radial motion, coherence favors the detection of patterns of moving dots even under interocular suppression. Coherence caused dots to break through the masks more often: this indicates that the visual system was able to integrate low-level motion signals into a coherent pattern outside of visual awareness. In contrast, in an experiment using meaningful or scrambled biological motion we did not observe any increase in the sensitivity of detection for meaningful patterns. Overall, our results are in agreement with previous studies on face processing and with the hypothesis that certain features are spatiotemporally bound into coherent patterns even outside of attention or awareness.

4.
Humans and other primates are equipped with a foveated visual system. As a consequence, we reorient our fovea to objects and targets in the visual field that are conspicuous or that we consider relevant or worth looking at. These reorientations are achieved by means of saccadic eye movements. Where we saccade to depends on various low-level factors such as a target's luminance, but also crucially on high-level factors like the expected reward or a target's relevance for perception and subsequent behavior. Here, we review recent findings on how the control of saccadic eye movements is influenced by higher-level cognitive processes. We first describe the pathways by which cognitive contributions can influence the neural oculomotor circuit. Second, we summarize what saccade parameters reveal about cognitive mechanisms, particularly saccade latencies, saccade kinematics, and changes in saccade gain. Finally, we review findings on what renders a saccade target valuable, as reflected in oculomotor behavior. We emphasize that foveal vision of the target after the saccade can constitute an internal reward for the visual system, and that this is reflected in oculomotor dynamics that serve to quickly and accurately provide detailed foveal vision of relevant targets in the visual field.

5.
The role of symmetry detection in early visual processing and the sensitivity of biological visual systems to symmetry across a wide range of organisms suggest that symmetry can be detected by low-level visual mechanisms. However, computational and functional considerations suggest that higher-level mechanisms may also play a role in facial symmetry detection. We tested this hypothesis by examining whether symmetry detection is better for faces than for comparable patterns that share low-level properties with faces. Symmetry detection was better for upright faces than for inverted faces (experiment 1) and contrast-reversed faces (experiment 2), implicating high-level mechanisms in facial symmetry detection. In addition, facial symmetry detection was sensitive to spatial scale, unlike low-level symmetry detection mechanisms (experiment 3), and showed greater sensitivity to a 45° deviation from vertical than is found for other aspects of face perception (experiment 4). These results implicate specialized, higher-level mechanisms in the detection of facial symmetry. This specialization may reflect perceptual learning resulting from extensive experience detecting symmetry in faces, or evolutionary selection pressures associated with the important role of facial symmetry in mate choice and 'mind-reading', or both.

6.
Embodied theories of cognition propose that neural substrates used in experiencing the referent of a word, for example perceiving upward motion, should be engaged in weaker form when that word, for example 'rise', is comprehended [1-3]. This claim has been broadly supported in the motor domain (for example [4,5]), whilst evidence is supportive, but less clear cut, for perception (for example [6-8]). Motivated by the finding that the perception of irrelevant background motion at near-threshold, but not supra-threshold, levels interferes with task execution [9], we assessed whether interference from near-threshold background motion was modulated by its congruence with the meaning of words (semantic content) when participants completed a lexical decision task (deciding if a string of letters is a real word or not). Reaction times for motion words, such as 'rise' or 'fall', were slower when the direction of visual motion and the 'motion' of the word were incongruent - but only when the visual motion was at near-threshold levels (supporting [9]). When motion was supra-threshold, the distribution of error rates, not reaction times, implicated low-level motion processing in the semantic processing of motion words. As the perception of near-threshold signals is not likely to be influenced by strategies [9], our results support a close contact between semantic information and perceptual systems.

7.
Axel Borchgrevink. Ethnos, 2013, 78(2): 223-244
This article demonstrates how the study of indigenous knowledge can be enhanced by paying attention to the forms in which this knowledge is organized and the way it is embedded in a wider cultural matrix. The empirical setting is a community of small-scale farmers on the Philippine island of Bohol, where much agricultural knowledge is organized in a cultural model built around a concept of cleanliness. The main part of the article is concerned with analyzing this symbolic model of cleanliness. I examine how it is applied within agriculture as well as in other domains; its esthetic, moral and practical dimensions; and how it can be said to embody a particular vision of the nature-culture opposition. In conclusion, I suggest that the cultural models approach may also facilitate the analysis of how indigenous knowledge changes over time.

8.
It is well known that the human postural control system responds to motion of the visual scene, but the implicit assumptions it makes about the visual environment, and what quantities, if any, it estimates about the visual environment, are unknown. This study compares the behavior of four models of the human postural control system to experimental data. Three include internal models that estimate the state of the visual environment, implicitly assuming its dynamics to be that of a linear stochastic process (respectively, a random walk, a general first-order process, and a general second-order process). In each case, all of the coefficients that describe the process are estimated by an adaptive scheme based on maximum likelihood. The fourth model does not estimate the state of the visual environment. It adjusts sensory weights to minimize the mean square of the control signal without making any specific assumptions about the dynamic properties of the environmental motion.

We find that both the presence of an internal model of the visual environment and the type of that model significantly affect how the postural system responds to motion of the visual scene. Notably, the second-order process model outperforms the human postural system in its response to sinusoidal stimulation. Specifically, the second-order process model can correctly identify the frequency of the stimulus and compensate completely, so that the motion of the visual scene has no effect on sway. In this case the postural control system extracts the same information from the visual modality as it does when the visual scene is stationary. The fourth model, which does not simulate the motion of the visual environment, is the only one that reproduces the experimentally observed result that, across different frequencies of sinusoidal stimulation, the gain with respect to the stimulus drops as the amplitude of the stimulus increases while the phase remains roughly constant.
Our results suggest that the human postural control system does not estimate the state of the visual environment in order to respond to sinusoidal stimuli.

9.
The Adelson-Bergen motion energy sensor is well established as the leading model of low-level visual motion sensing in human vision. However, the standard model cannot predict adaptation effects in motion perception. A previous paper by Pavan et al. (Journal of Vision, 10:1–17, 2013) presented an extension to the model which uses a first-order RC gain-control circuit (leaky integrator) to implement adaptation effects that can span many seconds, and showed that the extended model's output is consistent with psychophysical data on the classic motion after-effect. Recent psychophysical research has reported adaptation over much shorter time periods, spanning just a few hundred milliseconds. The present paper further extends the sensor model to implement rapid adaptation by adding a second-order RC circuit, which causes the sensor to require a finite amount of time to react to a sudden change in stimulation. The output of the new sensor accounts accurately for psychophysical data on rapid forms of facilitation (rapid visual motion priming, rVMP) and suppression (rapid motion after-effect, rMAE). Changes in natural scene content occur over multiple time scales, and multi-stage leaky integrators of the kind proposed here offer a computational scheme for modelling adaptation over multiple time scales.
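A first-order RC leaky integrator of the kind described above can be sketched as a discrete low-pass filter. The time constant, step size, and input here are illustrative choices, not values from Pavan et al.:

```python
def leaky_integrator(signal, dt, tau):
    """Discrete Euler step of the first-order RC equation y' = (x - y) / tau."""
    y = 0.0
    out = []
    for x in signal:
        y += dt * (x - y) / tau  # charge toward the input, leak back toward zero
        out.append(y)
    return out

# Constant input for 10 s, then silence: the integrator charges toward the
# input level (modelling gradual adaptation) and afterwards decays back.
resp = leaky_integrator([1.0] * 1000 + [0.0] * 1000, dt=0.01, tau=1.0)
```

Cascading two such stages, so that one integrator drives another, gives the sluggish second-order response to sudden stimulus changes that the abstract attributes to the extended sensor.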

10.
11.
By expanding on issues raised by D’Eath (1998), I address in this article three aspects of vision that are difficult to reproduce in the video- and computer-generated images used in experiments in which images of conspecifics or of predators are replayed to animals. The lack of depth cues derived from binocular stereopsis, from accommodation, and from motion parallax may be one of the reasons why animals do not respond to video displays in the same way as they do to real conspecifics or to predators. Part of the problem is the difficulty of reproducing the closed-loop nature of natural vision in video playback experiments. Every movement an animal makes has consequences for the pattern of stimulation on its retina, and this "optic flow" in turn carries information about both the animal's own movement and the three-dimensional structure of the environment. A further critical issue is the behavioural context that often determines what animals attend to but that may be difficult to induce or reproduce in an experimental setting. I illustrate this point by describing some visual behaviours in fiddler crabs, in which social and spatial context define which part of the visual field a crab attends to and which visual information is used to guide behaviour. I finally mention some aspects of natural illumination that may influence how animals perceive an object or a scene: shadows, specular reflections, and polarisation reflections. Received: 23 November 1999 / Received in revised form: 9 February 2000 / Accepted: 10 February 2000

12.
In the motion aftereffect (MAE), a stationary pattern appears to move in the opposite direction to previously viewed motion. Here we report an MAE that is observed for a putatively high level of visual analysis: attentive tracking. These high-level MAEs, visible on dynamic (but not static) tests, suggest that attentive tracking does not simply enhance low-level motion signals but, rather, acts at a subsequent stage. MAEs from tracking (1) can overrule competing MAEs from adaptation to low-level motion, (2) can be established opposite to low-level MAEs seen on static tests at the same location, and (3), most striking, are specific to the overall direction of object motion, even at nonadapted locations. These distinctive properties suggest MAEs from attentive tracking can serve as valuable probes for understanding the mechanisms of high-level vision and attention.

13.
View from the top: hierarchies and reverse hierarchies in the visual system
Hochstein S, Ahissar M. Neuron, 2002, 36(5): 791-804
We propose that explicit vision advances in reverse hierarchical direction, as shown for perceptual learning. Processing along the feedforward hierarchy of areas, leading to increasingly complex representations, is automatic and implicit, while conscious perception begins at the hierarchy's top, gradually returning downward as needed. Thus, our initial conscious percept--vision at a glance--matches a high-level, generalized, categorical scene interpretation, identifying "forest before trees." For later vision with scrutiny, reverse hierarchy routines focus attention to specific, active, low-level units, incorporating into conscious perception detailed information available there. Reverse Hierarchy Theory dissociates between early explicit perception and implicit low-level vision, explaining a variety of phenomena. Feature search "pop-out" is attributed to high areas, where large receptive fields underlie spread attention detecting categorical differences. Search for conjunctions or fine discriminations depends on reentry to low-level specific receptive fields using serial focused attention, consistent with recently reported primary visual cortex effects.

14.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation, and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy, though binocular cues were trusted more than haptic ones. Our results suggest both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
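The probabilistic perceptual reasoning invoked above is commonly formalized as reliability-weighted (inverse-variance) integration of independent Gaussian cues. The cue values and variances below are made-up numbers for illustration, not data from this study:

```python
def combine_cues(estimates, variances):
    """Fuse independent Gaussian cues; each weight is proportional to 1/variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total  # fused estimate and its (reduced) variance

# Hypothetical distance cues (cm): prior assumption, binocular, haptic.
# The binocular cue has the lowest variance, so it is trusted the most,
# mirroring the abstract's finding that binocular cues dominated haptic ones.
est, var = combine_cues([60.0, 50.0, 55.0], [25.0, 4.0, 16.0])
```

The fused variance is always smaller than the best single cue's variance, which is why adding auxiliary distance information improves size judgments in this framework.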

15.
16.
The recognition that animals sense the world in a different way than we do has unlocked important lines of research in ecology and evolutionary biology. In practice, the subjective study of natural stimuli has been permitted by perceptual spaces, which are graphical models of how stimuli are perceived by a given animal. Because colour vision is arguably the best‐known sensory modality in most animals, a diversity of colour spaces are now available to visual ecologists, ranging from generalist and basic models allowing rough but robust predictions on colour perception, to species‐specific, more complex models giving accurate but context‐dependent predictions. Selecting among these models is most often influenced by historical contingencies that have associated models to specific questions and organisms; however, these associations are not always optimal. The aim of this review is to provide visual ecologists with a critical perspective on how models of colour space are built, how well they perform and where their main limitations are with regard to their most frequent uses in ecology and evolutionary biology. We propose a classification of models based on their complexity, defined as whether and how they model the mechanisms of chromatic adaptation and receptor opponency, the nonlinear association between the stimulus and its perception, and whether or not models have been fitted to experimental data. Then, we review the effect of modelling these mechanisms on predictions of colour detection and discrimination, colour conspicuousness, colour diversity and diversification, and for comparing the perception of colour traits between distinct perceivers. While a few rules emerge (e.g. opponent log–linear models should be preferred when analysing very distinct colours), in general model parameters still have poorly known effects. 
Colour spaces have nonetheless permitted significant advances in ecology and evolutionary biology, and more progress is expected if ecologists compare results between models and perform behavioural experiments more routinely. Such an approach would further contribute to a better understanding of colour vision and its links to the behavioural ecology of animals. While visual ecology is essentially a transfer of knowledge from visual sciences to evolutionary ecology, we hope that the discipline will benefit both fields more evenly in the future.

17.
18.
19.
Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alteration in low-level image properties when contrasting distinct object categories. When such contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied with activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach and show that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
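Frequency-tagging analyses like the one above rest on measuring response amplitude at the known modulation frequency. A minimal single-bin Fourier sketch on a synthetic signal follows; the sample rate, frequencies, and amplitudes are illustrative, not SWIFT's actual parameters:

```python
import math

def amplitude_at(signal, freq, fs):
    """Amplitude of the discrete Fourier component at `freq` Hz (sample rate fs)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, tag = 100.0, 4.0  # sample rate and tagging frequency (hypothetical values)
t = [i / fs for i in range(1000)]
# Synthetic "response": a tagged component plus an untagged background one.
sig = [0.8 * math.sin(2 * math.pi * tag * x)
       + 0.3 * math.sin(2 * math.pi * 7.0 * x) for x in t]
```

A peak at the tagging frequency, absent at neighbouring frequencies, is the signature that a brain area is following the tagged (here, semantic) content.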

20.
The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号