Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery.  相似文献   
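The core of the scene-familiarity approach described above is a pixel-by-pixel comparison between the current view and stored training snapshots. A minimal sketch of that idea, using hypothetical 1-D "images" rather than the paper's panoramic snapshots:

```python
# Minimal sketch of scene-familiarity guidance: the agent tries several
# candidate headings; each candidate view is scored against ALL stored
# training snapshots, and the agent moves in the direction whose view is
# most familiar (smallest pixel-wise difference). Toy data, not the
# authors' 2816-snapshot landscape.

def image_difference(a, b):
    """Sum of squared pixel differences between two equal-sized views."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def familiarity(view, stored_views):
    """A view is as familiar as its best match among stored snapshots."""
    return min(image_difference(view, s) for s in stored_views)

def choose_heading(candidate_views, stored_views):
    """Pick the index of the heading whose view minimises the mismatch."""
    scores = [familiarity(v, stored_views) for v in candidate_views]
    return scores.index(min(scores))

# Toy example: the training route saw a "bright left edge" pattern.
stored = [[9, 9, 1, 1], [8, 9, 1, 0]]
candidates = [[1, 1, 9, 9],   # heading 0: reversed scene, unfamiliar
              [9, 8, 1, 1],   # heading 1: close to a stored snapshot
              [5, 5, 5, 5]]   # heading 2: uniform, unfamiliar
best = choose_heading(candidates, stored)
```

Note that, as in the NSFH, no sequence of snapshots is memorised: each scan is scored against the whole stored set, so guidance emerges purely from familiarity.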

2.
Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals goal-directed navigation facilitates finding food, seeking shelter or migration; similarly robots perform goal-directed navigation to find a charging station, get out of the rain or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; their navigation systems are shaped by key principles of navigation in ‘real-world’ environments including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation and the extent to which these principles have been adapted by biomimetic navigation models and why.  相似文献   

3.
We present a novel “Gaze-Replay” paradigm that allows the experimenter to directly test how particular patterns of visual input—generated from people’s actual gaze patterns—influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze “donor.” This was intended to simulate the donor’s visual selection, such that a participant could effectively view scenes “through the eyes” of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition.  相似文献   

4.
During sentence production, linguistic information (semantics, syntax, phonology) about words is retrieved and assembled into a meaningful utterance. There is still debate on how we assemble single words into more complex syntactic structures such as noun phrases or sentences. In the present study, event-related potentials (ERPs) were used to investigate the time course of syntactic planning. Thirty-three volunteers described visually animated scenes using naming formats varying in syntactic complexity: from simple words (‘W’, e.g., “triangle”, “red”, “square”, “green”, “to fly towards”), to noun phrases (‘NP’, e.g., “the red triangle”, “the green square”, “to fly towards”), to a sentence (‘S’, e.g., “The red triangle flies towards the green square.”). Behaviourally, we observed an increase in errors and corrections with increasing syntactic complexity, indicating a successful experimental manipulation. In the ERPs following scene onset, syntactic complexity variations were found in a P300-like component (‘S’/‘NP’>‘W’) and a fronto-central negativity (linear increase with syntactic complexity). In addition, the scene could display one of two actions, unpredictable for the participant, as the disambiguation occurred only later in the animation. Time-locked to the moment of visual disambiguation of the action, and thus the verb, we observed another P300 component (‘S’>‘NP’/‘W’). The data provide the first evidence of sensitivity to syntactic planning within the P300 time window, time-locked to visual events critical for syntactic planning. We discuss the findings in the light of current views of syntactic planning.

5.
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours.  相似文献   

6.
Stockmanship is a term used to describe the management of animals, with a good stockperson being someone who does this in a safe, effective, and low-stress manner for both the stockperson and the animals involved. Although the impacts of unfamiliar zoo visitors on animal behaviour have been extensively studied, the impact of stockmanship, i.e. familiar zoo keepers, is a new area of research which could reveal significant ramifications for zoo animal behaviour and welfare. It is likely that different relationships are formed depending on the unique keeper-animal dyad (human-animal interaction, HAI). The aims of this study were to (1) investigate whether unique keeper-animal dyads were formed in zoos, (2) determine whether keepers differed in their interactions towards animals with regard to their attitude, animal knowledge and experience, and (3) explore what factors affect keeper-animal dyads and ultimately influence animal behaviour and welfare. Eight black rhinoceros (Diceros bicornis), eleven Chapman’s zebra (Equus burchellii), and twelve Sulawesi crested black macaques (Macaca nigra) were studied in six zoos across the UK and USA. Subtle cues and commands directed by keepers towards animals were identified. The animals’ latency to respond and the respective behavioural response (cue-response) were recorded per keeper-animal dyad (n = 93). A questionnaire following a five-point Likert scale design was constructed to record keeper demographic information and to assess keepers’ job satisfaction, their attitude towards the animals and their perceived relationship with them. There was a significant difference in the animals’ latency to respond appropriately to cues and commands from different keepers, indicating that unique keeper-animal dyads were formed. Stockmanship style also differed between keepers; two main components contributed equally towards this: “attitude towards the animals” and “knowledge and experience of the animals”.
In this novel study, the data demonstrated that unique dyads were formed between keepers and zoo animals, which influenced animal behaviour.

7.
Psychological and neural distinctions between the technical concepts of “liking” and “wanting” pose important problems for motivated choice for goods. Why could we “want” something that we do not “like,” or “like” something but be unwilling to exert effort to acquire it? Here, we suggest a framework for answering these questions through the medium of reinforcement learning. We consider “liking” to provide immediate, but preliminary and ultimately cancellable, information about the true, long-run worth of a good. Such initial estimates, viewed through the lens of what is known as potential-based shaping, help solve the temporally complex learning problems faced by animals.

What is the distinction between ‘liking’ and ‘wanting’? Why could we ‘want’ something that we do not ‘like’, or ‘like’ something but be unwilling to exert effort to acquire it? This Essay argues that the primary hedonic phenomenon called ‘liking’ might solve the temporal credit assignment problem for learning that arises when true reinforcement values are available slowly or late.
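Potential-based shaping, which this framework invokes, has a precise algebraic property: a bonus of the form F(s, s′) = γΦ(s′) − Φ(s) telescopes along any trajectory, so it can speed learning without changing which behaviour is ultimately best. A sketch with purely illustrative "liking" values:

```python
# Sketch of potential-based shaping, read here as a model of "liking":
# the potential Phi gives an immediate but provisional estimate of a good's
# long-run worth, and the shaping bonus F = gamma * Phi(s') - Phi(s)
# telescopes over an episode, so early hedonic signals are ultimately
# cancellable. The states and Phi values below are hypothetical.

GAMMA = 1.0  # undiscounted finite episode, so the telescoping is exact

def shaping_bonus(phi, s, s_next, gamma=GAMMA):
    return gamma * phi[s_next] - phi[s]

phi = {"cue": 5.0, "approach": 8.0, "consume": 0.0}  # illustrative "liking"
trajectory = ["cue", "approach", "consume"]

# The total shaped bonus telescopes to Phi(end) - Phi(start): every
# intermediate "liking" signal is paid forward and then taken back.
total_bonus = sum(shaping_bonus(phi, s, s2)
                  for s, s2 in zip(trajectory, trajectory[1:]))
```

The telescoping is what makes the early signal "preliminary and ultimately cancellable": it guides learning mid-trajectory while leaving the long-run value of the episode untouched.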

8.
Desert ants, foraging in cluttered semiarid environments, are thought to be visually guided along individual, habitual routes. While other navigational mechanisms (e.g. path integration) are well studied, the question of how ants extract reliable visual features from a complex visual scene is still largely open. This paper explores the assumption that the upper outline of ground objects formed against the sky, i.e. the skyline, provides sufficient information for visual navigation. We constructed a virtual model of the ant’s environment. In the virtual environment, panoramic images were recorded and adapted to the resolution of the desert ant’s compound eye. From these images, either a skyline code or a pixel-based intensity code was extracted. Further, two homing algorithms were implemented: a modified version of the average landmark vector (ALV) model (Lambrinos et al. Robot Auton Syst 30:39–64, 2000) and a gradient ascent method. Results show less spatial aliasing for skyline coding and the best homing performance for ALV homing based on skyline codes. This supports the assumption of skyline coding in the visual homing of desert ants and allows novel approaches to technical outdoor navigation.
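The ALV model referenced above reduces a panorama to a single vector. A minimal sketch of that reduction, with hypothetical feature bearings standing in for the paper's skyline features:

```python
import math

# Sketch of the average landmark vector (ALV) idea: each detected feature
# (here, a skyline landmark) contributes a unit vector at its bearing; the
# ALV is their average, and the homing direction is approximately the
# difference between the ALV stored at the goal and the ALV seen now.
# Bearings are hypothetical, in radians.

def alv(bearings):
    x = sum(math.cos(b) for b in bearings) / len(bearings)
    y = sum(math.sin(b) for b in bearings) / len(bearings)
    return (x, y)

def homing_vector(home_bearings, current_bearings):
    hx, hy = alv(home_bearings)
    cx, cy = alv(current_bearings)
    return (hx - cx, hy - cy)

# At home, two skyline peaks sit symmetrically; displaced, they shift.
home = [math.radians(45), math.radians(-45)]
current = [math.radians(60), math.radians(-30)]
hv = homing_vector(home, current)  # non-zero: points the agent homeward
```

The appeal for insect models is the storage cost: only one two-component vector per location need be remembered, not the full panorama.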

9.
Feminist news media researchers have long contended that masculine news values shape journalists’ quotidian decisions about what is newsworthy. As a result, it is argued, topics and issues traditionally regarded as primarily of interest and relevance to women are routinely marginalised in the news, while men’s views and voices are given privileged space. When women do show up in the news, it is often as “eye candy,” thus reinforcing women’s value as sources of visual pleasure rather than residing in the content of their views. To date, evidence to support such claims has tended to be based on small-scale, manual analyses of news content. In this article, we report on findings from our large-scale, data-driven study of gender representation in online English language news media. We analysed both words and images so as to give a broader picture of how gender is represented in online news. The corpus of news content examined consists of 2,353,652 articles collected over a period of six months from more than 950 different news outlets. From this initial dataset, we extracted 2,171,239 references to named persons and 1,376,824 images resolving the gender of names and faces using automated computational methods. We found that males were represented more often than females in both images and text, but in proportions that changed across topics, news outlets and mode. Moreover, the proportion of females was consistently higher in images than in text, for virtually all topics and news outlets; women were more likely to be represented visually than they were mentioned as a news actor or source. Our large-scale, data-driven analysis offers important empirical evidence of macroscopic patterns in news content concerning the way men and women are represented.  相似文献   

10.
Humans and animals recover their sense of position and orientation using properties of the surface layout, but the processes underlying this ability are disputed. Although behavioral and neurophysiological experiments on animals long have suggested that reorientation depends on representations of surface distance, recent experiments on young children join experimental studies and computational models of animal navigation to suggest that reorientation depends either on processing of any continuous perceptual variables or on matching of 2D, depthless images of the landscape. We tested the surface distance hypothesis against these alternatives through studies of children, using environments whose 3D shape and 2D image properties were arranged to enhance or cancel impressions of depth. In the absence of training, children reoriented by subtle differences in perceived surface distance under conditions that challenge current models of 2D-image matching or comparison processes. We provide evidence that children’s spontaneous navigation depends on representations of 3D layout geometry.  相似文献   

11.
Lightness illusions are fundamental to human perception, and yet why we see them is still the focus of much research. Here we address the question by modelling not human physiology or perception directly, as is typically the case, but our natural visual world and the need for robust behaviour. Artificial neural networks were trained to predict the reflectance of surfaces in a synthetic ecology consisting of 3-D “dead-leaves” scenes under non-uniform illumination. The networks learned to solve this task accurately and robustly given only ambiguous sense data. In addition—and as a direct consequence of their experience—the networks also made systematic “errors” in their behaviour commensurate with human illusions, including brightness contrast and assimilation—although assimilation (specifically White’s illusion) only emerged when the virtual ecology included 3-D, as opposed to 2-D, scenes. Subtle variations in these illusions, also found in human perception, were observed, such as the asymmetry of brightness contrast. These data suggest that “illusions” arise in humans because (i) natural stimuli are ambiguous, and (ii) this ambiguity is resolved empirically by encoding the statistical relationship between images and scenes in past visual experience. Since resolving stimulus ambiguity is a challenge faced by all visual systems, a corollary of these findings is that human illusions must be experienced by all visual animals, regardless of their particular neural machinery. The data also provide a more formal definition of illusion: the condition in which the true source of a stimulus differs from its most likely (and thus perceived) source. As such, illusions are not fundamentally different from non-illusory percepts, all being direct manifestations of the statistical relationship between images and scenes.

12.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon,” a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the syllable ‘la’ sung on the original pitches (melody). The auditory stimuli were designed to be equally familiar to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations to facilitate song recognition.

13.
The ability to detect sudden changes in the environment is critical for survival. Hearing is hypothesized to play a major role in this process by serving as an “early warning device,” rapidly directing attention to new events. Here, we investigate listeners’ sensitivity to changes in complex acoustic scenes—what makes certain events “pop out” and grab attention while others remain unnoticed? We use artificial “scenes” populated by multiple pure-tone components, each with a unique frequency and amplitude modulation rate. Importantly, these scenes lack semantic attributes, which may have confounded previous studies, thus allowing us to probe low-level processes involved in auditory change perception. Our results reveal a striking difference between “appear” and “disappear” events. Listeners are remarkably tuned to object appearance: change detection and identification performance are at ceiling; response times are short, with little effect of scene size, suggesting a pop-out process. In contrast, listeners have difficulty detecting disappearing objects, even in small scenes: performance rapidly deteriorates with growing scene size; response times are slow, and even when a change is detected, the changed component is rarely successfully identified. We also measured change detection performance when a noise or silent gap was inserted at the time of change, or when the scene was interrupted by a distractor that occurred at the time of change but did not mask any scene elements. Gaps adversely affected the processing of item appearance but not disappearance. However, distractors reduced both appearance and disappearance detection. Together, our results suggest a role for neural adaptation and sensitivity to transients in the process of auditory change detection, similar to what has been demonstrated for visual change detection.
Importantly, listeners consistently performed better for item addition (relative to deletion) across all scene interruptions used, suggesting a robust perceptual representation of item appearance.  相似文献   

14.
Accurately encoding time is one of the fundamental challenges faced by the nervous system in mediating behavior. We recently reported that some animals have a specialized population of rhythmically active neurons in their olfactory organs with the potential to peripherally encode temporal information about odor encounters. If these neurons do indeed encode the timing of odor arrivals, it should be possible to demonstrate that this capacity has some functional significance. Here we show how this sensory input can profoundly influence an animal’s ability to locate the source of odor cues in realistic turbulent environments—a common task faced by species that rely on olfactory cues for navigation. Using detailed data from a turbulent plume created in the laboratory, we reconstruct the spatiotemporal behavior of a real odor field. We use recurrence theory to show that information about position relative to the source of the odor plume is embedded in the timing between odor pulses. Then, using a parameterized computational model, we show how an animal can use populations of rhythmically active neurons to capture and encode this temporal information in real time, and use it to efficiently navigate to an odor source. Our results demonstrate that the capacity to accurately encode temporal information about sensory cues may be crucial for efficient olfactory navigation. More generally, our results suggest a mechanism for extracting and encoding temporal information from the sensory environment that could have broad utility for neural information processing.  相似文献   

15.
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.  相似文献   
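The face-average representation described above amounts to enrolling a pixel-wise mean of several images rather than a single photograph. A hedged sketch of the principle, with toy 4-pixel "faces" and an illustrative threshold, not the smartphone's actual pipeline:

```python
# Sketch of face-average enrolment: storing the pixel-wise mean of several
# images of the user washes out lighting and pose variation while keeping
# stable identity features, so a fresh probe photo matches the template
# more reliably. Images are toy 4-pixel vectors; the threshold is arbitrary.

def average_face(images):
    """Pixel-wise mean of equal-sized images."""
    n = len(images)
    return [sum(img[i] for img in images) / n for i in range(len(images[0]))]

def distance(a, b):
    """Euclidean distance between two images."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def verify(probe, template, threshold):
    """Accept the probe if it is close enough to the enrolled template."""
    return distance(probe, template) <= threshold

user_images = [[10, 20, 30, 40], [14, 18, 32, 38], [12, 22, 28, 42]]
template = average_face(user_images)

accept_user = verify([13, 19, 31, 39], template, threshold=4)      # new photo of the user
reject_imposter = verify([40, 5, 10, 90], template, threshold=4)   # imposter face
```

As the abstract notes, the gain comes entirely from the choice of stored representation; the matching rule itself is unchanged, and imposters remain far from the template.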

16.
The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species.  相似文献   

17.
Many animal taxa have been shown to possess the ability of true navigation. In this study we investigated the possibilities for geomagnetic bi‐coordinate map navigation in different regions of the earth by analysing angular differences between isolines of geomagnetic total intensity and inclination. In ‘no‐grid’ zones where isolines were running almost parallel, efficient geomagnetic bi‐coordinate navigation would probably not be feasible. These zones formed four distinct areas with a north‐south extension in the northern hemisphere, whereas the pattern in the southern hemisphere was more diffuse. On each side of these zones there was often a mirror effect where identical combinations of the geomagnetic parameters appeared. This may potentially cause problems for species migrating long distances east‐west across longitudes, since they may pass areas with identical geomagnetic coordinates. Migration routes assumed for four populations of migratory passerine birds were used to illustrate the possibilities of geomagnetic bi‐coordinate map navigation along different routes. We conclude that it is unlikely that animal navigation is universally based on a geomagnetic bi‐coordinate map mechanism only, and we predict that the relative importance of geomagnetic coordinate information differs between animals, areas and routes, depending on the different conditions for bi‐coordinate geomagnetic navigation in different regions of the earth.  相似文献   
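The isoline analysis above can be sketched numerically: isolines of a scalar field run perpendicular to its gradient, so the angle between the total-intensity and inclination isolines at a point equals the angle between the two gradients. The fields below are toy stand-ins, not real geomagnetic data:

```python
import math

# Sketch of the isoline-angle analysis: where the two isoline families are
# nearly parallel (small angle), the two geomagnetic coordinates are
# redundant and bi-coordinate navigation is ill-conditioned ("no-grid"
# zones). Gradients are taken by central finite differences.

def gradient(field, x, y, h=1e-4):
    gx = field(x + h, y) - field(x - h, y)
    gy = field(x, y + h) - field(x, y - h)
    return gx, gy

def isoline_angle_deg(f, g, x, y):
    ax, ay = gradient(f, x, y)
    bx, by = gradient(g, x, y)
    cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return min(ang, 180.0 - ang)  # isolines are undirected lines

intensity = lambda x, y: y            # isolines run east-west
inclination_grid = lambda x, y: x     # isolines run north-south: a usable grid
inclination_nogrid = lambda x, y: 2 * y  # parallel to intensity: "no-grid" zone
```

A 90° crossing gives the best-conditioned map; as the angle shrinks toward 0°, small measurement errors translate into large position errors, which is the paper's argument against a universal bi-coordinate mechanism.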

18.
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability.  相似文献   
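The discriminability measure described above scores each image by its distance to a categorization boundary. A minimal sketch with a hand-set linear boundary and invented 2-D feature vectors (the study used classifiers trained on a large natural-scene database):

```python
import math

# Sketch of distance-to-boundary discriminability: images far from the
# categorization boundary are predicted to be easy (fast, accurate), images
# near it hard. The weights and feature vectors are illustrative only.

def distance_to_boundary(x, w, b):
    """Signed distance of feature vector x from the hyperplane w.x + b = 0."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (dot + b) / norm

w, b = [1.0, -1.0], 0.0  # toy "beach vs. city" boundary in a 2-D feature space

images = {"easy": [5.0, 1.0],    # far on the beach side of the boundary
          "hard": [2.1, 2.0]}    # near the boundary: predicted slow/error-prone
discriminability = {name: abs(distance_to_boundary(feats, w, b))
                    for name, feats in images.items()}
```

The same scalar can be computed against different boundaries (e.g. superordinate vs. basic-level), which is how the measure lets task effects be compared on a common footing.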

19.
Empathy allows us to understand and react to other people’s feelings and sensations; we can more accurately judge another person’s situation when we are aware of his/her emotions. Empathy for pain is a good working model of the behavioral and neural processes involved in empathy in general. Although the influence of perspective-taking processes (notably “Self” vs. “Other”) on pain rating has been studied, the impact of the degree of familiarity with the person representing the “Other” perspective has not been previously addressed. In the present study, we asked participants to adopt four different perspectives: “Self”, “Other-Most-Loved-Familiar”, “Other-Most-Hated-Familiar” and “Other-Stranger”. The results showed that higher pain ratings were attributed to the Other-Most-Loved-Familiar perspective than to the Self, Other-Stranger and Other-Most-Hated-Familiar perspectives. Moreover, participants were quicker to rate pain for the Other-Most-Loved-Familiar perspective and the Self perspective than for the other two perspectives. These results for a perspective-taking task therefore more clearly define the role of familiarity in empathy for pain.

20.
Artificial wombs already in development have the potential to radically alter how we perceive the developing fetus and the role of pregnancy in society. Because this technology would allow greater visibility of gestation than ever before, it also highlights the risk that artificial wombs will be used to further restrict women’s reproductive liberty and access to abortion. This article uses Paul Lauritzen’s theory of “visual bioethics” to explore the ethical significance of images of the developing fetus and how artificial wombs might best be visually designed and integrated into society.

