Similar Literature
20 similar records found (search time: 15 ms)
1.
Close behavioural coupling of visual orientation may provide a range of adaptive benefits to social species. In order to investigate the natural properties of gaze-following between pedestrians, we displayed an attractive stimulus in a frequently trafficked corridor within which a hidden camera was placed to detect directed gaze from passers-by. The presence of visual cues towards the stimulus by nearby pedestrians increased the probability of passers-by looking as well. In contrast to cueing paradigms used for laboratory research, however, we found that individuals were more responsive to changes in the visual orientation of those walking in the same direction in front of them (i.e. viewing head direction from behind). In fact, visual attention towards the stimulus diminished when oncoming pedestrians had previously looked. Information was therefore transferred more effectively behind, rather than in front of, gaze cues. Further analyses show that neither crowding nor group interactions were driving these effects, suggesting that, within natural settings, gaze-following is strongly mediated by social interaction and facilitates acquisition of environmentally relevant information.

2.
The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes.

3.
Animals can attempt to reduce uncertainty about their environment by gathering information personally or by observing others' interactions with the environment. There are several sensory modalities that can be used to transmit social information from chemical to visual to audible cues. When predation risk is variable, visual cues of conspecific behavior might be especially telling about the presence of a potential threat; however, most studies couple visual and chemical cues together. Here, we tested whether visual behavioral cues from frightened conspecifics were sufficient to indirectly transfer information about the presence of an unseen predator in three-spined sticklebacks. Our results demonstrate that visual behavioral cues from conspecifics about the presence of a predator are sufficient to induce an antipredator response. This suggests that information transfer can occur rapidly in the absence of chemical cues and that some individuals weigh social information more heavily than others.

4.
Fang F, He S. Neuron, 2005, 45(5): 793-800
Are there neurons representing specific views of objects in the human visual system? A visual selective adaptation method was used to address this question. After visual adaptation to an object viewed either 15 or 30 degrees from one side, when the same object was subsequently presented near the frontal view, the perceived viewing directions were biased in a direction opposite to that of the adapted viewpoint. This aftereffect can be obtained with spatially nonoverlapping adapting and test stimuli, and it depends on the global representation of the adapting stimuli. Viewpoint aftereffects were found within, but not across, categories of objects tested (faces, cars, wire-like objects). The magnitude of this aftereffect depends on the angular difference between the adapting and test viewing angles and grows with increasing duration of adaptation. These results support the existence of object-selective neurons tuned to specific viewing angles in the human visual system.

5.
The orientation behaviour of bats (Phyllostomus discolor, Phyllostomidae) flying inside an octagonal roost-like chamber (ø: 100 cm; h: 150 cm) was examined. It has been shown that the bats begin turning manoeuvres during flight by turning their head towards the direction they intend to proceed to. During early phases of the flights, cumulative navigation errors were evident, indicating that endogenous spatial information plays a major role in the orientation of the bats. During later phases of the flight this error diminished again, so it can be concluded that the bats start to use exogenous spatial information for orientation while approaching the target. In order to investigate the relative importance of vision, echolocation and endogenous spatial information for approaching the roost, the landing lattices inside the test arena were exchanged for non-grid dummies. We found that: (1) combined visual and endogenous information is more important than echoacoustical cues; (2) the bats quickly learned to switch their orientation behaviour in order to perform better at avoiding the dummies; (3) the learning performance was influenced by the visual similarity between the dummies and the real landing lattice.

6.
How do animals determine when others are able and disposed to receive their communicative signals? In particular, it is futile to make a silent gesture when the intended audience cannot see it. Some non-human primates use the head and body orientation of their audience to infer visual attentiveness when signalling, but whether species relying less on visual information use such cues when producing visual signals is unknown. Here, we test whether African elephants (Loxodonta africana) are sensitive to the visual perspective of a human experimenter. We examined whether the frequency of gestures of head and trunk, produced to request food, was influenced by indications of an experimenter's visual attention. Elephants signalled significantly more towards the experimenter when her face was oriented towards them, except when her body faced away from them. These results suggest that elephants understand the importance of visual attention for effective communication.

7.
Following adaptation to faces with contracted (or expanded) internal features, faces previously perceived as normal appear distorted in the opposite direction. This figural face aftereffect suggests face-coding mechanisms adapt to changes in the spatial relations of features and/or the global structure of faces. Here, we investigated whether the figural aftereffect requires spatial attention. Participants ignored a distorted adapting face and performed a highly demanding letter-count task. Before and after adaptation, participants rated the normality of morphed distorted faces ranging from 50% contracted through undistorted to 50% expanded. A robust aftereffect was observed. These results suggest that the figural face aftereffect can occur in the absence of spatial attention, even when the attentional demands of the relevant task are high.

8.
The sophisticated analysis of gestures and vocalizations, including assessment of their emotional valence, helps group-living primates efficiently navigate their social environment. Deficits in social information processing and emotion regulation are important components of many human psychiatric illnesses, such as autism, schizophrenia and social anxiety disorder. Analyzing the neurobiology of social information processing and emotion regulation requires a multidisciplinary approach that benefits from comparative studies of humans and animal models. However, many questions remain regarding the relationship between visual attention and arousal while processing social stimuli. Using noninvasive infrared eye-tracking methods, we measured the visual social attention and physiological arousal (pupil diameter) of adult male rhesus monkeys (Macaca mulatta) as they watched social and nonsocial videos. We found that social videos, as compared to nonsocial videos, captured more visual attention, especially if the social signals depicted in the videos were directed towards the subject. Subject-directed social cues and nonsocial nature documentary footage, compared to videos showing conspecifics engaging in naturalistic social interactions, generated larger pupil diameters (indicating heightened sympathetic arousal). These findings indicate that rhesus monkeys will actively engage in watching videos of various kinds. Moreover, infrared eye-tracking technology provides a mechanism for sensitively gauging the social interest of presented stimuli. Adult male rhesus monkeys' visual attention and physiological arousal do not always trend in the same direction, and are likely influenced by the content and novelty of a particular visual stimulus. This experiment creates a strong foundation for future experiments that will examine the neural network responsible for social information processing in nonhuman primates. Such studies may provide valuable information relevant to interpreting the neural deficits underlying human psychiatric illnesses such as autism, schizophrenia and social anxiety disorder.

9.
How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts; however, there was an increasing propensity for the cells to integrate the cues thereafter. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.
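The contrast between the two model predictions in this abstract — averaging under cue integration versus capture under attractor dynamics — can be made concrete with a minimal sketch. This is an illustration, not the authors' model: the function `integrate_cues`, the weight `w_a`, and the example angles are all hypothetical.

```python
import math

def integrate_cues(theta_a, theta_b, w_a):
    """Weighted circular combination of two directional cues (radians).

    w_a = 1.0 reproduces full 'capture' by cue A (attractor prediction);
    intermediate weights reproduce averaging (cue-integration prediction).
    """
    x = w_a * math.cos(theta_a) + (1 - w_a) * math.cos(theta_b)
    y = w_a * math.sin(theta_a) + (1 - w_a) * math.sin(theta_b)
    return math.atan2(y, x)

# Capture: the landmark (cue A, at 10 degrees) fully determines the signal.
print(round(math.degrees(integrate_cues(math.radians(10), 0.0, 1.0)), 1))  # 10.0
# Averaging: equal weights settle the signal midway between the cues.
print(round(math.degrees(integrate_cues(math.radians(10), 0.0, 0.5)), 1))  # 5.0
```

Circular (vector) averaging is used rather than arithmetic averaging so the sketch behaves sensibly when the two cue directions straddle the 0/360 degree wrap-around.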

10.
For humans, social cues often guide the focus of attention. Although many nonhuman primates, like humans, live in large, complex social groups, the extent to which human and nonhuman primates share fundamental mechanisms of social attention remains unexplored. Here, we show that, when viewing a rhesus macaque looking in a particular direction, both rhesus macaques and humans reflexively and covertly orient their attention in the same direction. Specifically, when performing a peripheral visual target detection task, viewing a monkey with either its eyes alone or with both its head and eyes averted to one side facilitated the detection of peripheral targets when they randomly appeared on the same side. Moreover, viewing images of a monkey with averted gaze evoked small but systematic shifts in eye position in the direction of gaze in the image. The similar magnitude and temporal dynamics of response facilitation and eye deviation in monkeys and humans suggest shared neural circuitry mediating social attention.

11.
Of the many hand gestures that we use in communication, pointing is one of the most common and powerful in its role as a visual referent that directs joint attention. While numerous studies have examined the developmental trajectory of pointing production and comprehension, very little consideration has been given to adult visual perception of hand pointing gestures. Across two studies, we use a visual adaptation paradigm to explore the mechanisms underlying the perception of proto-declarative hand pointing. Twenty-eight participants judged whether 3D modeled hands pointed, in depth, at or to the left or right of a target (test angles of 0°, 0.75° and 1.5° left and right) before and after adapting to either hands or arrows which pointed 10° to the right or left of the target. After adaptation, the perception of the pointing direction of the test hands shifted with respect to the adapted direction, revealing separate mechanisms for coding right and leftward pointing directions. While there were subtle yet significant differences in the strength of adaptation to hands and arrows, both cues gave rise to a similar pattern of aftereffects. The considerable cross-category adaptation found when arrows were used as adapting stimuli and the asymmetry in aftereffects to left and right hands suggests that the adaptation aftereffects are likely driven by simple orientation cues, inherent in the morphological structure of the hand, and not dependent on the biological status of the hand pointing cue. This finding provides evidence in support of a common neural mechanism that processes these directional social cues, a mechanism that may be blind to the biological status of the stimulus category.

12.
Several recent demonstrations using visual adaptation have revealed high-level aftereffects for complex patterns including faces. While traditional aftereffects involve perceptual distortion of simple attributes such as orientation or colour that are processed early in the visual cortical hierarchy, face adaptation affects perceived identity and expression, which are thought to be products of higher-order processing. And, unlike most simple aftereffects, those involving faces are robust to changes in scale, position and orientation between the adapting and test stimuli. These differences raise the question of how closely related face aftereffects are to traditional ones. Little is known about the build-up and decay of the face aftereffect, and the similarity of these dynamic processes to traditional aftereffects might provide insight into this relationship. We examined the effect of varying the duration of both the adapting and test stimuli on the magnitude of perceived distortions in face identity. We found that, just as with traditional aftereffects, the identity aftereffect grew logarithmically stronger as a function of adaptation time and exponentially weaker as a function of test duration. Even the subtle aspects of these dynamics, such as the power-law relationship between the adapting and test durations, closely resembled those of other aftereffects. These results were obtained with two different sets of face stimuli that differed greatly in their low-level properties. We postulate that the mechanisms governing these shared dynamics may be dissociable from the responses of feature-selective neurons in the early visual cortex.
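The qualitative dynamics reported above — logarithmic growth with adaptation time and exponential decay with test duration — can be summarized in a toy formula. This is only a sketch of the shape of the effect; the function `aftereffect_magnitude` and the constants `a` and `tau` are illustrative assumptions, not fitted values from the study.

```python
import math

def aftereffect_magnitude(t_adapt, t_test, a=1.0, tau=2.0):
    """Toy aftereffect strength: grows logarithmically with adaptation
    time t_adapt and decays exponentially with test duration t_test.
    a (gain) and tau (decay constant) are illustrative, not fitted."""
    return a * math.log(1 + t_adapt) * math.exp(-t_test / tau)

# Longer adaptation -> stronger aftereffect (logarithmic build-up)
assert aftereffect_magnitude(8, 1) > aftereffect_magnitude(2, 1)
# Longer test exposure -> weaker aftereffect (exponential decay)
assert aftereffect_magnitude(8, 4) < aftereffect_magnitude(8, 1)
```

The multiplicative form makes the two factors separable, which is one simple way such build-up/decay curves are often parameterized; the paper's actual fits may differ.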

13.
When confronted with complex visual scenes in daily life, how do we know which visual information represents our own hand? We investigated the cues used to assign visual information to one's own hand. Wrist tendon vibration elicits an illusory sensation of wrist movement. The intensity of this illusion attenuates when the actual motionless hand is visually presented. Testing what kind of visual stimuli attenuate this illusion will elucidate factors contributing to visual detection of one's own hand. The illusion was reduced when a stationary object was shown, but only when participants knew it was controllable with their hands. In contrast, the visual image of their own hand attenuated the illusion even when participants knew that it was not controllable. We suggest that long-term knowledge about the appearance of the body and short-term knowledge about controllability of a visual object are combined to robustly extract our own body from a visual scene.

14.
Objective: Social cues such as eye gaze, head orientation, and the direction of biological motion are critically important for human survival and social interaction. Because social cues and peripheral cues share the property of reflexive attentional orienting, social attention has often been regarded as a form of exogenous attention. However, exogenous attention cannot fully account for all social attention phenomena, so whether the two share the same processing mechanism remains a matter of debate. Methods: Using a spatial cueing paradigm, this study systematically examined cue valid...

15.
Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja's Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, incorporate newly added stable cues, support orientation across many different environments (high memory capacity), and is consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
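For readers unfamiliar with the learning rule named above, the textbook single-unit Oja rule adds a normalizing term to Hebbian learning so the weight vector converges to the leading principal direction of its inputs with unit norm. The paper uses a modified subspace version; the sketch below shows only the classic single-neuron form, with illustrative function names, constants, and synthetic data.

```python
import math
import random

def oja_train(samples, eta=0.05, epochs=200, seed=0):
    """Classic single-unit Oja rule: dw = eta * y * (x - y * w), with
    y = w . x. The subtractive y*w term keeps the weight norm bounded,
    so w converges toward the inputs' leading principal direction."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(2)]
    for _ in range(epochs):
        for x in samples:
            y = w[0] * x[0] + w[1] * x[1]
            w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]
    return w

# Synthetic 2D inputs varying mostly along the (1, 1) diagonal
rng = random.Random(1)
data = [(s + rng.gauss(0, 0.1), s + rng.gauss(0, 0.1))
        for s in [rng.gauss(0, 1) for _ in range(200)]]
w = oja_train(data)
# The learned weight vector ends up near unit length, aligned with the
# dominant input direction (both components share a sign).
print(round(math.hypot(*w), 2))  # ~1.0
```

In the model described above, the analogous computation would let the landmark-bearing signal lock onto the stable, correlated structure of the visual cue array while shedding unstable cues.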

16.
Fang F, He S. Current Biology: CB, 2004, 14(3): 247-251
3D structures can be perceived based on the patterns of 2D motion signals. With orthographic projection of a 3D stimulus onto a 2D plane, the kinetic information can give a vivid impression of depth, but the depth order is intrinsically ambiguous, resulting in bistable or even multistable interpretations. For example, an orthographic projection of dots on the surface of a rotating cylinder is perceived as a rotating cylinder with ambiguous direction of rotation. We show that the bistable rotation can be stabilized by adding information, not to the dots themselves, but to their spatial context. More interestingly, the stabilized bistable motion can generate consistent rotation aftereffects. The rotation aftereffect can only be observed when the adapting and test stimuli are presented at the same stereo depth and the same retinal location, and it is not due to attentional tracking. The observed rotation aftereffect is likely due to direction-contingent disparity adaptation, implying that stimuli with kinetic depth may have activated neurons sensitive to different disparities, even though the stimuli have zero relative disparity. Stereo depth and kinetic depth may be supported by a common neural mechanism at an early stage in the visual system.

17.
For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.

18.
Group foraging has been suggested as an important factor for the evolution of sociality. However, visual cues are predominantly used to gain information about group members' foraging success in diurnally foraging animals such as birds, where group foraging has been studied most intensively. By contrast, nocturnal animals, such as bats, would have to rely on other cues or signals to coordinate foraging. We investigated the role of echolocation calls as inadvertently produced cues for social foraging in the insectivorous bat Noctilio albiventris. Females of this species live in small groups, forage over water bodies for swarming insects and have an extremely short daily activity period. We predicted and confirmed that (i) free-ranging bats are attracted by playbacks of echolocation calls produced during prey capture, and that (ii) bats of the same social unit forage together to benefit from passive information transfer via the change in group members' echolocation calls upon finding prey. Network analysis of high-resolution automated radio telemetry confirmed that group members flew within the predicted maximum hearing distance 94±6 per cent of the time. Thus, echolocation calls also serve as intraspecific communication cues. Sociality appears to allow for more effective group foraging strategies via eavesdropping on acoustical cues of group members in nocturnal mammals.

19.
Animals are capable of enhanced decision making through cooperation, whereby accurate decisions can occur quickly through decentralized consensus. These interactions often depend upon reliable social cues, which can result in highly coordinated activities in uncertain environments. Yet information within a crowd may be lost in translation, generating confusion and enhancing individual risk. As quantitative data detailing animal social interactions accumulate, the mechanisms enabling individuals to rapidly and accurately process competing social cues remain unresolved. Here, we model how motion-guided attention influences the exchange of visual information during social navigation. We also compare the performance of this mechanism to the hypothesis that robust social coordination requires individuals to numerically limit their attention to a set of n-nearest neighbours. While we find that such numerically limited attention does not generate robust social navigation across ecological contexts, several notable qualities arise from selective attention to motion cues. First, individuals can instantly become a local information hub when startled into action, without requiring changes in neighbour attention level. Second, individuals can circumvent speed-accuracy trade-offs by tuning their motion thresholds. In turn, these properties enable groups to collectively dampen or amplify social information. Lastly, the minority required to sway a group's short-term directional decisions can change substantially with social context. Our findings suggest that motion-guided attention is a fundamental and efficient mechanism underlying collaborative decision making during social navigation.
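The two attention strategies this abstract contrasts — a fixed budget of n nearest neighbours versus a tunable motion threshold — can be sketched in a few lines. The function names and toy numbers are illustrative assumptions; the sketch only demonstrates why a startled, fast-moving neighbour is instantly salient under motion-guided attention but can be missed under purely distance-limited attention.

```python
def knearest_attention(distances, speeds, k):
    """Numerically limited attention: attend only to the k nearest
    neighbours, regardless of what they are doing."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    return set(order[:k])

def motion_guided_attention(distances, speeds, speed_threshold):
    """Motion-guided attention: attend to any neighbour whose motion
    exceeds a tunable threshold, so a startled neighbour becomes a
    local information hub without anyone changing k."""
    return {i for i, v in enumerate(speeds) if v >= speed_threshold}

dist = [1.0, 2.0, 3.0, 4.0]
speed = [0.1, 0.1, 5.0, 0.1]   # neighbour 2 startles into fast motion
print(knearest_attention(dist, speed, 2))        # {0, 1}: startled neighbour ignored
print(motion_guided_attention(dist, speed, 1.0)) # {2}: startled neighbour attended
```

Raising or lowering `speed_threshold` is the knob the abstract refers to when it says individuals can tune motion thresholds to trade speed against accuracy.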

20.
Dogs' ability to recognise cues of human visual attention was studied in different experiments. Study 1 was designed to test the dogs' responsiveness to their owner's tape-recorded verbal commands (Down!) while the Instructor (who was the owner of the dog) was facing either the dog or a human partner or neither of them, or was visually separated from the dog. Results show that dogs were more ready to follow the command if the Instructor attended to them during instruction compared to situations when the Instructor faced the human partner or was out of sight of the dog. Importantly, however, dogs showed intermediate performance when the Instructor was orienting into 'empty space' during the re-played verbal commands. This suggests that dogs are able to differentiate the focus of human attention. In Study 2 the same dogs were offered the possibility to beg for food from two unfamiliar humans whose visual attention (i.e. facing the dog or turning away) was systematically varied. The dogs' preference for choosing the attentive person shows that dogs are capable of using visual cues of attention to evaluate the human actors' responsiveness to solicit food-sharing. The dogs' ability to understand the communicatory nature of the situations is discussed in terms of their social cognitive skills and unique evolutionary history.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号