Similar articles
20 similar articles found.
1.
It has recently been shown that some non-human animals can cross-modally recognize members of their own taxon. What is unclear is just how plastic this recognition system can be. In this study, we investigate whether an animal, the domestic horse, is capable of spontaneous cross-modal recognition of individuals from a morphologically very different species. We also provide the first insights into how cross-modal identity information is processed by examining whether there are hemispheric biases in this important social skill. In our preferential looking paradigm, subjects were presented with two people and playbacks of their voices to determine whether they were able to match the voice with the person. When presented with familiar handlers subjects could match the specific familiar person with the correct familiar voice. Horses were significantly better at performing the matching task when the congruent person was standing on their right, indicating marked hemispheric specialization (left hemisphere bias) in this ability. These results are the first to demonstrate that cross-modal recognition in animals can extend to individuals from phylogenetically very distant species. They also indicate that processes governed by the left hemisphere are central to the cross-modal matching of visual and auditory information from familiar individuals in a naturalistic setting.

2.
Adachi I, Hampton RR. PLoS ONE. 2011;6(8):e23345
Rhesus monkeys gather much of their knowledge of the social world through visual input and may preferentially represent this knowledge in the visual modality. Recognition of familiar faces is clearly advantageous, and the flexibility and utility of primate social memory would be greatly enhanced if visual memories could be accessed cross-modally either by visual or auditory stimulation. Such cross-modal access to visual memory would facilitate flexible retrieval of the knowledge necessary for adaptive social behavior. We tested whether rhesus monkeys have cross-modal access to visual memory for familiar conspecifics using a delayed matching-to-sample procedure. Monkeys learned visual matching of video clips of familiar individuals to photographs of those individuals, and generalized performance to novel videos. In cross-modal probe trials, coo-calls were played during the memory interval. The calls were either from the monkey just seen in the sample video clip or from a different familiar monkey. Even though the monkeys were trained exclusively in visual matching, the calls influenced choice by causing an increase in the proportion of errors to the picture of the monkey whose voice was heard on incongruent trials. This result demonstrates spontaneous cross-modal recognition. It also shows that viewing videos of familiar monkeys activates naturally formed memories of real monkeys, validating the use of video stimuli in studies of social cognition in monkeys.

3.
Individuals face evolutionary trade-offs between the acquisition of costly but accurate information gained firsthand and the use of inexpensive but possibly less reliable social information. American crows (Corvus brachyrhynchos) use both sources of information to learn the facial features of a dangerous person. We exposed wild crows to a novel 'dangerous face' by wearing a unique mask as we trapped, banded and released 7-15 birds at five study sites near Seattle, WA, USA. An immediate scolding response to the dangerous mask after trapping by previously captured crows demonstrates individual learning, while an immediate response by crows that were not captured probably represents conditioning to the trapping scene by the mob of birds that assembled during the capture. Later recognition of dangerous masks by lone crows that were never captured is consistent with horizontal social learning. Independent scolding by young crows, whose parents had conditioned them to scold the dangerous mask, demonstrates vertical social learning. Crows that directly experienced trapping later discriminated between dangerous and neutral masks more precisely than did crows that learned through social means. Learning enabled scolding to double in frequency and spread at least 1.2 km from the place of origin over a 5 year period at one site.

4.
Cooperatively breeding birds typically form cohesive and stable groups that live year-round in all-purpose territories where competition for resources is likely to arise. Understanding how group members negotiate over resources is crucial because conflicts may disrupt the stability of the group and may ultimately hinder cooperation. However, social relationships within the group have been largely neglected so far. Here we investigated how cooperatively breeding carrion crows (Corvus corone corone) share a food source, by observing dyadic interactions in 29 territories that contained retained offspring of the breeding pair and/or immigrants. We found that crows formed linear and stable dominance hierarchies, which were stronger for males than females. We suggest that this difference mirrors the level of competition for resources other than food, such as reproduction and territory inheritance, which is higher in males than females. Interestingly, immigrant males dominated male offspring, suggesting that, for the resident breeder, which is the alpha member of the group, the benefits of an association with an immigrant outweigh the costs of having his sons pushed down in the hierarchy. Our study uncovered the key factors that determine hierarchical relationships among cooperatively breeding crows and highlighted the need to focus on social interactions in every context of group living to fully explain the dynamics of cooperation at the nest.

5.
The recognition of individuals is a basic cognitive ability of social animals. A prerequisite for individual recognition is distinct characteristics that can be used to distinguish between other conspecific individuals. Studies of birds have shown that visual information, such as colour patterning, is used in individual recognition. However, in the case of monochromatic birds, colour patterning cannot be used to identify individuals. Therefore, we expected that the configuration of facial features, such as the shape of the bills or eyes, may have enough individuality to permit individual recognition in such species. In this study, we aimed to clarify visible individual differences in the facial configuration of large-billed crows (Corvus macrorhynchos). Specifically, we analysed the profile pictures of 16 crows. We measured 26 variables in 20 pictures of each bird and then performed principal component analysis and discriminant function analysis. The results showed that the configuration of the facial profiles was individually distinct, but re-classification by discriminant functions implied that it did not clearly differ between sexes. These results suggest that crows may be able to recognise individuals on the basis of the individuality of facial configuration, even in the absence of any conspicuous colour patterning.
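The measurement-and-classification pipeline described in this abstract (26 variables per profile picture of 16 crows, principal component analysis, then discriminant classification) can be sketched roughly as follows. The data are synthetic stand-ins, and a nearest-centroid classifier is used as a simplified proxy for the discriminant function analysis actually reported:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 16 crows x 20 photos each, 26 facial measurements
# (e.g. bill length, eye diameter). Each bird has its own "facial type".
n_birds, n_photos, n_vars = 16, 20, 26
bird_means = rng.normal(0.0, 1.0, size=(n_birds, n_vars))
X = np.repeat(bird_means, n_photos, axis=0) \
    + rng.normal(0.0, 0.3, size=(n_birds * n_photos, n_vars))
labels = np.repeat(np.arange(n_birds), n_photos)

# Principal component analysis via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                  # projections onto principal components
explained = s**2 / np.sum(s**2)     # proportion of variance per component

# Crude stand-in for discriminant classification: nearest individual
# centroid in the space of the first 10 components.
k = 10
centroids = np.array([scores[labels == b, :k].mean(axis=0)
                      for b in range(n_birds)])
pred = np.argmin(((scores[:, None, :k] - centroids[None, :, :]) ** 2)
                 .sum(axis=-1), axis=1)
accuracy = (pred == labels).mean()
```

With within-individual variation smaller than between-individual variation, classification accuracy is far above the 1/16 chance level, mirroring the finding that facial profiles are individually distinct.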

6.
How predators locate avian nests is poorly understood and has been subjected to little experimental inquiry. We examined which sensory stimuli were important in the nest-finding behavior of fish crows (Corvus ossifragus), a common nest predator in the southeastern United States. Using an array of potted trees in a large enclosure, we presented artificial nests to captive crows and quantified responses to visual, auditory, and olfactory nest cues, and nest position. Partial ranks of nest-treatment preferences were analyzed using log-linear models. Nest visibility significantly increased the likelihood of predation by fish crows, and increasing nest height was a marginally significant influence on nest vulnerability; no responses were apparent to auditory or olfactory stimuli. Our findings demonstrate that fish crows are visually-oriented nest predators that may preferentially prey on, or more readily encounter, above-ground nests. Moreover, the experimental design provides a new method for evaluating predator-prey interactions between nests and their predators. This study also illustrates how sensory capabilities of predators can interact with nest types to determine nest predation patterns.

7.
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between the tactile and visual stimuli. We subsequently demonstrate an analogous effect for observers' key presses (as actions) and their sensory effects. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action–effect intervals (intentional binding) or by subjective causality ratings, is impaired when both the participant's action and its putative visual effect are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action–effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the same modality as the effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.

8.
Tinnitus is the perception of sound in the absence of an external stimulus. The pathophysiology of tinnitus is not fully understood, but recent studies indicate that the associated brain alterations involve non-auditory areas, including the prefrontal cortex. In experiment 1, we used a go/no-go paradigm to evaluate target detection speed and inhibitory control in tinnitus participants (TP) and control subjects (CS), in both unimodal and bimodal conditions in the auditory and visual modalities. We also tested whether the sound frequency used for targets and distractors affected performance. We observed that TP were slower and made more false alarms than CS in all unimodal auditory conditions. TP were also slower than CS in the bimodal conditions. In addition, when comparing response times in bimodal and auditory unimodal conditions, the expected gain in bimodal conditions was present in CS, but not in TP when tinnitus-matched frequency sounds were used as targets. In experiment 2, we tested the sensitivity to cross-modal interference in TP during auditory and visual go/no-go tasks in which each stimulus was preceded by an irrelevant pre-stimulus in the untested modality (e.g. a high-frequency auditory pre-stimulus in the visual go/no-go condition). We observed that TP had longer response times than CS and made more false alarms in all conditions. In addition, the highest false alarm rate occurred in TP when tinnitus-matched/high-frequency sounds were used as pre-stimuli. We conclude that inhibitory control is altered in TP and that TP are abnormally sensitive to cross-modal interference, reflecting difficulty ignoring irrelevant stimuli. The fact that the strongest interference effect was caused by tinnitus-like auditory stimulation is consistent with the hypothesis that such stimulation generates emotional responses that affect cognitive processing in TP. We postulate that executive function deficits play a key role in the perception and maintenance of tinnitus.

9.
Recently, Kitagawa and Ichihara (2002) demonstrated that visual adaptation to an expanding or contracting disk produces a cross-modal visually-induced auditory loudness aftereffect (VALAE), which they attributed to cross-correlations of motion in three-dimensional space. Our experiments extend their results by providing evidence that attending selectively to one of two competing visual stimuli of the same saliency produces a cross-modal VALAE that favors the attended stimulus. These cross-modal attentional effects suggest the existence of integrative spatial mechanisms between vision and audition that are affected by attention.

10.
It has been shown that a large set of training stimuli promotes abstract concept learning. These experiments were designed to assess whether a large set of training stimuli would facilitate matching learning in crows. Four hooded crows were trained with a set of 72 unique combinations of stimuli in two-alternative simultaneous matching tasks with stimuli of three different categories: achromatic color (white, light-grey, dark-grey, and black), shape (Arabic numerals from 1 to 4, used as visual shapes only), and number of elements (heterogeneous graphic arrays of 1 to 4 items). Although the performance of all crows was significantly above chance (p < 0.01) in some 72-trial blocks, the birds were unable to establish matching and to reach the learning criterion (80% correct or better over 72 consecutive trials) within 5184 trials. Thus, the modified training procedure was less efficient than the training technique previously used (successive cyclic repetition of three small sets of training stimuli), which allowed four of six crows to acquire the matching rule after 1780, 2360, 3830, and 5260 trials [4,9].

11.
In many cooperatively breeding societies, helping effort varies greatly among group members, raising the question of why dominant individuals tolerate lazy subordinates. In groups of carrion crows Corvus corone corone, helpers at the nest increase breeders' reproductive success, but chick provisioning is unevenly distributed among non-breeders, with a gradient that ranges from individuals that work as much as the breeders to others that completely refrain from visiting the nest. Here we show that lazy non-breeders represent an insurance workforce that fully compensates for a reduction in the provisioning effort of another group member, avoiding a decrease in reproductive success. When we temporarily impaired a carer, decreasing its nest attendance, the laziest non-breeders increased their provisioning rate and individuals that initially refrained from visiting the nest started helping. Breeders, in contrast, did not increase chick provisioning. This shows that lazy non-breeders can buffer a sudden unfavourable circumstance and suggests that group stability relies on the potential contribution of group members in addition to their current effort.

12.
The simultaneity of signals from different senses, such as vision and audition, is a useful cue for determining whether those signals arose from one environmental source or from more than one. To understand better the sensory mechanisms for assessing simultaneity, we measured the discrimination thresholds for time intervals marked by auditory, visual or auditory–visual stimuli, as a function of the base interval. For all conditions, both unimodal and cross-modal, the thresholds followed a characteristic 'dipper function' in which the lowest thresholds occurred when discriminating against a non-zero interval. The base interval yielding the lowest threshold was roughly equal to the threshold for discriminating asynchronous from synchronous presentations. Those lowest thresholds occurred at approximately 5, 15 and 75 ms for auditory, visual and auditory–visual stimuli, respectively. Thus, the mechanisms mediating performance with cross-modal stimuli are considerably slower than the mechanisms mediating performance within a particular sense. We developed a simple model with temporal filters of different time constants and showed that the model produces discrimination functions similar to the ones we observed in humans. Both for processing within a single sense, and for processing across senses, temporal perception is affected by the properties of temporal filters, the outputs of which are used to estimate time offsets, correlations between signals, and more.

13.

Background

The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or affects other non-temporal qualities of visual perception.

Methodology/Principal Findings

Here, we tested the hypothesis that visual sensitivity, rather than only the perceived duration of visual stimuli, can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d′) for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60–96 ms), visual sensitivity was increased relative to performance with ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60–96 ms. Moreover, the amplitude of this effect correlated with the visual sensitivity enhancement found for longer-lasting visual stimuli across participants.

Conclusions/Significance

Our findings show that the duration of co-occurring sounds affects visual perception; it changes visual sensitivity much as altering the actual duration of the visual stimuli does.
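The sensitivity index d′ referred to above is computed from trial counts as z(hit rate) minus z(false-alarm rate). The sketch below is generic, with invented counts and the common log-linear correction; it is not the authors' analysis code:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so rates of 0 or 1 stay finite."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Illustrative, made-up counts for short- vs long-duration visual stimuli
short_dp = d_prime(hits=30, misses=20, false_alarms=15, correct_rejections=35)
long_dp = d_prime(hits=42, misses=8, false_alarms=10, correct_rejections=40)
```

Higher hit rates at a given false-alarm rate yield larger d′, which is the sense in which longer-duration stimuli produced "higher sensitivity" in the study.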

14.
Jungle crows (Corvus macrorhynchos) flexibly change their social forms depending on their age, the time of day, and the season. In the daytime, paired adults behave territorially and unpaired subadults form small flocks of ten birds, whereas at night hundreds of birds roost together. In the breeding season, pairs remain at their nest all day. This fission-fusion raises questions about the underlying social structure and the cognitive capabilities of jungle crows. In this study, dyadic encounters were used to investigate dominance relationships (linear or non-linear) and the underlying mechanisms in captive jungle crows. Fourteen crows were tested in 455 encounters (i.e., 5 encounters per dyad), and a stable linear dominance hierarchy emerged. Sex and aggressiveness were the individual characteristics that determined dominance: males dominated females, and more aggressive individuals dominated less aggressive ones. Aggressive interactions in dyads occurred primarily during the first encounter and declined drastically during subsequent encounters, without any sign of a confidence effect. These results suggest that, in captive jungle crows, a linear dominance hierarchy is intrinsically determined by sex and aggressiveness and maintained extrinsically by memories of past outcomes associated with specific individuals, implying individual recognition.
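A standard way to quantify the linearity of a dominance hierarchy like the one reported here is Landau's index h. The implementation below is a generic sketch, not the authors' method: a fully transitive win/loss matrix among 14 birds yields h = 1, while an intransitive cycle yields h = 0:

```python
import numpy as np

def landau_h(dom):
    """Landau's linearity index for a dominance matrix.
    dom[i, j] = 1 if individual i dominates j, else 0.
    h = 1 for a perfectly linear hierarchy, near 0 for random/cyclic."""
    n = dom.shape[0]
    v = dom.sum(axis=1)  # number of individuals each one dominates
    return (12.0 / (n**3 - n)) * np.sum((v - (n - 1) / 2.0) ** 2)

# A perfectly linear hierarchy among 14 birds: individual i dominates all j > i
n = 14
linear = np.triu(np.ones((n, n), dtype=int), k=1)
h_linear = landau_h(linear)

# An intransitive 3-cycle (A beats B, B beats C, C beats A)
cyclic = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]])
h_cyclic = landau_h(cyclic)
```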

15.
Mismatch negativity of ERP in cross-modal attention
Event-related potentials were measured in 12 healthy young subjects aged 19-22 using the "cross-modal and delayed response" paradigm, which improves unattended purity and avoids the effect of the task target on the deviant components of the ERP. The experiment included two conditions: (i) attend visual modality, ignore auditory modality; (ii) attend auditory modality, ignore visual modality. The stimuli under the two conditions were the same. The difference wave was obtained by subtracting the ERPs of the standard stimuli from those of the deviant stimuli. The present results showed that mismatch negativity (MMN), N2b and P3 components can be produced in the auditory and visual modalities under the attention condition. However, only MMN was observed in the two modalities under the inattention condition. Auditory and visual MMN have some features in common: under the attention condition, their largest MMN wave peaks were distributed over their respective primary sensory projection areas of the scalp, but over front
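The difference-wave computation described above (ERP to deviants minus ERP to standards) can be illustrated on synthetic waveforms; the component shapes, amplitudes, and latencies below are invented for illustration and are not taken from the study:

```python
import numpy as np

# Time axis: 0-400 ms post-stimulus in 1 ms steps
t = np.linspace(0.0, 0.4, 401)

# Toy ERP to standard stimuli: an N1-like negativity around 100 ms
standard = -2.0 * np.exp(-((t - 0.10) / 0.02) ** 2)

# Toy ERP to deviant stimuli: same N1 plus an extra negativity ~170 ms (the MMN)
deviant = standard - 3.0 * np.exp(-((t - 0.17) / 0.03) ** 2)

# Difference wave: deviant ERP minus standard ERP isolates the MMN
mmn = deviant - standard
peak_latency = t[np.argmin(mmn)]  # latency of the MMN peak, in seconds
```

Subtracting the standard-stimulus ERP cancels the shared exogenous components, so only the deviance-related negativity survives in the difference wave.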

16.
Transitive responding in humans and non-human animals has attracted considerable attention because of its presumably inferential nature. In an attempt to replicate our earlier study with crows [Lazareva, O.F., Smirnova, A.A., Bagozkaja, M.S., Zorina, Z.A., Rayevsky, V.V., Wasserman, E.A., 2004. Transitive responding in hooded crows requires linearly ordered stimuli. J. Exp. Anal. Behav. 82, 1-19], we trained pigeons to discriminate overlapping pairs of colored squares (A+ B-, B+ C-, C+ D-, and D+ E-). For some birds, the colored squares, or primary stimuli, were followed by a circle of the same color (feedback stimuli) whose diameter decreased from A to E (Ordered Feedback group); these circles were made available to help order the stimuli along a physical dimension. For other birds, all of the feedback stimuli had the same diameter (Constant Feedback group). In later testing, novel choice pairs were presented, including the critical BD pair. The pigeons' reinforcement history with Stimuli B and D was controlled, so that the birds should not have chosen Stimulus B during the BD test. Unlike the crows, the pigeons selected Stimulus B over Stimulus D in both the Ordered and Constant Feedback groups, suggesting that the orderability of the post-choice feedback stimuli did not affect pigeons' transitive responding. Post hoc simulations showed that associative models [Wynne, C.D.L., 1995. Reinforcement accounts for transitive inference (TI) performance. Anim. Learn. Behav. 23, 207-217; Siemann, M., Delius, J.D., 1998. Algebraic learning and neural network models for transitive and non-transitive responding. Eur. J. Cogn. Psychol. 10, 307-334] failed to predict pigeons' responding in the BD test.

17.
Social relationships in domestic fowl are commonly assumed to rely on social recognition and its prerequisite, discrimination of group-mates. If this is true, then the unnatural physical and social environments in which commercial laying hens are typically housed, when compared with those in which their progenitor species evolved, may compromise social function with consequent implications for welfare. Our aims were to determine whether adult hens can discriminate between unique pairs of familiar conspecifics, and to establish the most appropriate method for assessing this social discrimination. We investigated group-mate discrimination using two learning tasks in which there was bi-directional exchange of visual, auditory and olfactory information. Learning occurred in a Y-maze task (p < 0.003; n = 7/8) but not in an operant key-pecking task (p = 0.001; n = 1/10). A further experiment with the operant-trained hens examined whether failure was specific to the group-mate social discrimination or to the response task. Learning also failed to occur in this familiar/unfamiliar social discrimination task (p = 0.001; n = 1/10). Our findings demonstrate unequivocally that adult laying hens kept in small groups, under environmental conditions more consistent with those in which sensory capacities evolved, can discriminate group members: however, appropriate methods to demonstrate discrimination are crucial.

18.
Temporal information is often contained in multi-sensory stimuli, but it is currently unknown how the brain combines e.g. visual and auditory cues into a coherent percept of time. The existing studies of cross-modal time perception mainly support the "modality appropriateness hypothesis", i.e. the domination of auditory temporal cues over visual ones because of the higher precision of audition for time perception. However, these studies suffer from methodological problems and conflicting results. We introduce a novel experimental paradigm to examine cross-modal time perception by combining an auditory time perception task with a visually guided motor task, requiring participants to follow an elliptic movement on a screen with a robotic manipulandum. We find that subjective duration is distorted according to the speed of visually observed movement: the faster the visual motion, the longer the perceived duration. In contrast, the actual execution of the arm movement does not contribute to this effect, but impairs discrimination performance through dual-task interference. We also show that additional training of the motor task attenuates the interference, but does not affect the distortion of subjective duration. The study demonstrates a direct influence of visual motion on auditory temporal representations, which is independent of attentional modulation. At the same time, it provides causal support for the notion that time perception and continuous motor timing rely on separate mechanisms, a proposal that was formerly supported by correlational evidence only. The results constitute a counterexample to the modality appropriateness hypothesis and are best explained by Bayesian integration of modality-specific temporal information into a centralized "temporal hub".
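The Bayesian integration invoked in this abstract is commonly modeled as inverse-variance (reliability-weighted) cue combination. The numbers below are made up, with the auditory duration estimate assumed more reliable than the visual one, to show how the fused estimate is pulled toward the more precise cue while gaining precision overall:

```python
# Maximum-likelihood fusion of two Gaussian duration cues:
# each cue is weighted by its inverse variance (its reliability).
def fuse(mu_a, var_a, mu_v, var_v):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v       # fused duration estimate
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)    # fused variance (always smaller)
    return mu, var

# Hypothetical estimates in ms: audition more precise than vision for time
mu, var = fuse(mu_a=500.0, var_a=100.0, mu_v=600.0, var_v=900.0)
```

With these assumed variances the auditory cue gets weight 0.9, so the fused estimate lands near the auditory one; under the modality appropriateness hypothesis this weighting would be fixed in audition's favor, whereas Bayesian integration lets it shift whenever the visual cue becomes the more reliable one.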

19.
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left visual field. In contrast, auditory stimuli improved second target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was observed in perceptual processing, depending on the hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

20.
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.
