Similar Literature
20 similar documents found (search time: 46 ms)
1.
The goal of this study was to compare the performance of a chimpanzee and humans on auditory-visual intermodal matching of conspecifics and non-conspecifics. The task consisted of matching vocal samples to facial images of the corresponding vocalizers. We tested the chimpanzee and human subjects with both chimpanzee and human stimuli to assess the involvement of species-specificity in the recognition process. All subjects were highly familiar with the stimuli. The chimpanzee subject, named Pan, had extensive prior experience in auditory-visual intermodal matching tasks. We found clear evidence of a species-specific effect: the chimpanzee and human subjects both performed better at recognizing conspecifics than non-conspecifics. Our results suggest that Pan's early exposure to human caretakers did not confer a perceptual advantage in discriminating familiar humans over familiar conspecifics. The results also showed that Pan's recognition of non-conspecifics did not significantly improve over the course of the experiment. In contrast, human subjects learned to better discriminate non-conspecific stimuli, suggesting that recognition processing might differ across species. Nevertheless, this comparative study demonstrates that species-specificity significantly affects intermodal individual recognition of highly familiar individuals in both chimpanzee and human subjects.

2.

Background

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question.

Methodology/Principal Findings

Three groups were trained daily for 10 sessions on an auditory, a visual, or a combined audiovisual temporal order judgment (TOJ) task. Prior to learning, groups were pre-tested on a range of TOJ tasks within and beyond their group's modality, so that transfer of any learning from the trained task could be measured by post-testing on the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.

Conclusions/Significance

The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

3.
EE Birkett, JB Talcott. PLoS ONE 2012, 7(8): e42820
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.

4.
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands.
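A minimal sketch of the race-model (Miller's inequality) check used above to define multisensory facilitation, assuming hypothetical reaction-time data rather than the study's own; a violation at any time point means the audiovisual distribution is faster than any race between independent unimodal channels could produce.

```python
# Race-model inequality check: F_AV(t) <= F_A(t) + F_V(t).
# The RT arrays below are hypothetical placeholders, not the study's data.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times at time t (ms)."""
    return np.mean(np.asarray(rts) <= t)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """True wherever the audiovisual CDF exceeds the race-model bound;
    any violation indicates integration rather than a pure race."""
    bound = np.array([min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t)) for t in t_grid])
    f_av = np.array([ecdf(rt_av, t) for t in t_grid])
    return f_av > bound

# Hypothetical example: AV responses faster than the race bound predicts.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)   # auditory-only RTs (ms)
rt_v = rng.normal(350, 40, 200)   # visual-only RTs (ms)
rt_av = rng.normal(270, 35, 200)  # audiovisual RTs (ms)
print(race_model_violation(rt_av, rt_a, rt_v, np.arange(200, 400, 10)).any())
```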

5.
The modality of a stimulus and its intermittency affect time estimation. The present experiment explores the effect of a combination of modality and intermittency, and its implications for internal clock explanations. Twenty-four participants were tested on a temporal bisection task with durations of 200–800 ms. Durations were signaled by visual steady stimuli, auditory steady stimuli, visual flickering stimuli, and auditory clicks. Psychophysical functions and bisection points indicated that the durations of visual steady stimuli were classified as shorter and more variable than the durations signaled by the auditory stimuli (steady and clicks), and that the durations of the visual flickering stimuli were classified as longer than the durations signaled by the auditory stimuli (steady and clicks). One interpretation of the results is that the internal clock runs at different speeds, mediated by perceptual features of the timed stimuli, such as differences in processing time.
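A minimal sketch of the pacemaker-accumulator reading of these results; the clock rates and noise values are illustrative assumptions, not fitted parameters. A faster pacemaker for flicker or clicks yields more accumulated pulses for the same physical duration, so those stimuli are classified as "long" more often.

```python
# Pacemaker-accumulator sketch for temporal bisection (illustrative values).
import numpy as np

def perceived_ticks(duration_ms, rate_per_ms, noise_sd, rng):
    """Accumulated pacemaker pulses for a stimulus of given duration."""
    return rng.normal(duration_ms * rate_per_ms, noise_sd)

def bisection_response(ticks, short_anchor=200, long_anchor=800, base_rate=1.0):
    """Classify against the geometric mean of the anchor durations."""
    criterion = np.sqrt(short_anchor * long_anchor) * base_rate  # 400 ticks
    return "long" if ticks > criterion else "short"

rng = np.random.default_rng(1)
duration = 400  # ms, the same physical duration for both stimuli
steady = perceived_ticks(duration, rate_per_ms=0.9, noise_sd=20, rng=rng)
flicker = perceived_ticks(duration, rate_per_ms=1.1, noise_sd=20, rng=rng)
print(bisection_response(steady), bisection_response(flicker))
```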

6.
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than that of visual information, perhaps because subcortical pathways play a greater role in auditory interhemispheric transfer.
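A minimal sketch of the crossed-uncrossed difference (CUD) computation from the paradigm described above; the reaction-time values are hypothetical placeholders chosen only to mirror the reported pattern (auditory CUD smaller than visual CUD).

```python
# CUD = mean crossed RT - mean uncrossed RT, an estimate of interhemispheric
# transmission time, since only crossed trials require callosal transfer.
import numpy as np

def cud(crossed_rts, uncrossed_rts):
    """Crossed-uncrossed difference in milliseconds."""
    return np.mean(crossed_rts) - np.mean(uncrossed_rts)

# Hypothetical data illustrating the reported direction of the effect.
visual_cud = cud(crossed_rts=[352, 360, 349], uncrossed_rts=[348, 354, 345])
auditory_cud = cud(crossed_rts=[301, 305, 298], uncrossed_rts=[300, 303, 297])
print(f"visual CUD = {visual_cud:.1f} ms, auditory CUD = {auditory_cud:.1f} ms")
```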

7.
The current study examined how five chimpanzees combined the signs of American Sign Language with their nonverbal communication during high arousal interactions. Thirty-five hours of videotape were analyzed for the presence of high arousal interactions. Similar to deaf children, the chimpanzees signed to one another during high arousal interactions, and they emphatically modulated their signs by signing more vigorously, enlarging the sign's movement, prolonging the sign, reiterating the sign, or by using a two-handed version of a sign regularly signed with one hand. The majority of the chimpanzees' sign utterances were contextually consistent; utterances were scored as contextually consistent if they had been used in previous high arousal interactions that were not part of the current study. Individual differences among the chimpanzees in signing frequency, emphatic modulation, and recipient allocation were found. As in humans, the chimpanzees' verbal communication is a robust phenomenon that continues to occur even during high arousal interactions.

8.
Identification of vocalizers was examined using an auditory-visual matching-to-sample task with a female chimpanzee. She succeeded in selecting the picture of the vocalizer in response to various types of vocalizations: pant hoots, pant grunts, and screams. When pant hoots by two chimpanzees were presented as a "duet", she could identify both of the vocalizers. These results suggest that researchers have underestimated the capability of vocalizer identification in chimpanzees. The chimpanzee correctly chose her own pictures in response to her vocalizations only by exclusion, and she did not show vocal self-recognition. The effect of acoustical modification (pitch shift and filtration) on the performance suggested that pitch is an important cue for the vocalizer identification.

9.
This study investigated chimpanzees' spontaneous spatial constructions with objects, and especially their ability to repeat inter-object spatial relations, which is basic to understanding spatial relations at a level higher than perception or recognition. Subjects were six apes (four chimpanzees and two bonobos) aged 6–21 years, all raised in a human environment from an early age. Only minor species differences, but considerable individual differences, were found. The effect of different object samples was assessed through a comparison with a previous study. A common overall chimpanzee pattern was also found. Chimpanzees repeated different types of inter-object spatial relations, such as insertion (I), vertical (V), or next-to (H) relations. However, chimpanzees repeated I or V relations with more advanced procedures than when repeating H relations. Moreover, chimpanzees never repeated combined HV relations. Compared with children, chimpanzees showed a specific difficulty in repeating H relations. Repeating H relations is crucial for representing and understanding multiple reciprocal spatial relations between detached elements and for coordinating independent positions in space. Therefore, the chimpanzees' difficulty indicates a fundamental difference in constructive space in comparison to humans. The findings are discussed in relation to issues of spatial cognition and tool use.

10.
The neuropeptide alpha-MSH has been proposed to influence learning and memory by increasing visual attention. To test the possibility that MSH selectively affects visual learning, rats were tested in learning tasks in which the cues were either visual or auditory. Maze and bar-press tasks were used. MSH administration increased the rate of learning of the visual tasks, regardless of the task difficulty or the type of response required of the rat. MSH had no effect on the rate of learning of the auditory tasks. These results support the hypothesis that MSH facilitates learning by influencing some aspect of visual information processing.

11.
Do chimpanzees tailor their communication in accordance with the attentional status of a human observer? We presented 57 chimpanzees with three experimental conditions in randomized order: an experimenter offered a banana to the focal subject (Focal), to a cagemate of the focal subject (In-Cage) and to a chimpanzee in an adjacent cage (Adjacent) while a second experimenter recorded the first and second responses of the focal subject in all three conditions. The chimpanzees' behaviour was mostly visual or bimodal in the Focal condition, changing to auditory behaviour or disengagement in the In-Cage and Adjacent conditions. Thus, with no explicit training and on their first trials in all instances, the chimpanzees tactically deployed their communicative behaviours in the visual and auditory domains in accordance with the manipulated attentional and intentional status of a human observer.

12.
The present rat experiment evaluated the validity of two formal accounts of configural learning in the framework of discrimination tasks involving the serial presentation of feature and target stimuli: Rescorla's (1973) modification of the Rescorla-Wagner model (1972) and the Pearce model (1987). The first, ambiguous feature task was of the form X-->A+, Y-->A-, X-->B-, Y-->B+, in which X and Y represent visual features, '-->' signifies a serial arrangement, A and B are auditory target stimuli, and '+' and '-' symbolise food-reinforcement and non-reinforcement, respectively. The second, non-ambiguous feature task was of the form: X-->A+, Y-->A-, X-->B+, Y-->B-. The former task was much more difficult to solve than was the latter task. The Rescorla model is able to account for the observed differences between the two tasks in learning rates and in the associative strength of feature X with more plausible parameter values than is the Pearce model. It is suggested that models acknowledging a role for both elemental and configural learning can better account for discrimination learning in discrimination tasks of the sort presented in this study than do models that exclusively allow for configural learning.
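A minimal sketch of a Rescorla-Wagner style update for the ambiguous feature task, with a unique configural cue added to each serial compound as one simple reading of Rescorla's (1973) modification; the learning rate, trial structure, and cue coding are illustrative assumptions, not the paper's fitted models.

```python
# Elemental Rescorla-Wagner update with added unique configural cues:
# delta-V_i = alpha * (lambda - sum of V over cues present).
from collections import defaultdict

def rw_trial(V, cues, reinforced, alpha=0.2, lam=1.0):
    """Update associative strengths V for all cues present on a trial."""
    total = sum(V[c] for c in cues)
    error = (lam if reinforced else 0.0) - total
    for c in cues:
        V[c] += alpha * error

V = defaultdict(float)
# Ambiguous feature task X->A+, Y->A-, X->B-, Y->B+; each compound also
# carries a hypothetical unique configural element such as 'XA'.
task = [(("X", "A", "XA"), True), (("Y", "A", "YA"), False),
        (("X", "B", "XB"), False), (("Y", "B", "YB"), True)]
for _ in range(200):
    for cues, reinforced in task:
        rw_trial(V, cues, reinforced)
print({k: round(v, 2) for k, v in sorted(V.items())})
```

In this task every element (X, Y, A, B) is reinforced and non-reinforced equally often, so only the configural cues can carry the discrimination, which is one way to see why the ambiguous task is the harder one.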

13.
Kim RS, Seitz AR, Shams L. PLoS ONE 2008, 3(1): e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual, or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

14.
Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions.

15.
The question of which strategy is employed in human decision making has been studied extensively in the context of cognitive tasks; however, this question has not been investigated systematically in the context of perceptual tasks. The goal of this study was to gain insight into the decision-making strategy used by human observers in a low-level perceptual task. Data from more than 100 individuals who participated in an auditory-visual spatial localization task were evaluated to examine which of three plausible strategies best accounted for each observer's behavior. This task is well suited to exploring this question because it involves an implicit inference about whether the auditory and visual stimuli were caused by the same object or by independent objects, and the different strategies for using this inference about causes lead to distinctly different spatial estimates and response patterns. For example, employing the commonly used cost function of minimizing the mean squared error of spatial estimates would result in a weighted averaging of the estimates corresponding to the different causal structures. A strategy that minimizes the error in the inferred causal structure would result in selecting the most likely causal structure and sticking with it in the subsequent inference of location ("model selection"). A third strategy selects a causal structure in proportion to its probability, thus attempting to match the probability of the inferred causal structure. This type of probability matching strategy has been reported to be used by participants predominantly in cognitive tasks. Comparing these three strategies, the behavior of the vast majority of observers in this perceptual task was most consistent with probability matching. While this appears to be a suboptimal strategy and hence a surprising choice for the perceptual system to adopt, we discuss potential advantages of such a strategy for perception.
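A minimal sketch of the three readout strategies compared above, applied to the output of a causal-inference model of auditory-visual localization; the posterior probability and the conditional location estimates are hypothetical inputs, and the sketch only shows how each strategy maps them to a final auditory-location estimate.

```python
# Three readouts of a causal-inference posterior p(common cause | signals).
import numpy as np

def readout(p_common, s_common, s_separate, strategy, rng):
    if strategy == "averaging":   # minimize expected squared error
        return p_common * s_common + (1 - p_common) * s_separate
    if strategy == "selection":   # commit to the more likely structure
        return s_common if p_common > 0.5 else s_separate
    if strategy == "matching":    # sample a structure with prob. p_common
        return s_common if rng.random() < p_common else s_separate
    raise ValueError(strategy)

rng = np.random.default_rng(2)
# Hypothetical trial: 70% posterior that sound and flash share one cause,
# with location estimates of 3 deg (common) and 8 deg (separate causes).
for s in ("averaging", "selection", "matching"):
    print(s, readout(p_common=0.7, s_common=3.0, s_separate=8.0,
                     strategy=s, rng=rng))
```

Averaging always lands between the two conditional estimates, selection always picks one deterministically, and matching produces a stochastic mixture, which is what makes the three strategies distinguishable from response distributions.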

16.
Beauchamp MS, Lee KE, Argall BD, Martin A. Neuron 2004, 41(5): 809-823
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

17.
Two distinct conceptualisations of processing mechanisms have been proposed in the research on the perception of temporal order, one that assumes a central-timing mechanism that is involved in the detection of temporal order independent of modality and stimulus type, another one assuming feature-specific mechanisms that are dependent on stimulus properties. In the present study, four different temporal-order judgement tasks were compared to test these two conceptualisations, that is, to determine whether common processes underlie temporal-order thresholds over different modalities and stimulus types or whether distinct processes are related to each task. Measurements varied regarding modality (visual and auditory) and stimulus properties (auditory modality: clicks and tones; visual modality: colour and position). Results indicate that the click and the tone paradigm, as well as the colour and position paradigm, correlate with each other. Besides these intra-modal relationships, cross-modal correlations show dependencies between the click, the colour and the position tasks. Both processing mechanisms seem to influence the detection of temporal order. While two different tones are integrated and processed by a more independent, possibly feature-specific mechanism, a more central, modality-independent timing mechanism contributes to the click, colour and position condition.

18.
A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3–1.7 degrees, or 22–28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

19.
Mismatch negativity of ERP in cross-modal attention
Event-related potentials were measured in 12 healthy young subjects aged 19–22 using the "cross-modal and delayed response" paradigm, which improves unattended purity and avoids the effect of the task target on the deviant components of the ERP. The experiment included two conditions: (i) attend the visual modality, ignore the auditory modality; (ii) attend the auditory modality, ignore the visual modality. The stimuli under the two conditions were the same. The difference wave was obtained by subtracting the ERPs of the standard stimuli from those of the deviant stimuli. The present results showed that mismatch negativity (MMN), N2b, and P3 components can be produced in the auditory and visual modalities under the attention condition. However, only MMN was observed in the two modalities under the inattention condition. Auditory and visual MMN have some features in common: under the attention condition, their largest MMN wave peaks were distributed over their respective primary sensory projection areas of the scalp, but over front…

20.
Cognitive task demands in one sensory modality (T1) can have beneficial effects on a secondary task (T2) in a different modality, due to the reduced top-down control needed to inhibit the secondary task, as well as crossmodal spread of attention. This contrasts with findings that cognitive load compromises processing in a secondary modality. We manipulated cognitive load within one modality (visual) and studied the consequences of cognitive demands on secondary (auditory) processing. Fifteen healthy participants underwent a simultaneous EEG-fMRI experiment; data from eight participants were obtained outside the scanner for validation purposes. The primary task (T1) was a visual working memory (WM) task with four conditions, while the secondary task (T2) consisted of an auditory oddball stream, which participants were asked to ignore. The fMRI results revealed fronto-parietal WM network activations in response to the T1 task manipulation. This was accompanied by significantly higher reaction times and lower hit rates with increasing task difficulty, which confirmed successful manipulation of WM load. Amplitudes of auditory evoked potentials, representing fundamental auditory processing, showed a continuous augmentation that was systematically related to cross-modal cognitive load. With increasing WM load, primary auditory cortices were increasingly deactivated, while psychophysiological interaction results suggested the emergence of connectivity between auditory cortices and visual WM regions. These results suggest differential effects of crossmodal attention on fundamental auditory processing. We suggest a continuous allocation of resources to brain regions processing the primary task when the central executive is challenged under high cognitive load.
