Similar documents
20 similar documents found (search time: 31 ms)
1.
In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory "cue" signaling the location where a subsequent "target" sound was likely to be presented. The target was occasionally replaced by an unexpected "novel" sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone "standard" trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues.

2.
The influence of stimulus duration on auditory evoked potentials (AEPs) was examined for tones varying randomly in duration, location, and frequency in an auditory selective attention task. Stimulus duration effects were isolated as duration difference waves by subtracting AEPs to short duration tones from AEPs to longer duration tones of identical location, frequency and rise time. This analysis revealed that AEP components generally increased in amplitude and decreased in latency with increments in signal duration, with evidence of longer temporal integration times for lower frequency tones. Different temporal integration functions were seen for different N1 subcomponents. The results suggest that different auditory cortical areas have different temporal integration times, and that these functions vary as a function of tone frequency.

3.
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.

4.
Comparison of auditory functions in the chimpanzee and human
Absolute thresholds for pure tones, loudness, frequency and intensity difference thresholds and the resonance of the external auditory meatus were measured in chimpanzees and compared with those in humans. Chimpanzees were more sensitive than humans to frequencies higher than 8 kHz but less sensitive to frequencies lower than 250 Hz and 2- to 4-kHz tones. Difference thresholds for frequency and intensity were greater in chimpanzees than in humans. The resonance of the external ear was about the same in the two species. The effects of differences in hearing between species upon speech perception are discussed.

5.
Duration estimation is known to be far from veridical and to differ for sensory estimates and motor reproduction. To investigate how these differential estimates are integrated for estimating or reproducing a duration and to examine sensorimotor biases in duration comparison and reproduction tasks, we compared estimation biases and variances among three different duration estimation tasks: perceptual comparison, motor reproduction, and auditory reproduction (i.e. a combined perceptual-motor task). We found consistent overestimation in both motor and perceptual-motor auditory reproduction tasks, and the least overestimation in the comparison task. More interestingly, compared to pure motor reproduction, the overestimation bias was reduced in the auditory reproduction task, due to the additional reproduced auditory signal. We further manipulated the signal-to-noise ratio (SNR) in the feedback/comparison tones to examine the changes in estimation biases and variances. Considering perceptual and motor biases as two independent components, we applied the reliability-based model, which successfully predicted the biases in auditory reproduction. Our findings thus provide behavioral evidence of how the brain combines motor and perceptual information together to reduce duration estimation biases and improve estimation reliability.
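The reliability-based integration described above can be illustrated with the standard inverse-variance weighting rule from cue-combination theory. This is a generic sketch under the assumption of two independent, Gaussian-distributed estimates; it is not the authors' exact model (their task-specific bias terms are omitted), and the function name and numbers are illustrative.

```python
def combine_estimates(est_a, var_a, est_b, var_b):
    """Reliability-weighted fusion of two independent duration estimates.

    Each estimate is weighted by its reliability (inverse variance), so the
    more reliable cue dominates; the fused variance is never larger than
    either input variance, which is why combining cues reduces bias spread.
    """
    rel_a = 1.0 / var_a          # reliability of estimate A
    rel_b = 1.0 / var_b          # reliability of estimate B
    w_a = rel_a / (rel_a + rel_b)
    fused = w_a * est_a + (1.0 - w_a) * est_b
    fused_var = 1.0 / (rel_a + rel_b)
    return fused, fused_var
```

For example, a motor reproduction of 1.2 s (variance 0.01) combined with a perceptual estimate of 0.9 s (variance 0.03) yields a fused estimate pulled three-quarters of the way toward the more reliable motor cue, with a variance smaller than either input.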

6.
Humans routinely segregate a complex acoustic scene into different auditory streams, through the extraction of bottom-up perceptual cues and the use of top-down selective attention. To determine the neural mechanisms underlying this process, neural responses obtained through magnetoencephalography (MEG) were correlated with behavioral performance in the context of an informational masking paradigm. In half the trials, subjects were asked to detect frequency deviants in a target stream, consisting of a rhythmic tone sequence, embedded in a separate masker stream composed of a random cloud of tones. In the other half of the trials, subjects were exposed to identical stimuli but asked to perform a different task—to detect tone-length changes in the random cloud of tones. In order to verify that the normalized neural response to the target sequence served as an indicator of streaming, we correlated neural responses with behavioral performance under a variety of stimulus parameters (target tone rate, target tone frequency, and the “protection zone”, that is, the spectral area with no tones around the target frequency) and attentional states (changing task objective while maintaining the same stimuli). In all conditions that facilitated target/masker streaming behaviorally, MEG normalized neural responses also changed in a manner consistent with the behavior. Thus, attending to the target stream caused a significant increase in power and phase coherence of the responses in recording channels correlated with an increase in the behavioral performance of the listeners. Normalized neural target responses also increased as the protection zone widened and as the frequency of the target tones increased. Finally, when the target sequence rate increased, the buildup of the normalized neural responses was significantly faster, mirroring the accelerated buildup of the streaming percepts. 
Our data thus support close links between the perceptual and neural consequences of auditory stream segregation.

7.
The auditory sensory organ, the cochlea, not only detects but also generates sounds. Such sounds, otoacoustic emissions, are widely used for diagnosis of hearing disorders and to estimate cochlear nonlinearity. However, the fundamental question of how the otoacoustic emission exits the cochlea remains unanswered. In this study, emissions were provoked by two tones with a constant frequency ratio, and measured as vibrations at the basilar membrane and at the stapes, and as sound pressure in the ear canal. The propagation direction and delay of the emission were determined by measuring the phase difference between basilar membrane and stapes vibrations. These measurements show that cochlea-generated sound arrives at the stapes earlier than at the measured basilar membrane location. Data also show that basilar membrane vibration at the emission frequency is similar to that evoked by external tones. These results conflict with the backward-traveling-wave theory and suggest that at low and intermediate sound levels, the emission exits the cochlea predominantly through the cochlear fluids.

8.
We present a neurocomputational model for auditory streaming, which is a prominent phenomenon of auditory scene analysis. The proposed model represents auditory scene analysis by oscillatory correlation, where a perceptual stream corresponds to a synchronized assembly of neural oscillators and different streams correspond to desynchronized oscillator assemblies. The underlying neural architecture is a two-dimensional network of relaxation oscillators with lateral excitation and global inhibition, where one dimension represents time and another dimension frequency. By employing dynamic connections along the frequency dimension and a random element in global inhibition, the proposed model produces a temporal coherence boundary and a fissure boundary that closely match those from the psychophysical data of auditory streaming. Several issues are discussed, including how to represent physical time and how to relate shifting synchronization to auditory attention.
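The building block of such oscillatory-correlation networks can be sketched as a single Terman-Wang-style relaxation oscillator: a fast excitatory variable coupled to a slow recovery variable, which alternates between an active and a silent phase. This is a minimal, illustrative sketch only; the parameter values are assumptions, and the paper's full model adds lateral excitation and global inhibition across a two-dimensional time-frequency network of many such units.

```python
import math

def simulate_relaxation_oscillator(steps=200_000, dt=0.01,
                                   eps=0.04, gamma=6.0, beta=0.1, I=0.8):
    """Euler integration of one relaxation oscillator (Terman-Wang form).

    dx/dt = 3x - x^3 + 2 - y + I              (fast activity variable)
    dy/dt = eps * (gamma * (1 + tanh(x/beta)) - y)   (slow recovery variable)

    With positive input I the unit cycles between a high-activity ("active")
    phase and a low-activity ("silent") phase; in the full network, coupling
    synchronizes oscillators within a stream and desynchronizes streams.
    """
    x, y = -2.0, 0.0
    trace = []
    for _ in range(steps):
        dx = 3.0 * x - x ** 3 + 2.0 - y + I
        dy = eps * (gamma * (1.0 + math.tanh(x / beta)) - y)
        x += dt * dx
        y += dt * dy
        trace.append(x)
    return trace
```

Plotting the returned trace shows the characteristic relaxation cycle: x jumps between roughly -2 (silent) and +2 (active) as the slow variable y ramps up and down between the knees of the cubic nullcline.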

9.
I.-F. Lin, M. Kashino. PLoS ONE, 2012, 7(7): e41661
In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests a limitation of temporal coherence theory when it is applied to perceptual grouping over time.

10.
Tympanal hearing organs of insects emit distortion-product otoacoustic emissions (DPOAEs), which in mammals are used as an indicator of nonlinear cochlear amplification, and which are highly vulnerable to manipulations interfering with the animal's physiological state. Although previous studies provided evidence for the involvement of auditory mechanoreceptors, the source of DPOAE generation and possible active mechanisms in tympanal organs remained unknown. Using laser Doppler vibrometry in the locust ear, we show that DPOAEs mechanically emerge at the tympanum region where the auditory mechanoreceptors are attached. Those emission-coupled vibrations differed markedly from tympanum waves evoked by external pure tones of the same frequency, in terms of wave propagation, energy distribution, and location of amplitude maxima. Selective inactivation of the auditory receptor cells by mechanical lesions did not affect the tympanum's response to external pure tones, but abolished the emission's displacement amplitude peak. These findings provide evidence that tympanal auditory receptors, comparable to the situation in mammals, comprise the required nonlinear response characteristics, which during two-tone stimulation lead to additional, highly localized deflections of the tympanum.

11.
Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between directed attention to auditory spatial location (compared with time intervals) are unclear. Using fMRI (functional magnetic resonance imaging), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field (FEF), responded to spatial orienting of attention, but activity was absent in the bilateral FEF during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen.

12.
This study investigated whether musical training enhances auditory selective attention based on pitch and on spatial location, and the neural mechanisms of training-related auditory plasticity. In the auditory perception experiment, participants selected one of two simultaneously presented spoken digits according to a difference in pitch or in spatial location. In the auditory cognition experiment, complex tones with different frequency resolutions were presented in quiet and in noise while auditory brainstem frequency-following responses (FFRs) were recorded. We propose four methods for analyzing the FFR: the short-time phase-locking value of the envelope-related frequency-following response (FFRENV), the polar plot of instantaneous phase differences, the mean phase-difference vector, and the amplitude-spectrum signal-to-noise ratio (SNR) of the temporal-fine-structure-related frequency-following response (FFRTFS). The results show that musically trained participants were more accurate and responded faster on the pitch-based task. External noise did not affect neural phase locking at the fundamental frequency (F0) in either group, but significantly reduced phase locking at the harmonics. Musically trained participants showed stronger phase locking at F0 and greater noise resistance at the harmonics, and their FFRTFS amplitude-spectrum SNR correlated positively with behavioral accuracy on the pitch-based task. The improved pitch-based selective attention of musically trained listeners therefore depends on enhanced neurocognitive processing: after musical training, the phase locking of the FFRENV at F0, the noise resistance and sustained phase locking of the FFRTFS at the harmonics, and the FFRTFS amplitude-spectrum SNR were all markedly enhanced. Musical training thus produces pronounced plasticity in auditory selective attention.
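The phase-locking metric used in such FFR analyses can be illustrated generically. The sketch below assumes instantaneous phases have already been extracted (e.g., from the analytic signal of a band-filtered FFR); the function name and the toy signals are illustrative, not the authors' actual pipeline, which applies the measure in short time windows.

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """Phase-locking value (PLV) between two instantaneous-phase series.

    Each sample contributes a unit phasor of the phase difference; the PLV
    is the magnitude of their mean. It is 1 for a constant phase lag
    (perfect locking) and near 0 when the phase difference drifts uniformly.
    """
    n = len(phases_a)
    mean_phasor = sum(cmath.exp(1j * (a - b))
                      for a, b in zip(phases_a, phases_b)) / n
    return abs(mean_phasor)

# Toy illustration: a constant lag vs. a drifting phase difference.
n = 1000
t = [k / n for k in range(n)]                      # 1 s at 1 kHz sampling
locked = phase_locking_value(
    [2 * math.pi * 10 * ti for ti in t],           # 10-Hz phase ramp
    [2 * math.pi * 10 * ti + 0.5 for ti in t])     # same ramp, fixed lag
unlocked = phase_locking_value(
    [2 * math.pi * 10 * ti for ti in t],
    [2 * math.pi * 17 * ti for ti in t])           # different frequency
```

With a constant lag the PLV is exactly 1; with a 7-Hz frequency mismatch the phase difference sweeps through seven full cycles over the window, so the phasors cancel and the PLV falls to (numerically) zero.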

13.
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience.

14.
The present study investigates hemispheric asymmetries in the neural adaptation processes occurring during alternating auditory stimulation. Stimuli were two monaural pure tones having a frequency of 400 or 800 Hz and a duration of 500 ms. Electroencephalogram (EEG) was recorded from 14 volunteers during the presentation of the following stimulus sequences, lasting 12 s each: 1) evoked potentials (EP condition, control), 2) alternation of frequency and ear (FE condition), 3) alternation of frequency (F condition), and 4) alternation of ear (E condition). Main results showed that in the central area of the left hemisphere (around C3 site) the N100 response underwent adaptation in all patterns of alternation, whereas in the same area of the right hemisphere the tones presented at the right ear in the FE condition produced no adaptation. Moreover, the responses to right-ear stimuli showed a difference between hemispheres in the E condition, which produced less adaptation in the left hemisphere. These effects are discussed in terms of lateral symmetry as a product of hemispheric, pathway and ear asymmetries.

15.
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. 
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.

16.
Recent studies have shown that auditory scene analysis involves distributed neural sites below, in, and beyond the auditory cortex (AC). However, it remains unclear what role each site plays and how they interact in the formation and selection of auditory percepts. We addressed this issue through perceptual multistability phenomena, namely, spontaneous perceptual switching in auditory streaming (AS) for a sequence of repeated triplet tones, and perceptual changes for a repeated word, known as verbal transformations (VTs). An event-related fMRI analysis revealed brain activity timelocked to perceptual switching in the cerebellum for AS, in frontal areas for VT, and the AC and thalamus for both. The results suggest that motor-based prediction, produced by neural networks outside the auditory system, plays essential roles in the segmentation of acoustic sequences both in AS and VT. The frequency of perceptual switching was determined by a balance between the activation of two sites, which are proposed to be involved in exploring novel perceptual organization and stabilizing current perceptual organization. The effect of the gene polymorphism of catechol-O-methyltransferase (COMT) on individual variations in switching frequency suggests that the balance of exploration and stabilization is modulated by catecholamines such as dopamine and noradrenalin. These mechanisms would support the noteworthy flexibility of auditory scene analysis.

17.
Whereas extensive neuroscientific and behavioral evidence has confirmed a role of auditory-visual integration in representing space [1-6], little is known about the role of auditory-visual integration in object perception. Although recent neuroimaging results suggest integrated auditory-visual object representations [7-11], substantiating behavioral evidence has been lacking. We demonstrated auditory-visual integration in the perception of face gender by using pure tones that are processed in low-level auditory brain areas and that lack the spectral components that characterize human vocalization. When androgynous faces were presented together with pure tones in the male fundamental-speaking-frequency range, faces were more likely to be judged as male, whereas when faces were presented with pure tones in the female fundamental-speaking-frequency range, they were more likely to be judged as female. Importantly, when participants were explicitly asked to attribute gender to these pure tones, their judgments were primarily based on relative pitch and were uncorrelated with the male and female fundamental-speaking-frequency ranges. This perceptual dissociation of absolute-frequency-based crossmodal-integration effects from relative-pitch-based explicit perception of the tones provides evidence for a sensory integration of auditory and visual signals in representing human gender. This integration probably develops because of concurrent neural processing of visual and auditory features of gender.

18.
Spatiotemporal response patterns evoked by two-tone sequences in the anterior and dorsocaudal fields of the guinea pig auditory cortex were studied in anesthetized animals (Nembutal, 30 mg/kg) using an optical recording method (voltage-sensitive dye RH795, 12 × 12 photodiode array). Each first (masker) and second (probe) tone was 30 ms long with a 10-ms rise-fall time. Masker-probe pair combinations of the same or different frequencies with probe delays of 30-150 ms were presented to the ear contralateral to the recording side. With same-frequency pairs, responses to the probe were inhibited completely after probe delays of less than 50 ms and the inhibition lasted for more than 150 ms, and the inhibition magnitudes in different isofrequency bands of the anterior field were essentially the same. With different-frequency (octave-separated) pairs, responses to the probe were not inhibited completely even after probe delays as short as 30 ms, and the inhibition lasted only for 110-130 ms. Inhibition magnitudes were different from location to location. Accepted: 4 August 1997

19.
Previous work has demonstrated that upcoming saccades influence visual and auditory performance even for stimuli presented before the saccade is executed. These studies suggest a close relationship between saccade generation and visual/auditory attention. Furthermore, they provide support for Rizzolatti et al.'s premotor model of attention, which suggests that the same circuits involved in motor programming are also responsible for shifts in covert orienting (shifting attention without moving the eyes or changing posture). In a series of experiments, we demonstrate that saccade programming also affects tactile perception. Participants made speeded saccades to the left and right side as well as tactile discriminations of up versus down. The first experiment demonstrates that participants were reliably faster at responding to tactile stimuli near the location of upcoming saccades. In our second experiment, we had the subjects cross their hands and demonstrated that the effect occurs in visual space (rather than in the early representations of touch). In our third experiment, the tactile events usually occurred on the opposite side of the upcoming eye movement. We found that the benefit at the saccade target location vanished, suggesting that this shift is not obligatory but that it may be vetoed on the basis of expectation.

20.
It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号