Similar Literature
20 similar records found
1.
ABSTRACT

The preponderance of our knowledge concerning hearing in the Cetacea has come from psychophysical studies. The most widely used alternative to psychophysical studies is neurophysiology utilizing auditory evoked potentials (AEPs). Due in part to the hypertrophy of auditory structures exhibited by the Cetacea, AEPs are highly robust and are rapidly and easily obtained. Moreover, because AEPs reflect the synchronized activity of large neuronal assemblies, they offer a high-level window onto auditory processing and allow for across-species comparison of responses.

Recent studies utilizing AEP techniques have demonstrated that cetaceans have extremely high temporal resolution, with integration times on the order of 300 μs. Remarkably, these animals also exhibit extremely sharp frequency tuning, with auditory filters having Q10 values of 20–30.
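The sharpness of an auditory filter is conventionally expressed as Q10: the filter's center frequency divided by its bandwidth 10 dB down from the peak. A minimal sketch (the 100 kHz center frequency and 4 kHz bandwidth below are illustrative values, not figures taken from the study):

```python
def q10(center_freq_hz: float, bw_10db_hz: float) -> float:
    """Auditory filter sharpness: center frequency / 10-dB bandwidth."""
    return center_freq_hz / bw_10db_hz

# A filter centered at 100 kHz with a 4 kHz 10-dB bandwidth:
print(q10(100_000, 4_000))  # → 25.0, within the reported range of 20-30
```

Higher Q10 means a narrower filter relative to its center frequency, i.e., sharper frequency tuning.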

2.
Temporal cues are important for some forms of auditory processing, such as echolocation. Among odontocetes (toothed whales, dolphins, and porpoises), it has been suggested that porpoises may have temporal processing abilities which differ from other odontocetes because of their relatively narrow auditory filters and longer duration echolocation signals. This study examined auditory temporal resolution in two Yangtze finless porpoises (Neophocaena phocaenoides asiaeorientalis) using auditory evoked potentials (AEPs) to measure: (a) rate following responses and modulation rate transfer function for 100 kHz centered pulse sounds and (b) hearing thresholds and response amplitudes generated by individual pulses of different durations. The animals followed pulses well at modulation rates up to 1,250 Hz, after which response amplitudes declined until extinguished beyond 2,500 Hz. The subjects had significantly better hearing thresholds for longer, narrower-band pulses similar to porpoise echolocation signals compared to brief, broadband sounds resembling dolphin clicks. Results indicate that the Yangtze finless porpoise follows individual acoustic signals at rates similar to other odontocetes tested. Relatively good sensitivity for longer duration, narrow-band signals suggests that finless porpoise hearing is well suited to detect their unique echolocation signals.

3.
Miki A, Santi A. Behavioural Processes 2001;53(1-2):103-111
Previous animal research has traditionally used arbitrary stimuli to investigate timing in a temporal bisection procedure. The current study compared the timing of the duration of an arbitrary auditory stimulus (a 500-Hz tone) to the timing of the duration of a naturalistic auditory stimulus (a pigeon cooing). In the first phase of this study, temporal perception was assessed by comparing psychophysical functions for the duration of tone and cooing signals. In the first set of tests, the point of subjective equality (PSE) was significantly lower for the tone than for the cooing stimulus, indicating that tones were judged longer than equivalent durations of cooing. In the second set of tests, gaps were introduced in the tone signal to match those present in the cooing signal, and no significant difference in the PSE for the tone or the cooing signal was found. A repetition of the testing conducted with gaps removed from the tone signal failed to replicate the difference in the PSEs for the tone and cooing signals originally obtained. In the second phase of the study, memory for the duration of tone and cooing was examined, and a choose-long bias was found for both signals. Based on these results, it appears that, for pigeons, there may be no significant differences in either temporal perception or temporal memory for arbitrary auditory signals and more complex, naturalistic auditory signals.

4.
In human neurophysiology, auditory event-related potentials (AEPs) are used to investigate cognitive processes such as selective attention. Selective attention to specific tones causes a negative enhancement of AEPs known as processing negativity (PN), which is reduced in patients with schizophrenia. The evidence suggests that impaired selective attention in these patients may partially depend on deficient N-methyl-D-aspartate receptor (NMDAR)-mediated signaling. The goal of this study was to corroborate the involvement of the NMDAR in selective attention using a mouse model. To this end, we first investigated the presence of PN-like activity in C57BL/6J mice by recording AEPs during a fear-conditioning paradigm. Two alternating trains of tones, differing in stimulus duration, were presented on 7 consecutive days. One group received a mild foot shock delivered within the presentation of one train (conditioning train) on days 3-5 (conditioning days), while controls were never shocked. The fear-conditioned group (n = 9) indeed showed PN-like activity during conditioning days, manifested as a significant positive enhancement in the AEPs to the stimuli in the conditioning train that was not observed in the controls. The same paradigm was then applied to mice with reduced expression of the NMDAR1 (NR1) subunit and to a wild-type control group (n = 6 per group). The NR1 mutants showed an associative AEP enhancement, but its magnitude was significantly reduced as compared with the magnitude in wild-type mice. We conclude that electrophysiological manifestations of selective attention are observable, yet of different polarity, in mice and that they require intact NMDAR-mediated signaling. Thus, deficient NMDAR functioning may contribute to abnormal selective attention in schizophrenia.

5.
Auditory evoked potentials (AEPs) to 40 Hz clicks and amplitude-modulated 500 Hz tones in human subjects were digitally filtered using an optimal (‘Wiener’) filter uniquely determined for each AEP. Use of coherence functions to compute coefficients appropriate for filtering grand average AEPs or subsets such as split-half averages is described. Wiener-filtered AEPs correlated better than unfiltered AEPs with split-half replicates and with reference AEPs (obtained with long data collection periods). Visual detection thresholds were lower (more sensitive) for the Wiener-filtered AEPs, but not as low as objectively determined thresholds using coherence values.
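One way to realize such an a-posteriori Wiener filter (a sketch, not the authors' exact procedure; the epoch layout and spectral estimator here are assumptions) is to estimate the signal spectrum from the cross-spectrum of two independent split-half averages, since their shared component is the evoked response while their noise is uncorrelated:

```python
import numpy as np

def wiener_filter_aep(epochs):
    """A-posteriori Wiener filter for an averaged evoked potential.

    epochs: (n_epochs, n_samples) array of single-trial responses.
    Returns the filtered grand average.
    """
    a = epochs[0::2].mean(axis=0)  # split-half average 1
    b = epochs[1::2].mean(axis=0)  # split-half average 2
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    # Cross-spectrum of two independent averages estimates signal power;
    # the mean auto-spectrum estimates signal-plus-residual-noise power.
    s_signal = np.real(A * np.conj(B))
    s_total = 0.5 * (np.abs(A) ** 2 + np.abs(B) ** 2)
    gain = np.clip(s_signal / np.maximum(s_total, 1e-12), 0.0, 1.0)
    grand = epochs.mean(axis=0)
    return np.fft.irfft(gain * np.fft.rfft(grand), n=epochs.shape[1])
```

Frequencies where the split halves agree pass with gain near 1; frequencies dominated by uncorrelated noise are attenuated, which is why the filtered average correlates better with replicates than the raw average does.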

6.
Lin IF, Kashino M. PLoS ONE 2012;7(7):e41661
In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

7.
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
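The core of such a rule can be sketched as a toy correlation-based Hebbian update on binary spike trains (an illustrative simplification under assumed discrete time bins, not the model actually used in the paper): inputs whose spikes coincide with the postsynaptic spikes are strengthened.

```python
import numpy as np

def hebbian_weights(pre, post, eta=0.05):
    """One correlation-based Hebbian pass.

    pre:  (T, n_inputs) array of 0/1 presynaptic spikes per time bin.
    post: (T,) array of 0/1 postsynaptic spikes.
    Weights grow with pre/post coincidences; normalization bounds growth.
    """
    w = np.ones(pre.shape[1])
    w += eta * (pre * post[:, None]).sum(axis=0)
    return w / np.linalg.norm(w)

# An input locked to the postsynaptic train outgrows an uncorrelated one:
rng = np.random.default_rng(1)
post = (rng.random(1000) < 0.2).astype(float)
pre = np.column_stack([post, (rng.random(1000) < 0.2).astype(float)])
w = hebbian_weights(pre, post)
```

Because only coincident activity is rewarded, units end up selective for synchronous input, which is the property the model exploits to read out temporal pitch cues.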

8.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

9.
Neural selectivity to signal duration within the auditory midbrain has been observed in several species and is thought to play a role in signal recognition. Here we examine the effects of signal duration on the coding of individual and concurrent vocal signals in a teleost fish with exceptionally long duration vocalizations, the plainfin midshipman, Porichthys notatus. Nesting males produce long-duration, multi-harmonic signals known as hums to attract females to their nests; overlapping hums produce acoustic beats at the difference frequency of their spectral components. Our data show that all midbrain neurons have sustained responses to long-duration hum-like tones and beats. Overall spike counts increase linearly with signal duration, although spike rates decrease dramatically. Neurons show varying degrees of spike rate decline and hence, differential changes in spike rate across the neuron population may code signal duration. Spike synchronization to beat difference frequency progressively increases throughout long-duration beats such that significant difference frequency coding is maintained in most neurons. The significance level of difference frequency synchronization coding increases by an order of magnitude when integrated over the entirety of long-duration signals. Thus, spike synchronization remains a reliable difference frequency code and improves with integration over longer time spans.
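Spike synchronization of this kind is conventionally quantified as vector strength, and its significance by the Rayleigh statistic 2nR², which grows with the number of spikes even at constant locking — consistent with synchronization coding improving when integrated over longer signals. A sketch (assuming spike times in seconds; the study's exact statistics are not specified here):

```python
import numpy as np

def vector_strength(spike_times, freq_hz):
    """Phase locking to freq_hz: 1 = perfect synchrony, 0 = none."""
    phases = 2.0 * np.pi * freq_hz * np.asarray(spike_times, dtype=float)
    return np.abs(np.mean(np.exp(1j * phases)))

def rayleigh_z(spike_times, freq_hz):
    """Significance of locking: 2 n R^2 grows with spike count at fixed R."""
    r = vector_strength(spike_times, freq_hz)
    return 2.0 * len(spike_times) * r ** 2
```

At a fixed degree of locking R, doubling the analysis window roughly doubles the spike count n and therefore doubles Z, which is one way a synchronization code can gain significance over long-duration signals.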

10.
Studies of auditory temporal resolution in birds have traditionally examined processing capabilities by assessing behavioral discrimination of sounds varying in temporal structure. Here, temporal resolution of the brown-headed cowbird (Molothrus ater) was measured using two auditory evoked potential (AEP)-based methods: auditory brainstem responses (ABRs) to paired clicks and envelope following responses (EFRs) to amplitude-modulated tones. The basic patterns observed in cowbirds were similar to those found in other songbird species, suggesting similar temporal processing capabilities. The amplitude of the ABR to the second click was less than that of the first click at inter-click intervals less than 10 ms, and decreased to 30% at an interval of 1 ms. EFR amplitude was generally greatest at modulation frequencies from 335 to 635 Hz and decreased at higher and lower modulation frequencies. Compared to data from terrestrial mammals, these results support recent behavioral findings of enhanced temporal resolution in birds. General agreement between these AEP results and behaviorally based studies suggests that AEPs can provide a useful assessment of temporal resolution in wild bird species.

11.
The phase of cortical oscillations contains rich information and is valuable for encoding sound stimuli. Here we hypothesized that oscillatory phase modulation, rather than amplitude modulation, is a neural correlate of auditory streaming. Our behavioral evaluation provided compelling evidence, for the first time, that rats are able to organize auditory streams. Local field potentials (LFPs) were investigated in cortical layer IV or deeper in the primary auditory cortex of anesthetized rats. In response to ABA– sequences with different inter-tone intervals and frequency differences, neurometric functions were characterized with phase locking as well as the band-specific amplitude evoked by test tones. Our results demonstrated that under large frequency differences and short inter-tone intervals, the neurometric function based on stimulus phase locking in higher frequency bands, particularly the gamma band, could better describe van Noorden’s perceptual boundary than the LFP amplitude could. Furthermore, the gamma-band neurometric function showed a build-up-like effect within around 3 seconds of sequence onset. These findings suggest that phase locking and amplitude play different roles in neural computation, and support our hypothesis that temporal modulation of cortical oscillations should be considered a neurophysiological mechanism of auditory streaming, in addition to forward suppression, tonotopic separation, and multi-second adaptation.

12.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

13.
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When the information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), response was faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention. The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception.

14.
An auditory neuron can preserve the temporal fine structure of a low-frequency tone by phase-locking its response to the stimulus. Apart from sound localization, however, much about the role of this temporal information for signal processing in the brain remains unknown. Through psychoacoustic studies we provide direct evidence that humans employ temporal fine structure to discriminate between frequencies. To this end we construct tones that are based on a single frequency but in which, through the concatenation of wavelets, the phase changes randomly every few cycles. We then test the frequency discrimination of these phase-changing tones, of control tones without phase changes, and of short tones that consist of a single wavelet. For carrier frequencies below a few kilohertz we find that phase changes systematically worsen frequency discrimination. No such effect appears for higher carrier frequencies at which temporal information is not available in the central auditory system.
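Stimuli of this kind can be approximated by concatenating fixed-frequency segments whose starting phase is re-drawn every few cycles. This is a simplified sketch; the study's wavelet construction presumably smoothed the segment boundaries, which is omitted here, and all parameter values are illustrative.

```python
import numpy as np

def phase_jump_tone(freq_hz, dur_s, fs_hz, cycles_per_segment, rng):
    """Single-carrier tone whose phase jumps randomly every
    `cycles_per_segment` cycles (no cross-fading at the jumps)."""
    n = int(round(fs_hz * dur_s))
    seg = max(1, int(round(fs_hz * cycles_per_segment / freq_hz)))
    t = np.arange(n) / fs_hz
    phase = np.zeros(n)
    for start in range(0, n, seg):
        phase[start:start + seg] = rng.uniform(0.0, 2.0 * np.pi)
    return np.sin(2.0 * np.pi * freq_hz * t + phase)

# e.g., a 500 Hz tone with a phase jump every 4 cycles:
tone = phase_jump_tone(500.0, 0.1, 44100.0, 4, np.random.default_rng(0))
```

The long-term spectrum stays centered on the carrier while the temporal fine structure is disrupted at every jump, which is what lets the paradigm isolate the contribution of fine-structure cues to frequency discrimination.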

15.
Auditory evoked potentials were recorded to onset and offset of synthesised instrumental tones in 40 normal subjects, 20 right-handed for writing and 20 left-handed. The majority of both groups showed a T-complex which was larger at the right temporal electrode (T4) than the left (T3). In the T4-T3 difference waveforms, the mean potential between latencies of 130 and 165 ms was negative in all right-handed subjects except two for whom the waveforms were marginally positive-going. Amongst the left-handers, however, this converse asymmetry was seen in 7 subjects, 5 of them more than 2 standard deviations from the mean of the right-handed group. The degree of asymmetry was not significantly correlated with the degree of left-handedness according to the Edinburgh Handedness Inventory. Asymmetry of the T-complex to instrumental tones appears to reflect the lateralisation of auditory 'musical' processing in the temporal cortex, confirming evidence from other sources including PET that this is predominantly right-sided in the majority of individuals. The proportion of left-handers showing the converse laterality is roughly in accordance with those likely to be right-hemisphere-dominant for language. If linguistic and 'musical' processes are consistently located in opposite hemispheres, AEPs to complex tones may prove a useful tool in establishing functional lateralisation.

16.
Neural responses to tones in the mammalian primary auditory cortex (A1) exhibit adaptation over the course of several seconds. Important questions remain about the taxonomic distribution of multi-second adaptation and its possible roles in hearing. It has been hypothesized that neural adaptation could explain the gradual “build-up” of auditory stream segregation. We investigated the influence of several stimulus-related factors on neural adaptation in the avian homologue of mammalian A1 (field L2) in starlings (Sturnus vulgaris). We presented awake birds with sequences of repeated triplets of two interleaved tones (ABA–ABA–…) in which we varied the frequency separation between the A and B tones (ΔF), the stimulus onset asynchrony (time from tone onset to onset within a triplet), and tone duration. We found that stimulus onset asynchrony generally had larger effects on adaptation compared with ΔF and tone duration over the parameter range tested. Using a simple model, we show how time-dependent changes in neural responses can be transformed into neurometric functions that make testable predictions about the dependence of the build-up of stream segregation on various spectral and temporal stimulus properties.

17.
Responses of multi-units in the auditory cortex (AC) of unanaesthetized Mongolian gerbils to pure tones and to linearly frequency modulated (FM) sounds were analysed. Three types of responses to pure tones could be clearly distinguished on the basis of spectral tuning properties, response latencies and overall temporal response pattern. In response to FM sweeps these three types discharged in a temporal pattern similar to tone responses. However, for all type-1 units the latencies of some phasic response components shifted systematically as a function of range and/or speed of modulation. Measurements of response latencies to FMs revealed that such responses were evoked whenever the modulation reached a particular instantaneous frequency (Fi). Effective Fi was: (1) independent of modulation range and speed, (2) always reached before the modulation arrived at a local maximum of the frequency response function (FRF) and consequently differed for downward and upward sweeps, and (3) was correlated with the steepest slope of that FRF maximum. The three different types of units were found in discrete and separate fields or regions of the AC. It is concluded that gross temporal response properties are one of the key features distinguishing auditory cortical regions in the Mongolian gerbil. Accepted: 13 August 1997

18.
The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation to the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.
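If the receptor is approximated as a first-order low-pass system, the minimum integration time can be derived from the upper cut-off frequency of the modulation transfer function as τ = 1/(2π f_c). Both this mapping and the cut-off value below are illustrative assumptions, not figures stated by the paper:

```python
import math

def min_integration_time_s(f_cutoff_hz):
    """Minimum integration time of a first-order low-pass with cut-off f_c,
    tau = 1 / (2 * pi * f_c)."""
    return 1.0 / (2.0 * math.pi * f_cutoff_hz)

# Under this assumption, a cut-off near 94 Hz corresponds to roughly the
# 1.7 ms average reported at 20 degrees C:
tau_ms = 1000.0 * min_integration_time_s(94.0)
```

The inverse relationship also illustrates the temperature effect qualitatively: the drop from 1.7 ms to 0.95 ms at 30 degrees C implies a cut-off frequency nearly twice as high.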

19.
Recently, there has been an upsurge of interest in the neural mechanisms of time perception. A central question is whether the representation of time is distributed over brain regions as a function of stimulus modality, task and length of the duration used, or whether it is centralized in a single specific and supramodal network. The answers seem to be converging on the former, and many areas not primarily considered as temporal processing areas remain to be investigated in the temporal domain. Here we asked whether the superior temporal gyrus, an auditory modality-specific area, is involved in processing of auditory timing. Repetitive transcranial magnetic stimulation was applied over the left and right superior temporal gyri while participants performed either a temporal or a frequency discrimination task of single tones. A significant decrease in performance accuracy was observed after stimulation of the right superior temporal gyrus, in addition to an increase in response uncertainty as measured by the Just Noticeable Difference. The results are specific to auditory temporal processing, and performance on the frequency task was not affected. Our results further support the idea of distributed temporal processing and speak in favor of the existence of modality-specific temporal regions in the human brain.

20.
The fish auditory system encodes important acoustic stimuli used in social communication, but few studies have examined response properties of central auditory neurons to natural signals. We determined the features and responses of single hindbrain and midbrain auditory neurons to tone bursts and playbacks of conspecific sounds in the soniferous damselfish, Abudefduf abdominalis. Most auditory neurons were either silent or had slow irregular resting discharge rates <20 spikes s−1. Average best frequency of neurons to tone stimuli was ~130 Hz but ranged from 80 to 400 Hz with strong phase-locking. This low-frequency sensitivity matches the frequency band of natural sounds. Auditory neurons were also modulated by playbacks of conspecific sounds, with thresholds similar to those for 100 Hz tones but lower than those for tones at other test frequencies. Thresholds of neurons to natural sounds were lower in the midbrain than the hindbrain. This is the first study to compare response properties of auditory neurons to both simple tones and complex stimuli in the brain of a recently derived soniferous perciform that lacks accessory auditory structures. These data demonstrate that the auditory fish brain is most sensitive to the frequency and temporal components of natural pulsed sounds that provide important signals for conspecific communication.
