Similar Articles
 20 similar articles found.
1.
The fish auditory system encodes important acoustic stimuli used in social communication, but few studies have examined the response properties of central auditory neurons to natural signals. We determined the features and responses of single hindbrain and midbrain auditory neurons to tone bursts and playbacks of conspecific sounds in the soniferous damselfish, Abudefduf abdominalis. Most auditory neurons were either silent or had slow, irregular resting discharge rates of <20 spikes s−1. The average best frequency of neurons for tone stimuli was ~130 Hz but ranged from 80 to 400 Hz, with strong phase-locking. This low-frequency sensitivity matches the frequency band of natural sounds. Auditory neurons were also modulated by playbacks of conspecific sounds, with thresholds similar to those for 100 Hz tones but lower than those for tones at other test frequencies. Thresholds of neurons to natural sounds were lower in the midbrain than in the hindbrain. This is the first study to compare response properties of auditory neurons to both simple tones and complex stimuli in the brain of a recently derived soniferous perciform that lacks accessory auditory structures. These data demonstrate that the fish auditory brain is most sensitive to the frequency and temporal components of natural pulsed sounds that provide important signals for conspecific communication.
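Phase-locking of the kind reported above is commonly quantified with the vector-strength metric (Goldberg and Brown). The Python sketch below is illustrative only and is not taken from the study; the spike times, jitter, and the 130 Hz test frequency are hypothetical values chosen to mimic a strongly phase-locked unit.

```python
import numpy as np

def vector_strength(spike_times_s, tone_freq_hz):
    """Goldberg-Brown vector strength: 1.0 if every spike falls at the same
    phase of the tone cycle, ~0 if spikes are spread uniformly over the cycle."""
    phases = 2.0 * np.pi * tone_freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical spike train tightly locked to a 130 Hz tone burst (~0.3 ms jitter)
rng = np.random.default_rng(0)
period = 1.0 / 130.0
spikes = np.arange(50) * period + rng.normal(0.0, 0.3e-3, 50)
print(f"vector strength at 130 Hz: {vector_strength(spikes, 130.0):.2f}")
```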

2.
The frequency resolving power (FRP) of hearing in the beluga whale Delphinapterus leucas was studied as a function of exposure to prolonged low-intensity ultrasonic sounds (from –20 to +10 dB). A rippled-spectrum phase-reversal test was used in conjunction with noninvasive recording of auditory evoked potentials. FRP parameters were found to depend nonmonotonically on the intensity of the background noise. The resulting adaptation effects can be explained by the auditory system reducing its sensitivity, in response to higher-intensity signals, to a level optimal for analyzing those signals.
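The rippled-spectrum phase-reversal test mentioned above measures frequency resolving power by asking whether the auditory response distinguishes a rippled noise spectrum from a copy in which the ripple peaks and troughs are swapped. The following Python sketch shows how such a stimulus pair can be generated; it is not the authors' procedure, and the sampling rate, band limits, and ripple density are hypothetical illustration values.

```python
import numpy as np

def rippled_noise(fs=192_000, dur=0.5, f_lo=20_000, f_hi=60_000,
                  ripples_per_octave=20, phase=0.0, seed=1):
    """Band-limited noise whose log-frequency spectrum carries a cosine ripple;
    setting phase = pi swaps the ripple peaks and troughs (phase reversal)."""
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    rng = np.random.default_rng(seed)
    spec = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, freqs.size))  # random phases
    band = (freqs >= f_lo) & (freqs <= f_hi)
    ripple = np.zeros_like(freqs)
    ripple[band] = 1.0 + np.cos(2.0 * np.pi * ripples_per_octave
                                * np.log2(freqs[band] / f_lo) + phase)
    return np.fft.irfft(spec * ripple, n)

standard = rippled_noise(phase=0.0)
reversed_ = rippled_noise(phase=np.pi)   # peaks become troughs and vice versa
```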

3.
Peripheral auditory frequency tuning in the ensiferan insect Cyphoderris monstrosa (Orthoptera: Haglidae) was examined by comparing tympanal vibrations and primary auditory receptor responses. In this species there is a mismatch between the frequency of maximal auditory sensitivity and the frequency content of the species' acoustic signals. The mismatch is not a function of the mechanical properties of the tympanum, but is evident at the level of primary receptors. There are two classes of primary receptors: low-tuned and broadly tuned. Differences in the absolute sensitivity of the two receptor types at the male song frequency would allow the auditory system to discriminate intraspecific signals from sounds containing lower frequencies. Comparisons of tympanal and receptor tuning indicated that the sensitivity of the broadly tuned receptors did not differ from that of the tympanum, while low-tuned receptors had significantly narrower frequency tuning. The results suggest that the limited specialization for the encoding of intraspecific signals in the auditory system of C. monstrosa is a primitive rather than a degenerate condition. The limited specialization of C. monstrosa may reflect the evolutionary origin of communication-related hearing from a generalized precursor through the addition of peripheral adaptations (tympana, additional receptors) to enhance frequency sensitivity and discrimination.

4.
We experimentally demonstrated that tonal acoustic signals with a carrier frequency of 140–200 Hz had a repellent effect on male mosquitoes (Culicidae). Swarming males of Aedes diantaeus were concentrated in a small space near an auxiliary attracting sound source that simulated the flight sound of conspecific females (carrier frequency 280–320 Hz). The resulting cluster of attracted mosquitoes was then stimulated with test signals of variable amplitude and carrier frequency from a second loudspeaker. The direction of mosquito flight away from the source of the test sounds and the decrease in their number above the attracting sound source were used as criteria of the behavioral response. Pronounced avoidance responses (negative phonotaxis) of swarming mosquitoes were observed in the range of 140–200 Hz. Most of the mosquitoes left the area above the attracting sound source within one second after the onset of the test signal, flying mainly upward, sideways, or backwards relative to the test acoustic vector. We presume that mosquitoes base defensive behavior against attacking predatory insects on analysis of auditory information. The range of negative phonotaxis is limited at higher frequencies by the spectrum of the flight sounds of conspecific females and, in the low-frequency range, by the increasing level of atmospheric noise.

5.

Background

The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialisation for specific acoustic features in speech, particularly regarding 'rapid temporal processing'.

Methodology

A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences, manipulated in spectro-temporal complexity and in whether or not they were intelligible. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) which could be static or could vary independently in frequency and/or amplitude. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds.
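As a rough illustration of the analysis/synthesis idea described above, each noise-excited spectral prominence can be approximated by bandpass-filtering white noise around a time-varying centre frequency and scaling it with a time-varying amplitude; pairing contours from the same or from different sentences then yields the matched and mismatched conditions. The Python sketch below models a single prominence frame by frame and is only a simplified stand-in for the authors' technique; the contour shapes, sampling rate, and bandwidth are hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def noise_formant(freq_track, amp_track, fs=16_000, frame=0.01, bw=200):
    """Crude noise-excited 'formant': for each 10-ms frame, bandpass white
    noise around the instantaneous centre frequency and scale it by the
    amplitude contour (frame boundaries are not smoothed in this sketch)."""
    hop = int(fs * frame)
    out = np.zeros(len(freq_track) * hop)
    rng = np.random.default_rng(0)
    for i, (f0, a) in enumerate(zip(freq_track, amp_track)):
        sos = butter(2, [f0 - bw / 2, f0 + bw / 2], btype='band',
                     fs=fs, output='sos')
        out[i * hop:(i + 1) * hop] = a * sosfilt(sos, rng.standard_normal(hop))
    return out

# Two hypothetical 1-s contours (100 frames of 10 ms each)
t = np.linspace(0, 1, 100)
freq_A = 500 + 300 * np.sin(2 * np.pi * 3 * t)
amp_A = 0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 4 * t))
amp_B = 0.5 + 0.5 * np.abs(np.cos(2 * np.pi * 5 * t))

matched = noise_formant(freq_A, amp_A)    # dynamics from the same 'sentence'
mismatch = noise_formant(freq_A, amp_B)   # frequency from A, amplitude from B
```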

Conclusions

Neural activity evoked by spectral and amplitude modulations sufficient to support speech intelligibility (without the stimuli actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left-dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

6.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. the looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and by the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a 'simple' random-dot kinematogram showing a starfield and (2) a 'naturalistic' visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals: it is enhanced when the audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli, especially when they are paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.
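The 'simple' auditory stimulus described above is an AM-FM tone whose level and frequency rise for a looming source and fall for a receding one. A minimal Python sketch of such a stimulus pair follows; the particular frequencies, ramps, and duration are hypothetical and serve only to show the construction.

```python
import numpy as np

def am_fm_tone(fs=44_100, dur=1.0, f_start=440, f_end=880,
               a_start=0.1, a_end=1.0):
    """AM-FM tone with a linear frequency glide and a linear amplitude ramp.
    Rising frequency and intensity approximate a looming source; swapping the
    start and end values gives the receding counterpart."""
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    inst_freq = np.linspace(f_start, f_end, t.size)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs   # integrate instantaneous frequency
    amp = np.linspace(a_start, a_end, t.size)
    return amp * np.sin(phase)

looming = am_fm_tone(f_start=440, f_end=880, a_start=0.1, a_end=1.0)
receding = am_fm_tone(f_start=880, f_end=440, a_start=1.0, a_end=0.1)
```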

7.
To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting 6 s, were played in random order, each preceded by a 100-ms cue tone (0.5, 1, or 2 kHz) presented 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as the N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds imply that, once the cue has indicated the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.

8.
The interaction of spike activity evoked by successive sounds in single elements of the auditory system is considered. Both the forward-masking situation, in which pairs of signals are presented independently, and long sequences of signals with different on-off ratios are analyzed. The diversity of single units' ability to reproduce fast sequences increases strongly from the lowest to the higher nuclei of the auditory pathway. Complex units that react only to 'new' signals appear from the midbrain level of the auditory pathway onward; however, such elements are usually found not in the direct lemniscal auditory pathway but in surrounding nuclei. Although poststimulus adaptation to a given type of signal usually causes a considerable increase in detection threshold, differential sensitivity to small changes can remain quite high. This aspect of auditory sensation remains poorly investigated in both physiological and psychophysical experiments.

9.
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
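The tracking of auditory-visual synchrony described above can be illustrated by correlating an auditory feature envelope (e.g. loudness) with a visual feature envelope (e.g. luminance) over a limited range of lags. The Python sketch below is a simplified illustration, not the analysis used in the study; the envelopes, sampling rate, and lag window are hypothetical.

```python
import numpy as np

def alignment_score(aud_env, vis_env, fs, max_lag_s=0.5):
    """Peak normalized cross-correlation between an auditory feature envelope
    and a visual feature envelope within +/- max_lag_s; near zero when the
    two streams are temporally misaligned."""
    a = (aud_env - aud_env.mean()) / aud_env.std()
    v = (vis_env - vis_env.mean()) / vis_env.std()
    max_lag = int(max_lag_s * fs)
    full = np.correlate(a, v, mode='full') / len(a)
    mid = len(full) // 2                      # index of zero lag
    return full[mid - max_lag:mid + max_lag + 1].max()

# Hypothetical 10-s envelopes sampled at 30 Hz
fs = 30
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
aud = np.abs(np.sin(2 * np.pi * 0.7 * t)) + 0.1 * rng.standard_normal(t.size)
vis_aligned = np.roll(aud, 2)                          # nearly synchronous
vis_misaligned = np.random.default_rng(1).permutation(aud)
print(alignment_score(aud, vis_aligned, fs),
      alignment_score(aud, vis_misaligned, fs))
```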

10.
Snakes are frequently described in both popular and technical literature as either deaf or able to perceive only groundborne vibrations. Physiological studies have shown that snakes are actually most sensitive to airborne vibrations. Snakes are able to detect both airborne and groundborne vibrations using their body surface (termed somatic hearing) as well as their inner ears. The central auditory pathways for these two modes of "hearing" remain unknown. Recent experimental evidence has shown that snakes can respond behaviorally to both airborne and groundborne vibrations. The ability of snakes to contextualize the sounds and respond with consistent predatory or defensive behaviors suggests that auditory stimuli may play a larger role in the behavioral ecology of snakes than was previously realized. Snakes produce sounds in a variety of ways, and there appear to be multiple acoustic Batesian mimicry complexes among snakes. Analyses of the proclivity for sound production and the acoustics of the sounds produced, within a habitat- or phylogeny-specific context, may provide insights into the behavioral ecology of snakes. The relatively low information content of the sounds produced by snakes suggests that these sounds are not suitable for intraspecific communication. Nevertheless, given the diversity of habitats in which snakes are found, and their dual auditory pathways, some form of intraspecific acoustic communication may exist in some species.

11.
Can plants sense natural airborne sounds and respond to them rapidly? We show that Oenothera drummondii flowers, exposed to playback sound of a flying bee or to synthetic sound signals at similar frequencies, produce sweeter nectar within 3 min, potentially increasing the chances of cross pollination. We found that the flowers vibrated mechanically in response to these sounds, suggesting a plausible mechanism whereby the flower serves as an auditory sensory organ. Both the vibration and the nectar response were frequency-specific: the flowers responded and vibrated to pollinator sounds, but not to higher-frequency sounds. Our results document for the first time that plants can rapidly respond to pollinator sounds in an ecologically relevant way. Potential implications include plant resource allocation, the evolution of flower shape and the evolution of pollinator sound. Finally, our results suggest that plants may be affected by other sounds as well, including anthropogenic ones.

12.
Tinnitus is the perception of sound in the absence of an external stimulus. The pathophysiology of tinnitus is not yet fully understood, but recent studies indicate that the underlying brain alterations involve non-auditory areas, including the prefrontal cortex. In experiment 1, we used a go/no-go paradigm to evaluate target detection speed and inhibitory control in tinnitus participants (TP) and control subjects (CS), in both unimodal and bimodal conditions in the auditory and visual modalities. We also tested whether the sound frequency used for targets and distractors affected performance. We observed that TP were slower and made more false alarms than CS in all unimodal auditory conditions. TP were also slower than CS in the bimodal conditions. In addition, when comparing response times in bimodal and auditory unimodal conditions, the expected bimodal gain was present in CS, but not in TP when tinnitus-matched frequency sounds were used as targets. In experiment 2, we tested sensitivity to cross-modal interference in TP during auditory and visual go/no-go tasks in which each stimulus was preceded by an irrelevant pre-stimulus in the untested modality (e.g. a high-frequency auditory pre-stimulus in the visual go/no-go condition). We observed that TP had longer response times than CS and made more false alarms in all conditions. In addition, the highest false alarm rate occurred in TP when tinnitus-matched/high-frequency sounds were used as pre-stimuli. We conclude that inhibitory control is altered in TP and that TP are abnormally sensitive to cross-modal interference, reflecting difficulty ignoring irrelevant stimuli. The fact that the strongest interference effect was caused by tinnitus-like auditory stimulation is consistent with the hypothesis that such stimulation generates emotional responses that affect cognitive processing in TP. We postulate that executive function deficits play a key role in the perception and maintenance of tinnitus.

13.
Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals are discriminable either by frequency content alone or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
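One simple way to quantify the frequency-specific modulation of frequency-following responses described above is to measure the FFR spectral magnitude inside the task-relevant band and compare it with a control band. The Python sketch below is illustrative only and uses a hypothetical simulated FFR; it is not the analysis pipeline of the study.

```python
import numpy as np

def ffr_band_amplitude(ffr, fs, band_hz):
    """Mean spectral magnitude of a frequency-following response within a
    frequency band (e.g. the pitch range of the attended talker)."""
    spec = np.abs(np.fft.rfft(ffr)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), 1.0 / fs)
    sel = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    return spec[sel].mean()

# Hypothetical averaged FFR: a 110 Hz component buried in noise
fs = 16_000
t = np.arange(0, 0.2, 1.0 / fs)
rng = np.random.default_rng(0)
ffr = 0.2 * np.sin(2 * np.pi * 110 * t) + rng.standard_normal(t.size)
print(ffr_band_amplitude(ffr, fs, (100, 120)))   # task-relevant band
print(ffr_band_amplitude(ffr, fs, (300, 320)))   # control band
```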

14.
Temporal summation was estimated by measuring detection thresholds for pulses of 1–50 ms duration presented in noise maskers. The purpose of the study was to examine the effects of the spectral profiles and intensities of the noise maskers on temporal summation, to look for signs of peripheral processing of pulses with various frequency-time structures in the auditory responses, and to test whether temporal summation measures can be used to assess speech recognition. The center frequencies of pulses and maskers were similar. The maskers had rippled amplitude spectra of two types: in some, the center frequency coincided with a spectral peak, and in others with a spectral dip (so-called on- and off-maskers). When the auditory system resolved the masker ripples, the difference between the detection thresholds obtained with the two masker types was nonzero. Measuring temporal summation and the on/off threshold difference therefore allowed conclusions about auditory sensitivity and about the resolution of the masker spectral structure (frequency selectivity) for pulses of various durations within local frequency regions. To estimate the effect of the dynamic properties of hearing on sensitivity and frequency selectivity, the masker intensity was varied. Temporal summation was measured with on- and off-maskers of various intensities in two frequency ranges (2 and 4 kHz) in four subjects with normal hearing and one subject with age-related hearing loss who complained of reduced speech recognition in noise. Pulses shorter than 10 ms were treated as simple models of consonant sounds, and tone pulses longer than 10 ms as simple models of vowel sounds. In the subjects with normal hearing, at moderate masker intensities, temporal summation was enhanced for the short pulses (consonant models), and the resolution of the rippled masker spectra improved for both the short and the tone pulses (consonant and vowel models). We suppose that the enhanced summation is related to the refractoriness of auditory nerve fibers. In the 4 kHz range, the subject with age-related hearing loss did not resolve the rippled masker structure in the presence of the short pulses (consonant models). We suppose that this impairment was caused by abnormal synchronization of the auditory nerve fiber responses to the pulses, which contributes to the decrease in speech recognition in noise.

15.
In the dance language, honeybees use airborne near-field sound signals to inform their nestmates of the location of food sources. Behavioral experiments have recently shown that Johnston's organ, a chordotonal organ located in the pedicel of the antenna, is used to perceive these sound signals. In the present study, the mechanical response of the antennal flagellum to stimulation with near-field sound signals was investigated using laser vibrometry. The absolute amplitudes of antennal deflection under acoustical stimulation, the response to sounds of different displacement and velocity amplitudes, the shape of movement of the flagellum, the mechanical frequency response, and the mechanical directional sensitivity of the auditory sense organ of the honeybee are described. Using pulsed stimuli simulating the dance sounds, it is shown that the temporal pattern of the dance sound is resolved at the level of antennal vibrations.

16.
Coloured rings are often used for marking bats so that specific individuals can be recognized. We noticed that the rings of mouse-eared bats, Myotis myotis and Myotis blythii, in a combination of one plastic-split and one metallic ring on the same forearm, emitted sounds that were largely ultrasonic each time the rings met in flight. We recorded the ring sounds and the echolocation calls produced by the bats, and played them back to neural preparations of lesser yellow underwing moths, Noctua comes, while making extracellular recordings from the moths' A1 auditory receptors. The peak energy of the ring sounds occurred much closer in frequency to the moth's best auditory frequency (the frequency at which the moth has the lowest auditory threshold) than the peak energy of the calls, for both bat species, and the ring sounds were detected at a threshold 5-6 dB peSPL lower than the calls. Moths performed evasive manoeuvres to playbacks of ring sounds more frequently than they did to control (tape noise) sequences. These neural and behavioural responses imply that certain bats should not be marked with two rings on one wing, as this may make the bat more apparent to tympanate insects, and may therefore reduce its foraging success.
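The comparison above rests on estimating the peak-energy frequency of a recording and relating it to the moth's best auditory frequency. A minimal Python sketch of that estimate follows; the simulated 45 kHz 'ring click' and the sampling rate are hypothetical, not recordings from the study.

```python
import numpy as np

def peak_energy_frequency(sound, fs):
    """Frequency (Hz) at which the power spectrum of a recording peaks."""
    psd = np.abs(np.fft.rfft(sound * np.hanning(len(sound)))) ** 2
    freqs = np.fft.rfftfreq(len(sound), 1.0 / fs)
    return freqs[np.argmax(psd)]

# Hypothetical 1-ms 'ring click' dominated by a 45 kHz component
fs = 250_000
t = np.arange(0, 0.001, 1.0 / fs)
click = np.sin(2 * np.pi * 45_000 * t) * np.hanning(t.size)
print(peak_energy_frequency(click, fs))   # ~45 kHz
```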

17.
1.  Within the tonotopic organization of the inferior colliculus, two frequency ranges are well represented: a frequency range within that of the echolocation signals, from 50 to 100 kHz, and a frequency band below that of the echolocation sounds, from 10 to 35 kHz. The frequency range between these two bands, from about 40 to 50 kHz, is distinctly underrepresented (Fig. 3B).
2.  Units with BFs in the lower frequency range (10–25 kHz) were most sensitive, with thresholds of –5 to –11 dB SPL, and units with BFs within the frequency range of the echolocation signals had minimal thresholds around 0 dB SPL (Fig. 1).
3.  In the medial part of the rostral inferior colliculus, units were encountered that responded preferentially or exclusively to noise stimuli. Seven neurons were found that were excited only by human breathing noises and not by pure tones, frequency-modulated signals, or various noise bands; these neurons were considered a specialized subset of the larger sample of noise-sensitive neurons. The maximal auditory sensitivity in the frequency range below that of echolocation, and the conspicuous existence of noise- and breathing-noise-sensitive units in the inferior colliculus, are discussed in the context of the foraging behavior of vampire bats.

18.
Fishes have evolved a diversity of sound-generating organs and acoustic signals of varied temporal and spectral content. In addition, representatives of many teleost families such as otophysines, anabantoids, mormyrids and holocentrids possess accessory structures that enhance hearing abilities by acoustically coupling air-filled cavities to the inner ear. In contrast to accessory hearing structures such as the Weberian ossicles in otophysines and the suprabranchial chambers in anabantoids, sonic organs do not occur in all members of these taxa. Comparison of audiograms among nine representatives of seven otophysan families from four orders revealed major differences in auditory sensitivity, especially at higher frequencies (> 1 kHz), where thresholds differed by up to 50 dB. These differences showed no apparent correspondence to the ability to produce sounds (vocal versus non-vocal species) or to the spectral content of species-specific sounds. In anabantoids, the lowest auditory thresholds were found in the blue gourami Trichogaster trichopterus, a species not thought to be vocal. Dominant frequencies of sounds corresponded with the optimal hearing bandwidth in two out of three vocalizing species. Based on these results, it is concluded that the selective pressures involved in the evolution of accessory hearing structures and in the design of vocal signals differed from those that would optimize acoustic communication.

19.
This paper reviews the basic aspects of auditory processing that play a role in the perception of speech. The frequency selectivity of the auditory system, as measured using masking experiments, is described and used to derive the internal representation of the spectrum (the excitation pattern) of speech sounds. The perception of timbre and distinctions in quality between vowels are related to both static and dynamic aspects of the spectra of sounds. The perception of pitch and its role in speech perception are described. Measures of the temporal resolution of the auditory system are described and a model of temporal resolution based on a sliding temporal integrator is outlined. The combined effects of frequency and temporal resolution can be modelled by calculation of the spectro-temporal excitation pattern, which gives good insight into the internal representation of speech sounds. For speech presented in quiet, the resolution of the auditory system in frequency and time usually markedly exceeds the resolution necessary for the identification or discrimination of speech sounds, which partly accounts for the robust nature of speech perception. However, for people with impaired hearing, speech perception is often much less robust.
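The sliding temporal integrator mentioned above can be sketched as smoothing the stimulus intensity with a short, exponentially weighted window. The Python snippet below is a crude illustration under that assumption, not the specific model outlined in the review; the time constant and the gap-in-noise example are hypothetical.

```python
import numpy as np

def sliding_temporal_integrator(signal, fs, tau_s=0.008):
    """Crude sliding temporal integrator: the instantaneous intensity
    (signal squared) is smoothed with an exponentially weighted window of
    time constant tau_s, approximating an internal temporal representation."""
    intensity = signal ** 2
    n = int(5 * tau_s * fs)                 # window covers ~5 time constants
    w = np.exp(-np.arange(n) / (tau_s * fs))
    w /= w.sum()
    return np.convolve(intensity, w, mode='same')

# A 4-ms gap in noise is smeared but still visible in the smoothed trace
fs = 16_000
rng = np.random.default_rng(0)
noise = rng.standard_normal(int(0.2 * fs))
noise[int(0.1 * fs):int(0.104 * fs)] = 0.0  # silent gap
smoothed = sliding_temporal_integrator(noise, fs)
```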

20.
Livshits MS. Biofizika, 2000, 45(5): 922–926.
The study is based on a model of sound perception that involves two systems for measuring the frequency of a perceived sound. The system that analyzes the periodicity of spike sequences in the axons of neurons innervating the inner hair cells excited by the traveling wave is less precise, but it provides a frequency estimate for any periodic sound. Exact measurement of the frequency of a sinusoidal sound is derived from the spikes in the axons of neurons innervating the inner hair cells of the auditory reception field, which uses the entire train of waves excited by the sound in the critical layer of the cochlear waveguide corresponding to that frequency. The octave effect is explained by the fact that the frequency spectrum of speech, singing, and music coincides with the region of the audible range in which auditory nerve fiber impulses are synchronized by the incoming signal. Octave similarity, i.e., the similarity in the sound of harmonic signals whose frequencies are related by factors of two (2:1, etc.), is explained by an unambiguous match between the sound frequency and the pulse rate in auditory fibers coming from the auditory reception field. The presence in the inferior colliculi (posterior tubercles) of multipeak neurons whose peaks lie in octave ratios confirms the crucial role of the exact frequency-measurement system in the phenomenon of octave similarity. The phenomenon of diplacusis, which is particularly pronounced in persons with Ménière's disease, is attributed to changes in the position of the auditory reception field in the diseased ear relative to the healthy ear. The alternating switching of reception from one ear to the other is related to a disturbance of the unitary image of pitch.
