20 similar documents were found (search time: 31 ms)
1.
Background
Previous findings have shown that humans can learn to localize with altered auditory space cues. Here we analyze such learning processes and their effects, lasting up to one month, on both localization accuracy and sound externalization. Subjects were trained and retested, focusing on the effects of stimulus type in learning, stimulus type in localization, stimulus position, previous experience, externalization levels, and time.
Method
We trained listeners in azimuth and elevation discrimination in two experiments. Half participated in the azimuth experiment first and half in the elevation experiment first. In each experiment, half were trained with speech sounds and half with white noise. Retests were performed at several time intervals: just after training and one hour, one day, one week, and one month later. In a control condition, we tested the effect of systematic retesting over time with post-tests only after training and either one day, one week, or one month later.
Results
With training, all participants lowered their localization errors. This benefit was still present one month after training. Participants were more accurate in the second training phase, revealing an effect of previous experience on a different task. Training with white noise led to better results than training with speech sounds. Moreover, the training benefit generalized to untrained stimulus-position pairs. Throughout the post-tests, externalization levels increased. In the control condition, the long-term localization improvement was preserved even without additional contact with the trained sounds, but externalization levels were lower.
Conclusion
Our findings suggest that humans adapt easily to altered auditory space cues and that such adaptation spreads to untrained positions and sound types. We propose that such learning depends on all available cues, but each cue type might be learned and retrieved differently. The process of localization learning is global, not limited to stimulus-position pairs, and it differs from externalization processes.
2.
Background
When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high-frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue.
Methodology/Principal Findings
In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude-modulated (AM) sounds presented at high frequencies, and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds in the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low-frequency region (128 and 256 Hz).
Conclusions/Significance
The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high-frequency region is involved in adaptive plasticity of auditory localization processing.
3.
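The envelope-ITD stimuli described in the aftereffect study above can be sketched in a few lines; the sample rate, carrier frequency, and modulation rate below are illustrative assumptions rather than the study's exact parameters. The key point is that the fine structure is identical at both ears and only the amplitude envelope is delayed.

```python
import numpy as np

FS = 48000          # sample rate in Hz (assumed for illustration)
CARRIER = 4000.0    # carrier well above the ~2 kHz limit for fine-structure ITD use
AM_RATE = 128.0     # envelope modulation rate in Hz (assumed)

def am_pair_with_envelope_itd(dur_s, itd_s):
    """Return (left, right) AM tones whose right-ear ENVELOPE lags by itd_s seconds."""
    n = int(dur_s * FS)
    t = np.arange(n) / FS
    carrier = np.sin(2 * np.pi * CARRIER * t)            # identical fine structure
    env = 0.5 * (1.0 + np.sin(2 * np.pi * AM_RATE * t))  # raised-sine envelope
    shift = int(round(itd_s * FS))                       # envelope delay in samples
    env_delayed = np.roll(env, shift)                    # periodic envelope: roll acts as a phase shift
    return carrier * env, carrier * env_delayed

# A +600 µs envelope ITD, as used for the adapters in Experiment 1.
left, right = am_pair_with_envelope_itd(0.5, 600e-6)
```

Because only the envelope is shifted, the interaural difference in the fine structure stays at zero, which is what isolates the envelope ITD as the sole positional cue.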
Background
The auditory continuity illusion, or the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking.
Methodology/Principal Findings
We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternated with a bandpass noise at regular intervals. The flanking sounds significantly increased the limit of perceiving apparent continuity in terms of the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: the flanking sounds enhanced the continuity illusion. This effect was dependent on the temporal relationship between the flanking sounds and noise bursts.
Conclusions/Significance
The spectrotemporal characteristics of the enhancement effect suggest that a mechanism to compensate for exogenous attentional distraction may contribute to the continuity illusion.
4.
Background
Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.
Methodology/Principal Findings
Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.
Conclusions/Significance
These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left hemifield and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
5.
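The opponent two-population code suggested by these MEG results can be caricatured in a few lines; the sigmoidal tuning curves and the sign-based readout below are illustrative assumptions, not the authors' fitted model. Broad hemifield tuning captures the key observation: sounds within one hemifield drive essentially the same population regardless of their separation, while sounds in opposite hemifields drive different populations.

```python
import numpy as np

def population_rates(azimuth_deg, sigma_deg=30.0):
    """Rates (0..1) of two broadly tuned opponent populations (assumed tuning width)."""
    az = np.asarray(azimuth_deg, dtype=float)
    right = 1.0 / (1.0 + np.exp(-az / sigma_deg))  # prefers right-hemifield locations
    left = 1.0 - right                             # mirror-symmetric left population
    return left, right

def decode_side(azimuth_deg):
    """Read out the hemifield from the sign of the rate difference."""
    left, right = population_rates(azimuth_deg)
    return np.sign(right - left)
```

In such a code, location within a hemifield is carried by the graded rate difference rather than by which population fires, matching the adaptation pattern described above.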
Background
The Weberian apparatus of otophysine fishes facilitates sound transmission from the swimbladder to the inner ear to increase hearing sensitivity. It has been of great interest to biologists since the 19th century. No studies, however, are available on the development of the Weberian ossicles and their effect on the development of hearing in catfishes.
Methodology/Principal Findings
We investigated the development of the Weberian apparatus and auditory sensitivity in the catfish Lophiobagrus cyclurus. Specimens from 11.3 mm to 85.5 mm in standard length were studied. Morphology was assessed using sectioning, histology, and X-ray computed tomography, along with 3D reconstruction. Hearing thresholds were measured utilizing the auditory evoked potentials recording technique. Weberian ossicles and interossicular ligaments were fully developed in all stages investigated except in the smallest size group. In the smallest catfish, the intercalarium and the interossicular ligaments were still missing and the tripus was not yet fully developed. The smallest juveniles showed the lowest auditory sensitivity and were unable to detect frequencies higher than 2 or 3 kHz; in larger specimens, sensitivity increased by up to 40 dB and frequency detection extended up to 6 kHz. In the size groups capable of perceiving frequencies up to 6 kHz, larger individuals had better hearing abilities at low frequencies (0.05–2 kHz), whereas smaller individuals showed better hearing at the highest frequencies (4–6 kHz).
Conclusions/Significance
Our data indicate that the ability of otophysine fish to detect sounds at low levels and high frequencies largely depends on the development of the Weberian apparatus. A significant increase in auditory sensitivity was observed as soon as all Weberian ossicles and interossicular ligaments were present and the chain for transmitting sounds from the swimbladder to the inner ear was complete. This contrasts with findings in another otophysine, the zebrafish, where no threshold changes have been observed.
6.
Anthony D. Cate Timothy J. Herron E. William Yund G. Christopher Stecker Teemu Rinne Xiaojian Kang Christopher I. Petkov Elizabeth A. Disbrow David L. Woods 《PloS one》2009,4(2)
Background
Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.
Methodology/Principal Findings
We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.
Conclusions/Significance
Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.
7.
Background
Prepulse inhibition (PPI) refers to the reduction of the acoustic startle response (ASR) when a strong acoustic stimulus is preceded by a weak sound. Previous studies suggest that PPI is influenced by physical parameters of the prepulse sound, such as its intensity and lead time. The present study characterizes the impact of prepulse tone frequency on PPI.
Methods
Seven female C57BL mice were used in the present study. ASR was induced by a 100 dB SPL white noise burst. After assessing the effect of background sounds (white noise and pure tones) on ASR, PPI was tested using prepulse pure tones against a background tone of either 10 or 18 kHz. The inhibitory effect was assessed by measuring and analyzing the changes in the first peak-to-peak magnitude, root mean square value, duration, and latency of the ASR as a function of the frequency difference between prepulse and background tones.
Results
Our data showed that ASR magnitude with a pure-tone background varied with tone frequency and was smaller than that with a white noise background. The prepulse tone systematically reduced the ASR as a function of the difference in frequency between the prepulse and background tones. A frequency difference of 0.5 kHz appeared to be a prerequisite for inducing substantial ASR inhibition. The frequency dependence of PPI was similar under either a 10 or 18 kHz background tone.
Conclusion
PPI is sensitive to frequency information of the prepulse sound. However, the critical factor is not tone frequency itself, but the frequency difference between the prepulse and background tones.
8.
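The ASR measures named in the study above (first peak-to-peak magnitude, root-mean-square value, and latency) can be sketched as a small analysis function; the onset-threshold rule used for latency here is an illustrative assumption, not the authors' pipeline.

```python
import numpy as np

def asr_metrics(trace, fs, threshold_frac=0.1):
    """Peak-to-peak, RMS, and onset latency (ms) of a startle-response trace.

    threshold_frac: onset is the first sample whose magnitude exceeds this
    fraction of the absolute peak (an assumed, simplistic onset criterion).
    """
    trace = np.asarray(trace, dtype=float)
    p2p = float(trace.max() - trace.min())
    rms = float(np.sqrt(np.mean(trace ** 2)))
    thresh = threshold_frac * np.abs(trace).max()
    onset = int(np.argmax(np.abs(trace) > thresh))  # index of first supra-threshold sample
    return {"p2p": p2p, "rms": rms, "latency_ms": 1000.0 * onset / fs}
```

Prepulse inhibition would then show up as a drop in `p2p` and `rms` on prepulse trials relative to startle-alone trials.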
Background
Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus.
Methodology/Principal Findings
Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle, and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C, and again to 22°C. Sounds were recorded in distress situations when the fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period, and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C, and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. Temporal resolution was determined by analyzing the minimum resolvable click period (0.3–5 ms). Hearing sensitivity was higher at the higher temperature, and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double clicks did not change.
Conclusions/Significance
These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus.
9.
Background
A stimulus approaching the body requires fast processing and appropriate motor reactions. In monkeys, fronto-parietal networks are involved both in integrating multisensory information within a limited space surrounding the body (i.e., peripersonal space, PPS) and in action planning and execution, suggesting an overlap between sensory representations of space and motor representations of action. In the present study we investigate whether these overlapping representations also exist in the human brain.
Methodology/Principal Findings
We recorded motor-evoked potentials (MEPs) from hand muscles, induced by single-pulse transcranial magnetic stimulation (TMS), after presenting an auditory stimulus either near the hand or in far space. MEPs recorded 50 ms after the near-sound onset were enhanced compared to MEPs evoked after far sounds. This near-far modulation faded at longer inter-stimulus intervals, and reversed completely for MEPs recorded 300 ms after the sound onset. At that time point, higher motor excitability was associated with far sounds. Such auditory modulation of hand motor representation was specific to a hand-centred, not a body-centred, reference frame.
Conclusions/Significance
This pattern of corticospinal modulation highlights the relation between space and time in the PPS representation: an early facilitation for near stimuli may reflect immediate motor preparation, whereas, at later time intervals, motor preparation relates to distant stimuli potentially approaching the body.
10.
Background
We physically interact with external stimuli when they occur within a limited space immediately surrounding the body, i.e., peripersonal space (PPS). In the primate brain, specific fronto-parietal areas are responsible for the multisensory representation of PPS, integrating tactile, visual, and auditory information occurring on and near the body. Dynamic stimuli are particularly relevant for PPS representation, as they might signal potential harms approaching the body. However, behavioural tasks for studying PPS representation with moving stimuli are lacking. Here we propose a new dynamic audio-tactile interaction task in order to assess the extension of PPS in a more functionally and ecologically valid condition.
Methodology/Principal Findings
Participants vocally responded to a tactile stimulus administered at the hand at different delays from the onset of task-irrelevant dynamic sounds, which gave the impression of a sound source either approaching or receding from the subject's hand. Results showed that a moving auditory stimulus speeded up the processing of a tactile stimulus at the hand as long as it was perceived within a limited distance from the hand, that is, within the boundaries of the PPS representation. The audio-tactile interaction effect was stronger when sounds were approaching than when sounds were receding.
Conclusion/Significance
This study provides a new method to dynamically assess PPS representation: the function describing the relationship between tactile processing and the position of sounds in space can be used to estimate the location of PPS boundaries, along a spatial continuum between far and near space, in a valuable and ecologically significant way.
11.
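The boundary-estimation idea in the study above — locating the PPS limit on the function relating tactile reaction time to sound position — can be sketched as a sigmoid fit. The coarse grid search and parameter ranges below are illustrative assumptions; the midpoint of the fitted sigmoid serves as the boundary estimate.

```python
import numpy as np

def fit_pps_boundary(distance_cm, rt_ms):
    """Estimate the PPS boundary as the midpoint of a sigmoid fit of RT vs distance."""
    d = np.asarray(distance_cm, dtype=float)
    rt = np.asarray(rt_ms, dtype=float)
    lo, hi = rt.min(), rt.max()                       # assumed near/far RT asymptotes
    best_err, best_mid = np.inf, None
    for mid in np.linspace(d.min(), d.max(), 201):    # candidate boundary locations
        for slope in np.linspace(1.0, 30.0, 30):      # candidate transition widths
            pred = lo + (hi - lo) / (1.0 + np.exp(-(d - mid) / slope))
            err = float(np.sum((pred - rt) ** 2))
            if err < best_err:
                best_err, best_mid = err, mid
    return best_mid   # distance at which RT is halfway between fast and slow
```

With real data one would fit per participant and, for instance, compare the recovered midpoints between approaching and receding sounds.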
Background
We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique.
Methods and Findings
Participants produced a single voice sound repeatedly, with specific DAF delay times (0, 66, 133 ms), for three minutes to induce ‘Lag Adaptation’. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by the averaging of delay times in the adaptation phase.
Conclusions
These findings suggest that vocalization is finely tuned by a temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
12.
Background
Singing in songbirds is a complex, learned behavior which shares many parallels with human speech. The avian vocal organ (syrinx) has two potential sound sources, and each sound generator is under unilateral, ipsilateral neural control. Different songbird species vary in their use of bilateral or unilateral phonation (lateralized sound production) and rapid switching between left and right sound generation (interhemispheric switching of motor control). Bengalese finches (Lonchura striata domestica) have received considerable attention because they rapidly modify their song in response to manipulations of auditory feedback. However, how the left and right sides of the syrinx contribute to acoustic control of song has not been studied.
Methodology
Three manipulations of lateralized syringeal control of sound production were conducted. First, unilateral syringeal muscular control was eliminated by resection of the left or right tracheosyringeal portion of the hypoglossal nerve, which provides neuromuscular innervation of the syrinx. Spectral and temporal features of song were compared before and after lateralized nerve injury. In a second experiment, either the left or right sound source was devoiced to confirm the role of each sound generator in the control of acoustic phonology. Third, air pressure was recorded before and after unilateral denervation to enable quantification of acoustic change within individual syllables following lateralized nerve resection.
Significance
These experiments demonstrate that the left sound source produces louder, higher-frequency, lower-entropy sounds, and the right sound generator produces lower-amplitude, lower-frequency, higher-entropy sounds. The bilateral division of labor is complex, and the frequency specialization is the opposite of the pattern observed in most songbirds. Further, there is evidence for rapid interhemispheric switching during song production. Lateralized control of song production in Bengalese finches may enhance the acoustic complexity of song and facilitate the rapid modification of sound production following manipulations of auditory feedback.
13.
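One standard way to quantify the "entropy" contrast reported in the study above is Wiener entropy (spectral flatness), which is near 1 for noise-like, high-entropy sounds and near 0 for tonal, low-entropy sounds. The choice of this particular measure is an assumption; the study's exact analysis may differ.

```python
import numpy as np

def spectral_flatness(signal):
    """Wiener entropy: geometric / arithmetic mean of the power spectrum (0..1)."""
    spec = np.abs(np.fft.rfft(np.asarray(signal, dtype=float))) ** 2
    spec = spec[spec > 0]                      # drop numerically empty bins
    geo_mean = np.exp(np.mean(np.log(spec)))
    return float(geo_mean / np.mean(spec))
```

A pure tone concentrates power in one bin, driving the geometric mean (and hence the flatness) toward zero, while broadband noise spreads power across bins and keeps the ratio high.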
Umberto Castiello Bruno L. Giordano Chiara Begliomini Caterina Ansuini Massimo Grassi 《PloS one》2010,5(8)
Background
Most research on the roles of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually guided hand movements.
Methodology/Principal Findings
We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects covered with different materials. Then, in a further session, the pre-recorded contact sounds were delivered to participants via headphones before or after the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound and the contact sound elicited by the to-be-grasped stimulus corresponded; (ii) incongruent, in which the presented contact sound was different from that generated by the stimulus upon contact; (iii) control, in which a synthetic sound, not associated with a real event, was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and the lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound.
Conclusions/Significance
Altogether these findings offer a substantial contribution to the current debate about the type of object representations elicited by auditory stimuli and about the multisensory nature of the sensorimotor transformations underlying action.
14.
E. I. Knudsen 《Journal of comparative physiology. A, Neuroethology, sensory, neural, and behavioral physiology》1999,185(4):305-321
Sound localization is a computational process that requires the central nervous system to measure various auditory cues and then associate particular cue values with appropriate locations in space. Behavioral experiments show that barn owls learn to associate values of cues with locations in space based on experience. The capacity for experience-driven changes in sound localization behavior is particularly great during a sensitive period that lasts until the approach of adulthood. Neurophysiological techniques have been used to determine underlying sites of plasticity in the auditory space-processing pathway. The external nucleus of the inferior colliculus (ICX), where a map of auditory space is synthesized, is a major site of plasticity. Experience during the sensitive period can cause large-scale, adaptive changes in the tuning of ICX neurons for sound localization cues. Large-scale physiological changes are accompanied by anatomical remodeling of afferent axons to the ICX. Changes in the tuning of ICX neurons for cue values involve two stages: (1) the instructed acquisition of neuronal responses to novel cue values and (2) the elimination of responses to inappropriate cue values. Newly acquired neuronal responses depend differentially on NMDA receptor currents for their expression. A model is presented that can account for this adaptive plasticity in terms of plausible cellular mechanisms.
Accepted: 17 April 1999
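The two-stage tuning change described above can be caricatured as a toy weight-update rule (an illustrative sketch, not the model presented in the paper): a unit's response weights over discrete cue values first acquire a response at the instructed novel cue value, then lose the response at the now-inappropriate old value.

```python
import numpy as np

def adapt_tuning(weights, old_cue, new_cue, rate=0.2, steps=20):
    """Shift a unit's cue tuning from old_cue to new_cue in two concurrent stages."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(steps):
        w[new_cue] += rate * (1.0 - w[new_cue])  # stage 1: instructed acquisition
        w[old_cue] -= rate * w[old_cue]          # stage 2: response elimination
    return w

# An ICX-like unit initially tuned to cue value 3 is instructed toward value 7.
tuning = np.zeros(10)
tuning[3] = 1.0
adapted = adapt_tuning(tuning, old_cue=3, new_cue=7)
```

The two update terms run in parallel here for brevity; the paper's account distinguishes their time courses and their differential dependence on NMDA receptor currents.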
15.
Souta Hidaka Yuko Manaka Wataru Teramoto Yoichi Sugita Ryota Miyauchi Jiro Gyoba Yôiti Suzuki Yukio Iwaya 《PloS one》2009,4(12)
Background
Vision may provide the most salient information with regard to stimulus motion, but audition also provides important cues. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli, or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.
Methodology/Principal Findings
A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.
Conclusions/Significance
We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.
16.
Background
Data on sex-specific differences in sound production, acoustic behaviour, and hearing abilities in fishes are rare. Representatives of numerous catfish families are known to produce sounds in agonistic contexts (intraspecific aggression and interspecific disturbance situations) using their pectoral fins. The present study investigates differences in agonistic behaviour, sound production, and hearing abilities in males and females of a callichthyid catfish.
Methodology/Principal Findings
Eight males and nine females of the armoured catfish Megalechis thoracata were investigated. Agonistic behaviour displayed during male-male and female-female dyadic contests was recorded along with the sounds emitted; sound characteristics were analysed and hearing thresholds measured using the auditory evoked potential (AEP) recording technique. Male pectoral spines were on average 1.7-fold longer than those of same-sized females. Visual and acoustic threat displays differed between the sexes. Males produced low-frequency harmonic barks at longer distances and thumps at close distances, whereas females emitted broad-band pulsed crackles when close to each other. Female aggressive sounds were significantly shorter than those of males (167 ms versus 219 to 240 ms) and of higher dominant frequency (562 Hz versus 132 to 403 Hz). Sound duration and sound level were positively correlated with body and pectoral spine length, but dominant frequency was inversely correlated only with spine length. Both sexes showed a similar U-shaped hearing curve with lowest thresholds between 0.2 and 1 kHz and a drop in sensitivity above 1 kHz. The main energies of sounds were located at the most sensitive frequencies.
Conclusions/Significance
Current data demonstrate that both male and female M. thoracata produce aggressive sounds, but the behavioural contexts and sound characteristics differ between the sexes. The sexes do not differ in hearing, but it remains to be clarified whether this is a general pattern among fish. This is the first study to describe sex-specific differences in agonistic behaviour in fishes.
17.
Martin Singheiser Dennis T. T. Plachta Sandra Brill Peter Bremen Robert F. van der Willigen Hermann Wagner 《Journal of comparative physiology. A, Neuroethology, sensory, neural, and behavioral physiology》2010,196(3):227-240
We studied the influence of frequency on sound localization in free-flying barn owls by quantifying aspects of their target-approaching behavior to a distant sound source during ongoing auditory stimulation. In the baseline condition with a stimulus covering most of the owls' hearing range (1–10 kHz), all owls landed within a radius of 20 cm from the loudspeaker in more than 80% of the cases, and localization along the azimuth was more accurate than localization in elevation. When the stimulus contained only high frequencies (>5 kHz), no changes in striking behavior were observed. But when only frequencies from 1 to 5 kHz were presented, localization accuracy and precision decreased. In a second step we tested whether a further border exists at 2.5 kHz as suggested by optimality models. When we compared striking behavior for a stimulus having energy from 2.5 to 5 kHz with a stimulus having energy between 1 and 2.5 kHz, no consistent differences in striking behavior were observed. It was further found that pre-takeoff latency was longer for the latter stimulus than for baseline and that center frequency was a better predictor of landing precision than stimulus bandwidth. These data fit well with what is known from head-turning studies and from neurophysiology.
18.
Background
Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.
Methodology/Principal Findings
Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers of illusory motion perception. When the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days. The effect was well observed at the retinal position that had previously been exposed to apparent motion with tone bursts.
Conclusions/Significance
The present results indicate that a strong association between sound sequence and visual motion is easily formed within a short period and that, after forming the association, sounds are able to trigger visual motion perception for a static visual object.
19.
Objective
Interaural level difference (ILD) is the difference in sound pressure level (SPL) between the two ears and is one of the key physical cues used by the auditory system in sound localization. Our current understanding of ILD encoding has come primarily from invasive studies of individual structures, which have implicated subcortical structures such as the cochlear nucleus (CN), superior olivary complex (SOC), lateral lemniscus (LL), and inferior colliculus (IC). Noninvasive brain imaging enables studying ILD processing in multiple structures simultaneously.
Methods
In this study, blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) is used for the first time to measure changes in the hemodynamic responses in the adult Sprague-Dawley rat subcortex during binaural stimulation with different ILDs.
Results and Significance
Consistent responses are observed in the CN, SOC, LL, and IC in both hemispheres. Voxel-by-voxel analysis of the change of the response amplitude with ILD indicates statistically significant ILD dependence in the dorsal LL, the IC, and a region containing parts of the SOC and LL. For all three regions, the larger-amplitude response is located in the hemisphere contralateral to the higher-SPL stimulus. These findings are supported by region-of-interest analysis. fMRI shows that ILD dependence occurs in both hemispheres and at multiple subcortical levels of the auditory system. This study is the first step towards future studies examining subcortical binaural processing and sound localization in animal models of hearing.
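The ILD definition above reduces to a one-line computation: the level difference in decibels between the RMS amplitudes at the two ears (a simplification that ignores frequency-specific head-shadow effects).

```python
import numpy as np

def ild_db(left, right):
    """Interaural level difference in dB; positive when the left ear is louder."""
    def rms(x):
        return np.sqrt(np.mean(np.square(np.asarray(x, dtype=float))))
    return float(20.0 * np.log10(rms(left) / rms(right)))
```

For example, a signal whose amplitude is doubled at one ear yields an ILD of 20·log10(2) ≈ 6 dB toward that ear.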