Similar Articles
20 similar articles found (search time: 15 ms).
1.
Six adult Java sparrows were trained to discriminate between consonant and dissonant chords consisting of three tones. In the consonance group, the perching response was reinforced when a consonance was presented but not when a dissonance was presented; in the dissonance group, the contingency was reversed. Four of the six birds learned the discrimination and were given three tests. In the first test, novel consonances and novel dissonances were presented, and all birds maintained the discrimination. In the second test, the first inversions of the training chords were presented, and the discriminative behavior was not well maintained. In the third test, novel dissonances consisting of tones with different intervals were presented; birds trained to perch for dissonance performed well, whereas those trained to perch for consonance did not. In summary, Java sparrows were able to discriminate between consonances and dissonances and generalized to novel chords, but they did not treat inverted chords as equivalent to the training consonances and dissonances.
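As background for the stimuli: in Western tuning, consonance and dissonance in three-tone chords track the simplicity of the tones' frequency ratios, and a first inversion moves the lowest tone up an octave. A minimal Python sketch; the specific chords and tuning used in the study are not given here, so these values are purely illustrative:

```python
def chord_freqs(root_hz, semitone_steps):
    """Equal-tempered chord frequencies from a root and semitone offsets."""
    return [root_hz * 2 ** (s / 12) for s in semitone_steps]

consonant = chord_freqs(440.0, [0, 4, 7])   # major triad, approx. 4:5:6 ratios
dissonant = chord_freqs(440.0, [0, 1, 6])   # cluster with minor 2nd + tritone
# First inversion of the triad: raise the lowest tone by one octave.
consonant_inverted = sorted(chord_freqs(440.0, [12, 4, 7]))
```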

2.
It has been shown that humans prefer consonant sounds from the early stages of development. From a comparative psychological perspective, although previous studies have shown that birds and monkeys can discriminate between consonant and dissonant sounds, it remains unclear whether nonhumans spontaneously prefer consonant music over dissonant music as humans do. We report here that a five-month-old human-raised chimpanzee (Pan troglodytes) preferred consonant music: with the aid of our computerized setup, the infant consistently chose to produce consonant versions of music for longer durations than dissonant versions. This result suggests that the preference for consonance is not unique to humans and supports the hypothesis that a major basis of musical appreciation has evolutionary origins.

3.
A subset of neurons in the cochlear nucleus (CN) of the auditory brainstem has the ability to enhance the auditory nerve's temporal representation of stimulating sounds. These neurons reside in the ventral region of the CN (VCN) and are usually known as highly synchronized, or high-sync, neurons. Most published reports about the existence and properties of high-sync neurons are based on recordings performed on a VCN output tract—not the VCN itself—of cats. In other species, comprehensive studies detailing the properties of high-sync neurons, or even acknowledging their existence, are missing. Examination of the responses of a population of VCN neurons in chinchillas revealed that a subset of those neurons have temporal properties similar to high-sync neurons in the cat. Phase locking and entrainment—the ability of a neuron to fire action potentials at a certain stimulus phase and at almost every stimulus period, respectively—have similar maximum values in cats and chinchillas. Ranges of characteristic frequencies for high-sync neurons in chinchillas and cats extend up to 600 and 1000 Hz, respectively. Enhancement of temporal processing relative to auditory nerve fibers (ANFs), which has been shown previously in cats using tonal and white-noise stimuli, is also demonstrated here in the responses of VCN neurons to synthetic and spoken vowel sounds. The large amount of phase locking displayed by some VCN neurons is accompanied by a deterioration in the spectral representation of the stimuli (tones or vowels): high-sync neurons exhibit a greater distortion in their responses to tones or vowels than do other types of VCN neurons and auditory nerve fibers. Standard deviations of first-spike latency measured in responses of high-sync neurons are lower than similar values measured in ANFs' responses. This might indicate a role of high-sync neurons in tasks beyond sound localization.
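Phase locking and entrainment, as defined above, have standard quantifications: vector strength (the synchronization index) and the fraction of stimulus periods containing at least one spike. A minimal sketch of both measures in Python (variable names are ours, not the paper's):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Synchronization index: 1 = perfect phase locking, 0 = none."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

def entrainment(spike_times, freq, duration):
    """Fraction of stimulus periods that contain at least one spike."""
    n_periods = int(np.floor(duration * freq))
    cycles = np.floor(np.asarray(spike_times) * freq).astype(int)
    cycles = np.unique(cycles[(cycles >= 0) & (cycles < n_periods)])
    return cycles.size / n_periods
```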

4.
Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach–avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans.

5.
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match the specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude, both parameters known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
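The first-layer encoding can be pictured as sparse inference with a complex-valued dictionary, where each coefficient's magnitude carries amplitude and its angle carries phase. Below is a minimal ISTA-style sketch of that idea in Python; the paper's actual learning and inference procedures are not specified here, so treat the algorithm and all parameters as illustrative:

```python
import numpy as np

def complex_soft_threshold(z, t):
    """Shrink the magnitude of complex coefficients, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def sparse_encode(x, Phi, lam=0.1, n_iter=200):
    """ISTA for x ~ Phi @ a with a complex dictionary Phi and sparse a.

    |a_k| plays the role of amplitude and angle(a_k) the role of phase,
    mirroring the first-layer code described in the abstract.
    """
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = Phi.conj().T @ (Phi @ a - x)  # gradient of 0.5*||x - Phi a||^2
        a = complex_soft_threshold(a - grad / L, lam / L)
    return a
```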

6.
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance—the capacity to make sense of complex ‘auditory scenes’ is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the ‘stochastic figure-ground’ stimulus, that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail-party paradigm as a ‘game’ featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographic characteristics (n = 5148). Despite the differences in paradigms and experimental settings, the target-detection performance of the app users was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential of smartphone apps for capturing robust large-scale auditory behavioral data from normal healthy volunteers, an approach that can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
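A stochastic figure-ground stimulus is typically built as a rapid sequence of random tone chords in which a small set of frequencies repeats coherently across chords, forming the "figure" a listener must detect against the random background. A minimal generator in Python; all parameter values are illustrative rather than those of the published stimulus:

```python
import numpy as np

def sfg_stimulus(n_chords=40, chord_dur=0.05, fs=16000,
                 n_background=10, n_figure=4, figure_span=(15, 25), seed=0):
    """Random tone chords with a repeating coherent 'figure' embedded."""
    rng = np.random.default_rng(seed)
    pool = np.geomspace(200.0, 7000.0, 60)          # candidate frequencies (Hz)
    figure_freqs = rng.choice(pool, n_figure, replace=False)
    t = np.arange(int(chord_dur * fs)) / fs
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(pool, n_background, replace=False))
        if figure_span[0] <= i < figure_span[1]:    # coherent figure chords
            freqs += list(figure_freqs)
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / len(freqs))
    return np.concatenate(chords)
```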

7.
Time-reversal symmetry breaking is a key feature of many classes of natural sounds, originating in the physics of sound production. While attention has been paid to the response of the auditory system to “natural stimuli,” very few psychophysical tests have been performed. We conducted psychophysical measurements of time-frequency acuity for stylized representations of “natural”-like notes (sharp attack, long decay) and the time-reversed versions of these notes (long attack, sharp decay). Our results demonstrate significantly greater precision, arising from enhanced temporal acuity, for such sounds over their time-reversed versions, without a corresponding decrease in frequency acuity. These data argue against models of auditory processing that include tradeoffs between temporal and frequency acuity, at least in the range of notes tested, and suggest the existence of statistical priors for notes with a sharp attack and a long decay. We are additionally able to calculate a minimal theoretical bound on the sophistication of the nonlinearities in auditory processing. We find that, among the best-studied classes of nonlinear time-frequency representations, only matching pursuit, spectral derivatives, and reassigned spectrograms are able to satisfy this criterion.
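The acuity tradeoff at issue is usually formalized as the Gabor uncertainty relation for linear time-frequency analysis: for any signal, the product of its temporal and spectral standard deviations is bounded below,

```latex
\Delta t \, \Delta f \;\ge\; \frac{1}{4\pi} .
```

Listeners whose joint acuity beats this bound cannot be described by a purely linear, spectrogram-like front end; this is the sense in which such results constrain the nonlinearities in auditory processing.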

8.
Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain.
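One way to read "a decoding approach to assess the noise tolerance of the neuronal population code" is to train a simple classifier on responses to clean sounds and test it on responses to the same sounds in noise. A nearest-centroid sketch in Python; the paper's actual decoder is not specified here:

```python
import numpy as np

def noise_tolerance(resp_clean, resp_noisy, labels):
    """Decode stimulus identity of noisy trials from clean-trial templates.

    resp_clean, resp_noisy: (n_trials, n_neurons) spike-count arrays for the
    same stimuli without / with added background noise; labels: (n_trials,).
    Returns decoding accuracy on the noisy trials.
    """
    classes = np.unique(labels)
    centroids = np.stack([resp_clean[labels == c].mean(axis=0) for c in classes])
    dists = ((resp_noisy[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    predicted = classes[np.argmin(dists, axis=1)]
    return np.mean(predicted == labels)
```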

9.
Cats were stimulated with tones and with natural sounds selected from the normal acoustic environment of the animal. Neural activity evoked by the natural sounds and tones was recorded in the cochlear nucleus and in the medial geniculate body. The set of biological sounds proved effective in influencing the activity of single cells at both levels of the auditory system. At the level of the cochlear nucleus, the response of a neuron to a natural sound stimulus could be understood reasonably well on the basis of the structure of the spectrograms of the natural sounds and the unit's responses to tones. At the level of the medial geniculate body, analysis with tones did not provide sufficient information to explain the responses to natural sounds; at this level, the use of an ensemble of natural sound stimuli allows the investigation of neural properties that are not revealed by analysis with simple artificial stimuli. Guidelines for the construction of an ensemble of complex natural sound stimuli, based on the ecology and ethology of the animal under investigation, are discussed. This stimulus ensemble is defined as the Acoustic Biotope.

10.
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs and “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sound (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie, both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and with the complexity of multi-source auditory signals. The blocked analyses associated 3D viewing with activation of the dorsal and lateral occipital cortex and the superior parietal lobule, while surround sound activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed effects of absolute disparity in dorsal occipital and posterior parietal cortices, and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sound was associated with activity in specific sub-regions of S/MTG, even after accounting for changes in sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of the complex spatial aspects characterizing such life-like stimuli.

11.
The auditory system creates a neuronal representation of the acoustic world based on spectral and temporal cues present at the listener's ears, including cues that potentially signal the locations of sounds. Discrimination of concurrent sounds from multiple sources is especially challenging. The current study is part of an effort to better understand the neuronal mechanisms governing this process, which has been termed “auditory scene analysis”. In particular, we are interested in spatial release from masking, by which spatial cues can segregate signals from other competing sounds, thereby overcoming the tendency of overlapping spectra and/or common temporal envelopes to fuse signals with maskers. We studied detection of pulsed tones in free-field conditions in the presence of concurrent multi-tone non-speech maskers. In “energetic” masking conditions, in which the frequencies of maskers fell within the ±1/3-octave band containing the signal, spatial release from masking at low frequencies (∼600 Hz) was found to be about 10 dB. In contrast, negligible spatial release from energetic masking was seen at high frequencies (∼4000 Hz). We observed robust spatial release from masking in broadband “informational” masking conditions, in which listeners could confuse signal with masker even though there was no spectral overlap. Substantial spatial release was observed in conditions in which the onsets of the signal and all masker components were synchronized, and spatial release was even greater under asynchronous conditions. Spatial cues limited to high frequencies (>1500 Hz), which could have included interaural level differences and the better-ear effect, produced only limited improvement in signal detection. Substantially greater improvement was seen for low-frequency sounds, for which interaural time differences are the dominant spatial cue.
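Spatial release from masking, as used above, is conventionally quantified as the drop in detection threshold when the masker is moved away from the signal:

```latex
\mathrm{SRM} \;=\; T_{\text{co-located}} \;-\; T_{\text{separated}} \quad \text{(dB)} ,
```

so the ∼10 dB figure at 600 Hz means that spatially separating signal and masker lowered the detection threshold by about 10 dB relative to the co-located configuration.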

12.
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or in other relatively simple tuning properties such as AM tuning. However, the functional implications for the neural processing underlying ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse, optimal representation of species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.
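Claims about "sparse coding" and "less redundant" ensemble codes are typically backed by a sparseness index computed over responses to a stimulus ensemble. A common choice is the Treves–Rolls measure, sketched below in Python (the paper's exact metric is not stated here):

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness of a vector of (non-negative) firing rates.

    Returns a value in [0, 1]; values near 1 mean highly sparse
    (a few stimuli or neurons carry most of the activity).
    """
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = (r.mean() ** 2) / np.mean(r ** 2)   # activity ratio in (0, 1]
    return (1 - a) / (1 - 1 / n)
```

Applied across stimuli for one neuron this gives lifetime sparseness; applied across neurons for one stimulus, population sparseness.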

13.
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds and of 1020-Hz target tones occasionally replacing the 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, and with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism for sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at a longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds.
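The masker design above, noise with a spectral notch around the target frequency, can be sketched by subtracting a band-passed copy of the noise from itself. A minimal Python example (parameter values are illustrative, not those of the study):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notched_noise_with_tone(fs=44100, dur=1.0, notch_center=1000.0,
                            notch_width=200.0, tone_freq=1000.0,
                            tone_dur=0.3, tone_level=0.1, seed=0):
    """White noise with a spectral notch and a tone centered in the notch."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(fs * dur))
    lo = notch_center - notch_width / 2
    hi = notch_center + notch_width / 2
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    notched = noise - sosfiltfilt(sos, noise)   # carve out the notch
    t = np.arange(int(fs * tone_dur)) / fs
    tone = tone_level * np.sin(2 * np.pi * tone_freq * t)
    out = 0.05 * notched
    start = int((dur - tone_dur) / 2 * fs)      # embed the tone at the center
    out[start:start + tone.size] += tone
    return out
```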

14.
To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting 6 s, were played in random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) presented 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as the N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds imply that, once the cue has indicated the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category throughout the anticipation period.

15.
The fish auditory system encodes important acoustic stimuli used in social communication, but few studies have examined the response properties of central auditory neurons to natural signals. We determined the features and responses of single hindbrain and midbrain auditory neurons to tone bursts and to playbacks of conspecific sounds in the soniferous damselfish, Abudefduf abdominalis. Most auditory neurons were either silent or had slow, irregular resting discharge rates of <20 spikes s−1. The average best frequency of neurons to tone stimuli was ~130 Hz but ranged from 80 to 400 Hz, with strong phase-locking. This low-frequency sensitivity matches the frequency band of natural sounds. Auditory neurons were also modulated by playbacks of conspecific sounds, with thresholds similar to those for 100-Hz tones but lower than those for tones at other test frequencies. Thresholds of neurons to natural sounds were lower in the midbrain than in the hindbrain. This is the first study to compare the response properties of auditory neurons to both simple tones and complex stimuli in the brain of a recently derived soniferous perciform that lacks accessory auditory structures. These data demonstrate that the auditory fish brain is most sensitive to the frequency and temporal components of the natural pulsed sounds that provide important signals for conspecific communication.

16.

Background

Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown.

Methodology/Principal Findings

Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons.

Conclusions/Significance

These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
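An opponent two-channel rate code of the kind described above can be captured with two broadly tuned sigmoid populations whose difference is inverted to read out azimuth. A toy Python sketch (the tuning shape and slope are assumptions, not values fitted to the MEG data):

```python
import numpy as np

def population_rate(azimuth_deg, preferred_sign, slope=0.08):
    """Sigmoid rate of a hemifield-tuned population; steepest at the midline."""
    return 1.0 / (1.0 + np.exp(-preferred_sign * slope * np.asarray(azimuth_deg)))

def decode_azimuth(rate_right, rate_left, slope=0.08):
    """Invert the channel difference: r_R - r_L = tanh(slope * az / 2)."""
    d = np.clip(rate_right - rate_left, -0.999, 0.999)
    return 2.0 * np.arctanh(d) / slope

az = np.linspace(-90, 90, 7)
decoded = decode_azimuth(population_rate(az, +1), population_rate(az, -1))
# decoded closely recovers az across the frontal field
```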

17.
Pulse-resonance sounds play an important role in animal communication and auditory object recognition, yet very little is known about the cortical representation of this class of sounds. In this study we shed light on one simple aspect: how well the firing rate of cortical neurons resolves the resonant (“formant”) frequencies of vowel-like pulse-resonance sounds. We recorded neural responses in the primary auditory cortex (A1) of anesthetized rats to two-formant pulse-resonance sounds and estimated their formant-resolving power using a statistical kernel smoothing method that takes into account the natural variability of cortical responses. While formant-tuning functions were diverse in structure across different penetrations, most were sensitive to changes in formant frequency, with a frequency resolution comparable to that reported for rat cochlear filters.
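Estimating a tuning function by kernel smoothing amounts to a weighted average of trial responses, with weights falling off with distance in formant frequency. A Nadaraya–Watson sketch in Python; the paper's kernel and bandwidth-selection details are not given here:

```python
import numpy as np

def kernel_tuning_curve(formant_hz, spike_counts, grid_hz, bandwidth=100.0):
    """Nadaraya-Watson estimate of mean firing rate vs. formant frequency."""
    f = np.asarray(formant_hz)[None, :]        # (1, n_trials)
    g = np.asarray(grid_hz)[:, None]           # (n_grid, 1)
    w = np.exp(-0.5 * ((g - f) / bandwidth) ** 2)   # Gaussian kernel weights
    return (w @ np.asarray(spike_counts)) / w.sum(axis=1)
```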

18.
In vivo intracellular responses to auditory stimuli revealed that, in a particular population of cells in the ventral nucleus of the lateral lemniscus (VNLL) of rats, fast inhibition occurred before the first action potential. These experimental data were used to constrain a leaky integrate-and-fire (LIF) model of the neurons in this circuit. The post-synaptic potentials of the VNLL cell population were characterized using a method of triggered averaging. The analysis suggested that these inhibited VNLL cells produce action potentials in response to a particular magnitude of the rate of change of their membrane potential, and the LIF model was modified to incorporate this distinctive action-potential production mechanism. The model was then used to explore the response of the population of VNLL cells to simple speech-like sounds, consisting of a tone modulated by a sawtooth with exponential decays, similar to the glottal pulses that produce the repeated impulses seen in vocalizations. The harmonic component of the sound was found to be enhanced in the VNLL cell population relative to a population of auditory nerve fibers, because the broadband onset noise, also termed spectral splatter, was suppressed by the fast onset inhibition. This mechanism has the potential to greatly improve the clarity of the representation of the harmonic content of certain kinds of natural sounds.
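The modified spike mechanism can be sketched as a leaky integrator that fires when dV/dt, rather than V, crosses a threshold. A toy Python version under stated assumptions (the dynamics, threshold values, and refractory handling are ours, not the fitted model):

```python
import numpy as np

def dvdt_lif(current, dt=1e-4, tau=0.01, r_m=1.0,
             dvdt_threshold=2.0, v_reset=0.0, refrac=1e-3):
    """Leaky integrator that spikes on steep depolarization (dV/dt criterion)."""
    v, last_spike = 0.0, -np.inf
    vs, spikes = [], []
    for i, inp in enumerate(current):
        t = i * dt
        dvdt = (-v + r_m * inp) / tau       # leaky membrane dynamics
        if t - last_spike > refrac:
            v += dvdt * dt
            if dvdt > dvdt_threshold:       # fire on rate of change, not level
                spikes.append(t)
                v, last_spike = v_reset, t
        vs.append(v)
    return np.array(vs), np.array(spikes)
```

Driving such a unit with a sawtooth-modulated tone makes the rate-of-change criterion favor the periodic glottal-pulse-like transients over sustained input, which is the intuition behind the splatter suppression described above.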

19.
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production.
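The trial-sorting step described above, ranking utterances by how far their formants moved from the previous utterance, is simple to make concrete. A Python sketch (the array layout and the use of Euclidean distance over F1/F2 are assumptions):

```python
import numpy as np

def trial_to_trial_deviation(formants):
    """Distance of each utterance's (F1, F2) from the previous utterance.

    formants: (n_trials, 2) array of first/second formant frequencies.
    Returns one deviation per trial (the first trial gets NaN), suitable
    for splitting trials into least- vs. most-variable utterances.
    """
    f = np.asarray(formants, dtype=float)
    dev = np.full(f.shape[0], np.nan)
    dev[1:] = np.linalg.norm(np.diff(f, axis=0), axis=1)
    return dev
```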

20.
Participants were asked to respond to a sequence of visual targets while listening to a well-known lullaby in which one of the notes was occasionally exchanged for a pattern deviant. Experiment 1 found that deviants capture attention as a function of the pitch difference between the deviant and the replaced/expected tone. However, when the pitch difference between the expected tone and the deviant tone is held constant, a violation of the direction of pitch change across tones can also capture attention (Experiment 2). Moreover, in more complex auditory environments, wherein it is difficult to build a coherent neural model of the sound environment from which expectations are formed, deviations can still capture attention, but it appears to matter less whether this is a violation of a specific stimulus or a violation of the current direction of change (Experiment 3). The results support the expectation-violation account of auditory distraction and suggest that at least two different kinds of expectation can be violated: one appears to be bound to a specific stimulus, while the other seems to be bound to a more global cross-stimulus rule, such as the direction of change across a sequence of preceding sound events. In complex sound environments, factors like the base-rate probability of tones may become the driving mechanism of attentional capture, rather than violated expectations.
