Similar literature
20 similar documents found.
1.
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people’s general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs. bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.
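As a rough illustration of how the strength of the fundamental in an FFR can be quantified, the Python/NumPy sketch below measures the FFT amplitude at F0 relative to the surrounding noise floor. The fundamental frequency, sampling rate, window and noise band are illustrative assumptions, not the parameters used in the study above.

import numpy as np

def f0_strength(ffr, fs, f0=100.0, noise_band=(70.0, 130.0)):
    """Spectral amplitude of an FFR at the fundamental f0, expressed in dB
    relative to the median amplitude of a surrounding noise band.  All
    parameter values here are illustrative, not those of the study."""
    spec = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), 1.0 / fs)
    peak = spec[np.argmin(np.abs(freqs - f0))]               # amplitude at f0
    band = (freqs >= noise_band[0]) & (freqs <= noise_band[1])
    return 20.0 * np.log10(peak / np.median(spec[band]))     # dB re: noise floor

# Example: a noisy 100 Hz response, 200 ms long, sampled at 16 kHz
fs = 16000
t = np.arange(0, 0.2, 1.0 / fs)
ffr = np.sin(2 * np.pi * 100.0 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print("F0 strength: %.1f dB" % f0_strength(ffr, fs))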

2.

Background

Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset.

Methodology

Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as “virtual pitch”) changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone-complex stimuli differed only in the value of the missing fundamental component.
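A minimal Python/NumPy sketch of how such a four-tone complex might be synthesized is given below. The rule used for the inner tones (1200 + f0 and 2400 − f0 Hz), the duration and the equal component amplitudes are assumptions for illustration, not the exact stimulus parameters of the study.

import numpy as np

def four_tone_complex(f0, fs=44100, dur=0.4):
    """Four-tone complex: outer tones fixed at 1200 and 2400 Hz, inner tones
    at 1200 + f0 and 2400 - f0 Hz.  For f0 values that divide 1200 Hz evenly,
    all four tones are harmonics of f0, so f0 itself is the missing
    fundamental ("virtual pitch").  Parameters are illustrative only."""
    t = np.arange(int(dur * fs)) / fs
    freqs = [1200.0, 1200.0 + f0, 2400.0 - f0, 2400.0]
    tone = sum(np.sin(2.0 * np.pi * f * t) for f in freqs)
    return tone / np.max(np.abs(tone))          # normalise peak amplitude

stimulus = four_tone_complex(f0=100.0)          # components at 1200, 1300, 2300, 2400 Hz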

Principal Findings

We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies.

Conclusions

Our findings suggest that listeners are reconstructing the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that the inferential pitch is perceived in early auditory cortex.

3.
BACKGROUND: Subitizing involves recognition mechanisms that allow effortless enumeration of up to four visual objects; however, despite ample resolution, experimental data suggest that only one pitch can be reliably enumerated. This may be due to the grouping of tones according to harmonic relationships by recognition mechanisms prior to fine pitch processing. Poorer frequency resolution of auditory information available to recognition mechanisms may lead to unrelated tones being grouped, resulting in underestimation of pitch number. METHODS, RESULTS AND CONCLUSION: We tested whether pitch enumeration is better for chords of full harmonic complex tones, where grouping errors are less likely, than for complexes with fewer and less accurately tuned harmonics. Chords of low familiarity were used to mitigate the possibility that participants would recognize the chord itself and simply recall the number of pitches. We found that the accuracy of pitch enumeration was lower overall than for the visual system, and that underestimation of pitch number increased for stimuli containing fewer harmonics. We conclude that harmonically related tones are first grouped at the poorer frequency resolution of the auditory nerve, leading to poor enumeration of more than one pitch.

4.
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as “combination-sensitivity,” are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to “recognize” the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
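The sketch below is a toy, rate-based version of this idea rather than the authors' published network: a single unit receiving tonotopic input strengthens, through a Hebbian rule, the weights of channels that are repeatedly co-active during harmonic complexes, and afterwards responds to the harmonics even when the fundamental channel is silent (Python/NumPy; all sizes and constants are arbitrary assumptions).

import numpy as np

rng = np.random.default_rng(0)
n_channels, f0_channel, lr = 40, 4, 0.05     # toy tonotopic axis; channel 4 = fundamental

def harmonic_input(f0_idx, include_f0=True, n_harm=5):
    """Activity pattern of a harmonic complex on the tonotopic input channels."""
    x = np.zeros(n_channels)
    for k in range(1 if include_f0 else 2, n_harm + 1):
        if k * f0_idx < n_channels:
            x[k * f0_idx] = 1.0
    return x

w = rng.uniform(0.0, 0.1, n_channels)        # feedforward weights to one unit
for _ in range(200):                         # Hebbian learning on complete complexes
    x = harmonic_input(f0_channel, include_f0=True)
    y = w @ x                                # unit response
    w += lr * y * x                          # Hebbian update: co-active inputs strengthened
    w /= np.linalg.norm(w)                   # normalisation keeps the weights bounded

# After learning, the unit is tuned to the harmonic combination and still responds
# when the fundamental itself is missing from the input.
print(w @ harmonic_input(f0_channel, include_f0=False))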

5.
Wile D, Balaban E. PLoS ONE 2007; 2(4): e369
Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, "missing fundamental" pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.

6.
The human auditory system is sensitive in detecting “mistuned” components in a harmonic complex, which do not match the frequency pattern defined by the fundamental frequency of the complex. Depending on the frequency configuration, the mistuned component may be perceptually segregated from the complex and may be heard as a separate tone. In the context of a masking experiment, mistuning a single component decreases its masked threshold. In this study we propose to quantify the ability to detect a single component for fixed amounts of mistuning by adaptively varying its level. This method produces masking release by mistuning that can be compared to other masking release effects. Detection thresholds were obtained for various frequency configurations where the target component was resolved or unresolved in the auditory system. The results from 6 normal-hearing listeners show a significant decrease of masked thresholds between harmonic and mistuned conditions in all configurations and provide evidence for the employment of different detection strategies for resolved and unresolved components. The data suggest that across-frequency processing is involved in the release from masking. The results emphasize the ability of this method to assess integrative aspects of pitch and harmonicity perception.

7.
Timbre and pitch are two independent perceptual qualities of sounds closely related to the spectral envelope and to the fundamental frequency of periodic temporal envelope fluctuations, respectively. To a first approximation, the spectral and temporal tuning properties of neurons in the auditory midbrain of various animals are independent, with layouts of these tuning properties in approximately orthogonal tonotopic and periodotopic maps. For the first time we demonstrate by means of magnetoencephalography a periodotopic organization of the human auditory cortex and analyse its spatial relationship to the tonotopic organization by using a range of stimuli with different temporal envelope fluctuations and spectra and a magnetometer providing high spatial resolution. We demonstrate an orthogonal arrangement of tonotopic and periodotopic gradients. Our results are in line with the organization of such maps in animals and closely match the perceptual orthogonality of timbre and pitch in humans.

8.
Discharges in cochlear nerve fibers evoked by low-frequency sinusoidal acoustic stimuli are phase-locked (synchronized) to the stimulus waveform. Excitation and suppression regions of single units were explored using a stimulus composed of either a fixed-intensity test tone at the characteristic frequency, a variable-intensity interfering tone with a simple integer frequency relation to the characteristic frequency, or both. Compound period histograms were constructed from period histograms in response to normal and reversed polarity stimuli. Discharge patterns were characterized by Fourier components of the histogram envelopes. The two stimulus frequencies constituted the principal harmonics in the histogram envelopes and their combination accounted for observed rate changes. Suppression of the test tone harmonic as a function of interfering tone intensity was always seen; rate suppression was not. The harmonic was typically suppressed by 20–30 dB compared to the value for the test tone alone and often reached the 40–60 dB resolution limit of the experiment. Suppression plots were nearly linear on a power scale with an average slope of −0.8. The onset of suppression occurred for an interfering tone 9 dB greater on average than the test tone intensity. Information transfer through the peripheral system was described by the ratio of the principal harmonic amplitudes versus the ratio of the intensities of the two stimulus tones. These plots were nearly linear on a power scale with an average slope of 0.9. Neither the onset of suppression nor the slopes of the harmonic plots displayed strong dependence on characteristic frequency or interfering tone frequency. These features of harmonic behavior, however, are closely related to system nonlinearity. Comparison of measured harmonics to the predictions of two phenomenological models suggests the presence of complex nonlinear transformations in the peripheral auditory system.
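The analysis step of characterizing a period histogram by the Fourier components at the two stimulus frequencies can be sketched as follows (Python/NumPy). The toy histogram, its modulation depths and the normalisation are illustrative assumptions; the construction of the compound histogram from normal- and reversed-polarity responses is not reproduced here.

import numpy as np

def harmonic_amplitude(hist, period, freq):
    """Normalised amplitude of the Fourier component of a period histogram at
    `freq`.  `hist` holds spike counts in equally spaced bins spanning one
    `period`; `freq` should be an integer multiple of 1/period."""
    n = len(hist)
    t = (np.arange(n) + 0.5) * period / n                   # bin centres
    c = np.sum(hist * np.exp(-2j * np.pi * freq * t))
    return 2.0 * np.abs(c) / np.sum(hist)                   # relative to the mean rate

# Toy histogram locked to a 1 kHz test tone (depth 0.8) plus a weaker 2 kHz
# interfering tone (depth 0.3); the recovered amplitudes match those depths.
period = 1e-3
t = (np.arange(64) + 0.5) * period / 64
hist = 50.0 * (1.0 + 0.8 * np.cos(2 * np.pi * 1000.0 * t)
                   + 0.3 * np.cos(2 * np.pi * 2000.0 * t))
print(harmonic_amplitude(hist, period, 1000.0),             # ~0.8
      harmonic_amplitude(hist, period, 2000.0))             # ~0.3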

9.
A number of nonhuman primates produce vocalizations with time-varying harmonic structure. Relatively little is known about whether such spectral information plays a role in call type classification. We address this problem by utilizing acoustic analyses and playback experiments on cottontop tamarins’ combination long call, a species-typical vocalization with a characteristic harmonic structure. Specifically, we used habituation-discrimination experiments to test whether particular frequency components, as well as the relationship between components, have an effect on the perception and classification of long calls. In Condition 1, we show that tamarins classify natural and synthetic exemplars of the long call as perceptually similar, thereby allowing us to use synthetics to manipulate components of this signal precisely. In subsequent conditions, we tested the perceptual salience and discriminability of long calls in which we deleted (1) the second harmonic, (2) the fundamental frequency, or (3) all frequencies above the fundamental; we also examined the effects of frequency mistuning by shifting the second harmonic by 1000 Hz. Following habituation to unmanipulated long calls, tamarins did not respond (transferred habituation) to long calls with either a missing fundamental frequency or the second harmonic, but responded (discriminated) to long calls with the upper harmonics eliminated or with the second harmonic mistuned. These studies reveal the importance of harmonic structure in tamarin perception, and highlight the advantages of using synthetic signals for understanding how particular acoustic features drive perceptual classification in nonhuman primates.

10.
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
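The premise that spike timing carries the fundamental period even when no fiber is driven at F0 can be illustrated with the short Python/NumPy sketch below, which pools all-order interspike intervals from three simulated fibers phase-locked to the 3rd, 4th and 5th harmonics of an absent 200 Hz fundamental. It is not a reimplementation of the paper's Hebbian learning model, and all rates and durations are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)
fs, dur, f0 = 20000, 10.0, 200.0                     # illustrative values only
t = np.arange(int(fs * dur)) / fs

def isi_histogram(spikes, lo=0.002, hi=0.01, bins=160):
    """All-order interspike-interval histogram for lags between lo and hi seconds."""
    d = np.subtract.outer(spikes, spikes)
    d = d[(d > lo) & (d < hi)]
    return np.histogram(d, bins=bins, range=(lo, hi))[0]

pooled = np.zeros(160)
for k in (3, 4, 5):                                  # fibres locked to 600, 800, 1000 Hz
    rate = 500.0 * np.maximum(np.sin(2 * np.pi * k * f0 * t), 0.0)
    spikes = t[rng.random(t.size) < rate / fs]       # inhomogeneous Poisson spike train
    pooled += isi_histogram(spikes)

# The pooled interval histogram peaks at the missing-fundamental period (~5 ms),
# although none of the fibres carries energy at 200 Hz.
centres = 0.002 + (np.arange(160) + 0.5) * (0.01 - 0.002) / 160
print("pooled peak near %.2f ms" % (1000 * centres[np.argmax(pooled)]))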

11.
Metabolic response coefficients describe how variables in metabolic systems, like steady state concentrations, respond to small changes of kinetic parameters. To extend this concept to temporal parameter fluctuations, we define spectral response coefficients that relate Fourier components of concentrations and fluxes to Fourier components of the underlying parameters. It is also straightforward to generalize other concepts from metabolic control theory, such as control coefficients with their summation and connectivity theorems. The first-order response coefficients describe forced oscillations caused by small harmonic oscillations of single parameters: they depend on the driving frequency and comprise the phases and amplitudes of the concentrations and fluxes. Close to a Hopf bifurcation, resonance can occur: as an example, we study the spectral densities of concentration fluctuations arising from the stochastic nature of chemical reactions. Second-order response coefficients describe how perturbations of different frequencies interact by mode coupling, yielding higher harmonics in the metabolic response. The temporal response to small parameter fluctuations can be computed by Fourier synthesis. For a model of glycolysis, this approximation remains fairly accurate even for large relative fluctuations of the parameters.
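In LaTeX form, the first-order relation can be sketched as follows; the notation (stoichiometric matrix N, rate vector v, elasticity matrices E_s and E_p) is assumed here for illustration and may differ from the authors' formalism.

% A small harmonic perturbation of a single parameter,
%   p_m(t) = p_m^0 + \Delta p_m\, e^{i\omega t},
% drives forced concentration oscillations
%   s_i(t) \approx s_i^0 + R^{s_i}_{p_m}(\omega)\, \Delta p_m\, e^{i\omega t}.
% Linearising ds/dt = N v(s,p) around the steady state gives the
% frequency-dependent first-order response matrix
\[
  R^{s}_{p}(\omega) = \bigl(i\omega I - N E_s\bigr)^{-1} N E_p ,
  \qquad
  E_s = \left.\frac{\partial v}{\partial s}\right|_{s^0}, \quad
  E_p = \left.\frac{\partial v}{\partial p}\right|_{p^0},
\]
% which reduces to the usual static concentration response coefficients of
% metabolic control analysis at \omega = 0.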

12.
A new functional model was developed for timbre differences of steady harmonic complex tones. One part of the model takes account of the ear's ability to separate the first six to eight partials of a complex tone. The calculation of timbre differences is based on the availability of resolvable partials and on a measure of their similarity in timbre quality, which decreases with increasing frequency distance. This part of the model explains timbre differences between complex tones with identical spectral envelopes but different frequencies and/or amplitudes of their constituent components. Another part of the model uses the concept of sharpness, which was found suitable for accounting for timbre differences between complex tones with different spectral envelopes.

13.
Denham S. BioSystems 2005; 79(1-3): 199-206
Iterated ripple noise (IRN) is a broadband noise with temporal regularities, which can give rise to a perceptible pitch. Since the perceptual pitch-to-noise ratio of these stimuli can be altered without substantially altering their spectral content, they have been useful in exploring the role of temporal processing in pitch perception [Yost, W.A., 1996. Pitch strength of iterated rippled noise, J. Acoust. Soc. Am. 100 (5), 3329-3335; Patterson, R.D., Handel, S., Yost, W.A., Datta, A.J., 1996. The relative strength of the tone and noise components in iterated rippled noise, J. Acoust. Soc. Am. 100 (5), 3286-3294]. A generalised IRN algorithm is presented, in which multiple time-varying temporal correlations can be defined. The resulting time-varying pitches are perceptually very salient. It is also possible to segregate and track multiple simultaneous time-varying pitches in these stimuli. Temporal auditory models have previously been shown to account for the perception of IRNs with static delays [Patterson, R.D., Handel, S., Yost, W.A., Datta, A.J., 1996. The relative strength of the tone and noise components in iterated rippled noise, J. Acoust. Soc. Am. 100 (5), 3286-3294]. Here we show that some simple modifications to one such model [Meddis, R., Hewitt, M.J., 1991. Virtual pitch and phase sensitivity of a computer model of the auditory periphery I. Pitch identification, J. Acoust. Soc. Am. 89, 2866-2882] allow it to track moving correlations, and also improve its performance in response to static correlations.
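A static delay-and-add ("add same") IRN generator can be sketched in a few lines of Python/NumPy; the generalised algorithm in the paper additionally allows the delay (and hence the pitch) to vary over time, which is not reproduced here. The delay, gain and iteration count below are illustrative.

import numpy as np

def iterated_ripple_noise(dur=1.0, fs=44100, delay=0.005, gain=1.0, n_iter=8, seed=0):
    """Static IRN via a delay-and-add ("add same") network: at each iteration the
    signal is delayed by `delay` seconds, scaled by `gain` and added to itself.
    The result is broadband noise with a pitch near 1/delay (here ~200 Hz)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(dur * fs))
    shift = int(round(delay * fs))
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(shift), x[:-shift]])
        x = x + gain * delayed
    return x / np.max(np.abs(x))

irn = iterated_ripple_noise()        # broadband noise with a salient ~200 Hz pitch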

14.
Frequency resolution and spectral filtering in the cat primary auditory cortex (AI) were mapped by extracellular recordings of tone responses in white noise of various bandwidths. Single-tone excitatory tuning curves, critical bandwidths, and critical ratios were determined as a function of neuronal characteristic frequency and tone level. Single-tone excitatory tuning curves are inadequate measures of frequency resolution and spectral filtering in the AI, because their shapes (in most neurons) deviated substantially from the shapes of “tuning curves for complex sound analysis”, the curves determined by the band limits of the critical bandwidths. Perceptual characteristics of spectral filtering (intensity independence and frequency dependence) were found in average critical bandwidths of neurons from the central and ventral AI. The highest frequency resolution (smallest critical bandwidths) reached by neurons in the central and ventral AI equaled the psychophysical frequency resolution. The dorsal AI is special, since most neurons there had response properties incompatible with psychophysical features of frequency resolution. Perceptual characteristics of critical ratios were not found in the average neuronal responses in any area of the AI. It seems that spectral integration in the way proposed to be the basis for the perception of tones in noise is not present at the level of the AI.

15.
The ‘3-second rule’ has been proposed based on miscellaneous observations that a time period of around 3 seconds constitutes the fundamental unit of time related to the neuro-cognitive machinery in normal humans. The aim of this paper was to investigate temporal processing in patients with spinocerebellar ataxia type 6 (SCA6) and SCA31, pure cerebellar types of spinocerebellar degeneration, using a synchronized tapping task. Seventeen SCA patients (11 SCA6, 6 SCA31) and 17 normal age-matched volunteers participated. The task required subjects to tap a keyboard in synchrony with sequences of auditory stimuli presented at fixed interstimulus intervals (ISIs) between 200 and 4800 ms. In this task, the subjects required non-motor components to estimate the time of the forthcoming tone in addition to motor components to tap. Normal subjects synchronized their taps to the presented tones at shorter ISIs, whereas as the ISI became longer, the normal subjects displayed greater latency between the tone and the tapping (transition zone). After the transition zone, normal subjects pressed the button delayed relative to the tone. On the other hand, SCA patients could not synchronize their tapping with the tone even at shorter ISIs, and they began pressing the button delayed relative to the tone at shorter ISIs than normal subjects did. The earliest ISI at which delayed tapping appeared after the transition zone was 4800 ms in normal subjects but 1800 ms in SCA patients. The span of temporal integration in SCA patients is shortened compared to that in normal subjects. This could represent non-motor cerebellar dysfunction in SCA patients.

16.
Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with ‘streaming’ and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

17.
Chronic tinnitus seems to be caused by reduced inhibition among frequency selective neurons in the auditory cortex. One possibility to reduce tinnitus perception is to induce inhibition onto over-activated neurons representing the tinnitus frequency via tailor-made notched music (TMNM). Since lateral inhibition is modifiable by spectral energy contrasts, the question arises if the effects of inhibition-induced plasticity can be enhanced by introducing increased spectral energy contrasts (ISEC) in TMNM. Eighteen participants suffering from chronic tonal tinnitus, pseudo-randomly assigned to either a classical TMNM or an ISEC-TMNM group, listened to notched music for three hours on three consecutive days. The music was filtered for both groups by introducing a notch filter centered at the individual tinnitus frequency. For the ISEC-TMNM group, a frequency bandwidth of 3/8 octaves on each side of the notch was additionally amplified by about 20 dB. Before and after each music exposure, participants rated their subjectively perceived tinnitus loudness on a visual analog scale. During the magnetoencephalographic recordings, participants were stimulated with either a reference tone of 500 Hz or a test tone with a carrier frequency representing the individual tinnitus pitch. Perceived tinnitus loudness was significantly reduced after TMNM exposure, though TMNM type did not influence the loudness ratings. Tinnitus-related neural activity in the N1m time window and in the so-called tinnitus network comprising temporal, parietal and frontal regions was reduced after TMNM exposure. The ISEC-TMNM group revealed even enhanced inhibition-induced plasticity in a temporal and a frontal cortical area. Overall, inhibition of tinnitus-related neural activity could be strengthened in people affected with tinnitus by increasing spectral energy contrast in TMNM, confirming the concepts of inhibition-induced plasticity via TMNM and spectral energy contrasts.
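A frequency-domain sketch of the two filter conditions is given below in Python/NumPy: a one-octave notch centred on the tinnitus frequency, with the ISEC variant additionally boosting the 3/8-octave bands flanking the notch by about 20 dB. The notch width, the brick-wall single-block FFT filtering and the example tinnitus frequency are simplifying assumptions, not the authors' exact signal processing.

import numpy as np

def notched_music(signal, fs, f_tinnitus, notch_octaves=1.0,
                  boost_octaves=0.375, boost_db=0.0):
    """Remove energy in a notch centred on the tinnitus frequency; optionally
    amplify the flanking edge bands (boost_db > 0 gives the ISEC variant)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    lo = f_tinnitus * 2.0 ** (-notch_octaves / 2.0)
    hi = f_tinnitus * 2.0 ** (notch_octaves / 2.0)
    spec[(freqs >= lo) & (freqs <= hi)] = 0.0                           # the notch
    gain = 10.0 ** (boost_db / 20.0)
    spec[(freqs >= lo * 2.0 ** -boost_octaves) & (freqs < lo)] *= gain  # lower edge band
    spec[(freqs > hi) & (freqs <= hi * 2.0 ** boost_octaves)] *= gain   # upper edge band
    return np.fft.irfft(spec, n=len(signal))

# Example: one second of noise standing in for music, tinnitus pitch assumed at 4 kHz
fs = 44100
music = np.random.default_rng(0).standard_normal(fs)
tmnm = notched_music(music, fs, 4000.0)                      # classical TMNM
isec_tmnm = notched_music(music, fs, 4000.0, boost_db=20.0)  # ISEC-TMNM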

18.
Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.

19.
Amplitude-modulated processes can be formally presented as a product of two or more sinusoids. This makes it possible to study them by means of analysis of multiplicative phenomena using the Fast Fourier Transform (FFT). To assess the contribution of amplitude EEG modulation to the dynamics of electrical activity of the human brain, the results of the FFT of simulated signals obtained by multiplication of oscillatory processes with different parameters were compared with the results of the FFT of a single EEG recording from a subject at rest. We studied the temporal dynamics of spectral components calculated with different spectral resolution under similar conditions for real and simulated signals. An attempt was made to analyze and interpret the amplitude-modulated EEG processes using the additive properties of the FFT. It was shown that processes of amplitude modulation are present in electrical brain activity and determine the synchronism of changes in time in the majority of frequency components of the EEG spectrum. The presence of amplitude modulation in bioelectrical processes is of a fundamental nature, since it is a direct reflection of the control, synchronization, regulation, and intersystem interaction in the nervous and other body systems. The study of this modulation gives a clue to the mechanisms of these processes.
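The core point, that a multiplicative (amplitude-modulated) process shows up in the FFT as sidebands around the carrier rather than as power at the modulating frequency itself, can be checked with a few lines of Python/NumPy; the 10 Hz "alpha-like" carrier, 0.1 Hz modulator and epoch length are illustrative assumptions.

import numpy as np

fs, dur = 250, 100.0                              # 250 Hz sampling, 100 s epoch
t = np.arange(int(fs * dur)) / fs
signal = np.cos(2 * np.pi * 0.1 * t) * np.cos(2 * np.pi * 10.0 * t)   # product of sinusoids

spec = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peaks = np.sort(freqs[np.argsort(spec)[-2:]])
print(peaks)                                      # -> [ 9.9 10.1] Hz: sidebands at carrier +/- modulator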

20.
A new method and application is proposed to characterize intensity and pitch of human heart sounds and murmurs. Using recorded heart sounds from the library of one of the authors, a visual map of heart sound energy was established. Both normal and abnormal heart sound recordings were studied. Representation is based on Wigner-Ville joint time-frequency transformations. The proposed methodology separates acoustic contributions of cardiac events simultaneously in pitch, time and energy. The resolution accuracy is superior to that of other existing spectrogram methods. The characteristic energy signature of the innocent heart murmur in a child with the S3 sound is presented. It allows clear detection of S1, S2 and S3 sounds, S2 split, systolic murmur, and the intensity of these components. The original signal, heart sound power change with time, time-averaged frequency, energy density spectra and instantaneous variations of power and frequency/pitch with time are presented. These data allow full quantitative characterization of heart sounds and murmurs. High accuracy in both time and pitch resolution is demonstrated. Resulting visual images have a self-referencing quality, whereby individual features and their changes become immediately obvious.
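A basic discrete (pseudo) Wigner-Ville distribution can be sketched in Python/NumPy/SciPy as below; it uses the analytic signal and displays magnitudes for simplicity, and the toy two-burst "heart sound", sampling rate and window length are illustrative assumptions rather than the authors' implementation.

import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, fs, half_window=128):
    """Discrete pseudo Wigner-Ville distribution: for every time sample the
    instantaneous autocorrelation over +/- half_window lags is Fourier
    transformed.  Rows = time samples, columns = the returned frequency axis."""
    z = hilbert(x)                                          # analytic signal
    n, m = len(z), np.arange(-half_window, half_window)
    wvd = np.zeros((n, 2 * half_window))
    for i in range(n):
        ip, im = i + m, i - m
        ok = (ip >= 0) & (ip < n) & (im >= 0) & (im < n)
        kernel = np.zeros(2 * half_window, dtype=complex)
        kernel[ok] = z[ip[ok]] * np.conj(z[im[ok]])         # instantaneous autocorrelation
        wvd[i] = np.abs(np.fft.fft(kernel))
    freqs = np.arange(2 * half_window) * fs / (4.0 * half_window)
    return wvd, freqs

# Toy "heart sound": two short bursts at different pitches (values illustrative)
fs = 2000
t = np.arange(0, 1.0, 1.0 / fs)
burst = lambda f, t0: np.sin(2 * np.pi * f * t) * np.exp(-((t - t0) / 0.02) ** 2)
tfmap, freqs = wigner_ville(burst(50.0, 0.2) + burst(120.0, 0.6), fs)
# tfmap concentrates energy near (0.2 s, 50 Hz) and (0.6 s, 120 Hz)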
