Similar Documents
1.

Background

One common criterion for classifying electrophysiological brain responses is based on the distinction between transient (i.e. event-related potentials, ERPs) and steady-state responses (SSRs). The generation of SSRs is usually attributed to the entrainment of a neural rhythm driven by the stimulus train. However, a more parsimonious account suggests that SSRs might result from the linear addition of the transient responses elicited by each stimulus. This study aimed to investigate this possibility.

Methodology/Principal Findings

We recorded brain potentials elicited by a checkerboard stimulus reversing at different rates. We modeled SSRs by sequentially shifting and linearly adding rate-specific ERPs. Our results show a strong resemblance between recorded and synthetic SSRs, supporting the superposition hypothesis. Furthermore, we did not find evidence of entrainment of a neural oscillation at the stimulation frequency.
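The shift-and-add procedure described here can be sketched numerically. This is a minimal illustration with a synthetic damped-oscillation ERP; the waveform shape, stimulation rate, and sampling parameters are illustrative assumptions, not the study's data:

```python
import numpy as np

def synthesize_ssr(erp, rate_hz, n_stimuli, fs=1000):
    """Model an SSR as the linear superposition of identical transient ERPs,
    one per stimulus, each shifted by the stimulation period."""
    period = int(fs / rate_hz)                     # samples between stimuli
    ssr = np.zeros(period * n_stimuli + len(erp))
    for k in range(n_stimuli):                     # shift and linearly add
        ssr[k * period : k * period + len(erp)] += erp
    return ssr

# Toy transient ERP: a damped 10 Hz oscillation (illustrative shape only)
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
erp = np.exp(-t / 0.1) * np.sin(2 * np.pi * 10 * t)

# Synthetic steady-state response to 8 Hz stimulation
ssr = synthesize_ssr(erp, rate_hz=8, n_stimuli=20, fs=fs)
```

Once the onset transients have summed in, the synthetic trace repeats at the stimulation period, which is the superposition account's signature.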

Conclusions/Significance

This study provides evidence that visual SSRs can be explained as a superposition of transient ERPs. These findings have critical implications for our current understanding of brain oscillations. Contrary to the idea that neural networks can be tuned to a wide range of frequencies, our findings rather suggest that the oscillatory response of a given neural network is constrained within its natural frequency range.

2.
Besides the intensity and frequency of an auditory stimulus, the length of time that precedes the stimulation is an important factor that determines the magnitude of early evoked neural responses in the auditory cortex. Here we used chinchillas to demonstrate that the length of the silent period before the presentation of an auditory stimulus is a critical factor that modifies late oscillatory responses in the auditory cortex. We used tetrodes to record local-field potential (LFP) signals from the left auditory cortex of ten animals while they were stimulated with clicks, tones or noise bursts delivered at different rates and intensity levels. We found that the incidence of oscillatory activity in the auditory cortex of anesthetized chinchillas is dependent on the period of silence before stimulation and on the intensity of the auditory stimulus. In 62.5% of the recording sites we found stimulus-related oscillations at around 8-20 Hz. Stimulus-induced oscillations were largest and most consistent when stimuli were preceded by 5 s of silence, and they were absent when preceded by less than 500 ms of silence. These results demonstrate that the period of silence preceding the stimulus presentation and the stimulus intensity are critical factors for the presence of these oscillations.

3.
Ward LM, MacLean SE, Kirschner A. PLoS ONE. 2010;5(12):e14371.
Neural synchronization is a mechanism whereby functionally specific brain regions establish transient networks for perception, cognition, and action. Direct addition of weak noise (fast random fluctuations) to various neural systems enhances synchronization through the mechanism of stochastic resonance (SR). Moreover, SR also occurs in human perception, cognition, and action. Perception, cognition, and action are closely correlated with, and may depend upon, synchronized oscillations within specialized brain networks. We tested the hypothesis that SR-mediated neural synchronization occurs within and between functionally relevant brain areas and thus could be responsible for behavioral SR. We measured the 40-Hz transient response of the human auditory cortex to brief pure tones. This response arises when the ongoing, random-phase, 40-Hz activity of a group of tuned neurons in the auditory cortex becomes synchronized in response to the onset of an above-threshold sound at its "preferred" frequency. We presented a stream of near-threshold standard sounds in various levels of added broadband noise and measured subjects' 40-Hz response to the standards in a deviant-detection paradigm using high-density EEG. We used independent component analysis and dipole fitting to locate neural sources of the 40-Hz response in bilateral auditory cortex, left posterior cingulate cortex and left superior frontal gyrus. We found that added noise enhanced the 40-Hz response in all these areas. Moreover, added noise also increased the synchronization between these regions in alpha and gamma frequency bands both during and after the 40-Hz response. Our results demonstrate neural SR in several functionally specific brain regions, including areas not traditionally thought to contribute to the auditory 40-Hz transient response. In addition, we demonstrated SR in the synchronization between these brain regions. 
Thus, both intra- and inter-regional synchronization of neural activity are facilitated by the addition of moderate amounts of random noise. Because the noise levels in the brain fluctuate with arousal system activity, particularly across sleep-wake cycles, optimal neural noise levels, and thus SR, could be involved in optimizing the formation of task-relevant brain networks at several scales under normal conditions.
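The behavioral face of stochastic resonance can be illustrated with a toy hard-threshold detector: a subthreshold signal is missed without noise, detection performance peaks at a moderate noise level, and declines when noise dominates. The amplitudes, threshold, and noise levels below are arbitrary illustrative choices, not values from the study:

```python
import numpy as np

def hit_minus_fa(noise_sd, signal_amp=0.8, threshold=1.0, n=20000, seed=0):
    """Hit rate minus false-alarm rate of a hard-threshold detector for a
    subthreshold signal (amplitude < threshold) at a given noise level."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_sd, n)
    hits = np.mean(signal_amp + noise >= threshold)  # signal-present trials
    fas = np.mean(noise >= threshold)                # signal-absent trials
    return hits - fas

# Near-zero performance with almost no noise, a peak at moderate noise,
# and a decline when noise dominates: the stochastic resonance signature.
low, mid, high = (hit_minus_fa(sd) for sd in (0.01, 0.5, 5.0))
```

The inverted-U dependence on noise level is what distinguishes SR from a simple noise-helps-everything account.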

4.
Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or “entrained”) to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research—including neural entrainment and tACS—and reveal endogenous neural oscillations as a key underlying principle for speech perception.

Just as a child on a swing continues to move after the pushing stops, this study reveals similar entrained rhythmic echoes in brain activity after hearing speech and electrical brain stimulation; perturbation with tACS shows that these brain oscillations help listeners to understand speech.

5.
In this paper we use empirical loudness modeling to explore a perceptual sub-category of the dynamic range problem of auditory neuroscience. Humans are able to reliably report perceived intensity (loudness), and discriminate fine intensity differences, over a very large dynamic range. It is usually assumed that loudness and intensity change detection operate upon the same neural signal, and that intensity change detection may be predicted from loudness data and vice versa. However, while loudness grows as intensity is increased, improvement in intensity discrimination performance does not follow the same trend, and so dynamic range estimations of the underlying neural signal from loudness data contradict estimations based on intensity just-noticeable difference (JND) data. In order to account for this apparent paradox we draw on recent advances in auditory neuroscience. We test the hypothesis that a central model, featuring central adaptation to the mean loudness level and operating on the detection of maximum central-loudness rate of change, can account for the paradoxical data. We use numerical optimization to find adaptation parameters that fit data for continuous-pedestal intensity change detection over a wide dynamic range. The optimized model is tested on a selection of equivalent pseudo-continuous intensity change detection data. We also report a supplementary experiment which confirms the modeling assumption that the detection process may be modeled as rate-of-change. Data are obtained from a listening test (N = 10) using linearly ramped increment-decrement envelopes applied to pseudo-continuous noise with an overall level of 33 dB SPL. Increments with half-ramp durations between 5 and 50,000 ms are used. The intensity JND is shown to increase towards long duration ramps (p < 10⁻⁶).
From the modeling, the following central adaptation parameters are derived: central dynamic range of 0.215 sones, 95% central normalization, and a central loudness JND constant of 5.5×10⁻⁵ sones per ms. Through our findings, we argue that loudness reflects peripheral neural coding, and the intensity JND reflects central neural coding.

6.
The ability to discriminate species and recognize individuals is crucial for reproductive success and/or survival in most animals. However, the temporal order and neural localization of these decision-making processes has remained unclear. In this study, event-related potentials (ERPs) were measured in the telencephalon, diencephalon, and mesencephalon of the music frog Nidirana daunchina. These ERPs were elicited by calls from 1 group of heterospecifics (recorded from a sympatric anuran species) and 2 groups of conspecifics that differed in their fundamental frequencies. In terms of the polarity and position within the ERP waveform, auditory ERPs generally consist of 4 main components that link to selective attention (N1), stimulus evaluation (P2), identification (N2), and classification (P3). These occur around 100, 200, 250, and 300 ms after stimulus onset, respectively. Our results show that the N1 amplitudes differed significantly between the heterospecific and conspecific calls, but not between the 2 groups of conspecific calls that differed in fundamental frequency. On the other hand, the N2 amplitudes were significantly different between the 2 groups of conspecific calls, suggesting that the music frogs discriminated the species first, followed by individual identification, since N1 and N2 relate to selective attention and stimuli identification, respectively. Moreover, the P2 amplitudes evoked in females were significantly greater than those in males, indicating the existence of sexual dimorphism in auditory discrimination. In addition, both the N1 amplitudes in the left diencephalon and the P2 amplitudes in the left telencephalon were greater than in other brain areas, suggesting left hemispheric dominance in auditory perception. 
Taken together, our results support the hypothesis that species discrimination and identification of individual characteristics are accomplished sequentially, and that auditory perception exhibits differences between sexes and in spatial dominance.

7.

Background

There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are, thus, largely unknown.

Methodology/Principal Findings

This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance

These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.

8.
The nature of the neural codes for pitch and loudness, two basic auditory attributes, has been a key question in neuroscience for over a century. A currently widespread view is that sound intensity (subjectively, loudness) is encoded in spike rates, whereas sound frequency (subjectively, pitch) is encoded in precise spike timing. Here, using information-theoretic analyses, we show that the spike rates of a population of virtual neural units with frequency-tuning and spike-count correlation characteristics similar to those measured in the primary auditory cortex of primates contain sufficient statistical information to account for the smallest frequency-discrimination thresholds measured in human listeners. The same population, and the same spike-rate code, can also account for the intensity-discrimination thresholds of humans. These results demonstrate the viability of a unified rate-based cortical population code for both sound frequency (pitch) and sound intensity (loudness), and thus suggest a resolution to a long-standing puzzle in auditory neuroscience.
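A toy version of the rate-code argument asks how discriminable two nearby frequencies are from the spike counts of a simulated population. The tuning-curve shape, bandwidth, firing rates, and counting window below are arbitrary assumptions for illustration, not the study's fitted parameters, and spike-count correlations are ignored (independent Poisson units):

```python
import numpy as np

def population_dprime(f1, f2, centers, peak_rate=30.0, width=0.1, window=0.1):
    """Discriminability (pooled d') of two frequencies from the spike counts
    of an independent-Poisson population with log-Gaussian frequency tuning."""
    def mean_counts(f):
        tuning = np.exp(-0.5 * (np.log2(f / centers) / width) ** 2)
        return peak_rate * tuning * window            # expected spike counts
    m1, m2 = mean_counts(f1), mean_counts(f2)
    var = 0.5 * (m1 + m2)                             # Poisson: variance = mean
    ok = var > 0                                      # drop silent units
    return np.sqrt(np.sum((m1[ok] - m2[ok]) ** 2 / var[ok]))

centers = np.logspace(np.log10(200), np.log10(3200), 200)  # tuning centers, Hz
d_small = population_dprime(1000.0, 1002.0, centers)  # 0.2% frequency step
d_large = population_dprime(1000.0, 1020.0, centers)  # 2% frequency step
```

The frequency step at which the pooled d' reaches about 1 is the model population's discrimination threshold; the same count statistics can be queried for intensity steps.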

9.
Performing actions with sensory consequences modifies physiological and behavioral responses relative to otherwise identical sensory input perceived in a passive manner. It is assumed that such modifications occur through an efference copy sent from motor cortex to sensory regions during performance of voluntary actions. In the auditory domain most behavioral studies report attenuated perceived loudness of self-generated auditory action-consequences. However, several recent behavioral and physiological studies report enhanced responses to such consequences. Here we manipulated the intensity of self-generated and externally-generated sounds and examined the type of perceptual modification (enhancement vs. attenuation) reported by healthy human subjects. We found that when the intensity of self-generated sounds was low, perceived loudness was enhanced. Conversely, when the intensity of self-generated sounds was high, perceived loudness was attenuated. These results might reconcile some of the apparent discrepancies in the reported literature and suggest that efference copies can adapt perception according to the differential sensory context of voluntary actions.

10.
Auditory middle latency and steady-state responses (MLR/SSRs) were recorded in normal infants (aged 3 weeks to 28 months) and adults. SSR amplitudes were maximum using stimulus presentation rates near 40 Hz in adults. By contrast, the infant data showed no consistent amplitude maximum across the rates tested (9–59 Hz). With the exception of the brain-stem response wave V to MLR Na deflection, MLR components in infants' responses to 10.85 Hz clicks did not show any consistent pattern. To investigate the hypothesis that the 40 Hz SSR is derived from overlapping of the 10 Hz MLR components, 43.4 Hz SSRs were synthesized from the responses recorded at 10.85 Hz and compared with those recorded at 43.4 Hz. The predictive accuracy of the synthesized 43.4 Hz SSRs was significantly better in adults than in infants. The results of these studies indicate the presence of large age-related differences in the auditory MLR and SSR, and in the relationship between the two responses.

11.
In urethane-anesthetized cats, frequency domain analysis was used to explore the mechanisms of differential responses of inferior cardiac (CN), vertebral (VN), and renal (RN) sympathetic nerves to electrical stimulation of a discrete region of the medullary raphe (0-2 mm caudal to the obex). Raphe stimulation in baroreceptor-denervated cats at frequencies (7-12 Hz) that entrained the 10-Hz rhythm in nerve activity decreased CN and RN activities but increased VN activity. The reductions in CN and RN discharges were associated with decreased low-frequency (…)

12.
Drifting gratings can modulate the activity of visual neurons at the temporal frequency of the stimulus. In order to characterize the temporal frequency modulation in the cat’s ascending tectofugal visual system, we recorded the activity of single neurons in the superior colliculus, the suprageniculate nucleus, and the anterior ectosylvian cortex during visual stimulation with drifting sine-wave gratings. In response to such stimuli, neurons in each structure showed an increase in firing rate and/or oscillatory modulated firing at the temporal frequency of the stimulus (phase sensitivity). To obtain a more complete characterization of the neural responses in the spatiotemporal frequency domain, we analyzed the mean firing rate and the strength of the oscillatory modulations measured by the standardized Fourier component of the response at the temporal frequency of the stimulus. We show that the spatiotemporal stimulus parameters that elicit maximal oscillations often differ from those that elicit a maximal discharge rate. Furthermore, the temporal modulation and discharge-rate spectral receptive fields often do not overlap, suggesting that the detection range for visual stimuli provided jointly by modulated and unmodulated response components is larger than the range provided by one response component alone.

13.
Chronic tinnitus, the continuous perception of a phantom sound, is a highly prevalent audiological symptom. A promising approach for the treatment of tinnitus is repetitive transcranial magnetic stimulation (rTMS) as this directly affects tinnitus-related brain activity. Several studies indeed show tinnitus relief after rTMS, however effects are moderate and vary strongly across patients. This may be due to a lack of knowledge regarding how rTMS affects oscillatory activity in tinnitus sufferers and which modulations are associated with tinnitus relief. In the present study we examined the effects of five different stimulation protocols (including sham) by measuring tinnitus loudness and tinnitus-related brain activity with Magnetoencephalography before and after rTMS. Changes in oscillatory activity were analysed for the stimulated auditory cortex as well as for the entire brain regarding certain frequency bands of interest (delta, theta, alpha, gamma). In line with the literature the effects of rTMS on tinnitus loudness varied strongly across patients. This variability was also reflected in the rTMS effects on oscillatory activity. Importantly, strong reductions in tinnitus loudness were associated with increases in alpha power in the stimulated auditory cortex, while an unspecific decrease in gamma and alpha power, particularly in left frontal regions, was linked to an increase in tinnitus loudness. The identification of alpha power increase as main correlate for tinnitus reduction sheds further light on the pathophysiology of tinnitus. This will hopefully stimulate the development of more effective therapy approaches.
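Band power of the kind analysed here (alpha, gamma) can be computed from a raw trace with a plain FFT. A minimal sketch on a synthetic signal; the sampling rate, band limits, and the planted 10 Hz alpha component are illustrative assumptions, not the study's MEG pipeline:

```python
import numpy as np

def band_power(x, fs, band):
    """Mean spectral power of signal x inside a frequency band [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)      # periodogram
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].mean()

# Toy cortical trace: a 10 Hz (alpha-band) component plus white noise.
rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 4, 1 / fs)
x = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

alpha = band_power(x, fs, (8, 12))    # conventional alpha band
gamma = band_power(x, fs, (30, 45))   # lower gamma band
```

Comparing such band-power estimates before and after stimulation, per region, is the basic operation behind the alpha- and gamma-power effects reported above.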

14.
Summary
Motor neurons innervating the dorsal longitudinal muscles of a noctuid moth receive synaptic input activated by auditory stimuli. Each ear of a noctuid moth contains two auditory neurons that are sensitive to ultrasound (Fig. 1). The ears function as bat detectors. Five pairs of large motor neurons and three pairs of small motor neurons found in the pterothoracic ganglia innervate the dorsal longitudinal (depressor) muscles of the mesothorax (Figs. 2 to 5). In non-flying preparations the motor neurons receive no oscillatory synaptic input. Synaptic input to a cell resulting from ultrasonic stimulation is consistent and can be either depolarizing or hyperpolarizing (Figs. 6 to 9). Quiescent neurons only rarely fire a spike in response to auditory inputs. Motor neurons in flying preparations receive oscillatory synaptic drive from the flight pattern generator and usually fire a spike for each wingbeat cycle (Figs. 10 to 12). Ultrasonic stimulation can provide augmented synaptic drive causing a neuron to fire two spikes per wingbeat cycle, thus increasing flight vigor (Fig. 11). The same stimulus presented on another occasion can also inhibit spiking in the same motor neuron, but the rhythmic drive remains (Fig. 12). Thus, when the flight oscillator is running, auditory stimuli can modulate neuronal responses in different ways depending on some unknown state of the nervous system. Sound intensity is the only stimulus parameter essential for activating the auditory pathway to these motor neurons. The intensity must be sufficient to excite two or three auditory neurons. The significance of these responses in relation to avoidance behavior to bats is discussed.

15.
An important goal of research on the cognitive neuroscience of decision-making is to produce a comprehensive model of behavior that flows from perception to action with all of the intermediate steps defined. To understand the mechanisms of perceptual decision-making for an auditory discrimination experiment, we connected a large-scale, neurobiologically realistic auditory pattern recognition model to a three-layer decision-making model and simulated an auditory delayed match-to-sample (DMS) task. In each trial of our simulated DMS task, pairs of stimuli were compared, each stimulus being a sequence of three frequency-modulated tonal-contour segments, and a "match" or "nonmatch" button was pressed. The model's simulated response times and the different patterns of neural responses (transient, sustained, increasing) are consistent with experimental data, and the simulated neurophysiological activity provides insights into the neural interactions from perception to action in the auditory DMS task.

16.
The phase reset hypothesis states that the phase of an ongoing neural oscillation, reflecting periodic fluctuations in neural activity between states of high and low excitability, can be shifted by the occurrence of a sensory stimulus so that the phase value becomes highly constant across trials (Schroeder et al., 2008). From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing (Senkowski et al., 2008). We follow up on a study in which evidence of phase reset was found using a purely behavioral paradigm, by also including EEG measures. In this paradigm, presentation of an auditory accessory stimulus was followed by a visual target with a stimulus-onset asynchrony (SOA) across a range from 0 to 404 ms in steps of 4 ms. This fine-grained stimulus presentation allowed us to do a spectral analysis on the mean SRT as a function of the SOA, which revealed distinct peak spectral components within a frequency range of 6 to 11 Hz with a mode of 7 Hz. The EEG analysis showed that the auditory stimulus caused a phase reset in 7-Hz brain oscillations in a widespread set of channels. Moreover, there was a significant difference in the average phase at which the visual target stimulus appeared between slow and fast SRT trials. This effect was evident in three different analyses, and occurred primarily in frontal and central electrodes.
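Treating the mean reaction time as a function of SOA as a regularly sampled time series, as described above, makes the spectral analysis a plain FFT. The reaction-time series below is synthetic, with an assumed 7 Hz modulation planted in it; only the sampling grid (0-404 ms in 4 ms steps) follows the paradigm:

```python
import numpy as np

dt = 0.004                                   # 4 ms SOA step
soa = np.arange(0, 0.404 + dt / 2, dt)       # 0..404 ms grid (102 points)
# Toy mean-SRT series: a 7 Hz oscillation riding on a 250 ms baseline
srt = 250.0 + 5.0 * np.sin(2 * np.pi * 7.0 * soa)

detrended = srt - srt.mean()                 # remove the DC (mean SRT) term
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(len(detrended), d=dt)
peak_hz = freqs[np.argmax(spectrum)]         # dominant behavioral frequency
```

With only ~0.4 s of "signal", the frequency resolution is coarse (about 2.5 Hz), so the peak lands on the bin nearest the planted 7 Hz component.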

17.
In natural audio-visual environments, a change in depth is usually correlated with a change in loudness. In the present study, we investigated whether correlating changes in disparity and loudness would provide a functional advantage in binding disparity and sound amplitude in a visual search paradigm. To test this hypothesis, we used a method similar to that used by van der Burg et al. to show that non-spatial transient (square-wave) modulations of loudness can drastically improve spatial visual search for a correlated luminance modulation. We used dynamic random-dot stereogram displays to produce pure disparity modulations. Target and distractors were small disparity-defined squares (either 6 or 10 in total). Each square moved back and forth in depth in front of the background plane at different phases. The target's depth modulation was synchronized with an amplitude-modulated auditory tone. Visual and auditory modulations were always congruent (both sine-wave or square-wave). In a speeded search task, five observers were asked to identify the target as quickly as possible. Results show a significant improvement in visual search times in the square-wave condition compared to the sine condition, suggesting that transient auditory information can efficiently drive visual search in the disparity domain. In a second experiment, participants performed the same task in the absence of sound and showed a clear set-size effect in both modulation conditions. In a third experiment, we correlated the sound with a distractor instead of the target. This produced longer search times, indicating that the correlation is not easily ignored.

18.
Two different groups of normal college students were formed: one (the alpha group) received 10-Hz audiovisual (AV) stimulation for 8 minutes, and the other (beta) group received 22-Hz AV stimulation for 8 minutes. EEG power in the alpha (8-13 Hz) and beta (13-30 Hz) bands was FFT-extracted before, during, and for 24 minutes after stimulation. It was found that baseline (prestimulation) alpha and beta power predict the effects of stimulation, leading to individual differences in responsivity. High-baseline alpha participants showed either no entrainment or relatively prolonged entrainment with alpha stimulation. Low-baseline participants showed transient entrainment. Baseline alpha also predicted the direction of change in alpha with beta stimulation. Baseline beta and alpha predicted beta band response to beta stimulation, which was transient enhancement in some participants, inhibition in others. Some participants showed relatively prolonged beta enhancement with beta stimulation.

19.
This study illuminates processes underlying change detection for different features (detection of pitch versus loudness changes) and different amounts of attentional allocation (automatic versus attentive change detection). For this reason, the influence of important stimulus characteristics (intensity and inter-stimulus interval (ISI)) on these different types of change detection was determined. By varying intensity, it should be clarified whether these processes are mainly sensitive to the informational content of the change or to the total amount of stimulus energy. By varying ISI, it should be determined whether they are differentially sensitive to manipulations of encoding time and/or state of sensory refractoriness. Automatic change detection was indexed by the mismatch negativity (MMN), which is a component of the event-related brain potential (ERP). Attentive change detection was indexed by the N2b and P3 components of the ERP and by behavioral performance. Human subjects were presented with a high-probability standard tone and a low-probability deviant tone, which differed from the standard tone in frequency (Experiment I) or intensity (Experiment II). In separate blocks, the intensities of the standard stimuli were 55 and 70 dB SPL and ISIs were 350 and 950 ms. During the first part of the experiments, subjects were engaged in silent reading, whereas they tried to discriminate deviants from standards in the second part. The MMN elicited by a frequency change was invariant to variations in intensity and ISI, whereas the MMN elicited by an intensity change was significantly modulated by both intensity and ISI. This implies functional differences between the neural traces underlying the frequency-MMN and the intensity-MMN.
In addition, there were larger effects of the ISI on the N2b and P3 amplitudes as compared with the effects on the MMN amplitudes, suggesting stronger capacity limitations for attentive change detection than for automatic change detection.
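The MMN quantification used throughout such designs is the deviant-minus-standard difference wave, averaged over a latency window. A minimal sketch on synthetic averaged ERPs; the waveform shapes, amplitudes, and the 100-250 ms window are illustrative assumptions:

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, t, window=(0.1, 0.25)):
    """Mismatch negativity estimated as the mean of the deviant-minus-standard
    difference wave inside a typical MMN latency window (100-250 ms)."""
    diff = deviant_erp - standard_erp
    mask = (t >= window[0]) & (t <= window[1])
    return diff[mask].mean()

# Toy averaged ERPs (microvolts): the deviant carries an extra negativity
# peaking near 150 ms; the shared slow wave cancels in the difference.
fs = 500
t = np.arange(0, 0.5, 1 / fs)
standard = 2.0 * np.sin(2 * np.pi * 3 * t)
deviant = standard - 3.0 * np.exp(-0.5 * ((t - 0.15) / 0.03) ** 2)

mmn = mmn_amplitude(standard, deviant, t)    # negative value -> MMN present
```

Because the difference wave removes everything common to standard and deviant, the same computation isolates N2b/P3 effects when applied in their respective latency windows.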

20.
Transient and steady-state auditory evoked fields (AEFs) to brief tone pips were recorded over the left hemisphere at 7 different stimulus rates (0.125–39 Hz) using a 37-channel biomagnetometer. Previous observations of transient auditory gamma band response (GBR) activity were replicated. Similar rate characteristics and equivalent dipole locations supported the suggestion that the steady-state response (SSR) at about 40 Hz represents the summation of successive overlapping (10 Hz) middle latency responses (MLRs). On the other hand, differences in equivalent dipole locations and habituation effects suggest that the magnetically recorded GBR is a separate phenomenon which occurs primarily at low stimulus rates and is unrelated to either the magnetically recorded MLR or SSR.
