Similar Articles

20 similar articles found (search time: 31 ms)
1.

Background

Decoding of frequency-modulated (FM) sounds is essential for phoneme identification. This study investigates selectivity to FM direction in the human auditory system.

Methodology/Principal Findings

Magnetoencephalography was recorded in 10 adults during a two-tone adaptation paradigm with a 200-ms interstimulus interval. Stimuli were pairs of tones with either the same or different frequency-modulation direction. To verify that FM repetition effects could not be accounted for by onset and offset properties alone, we additionally assessed responses to pairs of unmodulated tones with either the same or different frequency composition. For the FM sweeps, N1m event-related magnetic field components were found at 103 and 130 ms after onset of the first (S1) and second stimulus (S2), respectively. These were followed by a sustained component starting about 200 ms after S2. The sustained response was significantly stronger for stimulation with the same compared to different FM direction. This effect was not observed for the unmodulated control stimuli.
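
The abstract does not give the sweep frequencies or durations, but the stimulus construction can be sketched. Below is a minimal Python sketch, with illustrative parameters, of a same-versus-different FM pair separated by a 200-ms silent interstimulus interval; a linear sweep's phase is the integral of its instantaneous frequency.

```python
import numpy as np

def fm_sweep(f_start, f_end, duration, sr=44100):
    """Linear FM sweep whose instantaneous frequency moves from f_start to f_end (Hz)."""
    t = np.arange(int(duration * sr)) / sr
    # Phase = 2*pi * integral of f(t), with f(t) = f_start + (f_end - f_start) * t / duration
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase)

sr = 44100
up = fm_sweep(500, 1500, 0.1, sr)    # rising sweep (S1); frequencies are illustrative
down = fm_sweep(1500, 500, 0.1, sr)  # falling sweep (S2, "different direction")
isi = np.zeros(int(0.2 * sr))        # 200-ms silent interstimulus interval
pair = np.concatenate([up, isi, down])
```

A "same direction" pair would simply concatenate two copies of `up` (or `down`) around the same silent gap.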

Conclusions/Significance

Low-level processing of FM sounds was characterized by repetition enhancement for stimulus pairs with the same versus different FM directions. This effect was FM-specific: it did not occur for unmodulated tones. The present findings may reflect specific interactions between frequency separation and temporal distance in the processing of consecutive FM sweeps.

2.

Background

The time course over which listeners reconstruct a missing fundamental component of an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms of stimulus onset.

Methodology

The two outer tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while the two inner tones were systematically varied (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as “virtual pitch”) changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone-complex stimuli diverged only in the value of the missing fundamental component.
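
For harmonic complexes of this kind, the restored fundamental corresponds to the greatest common divisor of the component frequencies. A minimal sketch (the inner-tone values below are illustrative, not the ones used in the study):

```python
from math import gcd
from functools import reduce

def missing_fundamental(freqs_hz):
    """Virtual pitch of a harmonic complex: the GCD of its component frequencies (integer Hz)."""
    return reduce(gcd, freqs_hz)

# Outer tones fixed at 1200 and 2400 Hz; inner tones chosen (illustratively)
# so the restored fundamental changes even though the spectral edges do not.
print(missing_fundamental([1200, 1400, 2200, 2400]))  # → 200
print(missing_fundamental([1200, 1500, 2100, 2400]))  # → 300
```

Holding the edges of the spectrum constant while moving the GCD is what lets the paradigm vary virtual pitch independently of overall spectral range.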

Principal Findings

We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies.

Conclusions

Our findings suggest that listeners reconstruct the inferred pitch by roughly 100 ms after stimulus onset, and they are consistent with previous electrophysiological research suggesting that inferred pitch is perceived in early auditory cortex.

3.

Background

Fetal alcohol spectrum disorders (FASD) are the leading cause of mental retardation in the Western world, and children with FASD show altered somatosensory, auditory and visual processing. There is growing evidence that some of these sensory processing problems may be related to altered cortical maps caused by impaired developmental neuronal plasticity.

Methodology/Principal Findings

Here we show that the primary visual cortex of ferrets exposed to alcohol during the third-trimester equivalent of human gestation has decreased CREB phosphorylation and poor orientation selectivity, as revealed by western blotting, optical imaging of intrinsic signals and single-unit extracellular recording. Treating animals several days after the period of alcohol exposure with a phosphodiesterase type 1 inhibitor (vinpocetine) increased CREB phosphorylation and restored orientation-selectivity columns and neuronal orientation tuning.

Conclusions/Significance

These findings suggest that CREB function is important for the maturation of orientation selectivity and that plasticity enhancement by vinpocetine may play a role in the treatment of sensory problems in FASD.

4.

Background

Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals up to several days after birth.

Methodology/Principal Findings

Here we show that auditory memories can last at least six weeks. Experimental fetuses were given precisely controlled exposure to a descending piano melody twice daily during the 35th, 36th, and 37th weeks of gestation. Six weeks later, we assessed the cardiac responses of 25 exposed infants and 25 naive control infants, while in quiet sleep, to the descending melody and to an ascending control piano melody. The melodies had precisely inverse contours but similar spectra and identical duration, tempo and rhythm, and thus almost identical amplitude envelopes. All infants displayed a significant heart rate change. In exposed infants, the descending melody evoked a cardiac deceleration twice as large as the decelerations elicited by the ascending melody and by both melodies in control infants.
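
The "precisely inverse contours" constraint can be made concrete: each pitch step of one melody is the negation of the corresponding step of the other, while the note count (and hence duration, tempo and rhythm) is identical. A sketch with hypothetical six-note melodies (the actual melodies are not given in the abstract):

```python
# Hypothetical melodies as MIDI note numbers; illustrative only.
descending = [76, 74, 72, 71, 69, 67]   # descending piano melody
ascending  = [67, 69, 71, 72, 74, 76]   # ascending control melody

def contour(melody):
    """Pitch steps (in semitones) between successive notes."""
    return [b - a for a, b in zip(melody, melody[1:])]

# Inverse contours: every step of one melody is the negation of the
# corresponding step of the other.
assert contour(descending) == [-step for step in contour(ascending)]
print(contour(descending))  # → [-2, -2, -1, -2, -2]
```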

Conclusions/Significance

Thus, three weeks of prenatal exposure to a specific melodic contour affects infants' auditory processing or perception, i.e., it impacts the autonomic nervous system at least six weeks later, when the infants are one month old. Our results extend the retention interval over which a prenatally acquired memory of a specific sound stream can be observed from 3–4 days to six weeks. The long-term memory for the descending melody is interpreted in terms of enduring neurophysiological tuning, and its significance for the developmental psychobiology of attention and perception, including early speech perception, is discussed.

5.

Background

Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown if this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication.

Methodology/Principal Findings

We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented anesthetized birds with sequences of frequently recurring calls interspersed with rare ones and recorded, in parallel, action potential and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges led to widespread and significant modulation of neural response strength. This modulation was highly call-specific in secondary auditory areas, but not in the main thalamo-recipient primary auditory area.

Conclusions/Significance

Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.

6.

Background

The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired.

Methods

Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing.

Conclusions

Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.

7.

Background

Radial intra- and interlaminar connections form a basic microcircuit in primary auditory cortex (AI) that extracts acoustic information and distributes it to cortical and subcortical networks. Though the structure of this microcircuit is known, we do not know how the functional connectivity between layers relates to laminar processing.

Methodology/Principal Findings

We studied the relationships between functional connectivity and receptive field properties in this columnar microcircuit by simultaneously recording from single neurons in cat AI in response to broadband dynamic moving ripple stimuli. We used spectrotemporal receptive fields (STRFs) to estimate the relationship between receptive field parameters and the functional connectivity between pairs of neurons. Interlaminar connectivity obtained through cross-covariance analysis reflected a consistent pattern of information flow from thalamic input layers to cortical output layers. Connection strength and STRF similarity were greatest for intralaminar neuron pairs and in supragranular layers and weaker for interlaminar projections. Interlaminar connection strength co-varied with several STRF parameters: feature selectivity, phase locking to the stimulus envelope, best temporal modulation frequency, and best spectral modulation frequency. Connectivity properties and receptive field relationships differed for vertical and horizontal connections.
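
The abstract does not specify the exact cross-covariance estimator, but the general idea — a cross-correlogram of binned spike counts, corrected for the coincidence rate expected if the two trains were independent — can be sketched as follows. Bin size, lag range and the toy spike trains are illustrative.

```python
import numpy as np

def cross_covariance(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=20.0, dur_ms=1000.0):
    """Circular cross-covariance of two spike trains (spike times in ms):
    raw cross-correlogram minus the count expected under independence."""
    edges = np.arange(0, dur_ms + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, edges)
    b, _ = np.histogram(spikes_b, edges)
    n = len(a)
    max_lag = int(max_lag_ms / bin_ms)
    lags = np.arange(-max_lag, max_lag + 1)
    expected = a.mean() * b.mean() * n  # independence prediction, identical at every lag
    # np.roll makes this a circular correlogram; adequate for a sketch.
    cov = np.array([(a * np.roll(b, -lag)).sum() - expected for lag in lags])
    return lags, cov

# Toy example: neuron B fires exactly 3 ms after each spike of neuron A,
# so the cross-covariance should peak at a +3 ms lag.
rng = np.random.default_rng(0)
a_times = np.sort(rng.uniform(0, 1000, 200))
b_times = a_times + 3.0
lags, cov = cross_covariance(a_times, b_times)
print(lags[np.argmax(cov)])  # → 3
```

A positive peak lag of this kind is what indicates directed functional connectivity (A leading B), the quantity the study relates to interlaminar information flow.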

Conclusions/Significance

Thus, the mode of local processing in supragranular layers differs from that in infragranular layers, and specific connectivity patterns in the auditory cortex shape the flow of information and constrain how spectrotemporal processing transformations progress in the canonical columnar auditory microcircuit.

8.

Background

Subjective duration is strongly influenced by repetition and novelty, such that an oddball stimulus embedded in a stream of repeated stimuli appears to last longer. We hypothesize that this duration illusion, called the temporal oddball effect, results from the difference in expectation between the oddball and the repeated stimuli. Specifically, we conjecture that the repeated stimuli contract in subjective duration as a result of increased predictability; these duration contractions, we suggest, result from the decrease in neural response amplitude with repetition known as repetition suppression.

Methodology/Principal Findings

Participants viewed trials consisting of lines presented at a particular orientation (standard stimuli) followed by a line presented at a different orientation (oddball stimulus). We found that the size of the oddball effect correlates with the number of repetitions of the standard stimulus as well as with how much the oddball deviates from the standard; both results are consistent with a repetition-suppression account. Further, we find that the temporal oddball effect is sensitive to experimental context: the size of the oddball effect on a particular trial is influenced by the range of duration distortions seen on preceding trials.

Conclusions/Significance

Our data suggest that the repetition-related duration contractions causing the oddball effect are a result of neural repetition suppression. More generally, subjective duration may reflect the prediction error associated with a stimulus and, consequently, the efficiency of encoding that stimulus. Additionally, we emphasize that experimental context effects need to be taken into consideration when designing duration-related tasks.

9.

Objectives

Intonation may serve as a cue that facilitates recognition and processing of spoken words, and it has been suggested that the pitch contour of spoken words is implicitly remembered. Using the repetition suppression (RS) effect of BOLD-fMRI signals, we therefore tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern.

Experimental design

Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions, each using a different set of words with specific task-irrelevant intonation changes: (i) all words were presented with the same flat, monotonous pitch contour; (ii) each word had an arbitrary pitch contour that was fixed across the three repetitions; (iii) each word had a different arbitrary pitch contour on each repetition.

Principal findings

Repeated presentation of words with a fixed pitch contour resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), in temporal areas (BA 21/22) bilaterally, and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and abolished the RS effects.

Conclusions

Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language-associated areas. The results thus support the notion that prosody, and specifically pitch contour, is strongly associated with the memory representation of spoken words.

10.

Background

The neuroplasticity hypothesis of major depressive disorder proposes that a dysfunction of synaptic plasticity represents a basic pathomechanism of the disorder. Animal models of depression indicate enhanced plasticity in a ventral emotional network, comprising the amygdala. Here, we investigated fear extinction learning as a non-invasive probe for amygdala-dependent synaptic plasticity in patients with major depressive disorder and healthy controls.

Methods

Differential fear conditioning was measured in 37 inpatients with severe unipolar depression (International Classification of Diseases, 10th revision, criteria) and 40 healthy controls. The eye-blink startle response, a subcortical output signal that is modulated by local synaptic plasticity in the amygdala in fear acquisition and extinction learning, was recorded as the primary outcome parameter.

Results

After robust and similar fear acquisition in both groups, patients with major depressive disorder showed significantly enhanced fear extinction learning in comparison to healthy controls, as indicated by startle responses to conditioned stimuli. The strength of extinction learning was positively correlated with the total illness duration.

Conclusions

The finding of enhanced fear extinction learning in major depressive disorder is consistent with the concept that the disorder is characterized by enhanced synaptic plasticity in the amygdala and the ventral emotional network. Clinically, the observation emphasizes the potential of successful extinction learning, the basis of exposure therapy, in anxiety-related disorders, despite the frequent comorbidity with major depressive disorder.

11.

Background

The repeated presentation of stimuli typically attenuates neural responses (repetition suppression) or, less commonly, increases them (repetition enhancement) when stimuli are highly complex, degraded or presented under noisy conditions. In adult functional neuroimaging research, these repetition effects are considered neural correlates of habituation. The development and functional significance of these effects in infancy remain largely unknown.

Objective

This study investigates repetition effects in newborns using functional near-infrared spectroscopy, and specifically the role of stimulus complexity in evoking a repetition enhancement vs. a repetition suppression response, following up on Gervain et al. (2008). In that study, abstract rule-learning was found at birth in cortical areas specific to speech processing, as evidenced by a left-lateralized repetition enhancement of the hemodynamic response to highly variable speech sequences conforming to a repetition-based ABB artificial grammar, but not to a random ABC grammar.

Methods

Here, the same paradigm was used to investigate how simpler stimuli (12 different sequences per condition as opposed to 140), and simpler presentation conditions (blocked rather than interleaved) would influence repetition effects at birth.

Results

The two grammars elicited different dynamics in the two hemispheres. In left fronto-temporal areas, we reproduced the early perceptual discrimination of the two grammars, with ABB giving rise to a greater response at the beginning of the experiment than ABC. In addition, the ABC grammar evoked a repetition enhancement effect over time, whereas the response to the ABB grammar remained stable. Right fronto-temporal areas showed neither initial discrimination nor change over time for either pattern.

Conclusion

Taken together with Gervain et al. (2008), this is the first evidence that methodological factors influence the presence or absence of neural repetition enhancement effects in newborns, with stimulus variability appearing to be a particularly important factor. Further, this temporal modulation was restricted to the left hemisphere, confirming its specialization for learning linguistic regularities from birth.

12.

Background

Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants.

Methodology/Principal Findings

Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users.

Conclusion/Significance

Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.

13.

Background

Vision provides the most salient information about stimulus motion. However, it has recently been demonstrated that static visual stimuli can be perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than merely auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea, together with lateral auditory motion produced by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move with the auditory motion when their spatiotemporal position fell in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global-motion display in which localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept.

Conclusions/Significance

These findings suggest that there are direct interactions between auditory and visual motion signals, and that there may be common neural substrates for auditory and visual motion processing.

14.

Background

The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality specific processing delay and their difference determines perceptual temporal offset.

Methodology/Principal Findings

Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.

Conclusions/Significance

This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.

15.

Background

Paired associative stimulation (PAS) consisting of repeated application of transcranial magnetic stimulation (TMS) pulses and contingent exteroceptive stimuli has been shown to induce neuroplastic effects in the motor and somatosensory system. The objective was to investigate whether the auditory system can be modulated by PAS.

Methods

Acoustic stimuli (4 kHz) were paired with TMS of the auditory cortex with intervals of either 45 ms (PAS(45 ms)) or 10 ms (PAS(10 ms)). Two-hundred paired stimuli were applied at 0.1 Hz and effects were compared with low frequency repetitive TMS (rTMS) at 0.1 Hz (200 stimuli) and 1 Hz (1000 stimuli) in eleven healthy students. Auditory cortex excitability was measured before and after the interventions by long latency auditory evoked potentials (AEPs) for the tone (4 kHz) used in the pairing, and a control tone (1 kHz) in a within subjects design.

Results

Amplitudes of the N1-P2 complex were reduced for the 4 kHz tone after both PAS(45 ms) and PAS(10 ms), but not after the 0.1 Hz and 1 Hz rTMS protocols, with more pronounced effects for PAS(45 ms). Similar but less pronounced effects were observed for the 1 kHz control tone.
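
The N1-P2 peak-to-peak amplitude used here as an excitability measure can be computed from an averaged evoked potential: the N1 is the negative peak in an early window, the P2 the positive peak in a later one. A minimal sketch with illustrative window boundaries and a synthetic waveform (the study's actual windows are not given in the abstract):

```python
import numpy as np

def n1_p2_amplitude(erp, sr=1000, n1_win=(80, 150), p2_win=(150, 250)):
    """Peak-to-peak N1-P2 amplitude of an averaged evoked potential.
    erp: 1-D array sampled at sr Hz, time-locked to tone onset; windows in ms."""
    to_idx = lambda ms: int(ms * sr / 1000)
    n1 = erp[to_idx(n1_win[0]):to_idx(n1_win[1])].min()  # N1: negative peak
    p2 = erp[to_idx(p2_win[0]):to_idx(p2_win[1])].max()  # P2: positive peak
    return p2 - n1

# Synthetic ERP: N1 trough of -5 µV at 100 ms, P2 peak of +4 µV at 200 ms.
t = np.arange(400)
erp = -5 * np.exp(-((t - 100) ** 2) / 200) + 4 * np.exp(-((t - 200) ** 2) / 400)
print(round(n1_p2_amplitude(erp), 1))  # → 9.0
```

A reduction in this peak-to-peak value after an intervention, as reported above, is read as reduced auditory cortex excitability.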

Conclusion

These findings indicate that paired associative stimulation may induce both tonotopically specific and tone-unspecific plasticity in human auditory cortex.

16.
Ni H, Huang L, Chen N, Zhang F, Liu D, Ge M, Guan S, Zhu Y, Wang JH. PLoS ONE. 2010;5(10):e13736

Background

Loss of a sensory function is often followed by hypersensitivity of other modalities in mammals, which keeps them well aware of environmental changes. The cellular and molecular mechanisms underlying this cross-modal sensory plasticity remain to be documented.

Methodology/Principal Findings

Multidisciplinary approaches, including electrophysiology, behavioral tasks and immunohistochemistry, were used to examine the involvement of specific types of neurons in cross-modal plasticity. We established a mouse model in which an olfactory deficit leads to upregulated whisking, and studied how GABAergic neurons are involved in this cross-modal plasticity. While inducing whisker tactile hypersensitivity, the olfactory injury recruits more GABAergic neurons and their fine processes in the barrel cortex, and upregulates their capacity for encoding action potentials. The hyperpolarization driven by these inhibitory inputs strengthens the encoding ability of their target cells.

Conclusion/Significance

The upregulation of GABAergic neurons and the functional enhancement of neuronal networks may play an important role in cross-modal sensory plasticity. This finding provides clues for developing therapeutic approaches to aid sensory recovery and substitution.

17.
Liu X, Yan Y, Wang Y, Yan J. PLoS ONE. 2010;5(11):e14038

Background

Cortical neurons implement highly frequency-specific modulation of subcortical nuclei, including the cochlear nucleus. Anatomical studies show that corticofugal fibers terminating in the auditory thalamus and midbrain are mostly ipsilateral. In contrast, corticofugal fibers terminating in the cochlear nucleus are bilateral, which fits the needs of binaural hearing. This led to our hypothesis that corticofugal modulation of the initial neural processing of sound information from the contralateral and ipsilateral ears could be equivalent or coordinated at the first sound-processing level.

Methodology/Principal Findings

Using focal electrical stimulation of the auditory cortex combined with single-unit recording, this study examined corticofugal modulation of the ipsilateral cochlear nucleus. The same methods and procedures as in our previous study of corticofugal modulation of the contralateral cochlear nucleus were employed to allow direct comparison. We found that focal electrical stimulation of cortical neurons induced substantial changes in the response magnitude, response latency and receptive fields of ipsilateral cochlear nucleus neurons. Cortical stimulation facilitated auditory responses and shortened response latencies of physiologically matched neurons, whereas it inhibited auditory responses and lengthened response latencies of unmatched neurons. Finally, cortical stimulation shifted the best frequencies of cochlear nucleus neurons towards those of the stimulated cortical neurons.

Conclusion

Our data suggest that cortical neurons enable highly frequency-specific remodelling of sound-information processing in the ipsilateral cochlear nucleus, in the same manner as in the contralateral cochlear nucleus.

18.

Background

Most research on the role of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually guided hand movements.

Methodology/Principal Findings

We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects which were covered with different materials. Then, in a further session the pre-recorded contact sounds were delivered to participants via headphones before or following the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound and the contact sound elicited by the to-be-grasped stimulus corresponded; (ii) incongruent, in which the presented contact sound was different to that generated by the stimulus upon contact; (iii) control, in which a synthetic sound, not associated with a real event, was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and the lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound.

Conclusions/Significance

Altogether, these findings contribute substantially to the current debate about the type of object representations elicited by auditory stimuli and about the multisensory nature of the sensorimotor transformations underlying action.

19.

Background

Tinnitus is an auditory sensation characterized by the perception of sound or noise in the absence of any external sound source. Based on neurobiological research, it is generally accepted that most forms of tinnitus are attributable to maladaptive plasticity resulting from damage to the auditory system. Changes have been observed in auditory structures such as the inferior colliculus, the thalamus and the auditory cortex, as well as in non-auditory brain areas. However, the observed changes show great variability, so a conclusive picture is lacking. One reason might be the selection of inhomogeneous groups in data analysis.

Methodology

The aim of the present study was to delineate the differences between the neural networks involved in narrow-band-noise and pure-tone tinnitus by conducting LORETA-based source analysis of resting-state EEG.

Conclusions

Results demonstrated that narrow-band-noise tinnitus patients differ from pure-tone tinnitus patients in the lateral frontopolar area (BA 10), the posterior cingulate cortex (PCC) and the parahippocampal area for the delta, beta and gamma frequency bands, respectively. The parahippocampal-PCC current-density differences might be load-dependent, as noise-like tinnitus comprises multiple frequencies in contrast to pure-tone tinnitus. The lateral frontopolar differences might be related to pitch-specific memory retrieval.

20.

Objective

Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball.

Methods

Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude.

Results

Correlating the auditory oddball response with P300 BCI accuracy revealed a strong relationship between accuracy and both the N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy.

Conclusions

Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict it in a visual P300 BCI. This predictor will allow for faster paradigm selection.

Significance

Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)