Similar Documents
20 similar documents retrieved.
1.
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferentially encode coarse spectral information with high temporal precision. Conversely, neuronal populations in anterior/ventral auditory regions preferentially encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
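As a rough illustration of the encoding-model logic described in this abstract, the sketch below (placeholder data and hypothetical feature spaces, not the study's actual pipeline) derives crude spectrotemporal-modulation features from sound spectrograms and compares competing models by cross-validated prediction of a voxel's fMRI response.

# Sketch of encoding-model comparison for fMRI responses to natural sounds.
# Placeholder shapes and feature spaces; not the study's actual models.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sounds, n_freq, n_time = 60, 128, 100
spectrograms = rng.random((n_sounds, n_freq, n_time))   # placeholder spectrograms
voxel_response = rng.random(n_sounds)                    # placeholder fMRI amplitudes

def modulation_features(spec, spec_scales=(1, 4), temp_rates=(2, 8)):
    """Crude spectrotemporal-modulation features: 2D FFT energy of the
    spectrogram pooled into coarse spectral/temporal modulation bins."""
    power = np.abs(np.fft.rfft2(spec)) ** 2
    feats = []
    for s in spec_scales:
        for r in temp_rates:
            feats.append(power[:s * 8, :r * 4].mean())   # coarse bin average
    return np.array(feats)

# Competing feature spaces: frequency-averaged spectrogram vs. modulation energy
X_tono = spectrograms.mean(axis=2)                       # frequency-only model
X_mod = np.stack([modulation_features(s) for s in spectrograms])

for name, X in [("frequency-only", X_tono), ("modulation", X_mod)]:
    r2 = cross_val_score(RidgeCV(alphas=np.logspace(-2, 3, 10)), X,
                         voxel_response, cv=5, scoring="r2").mean()
    print(f"{name} model: cross-validated R^2 = {r2:.3f}")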

2.
Current knowledge of sensory processing in the mammalian auditory system is mainly derived from electrophysiological studies in a variety of animal models, including monkeys, ferrets, bats, rodents, and cats. In order to draw suitable parallels between human and animal models of auditory function, it is important to establish a bridge between human functional imaging studies and animal electrophysiological studies. Functional magnetic resonance imaging (fMRI) is an established, minimally invasive method of measuring broad patterns of hemodynamic activity across different regions of the cerebral cortex. This technique is widely used to probe sensory function in the human brain, is a useful tool for linking studies of auditory processing in humans and animals, and has been successfully used to investigate auditory function in monkeys and rodents. The following protocol describes an experimental procedure for investigating auditory function in anesthetized adult cats by measuring stimulus-evoked hemodynamic changes in auditory cortex using fMRI. This method facilitates comparison of the hemodynamic responses across different models of auditory function, thus leading to a better understanding of species-independent features of the mammalian auditory cortex.

3.
Auditory cortex pertains to the processing of sound, which is at the basis of speech- or music-related processing [1]. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial magnetic stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function [2]. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions [3]. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated [4]. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS [5-7]. However, this online combination has many technical problems, including the static artifacts resulting from the presence of the TMS coil in the scanner room and the effects of TMS pulses on the process of MR image formation. More importantly, the loud acoustic noise induced by TMS (increased compared with standard use because of the resonance of the scanner bore) and the increased TMS coil vibrations (caused by the strong mechanical forces due to the static magnetic field of the MR scanner) constitute a crucial problem when studying auditory processing. This is one reason why fMRI was carried out before and after TMS in the present study. Similar approaches have been used to target the motor cortex [8,9], premotor cortex [10], primary somatosensory cortex [11,12] and language-related areas [13], but so far no combined TMS-fMRI study has investigated the auditory cortex. The purpose of this article is to provide details concerning the protocol and considerations necessary to successfully combine these two neuroscientific tools to investigate auditory processing. Previously we showed that repetitive TMS (rTMS) at high and low frequencies (10 Hz and 1 Hz, respectively) applied over the auditory cortex modulated response time (RT) in a melody discrimination task [2]. We also showed that RT modulation was correlated with functional connectivity in the auditory network assessed using fMRI: the higher the functional connectivity between left and right auditory cortices during task performance, the greater the facilitatory effect (i.e. decreased RT) observed with rTMS. However, those findings were mainly correlational, as fMRI was performed before rTMS. Here, fMRI was carried out before and immediately after TMS to provide direct measures of the functional organization of the auditory cortex, and more specifically of the plastic reorganization of the auditory neural network occurring after the neural intervention provided by TMS. Combined fMRI and TMS applied over the auditory cortex should enable a better understanding of the brain mechanisms of auditory processing, providing physiological information about the functional effects of TMS. This knowledge could be useful for many cognitive neuroscience applications, as well as for optimizing therapeutic applications of TMS, particularly in auditory-related disorders.
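A minimal sketch of the connectivity-behavior analysis mentioned above (hypothetical array names and simulated values; the published analysis is not reproduced): functional connectivity is taken as the correlation between left and right auditory-cortex time series, then related across subjects to the rTMS-induced change in response time.

# Sketch: relate auditory-cortex functional connectivity to rTMS-induced RT change.
# All arrays are placeholders; not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects, n_volumes = 17, 240

# Per-subject fMRI time series from left and right auditory cortex (placeholder data)
left_ac = rng.standard_normal((n_subjects, n_volumes))
right_ac = 0.5 * left_ac + rng.standard_normal((n_subjects, n_volumes))

# Functional connectivity: Pearson correlation between hemispheres, per subject
fc = np.array([pearsonr(l, r)[0] for l, r in zip(left_ac, right_ac)])

# rTMS effect on behavior: RT(pre) - RT(post); positive = facilitation (faster after rTMS)
rt_facilitation = rng.standard_normal(n_subjects)

r, p = pearsonr(fc, rt_facilitation)
print(f"connectivity vs. RT facilitation: r = {r:.2f}, p = {p:.3f}")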

4.
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli, even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables was simultaneously administered in an event-related design. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of the emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing.

5.
To form a coherent percept of the environment, our brain combines information from different senses. Such multisensory integration occurs in higher association cortices, but supposedly it also occurs in early sensory areas. Confirming the latter hypothesis, we unequivocally demonstrate supra-additive integration of touch and sound stimulation at the second stage of the auditory cortex. Using high-resolution fMRI of the macaque monkey, we quantified the integration of auditory broad-band noise and tactile stimulation of hand and foot in anaesthetized animals. Integration was found posterior to and along the lateral side of the primary auditory cortex in the caudal auditory belt. Integration was stronger for temporally coincident stimuli and obeyed the principle of inverse effectiveness: greater enhancement for less effective stimuli. These findings demonstrate that multisensory integration occurs early and close to primary sensory areas and, because it occurs in anaesthetized animals, suggest that this integration is mediated by preattentive bottom-up mechanisms.
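The integration criteria described above can be written down compactly. The sketch below (placeholder per-voxel response values, not the study's data) tests supra-additivity (bimodal response greater than the sum of the unisensory responses) and illustrates inverse effectiveness by expressing enhancement relative to the strongest unisensory response.

# Sketch: supra-additive multisensory integration and inverse effectiveness.
# Placeholder per-voxel response amplitudes (percent signal change), not real data.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 1000
resp_aud = np.abs(rng.normal(1.0, 0.3, n_voxels))               # auditory-only response
resp_tac = np.abs(rng.normal(0.4, 0.2, n_voxels))               # tactile-only response
resp_av = resp_aud + resp_tac + rng.normal(0.1, 0.2, n_voxels)  # bimodal response

# Supra-additivity: bimodal response exceeds the sum of the unisensory responses
supra = resp_av > (resp_aud + resp_tac)
print(f"{supra.mean():.1%} of voxels show supra-additive integration")

# Inverse effectiveness: proportionally larger enhancement when the best
# unisensory response is weak
best_uni = np.maximum(resp_aud, resp_tac)
enhancement = (resp_av - best_uni) / best_uni
weak = best_uni < np.median(best_uni)
print(f"median enhancement, weak stimuli: {np.median(enhancement[weak]):.2f}")
print(f"median enhancement, strong stimuli: {np.median(enhancement[~weak]):.2f}")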

6.
7.
Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues is different for perception of vertical or horizontal motion, with spectrotemporal analysis likely to be more important for perceiving vertical shifts. In humans, functional imaging studies have shown that sound movement in the horizontal plane activates brain areas distinct from the primary auditory cortex, in parietal and frontal lobes and in the planum temporale. However, no previous work has examined activations for vertical sound movement. It is therefore difficult to generalize previous imaging studies, based on horizontal movement only, to multidimensional auditory space perception. To investigate this, we used externalized virtual-space sounds in a functional magnetic resonance imaging (fMRI) paradigm to compare vertical and horizontal shifts in sound location. A common bilateral network of brain areas was activated in response to both horizontal and vertical sound movement. This included the planum temporale, superior parietal cortex, and premotor cortex. Sounds perceived laterally in virtual space were associated with contralateral activation of the auditory cortex. These results demonstrate that sound movement in vertical and horizontal dimensions engages a common processing network in the human cerebral cortex and show that multidimensional spatial properties of sounds are processed at this level.

8.
Visual area V4 is a midtier cortical area in the ventral visual pathway. It is crucial for visual object recognition and has been a focus of many studies on visual attention. However, there is no unifying view of V4's role in visual processing. Neither is there an understanding of how its role in feature processing interfaces with its role in visual attention. This review captures our current knowledge of V4, largely derived from electrophysiological and imaging studies in the macaque monkey. Based on recent discovery of functionally specific domains in V4, we propose that the unifying function of V4 circuitry is to enable selective extraction of specific functional domain-based networks, whether it be by bottom-up specification of object features or by top-down attentionally driven selection.

9.
Attention and motor preparation are two intimately linked processes. However, they can be dissociated in the laboratory in order to study their neuronal basis. Behavioral neurophysiology has thus shown that neurons that discharge in relation to attention or to motor preparation (or intention) exist in a variety of brain regions in the monkey, especially the prefrontal and premotor cortices. When examined more carefully, these two regions appear different both in the proportion of cells that respond during attention versus intention, and in the information coded in the so-called "preparatory activity". This activity reflects sensory selection in the prefrontal cortex (spatial attention/memory) and motor selection in the premotor cortex. Furthermore, two regions in the dorsal aspect of premotor cortex can be distinguished on the basis of their relative involvement in attention: a rostral (anterior) region, functionally close to prefrontal cortex, and a caudal one, which appears functionally close to motor cortex. Using an experimental design derived from monkey experiments, a functional magnetic resonance imaging (fMRI) study recently indicated that the functional specialization within the premotor cortex is similar in monkey and man.

10.
Kajikawa Y, Schroeder CE. Neuron. 2011;72(5):847-858.
Local field potentials (LFPs) are of growing importance in neurophysiological investigations. LFPs supplement action potential recordings by indexing activity relevant to EEG, magnetoencephalographic, and hemodynamic (fMRI) signals. Recent reports suggest that LFPs reflect activity within very small domains of several hundred micrometers. We examined this conclusion by comparing LFP, current source density (CSD), and multiunit activity (MUA) signals in macaque auditory cortex. Estimated by frequency tuning bandwidths, these signals' "listening areas" differ systematically in the order MUA < CSD < LFP. Computational analyses confirm that observed LFPs receive local contributions. Direct measurements indicate passive spread of LFPs to sites more than a centimeter from their origins. These findings appear to be independent of the frequency content of the LFP. Our results challenge the idea that LFP recordings typically integrate over extremely circumscribed local domains. Rather, LFPs appear to be a mixture of local potentials and "volume-conducted" potentials from distant sites.
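As a rough sketch of the signal comparison described above (toy laminar data, not the study's recordings): the current source density can be estimated as the negative second spatial derivative of the LFP across equally spaced contacts, and "listening areas" can be compared via the bandwidth of frequency tuning curves.

# Sketch: CSD as the second spatial derivative of laminar LFPs, plus a simple
# tuning-bandwidth estimate. Toy data; not the study's recordings.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_samples, spacing_mm = 16, 500, 0.1
lfp = rng.standard_normal((n_channels, n_samples)).cumsum(axis=1)  # toy laminar LFP

# One-dimensional CSD estimate: -d^2V/dz^2 along the electrode axis
csd = -np.diff(lfp, n=2, axis=0) / spacing_mm**2        # shape: (n_channels - 2, n_samples)

def tuning_bandwidth(tuning_curve, freqs_octaves, criterion=0.5):
    """Bandwidth (in octaves) over which the response exceeds a fraction of its peak."""
    above = tuning_curve >= criterion * tuning_curve.max()
    return freqs_octaves[above].max() - freqs_octaves[above].min()

freqs = np.linspace(0, 5, 21)                            # tone frequencies, in octaves
toy_curve = np.exp(-(freqs - 2.5) ** 2 / 0.5)            # toy Gaussian tuning curve
print(f"bandwidth at half-maximum: {tuning_bandwidth(toy_curve, freqs):.2f} octaves")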

11.
In this article, we review a combined experimental-neuromodeling framework for understanding brain function with a specific application to auditory object processing. Within this framework, a model is constructed using the best available experimental data and is used to make predictions. The predictions are verified by conducting specific or directed experiments and the resulting data are matched with the simulated data. The model is refined or tested on new data and generates new predictions. The predictions in turn lead to better-focused experiments. The auditory object processing model was constructed using available neurophysiological and neuroanatomical data from mammalian studies of auditory object processing in the cortex. Auditory objects are brief sounds such as syllables, words, melodic fragments, etc. The model can simultaneously simulate neuronal activity at a columnar level and neuroimaging activity at a systems level while processing frequency-modulated tones in a delayed-match-to-sample task. The simulated neuroimaging activity was quantitatively matched with neuroimaging data obtained from experiments; both the simulations and the experiments used similar tasks, sounds, and other experimental parameters. We then used the model to investigate the neural bases of the auditory continuity illusion, a type of perceptual grouping phenomenon, without changing any of its parameters. Perceptual grouping enables the auditory system to integrate brief, disparate sounds into cohesive perceptual units. The neural mechanisms underlying the auditory continuity illusion have not been studied extensively with conventional neuroimaging or electrophysiological techniques. Our modeling results agree with behavioral studies in humans and an electrophysiological study in cats. The results predict a particular set of bottom-up cortical processing mechanisms that implement perceptual grouping, and also attest to the robustness of our model.

12.
In monkeys, posterior parietal and premotor cortex play an important integrative role in polymodal motion processing. In contrast, our understanding of the convergence of the senses in humans is still in its early stages. To test for equivalencies between macaque and human polymodal motion processing, we used functional MRI in normal subjects while presenting moving visual, tactile, or auditory stimuli. Increased neural activity evoked by all three stimulus modalities was found in the depth of the intraparietal sulcus (IPS), ventral premotor, and lateral inferior postcentral cortex. The observed activations strongly suggest that polymodal motion processing in humans and monkeys is supported by equivalent areas. The activations in the depth of the IPS imply that this area constitutes the human equivalent of macaque area VIP.

13.
In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey.
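A minimal sketch of the map-versus-spontaneous-activity comparison described above (simulated electrode data and hypothetical variable names): pairwise correlations of spontaneous activity are related to the difference in best frequency between electrode sites.

# Sketch: does spontaneous covariation between sites reflect the tonotopic map?
# Simulated 96-electrode data; not the study's recordings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_sites, n_samples = 96, 2000

best_freq = np.sort(rng.uniform(0.5, 16.0, n_sites))     # best frequency per site (kHz)
# Simulate spontaneous activity whose correlation falls off with distance along the array
shared = rng.standard_normal((n_sites, n_samples))
spont = shared + 0.1 * np.cumsum(rng.standard_normal((n_sites, n_samples)), axis=0)

corr = np.corrcoef(spont)                                 # pairwise spontaneous correlations
freq_dist = np.abs(np.log2(best_freq[:, None] / best_freq[None, :]))  # octave distance

iu = np.triu_indices(n_sites, k=1)                        # unique site pairs
rho, p = spearmanr(corr[iu], freq_dist[iu])
print(f"spontaneous correlation vs. tonotopic distance: rho = {rho:.2f}, p = {p:.2g}")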

14.
The dopaminergic neurotransmitter system is critically involved in promoting plasticity in auditory cortex. We combined functional magnetic resonance imaging (fMRI) and a pharmacological manipulation to investigate dopaminergic modulation of neural activity in auditory cortex during instrumental learning. Volunteers received either 100 mg L-dopa (Madopar) or placebo in an appetitive, differential instrumental conditioning paradigm, which involved learning that a specific category of frequency-modulated tones predicted a monetary reward if fast responses were made in a subsequent reaction time task. The other category of frequency-modulated tones was not related to a reward. Our behavioral data provide evidence that dopaminergic stimulation differentially impacts the speed of instrumental responding in rewarded and unrewarded trials. L-dopa increased neural BOLD activity in left auditory cortex in response to tones in both rewarded and unrewarded trials. This increase was related to plasma L-dopa levels and learning rate. Our data thus provide evidence for dopaminergic modulation of neural activity in auditory cortex, which occurs both for auditory stimuli related to a later reward and for those not related to a reward.
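A minimal sketch of the behavioral comparison described above (simulated reaction times, hypothetical group sizes and effect sizes): the claim that L-dopa differentially affects responding in rewarded versus unrewarded trials corresponds to a drug-by-reward interaction, tested here on per-subject RT differences.

# Sketch: drug-by-reward interaction tested on per-subject RT differences.
# Simulated data only; not the study's behavioral results.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n_per_group = 20

def rt_difference(reward_speedup_ms):
    """Per-subject RT difference (unrewarded - rewarded), in ms."""
    rewarded = rng.normal(420 - reward_speedup_ms, 30, n_per_group)
    unrewarded = rng.normal(450, 30, n_per_group)
    return unrewarded - rewarded

diff_ldopa = rt_difference(reward_speedup_ms=25)   # hypothetical larger reward speed-up
diff_placebo = rt_difference(reward_speedup_ms=5)

t, p = ttest_ind(diff_ldopa, diff_placebo)
print(f"drug x reward interaction (t-test on RT differences): t = {t:.2f}, p = {p:.3f}")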

15.
Zimmer U, Macaluso E. Neuron. 2005;47(6):893-905.
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate any modulation of auditory responses by binaural coherence. We assessed how processing of these cues depends on whether spatial information is task relevant and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.
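The binaural cue described above can be illustrated with a short sketch (synthetic signals, not the study's stimuli): interaural coherence is taken as the peak of the normalized cross-correlation between the two ear signals, and the lag of that peak estimates the interaural time difference used for localization.

# Sketch: interaural coherence and interaural time difference (ITD) from the
# normalized cross-correlation of the two ear signals. Synthetic signals only.
import numpy as np

rng = np.random.default_rng(5)
fs = 44100                                # sample rate (Hz)
n = fs // 10                              # 100 ms of noise
itd_samples = 20                          # true lag: ~0.45 ms, sound toward one ear

left = rng.standard_normal(n)
right = np.roll(left, itd_samples) + 0.3 * rng.standard_normal(n)  # delayed + decorrelated

# Normalized cross-correlation over physiologically plausible lags (about +-1 ms)
max_lag = int(0.001 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = np.array([np.corrcoef(left, np.roll(right, -lag))[0, 1] for lag in lags])

coherence = xcorr.max()                    # binaural coherence (0..1)
est_itd_ms = 1000 * lags[xcorr.argmax()] / fs
print(f"binaural coherence = {coherence:.2f}, estimated ITD = {est_itd_ms:.2f} ms")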

16.
Communication signals are important for social interactions and survival and are thought to receive specialized processing in the visual and auditory systems. Whereas the neural processing of faces by face clusters and face cells has been repeatedly studied [1-5], less is known about the neural representation of voice content. Recent functional magnetic resonance imaging (fMRI) studies have localized voice-preferring regions in the primate temporal lobe [6, 7], but the hemodynamic response cannot directly assess neurophysiological properties. We investigated the responses of neurons in an fMRI-identified voice cluster in awake monkeys, and here we provide the first systematic evidence for voice cells. "Voice cells" were identified, in analogy to "face cells," as neurons responding at least 2-fold more strongly to conspecific voices than to "nonvoice" sounds or heterospecific voices. Importantly, whereas face clusters are thought to contain high proportions of face cells [4] responding broadly to many faces [1, 2, 4, 5, 8-10], we found that voice clusters contain moderate proportions of voice cells. Furthermore, individual voice cells exhibit high stimulus selectivity. The results reveal the neurophysiological bases for fMRI-defined voice clusters in the primate brain and highlight potential differences in how the auditory and visual systems generate selective representations of communication signals.
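The voice-cell criterion quoted above lends itself to a compact sketch (simulated firing rates, hypothetical names): a unit counts as a voice cell if its mean response to conspecific voices is at least twice its mean response to both nonvoice sounds and heterospecific voices.

# Sketch: classify "voice cells" by the 2-fold response criterion described above.
# Simulated firing rates; not the recorded data.
import numpy as np

rng = np.random.default_rng(6)
n_units = 50

# Mean evoked firing rates (spikes/s) per stimulus category, per unit (toy values)
rate_conspecific = rng.gamma(shape=4.0, scale=5.0, size=n_units)
rate_heterospecific = rng.gamma(shape=3.0, scale=4.0, size=n_units)
rate_nonvoice = rng.gamma(shape=3.0, scale=4.0, size=n_units)

# Voice cell: >= 2-fold stronger response to conspecific voices than to either control
is_voice_cell = ((rate_conspecific >= 2 * rate_heterospecific) &
                 (rate_conspecific >= 2 * rate_nonvoice))
print(f"{is_voice_cell.sum()} of {n_units} simulated units meet the voice-cell criterion "
      f"({is_voice_cell.mean():.0%})")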

17.
Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.

18.
Why is it hard to divide attention between dissimilar activities, such as reading and listening to a conversation? We used functional magnetic resonance imaging (fMRI) to study interference between simple auditory and visual decisions, independently of motor competition. Overlapping activity for auditory and visual tasks performed in isolation was found in lateral prefrontal regions, middle temporal cortex and parietal cortex. When the visual stimulus occurred during processing of the tone, the activity it evoked in prefrontal and middle temporal cortex was suppressed. Additionally, reduced activity was seen in modality-specific visual cortex. These results paralleled impaired awareness of the visual event. Even without competing motor responses, a simple auditory decision interferes with visual processing at different neural levels, including prefrontal cortex, middle temporal cortex and visual regions.

19.
The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

20.
The spatiotemporal characteristics of neural activity in the guinea pig auditory cortex were investigated to determine their importance in the neural processing and coding of complex sounds. A multi-channel optical recording system was developed for observing the cortical field of the mammalian brain in vivo. Using the voltage-sensitive dye RH795, optical imaging was used to visualize neural activity in the guinea pig auditory cortex. The results reveal a boomerang-shaped pattern of movement of the activated neural regions in the response evoked by clicks as complex sounds. Both parallel and sequential neural processing structures were observed. Although the exact frequency selectivity of single cells and the tonotopic organization observed with microelectrodes were not visible, features similar to the microelectrode evidence were imaged by extracting the strongly responding fields from the optical data.
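As a rough sketch of the last analysis step mentioned above (synthetic image frames, not the recorded optical data): the strongly responding field can be extracted by thresholding the stimulus-evoked fluorescence change relative to a pre-stimulus baseline.

# Sketch: extract the strongly responding field from voltage-sensitive-dye frames
# by thresholding the evoked dF/F relative to baseline. Synthetic frames only.
import numpy as np

rng = np.random.default_rng(8)
n_frames, height, width = 100, 64, 64
frames = rng.normal(100.0, 1.0, (n_frames, height, width))   # toy fluorescence movie

# Add a synthetic evoked response in a small patch after frame 50 (stimulus onset)
frames[50:, 20:35, 30:45] += 3.0

baseline = frames[:50].mean(axis=0)                      # pre-stimulus baseline image
evoked = frames[50:].mean(axis=0)                        # post-stimulus mean image
dff = (evoked - baseline) / baseline                     # fractional change dF/F

# Strongly responding field: pixels whose dF/F exceeds a simple statistical threshold
threshold = dff.mean() + 3 * dff.std()
strong_field = dff > threshold
print(f"{strong_field.sum()} pixels in the strongly responding field")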

