Similar articles (20 results)
1.
Lu HD  Chen G  Tanigawa H  Roe AW 《Neuron》2010,68(5):1002-1013
In mammals, the perception of motion starts with direction-selective neurons in the visual cortex. Despite numerous studies in monkey primary and secondary visual cortex (V1 and V2), there has been no evidence of direction maps in these areas. In the present study, we used optical imaging methods to study the organization of motion responses in macaque V1 and V2. In contrast to findings in other mammals (e.g., cats and ferrets), we found no direction maps in macaque V1. Robust direction maps, however, were found in V2 thick/pale stripes but avoided thin stripes. In many cases direction maps were located within thick stripes and exhibited pinwheel or linear organizations. The presence of motion maps in V2 points to a newfound prominence of V2 in motion processing, whether contributing to motion perception in the dorsal pathway or to motion cue-dependent form perception in the ventral pathway.

2.
Zhaoping L  Zhe L 《PloS one》2012,7(6):e36223
From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis: the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less well-known V1 neurons tuned simultaneously, or conjunctively, in two feature dimensions. The visual search task is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature-singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant-feature target (e.g., a CO target) relative to that predicted by a race between the RTs for the two corresponding single-feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned and MO-tuned cells are often more active than the single-feature-tuned cells in response to redundant-feature targets; this occurs more frequently for the MO-tuned cells, such that an MO-tuned cell is no less likely than either an M-tuned or an O-tuned neuron to be the most responsive neuron dictating saliency for an MO target.
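The race-model benchmark described above can be sketched numerically: the redundant-target RT is predicted trial by trial as the minimum of two independent single-feature RTs, and a measured redundant-target RT even shorter than this prediction would point to conjunctively tuned cells. The distributions and parameters below are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical single-feature reaction-time distributions (ms);
# shapes and parameters are assumed for illustration only.
rt_color = rng.gamma(shape=8.0, scale=50.0, size=n)        # C target
rt_orientation = rng.gamma(shape=8.0, scale=55.0, size=n)  # O target

# Race-model prediction for a redundant CO target: on each trial the
# faster of two independent single-feature detections wins the race.
rt_race = np.minimum(rt_color, rt_orientation)

print(f"mean RT, C target:       {rt_color.mean():.0f} ms")
print(f"mean RT, O target:       {rt_orientation.mean():.0f} ms")
print(f"mean RT, race prediction: {rt_race.mean():.0f} ms")
# An observed CO reaction time shorter still than the race prediction
# would indicate a contribution from conjunctively (CO-) tuned cells.
```

By construction the race prediction is faster than either single-feature RT, so any further speedup in the data is the signature of conjunctive tuning.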

3.
Kayser C  Remedios R 《Neuron》2012,73(4):627-629
In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visually driven behavior.

4.
An event in one sensory modality can phase reset brain oscillations associated with another modality. In principle, this may result in stimulus-locked periodicity in behavioral performance. Here we considered this possible cross-modal impact of a sound on one of the best-characterized rhythms arising from the visual system, namely occipital alpha oscillations (8-14 Hz). We presented brief sounds and concurrently recorded electroencephalography (EEG) and/or probed visual cortex excitability (phosphene perception) through occipital transcranial magnetic stimulation (TMS). In a first, TMS-only experiment, phosphene perception rate as a function of time post-sound showed a periodic pattern cycling at ~10 Hz, phase-aligned to the sound. In a second, combined TMS-EEG experiment, TMS trials reproduced the cyclical phosphene pattern and revealed a ~10 Hz pattern also for EEG-derived measures of occipital cortex reactivity to the TMS pulses. Crucially, EEG data from intermingled trials without TMS established cross-modal phase-locking of occipitoparietal alpha oscillations. These independently recorded variables, i.e., occipital cortex excitability and reactivity and EEG phase dynamics, were significantly correlated. This shows that cross-modal phase-locking of oscillatory visual cortex activity can arise in the human brain to affect perceptual and EEG measures of visual processing in a cyclical manner, consistent with occipital alpha oscillations underlying a rapid cycling of neural excitability in visual areas.
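A ~10 Hz cyclical pattern of the kind described above can be quantified by least-squares fitting a fixed-frequency sinusoid to detection rate as a function of time post-sound. The sketch below uses synthetic data with an assumed amplitude and phase, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic phosphene detection rate over 0-400 ms post-sound,
# sampled every 10 ms; true modulation parameters are assumptions.
t = np.arange(0.0, 0.4, 0.01)
f = 10.0  # alpha-band frequency to test (Hz)
true_rate = 0.5 + 0.15 * np.cos(2 * np.pi * f * t + 1.0)
rate = true_rate + 0.02 * rng.standard_normal(t.size)

# Linear model: rate ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c,
# solved by ordinary least squares.
A = np.column_stack([np.cos(2 * np.pi * f * t),
                     np.sin(2 * np.pi * f * t),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)

# Recover amplitude and phase of the 10 Hz modulation.
amplitude = np.hypot(coef[0], coef[1])
phase = np.arctan2(-coef[1], coef[0])
print(f"fitted 10 Hz amplitude: {amplitude:.3f}, phase: {phase:.2f} rad")
```

Comparing the fitted amplitude against a shuffled-time null distribution (not shown) is one common way to test whether the periodicity is statistically reliable.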

5.
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and to promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and whether this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of the auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

6.
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

7.
Previous studies have demonstrated that the early retinotopic cortex (ERC, i.e., V1/V2/V3) is highly associated with the lateral occipital complex (LOC) during visual perception. However, it remains largely unclear how to evaluate their association in a quantitative way. The present study applied multivariate pattern analysis (MVPA) to quantify the neural activity in the ERC and its association with that of the LOC while participants viewed visual images. To this end, we assessed whether low-level visual features (Gabor features) could predict the neural activity in the ERC and LOC using a voxel-based encoding model (VBEM), and then quantified the association of the neural activity between these regions by using an analogous VBEM. We found that the Gabor features predicted the activity of the ERC remarkably well (e.g., a prediction accuracy of 52.5% for one participant) but not that of the LOC (4.2%). Moreover, the MVPA approach can also be used to establish corresponding relationships between the activity patterns in the LOC and those in the ERC (64.2%). In particular, we found that integrating the Gabor features with LOC visual information could dramatically improve the 'prediction' of ERC activity (88.3%). Overall, the present study provides new evidence that the association of neural activity between the ERC and LOC can be quantified. This approach will help provide further insights into the neural substrates of visual processing.

8.
It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds show abnormal audiovisual interactions. The present study aims to reinforce this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: i) noise, ii) reversed speech, and iii) unaltered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

9.
Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.

10.
Functional magnetic resonance imaging (fMRI) was used to investigate activation of multimodal areas of the cerebral cortex (the supramarginal and angular gyri, precuneus, and middle temporal visual cortex, MT/V5) in response to motion of biologically significant sounds (human footsteps). The subjects listened to approaching or receding footstep sounds for 45 s, and this stimulation was expected to evoke auditory adaptation to biological motion. Listening conditions alternated with a stimulation-free control. To reveal activity in the regions of interest, the periods before and during stimulation were compared. The most stable and extensive activation was detected in the supramarginal and angular gyri and was registered for all footstep sound types: approaching, receding, and stepping in place. Listening to approaching human steps activated the precuneus, with the volume of activation clusters varying considerably between subjects. In the MT/V5 area, activation was revealed in 5 of 21 subjects. The involvement of the tested multimodal cortical areas in analyzing biological motion is discussed.

11.
The human visual system has a remarkable ability to successfully operate under a variety of challenging viewing conditions. For example, our object-recognition capabilities are largely unaffected by low-contrast (e.g., foggy) environments. The basis for this ability appears to be reflected in the neural responses in higher cortical visual areas that have been characterized as being invariant to changes in luminance contrast: neurons in these areas respond nearly equally to low-contrast as compared to high-contrast stimuli. This response pattern is fundamentally different than that observed in earlier visual areas such as primary visual cortex (V1), which is highly dependent on contrast. How this invariance is achieved in higher visual areas is largely unknown. We hypothesized that directed spatial attention is an important prerequisite of the contrast-invariant responses in higher visual areas and tested this with functional MRI (fMRI) while subjects directed their attention either toward or away from contrast-varying shape stimuli. We found that in the lateral occipital complex (LOC), a visual area important for processing shape information, attention changes the form of the contrast response function (CRF). By directing attention away from the shape stimuli, the CRF in the LOC was similar to that measured in V1. We describe a number of mechanisms that could account for this important function of attention.

12.
Wardak C  Olivier E  Duhamel JR 《Neuron》2004,42(3):501-508
Although the parietal cortex has been repeatedly implicated in controlling attention, the nature and importance of this contribution remain unclear. Here we show that inactivating the lateral intraparietal area in monkeys delays the detection of a visual target located in the contralateral visual field. This effect was observed using different visual scene configurations, e.g., with distractors that differ in number or that differ from the target by a conjunction of shape and color or by a single feature. Since eye movements were not allowed during the search tasks, these results argue for an unambiguous role of the parietal cortex in the top-down control of attentional deployment in space.

13.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g., a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e., the looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and by the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a 'simple' random-dot kinematogram showing a starfield and (2) a 'naturalistic' visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e., the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals: it is enhanced when the audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes the processing of biologically significant looming stimuli, especially when they are paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.

14.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

15.
Bugmann G 《Bio Systems》2007,89(1-3):154-159
What fraction of the inputs to a neuron in the primary visual cortex (V1) needs to be active for that neuron to reach its firing threshold? The paper describes a numerical method for estimating the selectivity of visual neurons, in terms of the required fraction of active excitatory inputs, from standard data produced by intracellular electrophysiological recordings. The method also provides an estimate of the relative strength of the feedforward inhibition in a push-pull model of the inputs to V1 simple cells. The method is tested on two V1 cells described in Carandini and Ferster [Carandini, M., Ferster, D., 2000. Membrane potential and firing rate in cat primary visual cortex. J. Neurosci. 20, 470-484]. The results indicate that the maximum strength of feedforward inhibition is around 30% of the maximum strength of feedforward excitation. The two V1 neurons investigated fire if more than around 40% of their excitatory LGN inputs are active.
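The relationship between the two numbers quoted above can be illustrated with back-of-envelope arithmetic: with push-pull feedforward inhibition at 30% of excitation, the net drive from a fraction f of active excitatory inputs scales as f * (1 - 0.3), so the threshold fraction follows directly. The input count and threshold value below are assumptions chosen for illustration, not the paper's fitted parameters.

```python
# Back-of-envelope version of the push-pull estimate: a V1 simple cell
# fires when summed excitation minus feedforward inhibition crosses
# its firing threshold. All numbers are illustrative assumptions.
n_exc = 100   # number of excitatory (LGN) inputs (assumed)
w_exc = 1.0   # strength of one excitatory input (arbitrary units)
g_inh = 0.30  # feedforward inhibition at 30% of excitation (paper's estimate)
theta = 28.0  # firing threshold in the same arbitrary units (assumed)

# With inhibition proportional to excitatory drive, net input for a
# fraction f of active inputs is f * n_exc * w_exc * (1 - g_inh),
# so the threshold fraction is theta divided by the full-drive value.
frac_needed = theta / (n_exc * w_exc * (1.0 - g_inh))
print(f"required fraction of active excitatory inputs: {frac_needed:.0%}")
# With these assumed numbers the result lands at 40%, in line with
# the ~40% figure reported for the two V1 cells.
```

The point of the sketch is only the scaling: stronger feedforward inhibition raises the fraction of excitatory inputs needed to reach threshold.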

16.
Spontaneous network activity constitutes a central theme during the development of neuronal circuitry [1, 2]. Before the onset of vision, retinal neurons generate waves of spontaneous activity that are relayed along the ascending visual pathway [3, 4] and shape activity patterns in these regions [5, 6]. The spatiotemporal nature of retinal waves is required to establish precise functional maps in higher visual areas, and their disruption results in enlarged axonal projection areas (e.g., [7-10]). However, how retinal inputs shape network dynamics in the visual cortex on the cellular level is unknown. Using in vivo two-photon calcium imaging, we identified two independently occurring patterns of network activity in the mouse primary visual cortex (V1) before and at the onset of vision. Acute manipulations of spontaneous retinal activity revealed that one type of network activity largely originated in the retina and was characterized by low synchronicity (L-) events. In addition, we identified a type of high synchronicity (H-) events that required gap junction signaling but were independent of retinal input. Moreover, the patterns differed in wave progression and developmental profile. Our data suggest that different activity patterns have complementary functions during the formation of synaptic circuits in the developing visual cortex.

17.
Previous studies of the motor cortex in behaving animals focused on the relations between the activity of single cells, usually pyramidal tract neurons, and parameters of isometric contraction (e.g., intensity of force) or parameters of movement along one axis (e.g., flexion-extension) of a single joint (e.g., elbow or wrist). However, the most behaviorally meaningful parameter is the trajectory of the hand in extrapersonal space, which is realized by simultaneous motions about two or three joints (e.g., elbow, shoulder, wrist) and the concurrent engagement of several muscles. The spatial parameters of a straight trajectory are its direction and extent. We hypothesized that a major function of the motor cortex, among other possible roles, is the specification and control of the direction of the movement trajectory in space. This reference of motor cortical function to the control of spatial aspects of the trajectory differentiated our approach from the other approaches outlined above. We investigated the directional selectivity of cells in the arm area of the motor cortex by recording their activity while monkeys moved their hands in various directions in space towards visual targets. There were two salient findings of these studies. First, the intensity of the discharge of single cells varies in an orderly fashion with the direction of movement in space, so that the discharge rate is highest with movements in a preferred direction, and decreases progressively with movements made in directions more and more away from the preferred one. Thus single cells are broadly tuned around a preferred direction, which differs among different cells. (ABSTRACT TRUNCATED AT 250 WORDS)

18.
Classical receptive fields (cRF) increase in size from the retina to higher visual centers. The present work shows how temporal properties, in particular lateral spike velocity and spike input correlation, can affect cRF size and position without visual experience. We demonstrate how these properties are related to the spatial range of cortical synchronization if Hebbian learning dominates early development. For this, a largely reduced model of two successive levels of the visual cortex is developed (e.g., areas V1 and V2). It consists of retinotopic networks of spiking neurons with constant spike velocity in lateral connections. Feedforward connections between level 1 and 2 are additive and determine cRF size and shape, while lateral connections within level 1 are modulatory and affect the cortical range of synchronization. Input during development is mimicked by spike trains with spatially homogeneous properties and a confined temporal correlation width. During learning, the homogeneous lateral coupling shrinks to limited coupling structures defining synchronization and related association fields (AF). The size of level-1 synchronization fields determines the lateral coupling range of developing level-1-to-2 connections and, thus, the size of level-2 cRFs, even if the feedforward connections have distance-independent delays. AFs and cRFs increase with spike velocity in the lateral network and temporal correlation width of the input. Our results suggest that AF size of V1 and cRF size of V2 neurons are confined during learning by the temporal width of input correlations and the spike velocity in lateral connections without the need of visual experience. During learning from visual experience, a similar influence of AF size on the cRF size may be operative at successive levels of processing, including other parts of the visual system.

19.
The investigation of distributed coding across multiple neurons in the cortex remains a challenge to this day. Our current understanding of the collective encoding of information and the relevant timescales is still limited. Most results are restricted to disparate timescales, focused either on very fast timescales, e.g., spike synchrony, or on slow ones, e.g., firing rate. Here, we systematically investigated multineuronal activity patterns evolving on different timescales, spanning the whole range from spike synchrony to mean firing rate. Using multi-electrode recordings from cat visual cortex, we show that cortical responses can be described as trajectories in a high-dimensional pattern space. Patterns evolve on a continuum of coexisting timescales that strongly relate to the temporal properties of stimuli. Timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (5-20 ms) play a particularly salient role in encoding a large amount of stimulus-related information. Thus, to faithfully encode the properties of visual stimuli the brain engages multiple neurons in activity patterns evolving on multiple timescales.
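One simple way to look at population activity across timescales, in the spirit of the analysis above, is to bin the same multi-neuron spike trains at several widths and treat each time bin's population vector as a point in pattern space. The spike trains below are synthetic Poisson-like trains with assumed rates, not the cat recordings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spike raster: 8 neurons, 2000 ms at 1 ms resolution,
# independent Poisson-like firing at assumed rates of 5-20 Hz.
n_neurons, duration_ms = 8, 2000
rates_hz = rng.uniform(5, 20, n_neurons)
spikes = rng.random((n_neurons, duration_ms)) < rates_hz[:, None] / 1000.0

# Re-bin the same raster at several timescales and count the distinct
# binary population patterns that appear at each resolution.
for bin_ms in (5, 20, 100):
    n_bins = duration_ms // bin_ms
    # A bin is "active" for a neuron if it contains at least one spike.
    binned = (spikes[:, :n_bins * bin_ms]
              .reshape(n_neurons, n_bins, bin_ms)
              .any(axis=2))
    patterns = {tuple(col) for col in binned.T}
    print(f"{bin_ms:>3} ms bins: {len(patterns)} distinct population patterns")
```

Each binned column is one point of the population trajectory at that timescale; relating pattern statistics to the stimulus at each bin width is then a decoding question.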

20.
Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences could be generated in V1 either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated from the difference between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulation resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.
