Similar articles
 Found 20 similar articles (search time: 109 ms)
1.
Over three months of intensive training with a tactile stimulation device, 18 blind and 10 blindfolded seeing subjects improved in their ability to identify geometric figures by touch. Seven blind subjects spontaneously reported 'visual qualia', the subjective sensation of seeing flashes of light congruent with tactile stimuli. In the latter subjects tactile stimulation evoked activation of occipital cortex on electroencephalography (EEG). None of the blind subjects who failed to experience visual qualia, despite identical tactile stimulation training, showed EEG recruitment of occipital cortex. None of the blindfolded seeing subjects reported visual-like sensations during tactile stimulation. These findings support the notion that the conscious experience of seeing is linked to the activation of occipital brain regions in people with blindness. Moreover, the findings indicate that provision of visual information can be achieved through non-visual sensory modalities, which may help to minimize the disability of blind individuals, affording them some degree of object recognition and navigation aid.

2.
The study of blind individuals provides insight into the brain re-organization and behavioral compensations that occur following sensory deprivation. While behavioral studies have yielded conflicting results in terms of performance levels within the remaining senses, deafferentation of visual cortical areas through peripheral blindness results in clear neuroplastic changes. Most striking is the activation of occipital cortex in response to auditory and tactile stimulation. Indeed, parts of the "unimodal" visual cortex are recruited by other sensory modalities to process sensory information in a functionally relevant manner. In addition, a larger area of the sensorimotor cortex is devoted to the representation of the reading finger in blind Braille readers. The "visual" function of the deafferented occipital cortex is also altered, where transcranial magnetic stimulation-induced phosphenes can be elicited in only 20% of blind subjects. The neural mechanisms underlying these changes remain elusive, but recent data showing rapid cross-modal plasticity in blindfolded, sighted subjects argue against the establishment of new connections to explain cross-modal interactions in the blind. Rather, latent pathways that participate in multisensory percepts in sighted subjects might be unmasked and may be potentiated in the event of complete loss of visual input. These issues have important implications for the development of visual prostheses aimed at restoring some degree of vision in the blind.

3.
Like most sensory modalities, the visual system needs to deal with very fast changes in the environment. Instead of processing all sensory stimuli, the brain constructs a perceptual experience by combining selected sensory input with ongoing internal activity. Thus, the study of visual perception needs to consider not only the physical properties of stimuli, but also the brain's ongoing dynamical states onto which these perturbations are imposed. At least three different models account for this internal dynamics. One model is based on cardinal cells, where the activity of a few cells by itself constitutes the neuronal correlate of perception; a second model is based on population coding, which states that the neuronal correlate of perception requires distributed activity throughout many areas of the brain. A third proposition, known as the temporal correlation hypothesis, states that the distributed neuronal populations that correlate with perception are also defined by synchronization of their activity on a millisecond time scale. This would serve to encode contextual information by defining relations between the features of visual objects. If temporal properties of neural activity are important to establish the neural mechanisms of perception, then the study of appropriate dynamical stimuli should be instrumental in determining how these systems operate. The use of natural stimuli and natural behaviors such as free viewing, which features fast changes of internal brain states as seen by motor markers, is proposed as a new experimental paradigm to study visual perception.

4.
Sensory working memory consists of the short-term storage of sensory stimuli to guide behaviour. There is increasing evidence that elemental sensory dimensions - such as object motion in the visual system or the frequency of a sound in the auditory system - are stored by segregated feature-selective systems that include not only the prefrontal and parietal cortex, but also areas of sensory cortex that carry out relatively early stages of processing. These circuits seem to have a dual function: precise sensory encoding and short-term storage of this information. New results provide insights into how activity in these circuits represents the remembered sensory stimuli.

5.
We studied the influence of weightlessness on bilateral symmetry detection during prolonged space flight. Supposing that weightlessness may affect visual information processing by the right and left hemispheres in different ways, we studied this phenomenon with regard to the part of the visual field where a stimulus was presented (the sight fixation center or the left/right half of this field). We used two types of stimuli, i.e., closed figures (polygons) and distributed figures formed by dots. There was a distinct difference between the central and noncentral presentation of stimuli under terrestrial conditions. When a stimulus was presented noncentrally (on the left or right), a manifest dominance of the horizontal axis was observed. However, there was no substantial difference between stimulation of the left and right parts of the visual field. This contradicts the hypothesis of hemispheric specialization of the brain in symmetry detection. When stimuli were presented eccentrically, weightlessness did not notably influence information processing. When they were presented centrally, the predominance of the vertical axis in closed figures tended to weaken under the impact of weightlessness. However, this predominance strengthened when multicomponent figures were presented in space. The different influences of weightlessness on perceiving symmetry of stimuli of different types show that symmetry may be detected at various levels, with different degrees of reliance on nonvisual sensory information.

6.
Our understanding of multisensory integration has advanced because of recent functional neuroimaging studies of three areas in human lateral occipito-temporal cortex: superior temporal sulcus, area LO and area MT (V5). Superior temporal sulcus is activated strongly in response to meaningful auditory and visual stimuli, but responses to tactile stimuli have not been well studied. Area LO shows strong activation in response to both visual and tactile shape information, but not to auditory representations of objects. Area MT, an important region for processing visual motion, also shows weak activation in response to tactile motion, and a signal that drops below resting baseline in response to auditory motion. Within superior temporal sulcus, a patchy organization of regions is activated in response to auditory, visual and multisensory stimuli. This organization appears similar to that observed in polysensory areas in macaque superior temporal sulcus, suggesting that it is an anatomical substrate for multisensory integration. A patchy organization might also be a neural mechanism for integrating disparate representations within individual sensory modalities, such as representations of visual form and visual motion.

7.
Sensory information from different modalities is processed in parallel, and then integrated in associative brain areas to improve object identification and the interpretation of sensory experiences. The Superior Colliculus (SC) is a midbrain structure that plays a critical role in integrating visual, auditory, and somatosensory input to assess saliency and promote action. Although the response properties of the individual SC neurons to visuoauditory stimuli have been characterized, little is known about the spatial and temporal dynamics of the integration at the population level. Here we recorded the response properties of SC neurons to spatially restricted visual and auditory stimuli using large-scale electrophysiology. We then created a general, population-level model that explains the spatial, temporal, and intensity requirements of stimuli needed for sensory integration. We found that the mouse SC contains topographically organized visual and auditory neurons that exhibit nonlinear multisensory integration. We show that nonlinear integration depends on properties of auditory but not visual stimuli. We also find that a heuristically derived nonlinear modulation function reveals conditions required for sensory integration that are consistent with previously proposed models of sensory integration such as spatial matching and the principle of inverse effectiveness.
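The principle of inverse effectiveness mentioned above can be illustrated with the classic multisensory enhancement index (a generic sketch, not the authors' population model; all firing-rate numbers below are hypothetical):

```python
# Illustrative sketch: the classic multisensory enhancement index and the
# principle of inverse effectiveness. Firing rates are hypothetical.

def enhancement_index(v, a, multi):
    """Percent enhancement of the multisensory response over the best
    unisensory response: ME = 100 * (CM - max(V, A)) / max(V, A)."""
    best_uni = max(v, a)
    return 100.0 * (multi - best_uni) / best_uni

# Inverse effectiveness: weaker unisensory responses yield proportionally
# larger multisensory enhancement than strong ones.
weak = enhancement_index(v=2.0, a=3.0, multi=9.0)       # weak stimuli
strong = enhancement_index(v=20.0, a=30.0, multi=36.0)  # strong stimuli
assert weak > strong
print(f"weak-stimulus enhancement: {weak:.0f}%")     # 200%
print(f"strong-stimulus enhancement: {strong:.0f}%")  # 20%
```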

8.
The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent from a specific sensory modality or sensory experience. In the present study, we wished to determine to what extent this distributed and ‘more abstract’ representation of action is truly supramodal, i.e. shares a common coding across sensory modalities. To this aim, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-made actions. Multivoxel pattern analysis (MVPA)-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as ‘action’ the pattern of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with an MVPA-based classifier in both sighted and blind individuals, independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
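The cross-modal classification logic described above can be sketched with a toy nearest-centroid decoder (an illustration only; the study used MVPA classifiers on real fMRI data, whereas the synthetic "voxel" patterns, shared-signal construction, and all parameters here are assumptions):

```python
import numpy as np

# Toy sketch of cross-modal decoding: train a minimal nearest-centroid
# classifier on multivoxel patterns from one sensory condition and test it
# on patterns from another. The synthetic patterns below stand in for fMRI
# data and share a common action/non-action signal across modalities.

rng = np.random.default_rng(42)
n_voxels = 50
signal = rng.normal(0, 1, n_voxels)  # hypothetical supramodal "action" pattern

def make_patterns(label, n=20, noise=0.8):
    """Generate noisy trial patterns around the class-specific template."""
    base = signal if label == "action" else -signal
    return base + rng.normal(0, noise, (n, n_voxels))

# Train on "visual" runs: one centroid per class.
train = {lab: make_patterns(lab).mean(axis=0) for lab in ("action", "non-action")}

def classify(pattern):
    """Assign the label of the nearest training centroid."""
    return min(train, key=lambda lab: np.linalg.norm(pattern - train[lab]))

# Test on "auditory" runs: the shared signal supports cross-modal transfer.
test_auditory = make_patterns("action", n=10)
accuracy = np.mean([classify(p) == "action" for p in test_auditory])
assert accuracy > 0.5  # above-chance cross-modal classification
```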

9.
The occipital cortex (OC) of early-blind humans is activated during various nonvisual perceptual and cognitive tasks, but little is known about its modular organization. Using functional MRI we tested whether processing of auditory versus tactile and spatial versus nonspatial information was dissociated in the OC of the early blind. No modality-specific OC activation was observed. However, the right middle occipital gyrus (MOG) showed a preference for spatial over nonspatial processing of both auditory and tactile stimuli. Furthermore, MOG activity was correlated with accuracy of individual sound localization performance. In sighted controls, most of extrastriate OC, including the MOG, was deactivated during auditory and tactile conditions, but the right MOG was more activated during spatial than nonspatial visual tasks. Thus, although the sensory modalities driving the neurons in the reorganized OC of blind individuals are altered, the functional specialization of extrastriate cortex is retained regardless of visual experience.

10.
Mismatch negativity of ERP in cross-modal attention
Event-related potentials were measured in 12 healthy young subjects aged 19-22 using the "cross-modal and delayed response" paradigm, which improves the purity of the unattended condition and avoids the effect of the task target on the deviant components of the ERP. The experiment included two conditions: (i) attend visual modality, ignore auditory modality; (ii) attend auditory modality, ignore visual modality. The stimuli under the two conditions were the same. The difference wave was obtained by subtracting the ERPs of the standard stimuli from those of the deviant stimuli. The present results showed that mismatch negativity (MMN), N2b and P3 components can be produced in the auditory and visual modalities under the attention condition. However, only MMN was observed in the two modalities under the inattention condition. Auditory and visual MMN have some features in common: their largest MMN wave peaks were distributed respectively over their primary sensory projection areas of the scalp under the attention condition, but over front
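The difference-wave computation described above amounts to averaging epochs per stimulus type, then subtracting the standard ERP from the deviant ERP. A minimal sketch on synthetic data (the sampling layout, deflection window, and all numbers are hypothetical):

```python
import numpy as np

# Sketch of a difference-wave (MMN-style) computation on synthetic epochs:
# average standard and deviant epochs separately, then subtract.

rng = np.random.default_rng(0)
n_epochs, n_samples = 100, 300  # e.g. 100 trials, 300 samples per epoch

standard = rng.normal(0.0, 1.0, (n_epochs, n_samples))
deviant = rng.normal(0.0, 1.0, (n_epochs, n_samples))
deviant[:, 120:180] -= 2.0      # inject a negative deflection (the "MMN")

erp_standard = standard.mean(axis=0)  # ERP = average over epochs
erp_deviant = deviant.mean(axis=0)
difference_wave = erp_deviant - erp_standard

peak = difference_wave.argmin()       # sample index of the negativity
assert 120 <= peak < 180              # peak falls in the injected window
```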

11.
Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
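The two factors named above, summed similarity and inter-item homogeneity, can be sketched in the spirit of exemplar models of recognition memory (the exponential similarity kernel, one-dimensional stimulus features, and parameter values are assumptions for illustration, not the authors' fitted model):

```python
import math

# Sketch of a summed-similarity recognition model. Each stimulus is reduced
# to one feature value (e.g. grating orientation or ripple velocity);
# similarity decays exponentially with feature distance.

def similarity(x, y, tau=1.0):
    return math.exp(-abs(x - y) / tau)

def summed_similarity(probe, study_list, tau=1.0):
    """Evidence that the probe is 'old': total similarity to list items."""
    return sum(similarity(probe, item, tau) for item in study_list)

def inter_item_homogeneity(study_list, tau=1.0):
    """Mean pairwise similarity between the remembered items."""
    pairs = [(a, b) for i, a in enumerate(study_list)
             for b in study_list[i + 1:]]
    return sum(similarity(a, b, tau) for a, b in pairs) / len(pairs)

study = [10.0, 12.0, 30.0]
# A probe near the studied items gathers more "old" evidence than a far one.
assert summed_similarity(11.0, study) > summed_similarity(50.0, study)
```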

12.
Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception whenever one perceptual interpretation is dominant, and the instability of perception that causes perceptual dominance to alternate between perceptual interpretations upon extended viewing. This review summarizes several ways in which contextual information can help the brain resolve visual ambiguities and construct temporarily stable perceptual experiences. Temporal context through prior stimulation or internal brain states brought about by feedback from higher cortical processing levels may alter the response characteristics of specific neurons involved in rivalry resolution. Furthermore, spatial or crossmodal context may strengthen the neuronal representation of one of the possible perceptual interpretations and consequently bias the rivalry process towards it. We suggest that contextual influences on perceptual choices with ambiguous visual stimuli can be highly informative about the neuronal mechanisms of context-driven inference in the general processes of perceptual decision-making.

13.
The attentional modulation of sensory information processing in the visual system is the result of top-down influences, which can cause a multiplicative modulation of the firing rate of sensory neurons in extrastriate visual cortex, an effect reminiscent of the bottom-up effect of changes in stimulus contrast. This similarity could simply reflect the multiplicative nature of both effects. But here we show that in direction-selective neurons in monkey visual cortical area MT, stimulus and attentional effects share a nonlinearity. These neurons show higher response gain for both contrast and attentional changes for intermediate-contrast stimuli, and smaller gain for low- and high-contrast stimuli. This finding suggests a close relationship between the neural encoding of stimulus contrast and the modulating effect of the behavioral relevance of stimuli.
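The gain pattern described above is commonly captured by a Naka-Rushton contrast-response function, whose slope peaks at intermediate contrast. A sketch under assumed parameter values (my illustration, not the paper's fit):

```python
# Sketch: a Naka-Rushton contrast-response function. Its slope (the
# response gain) peaks at intermediate contrast and is small at low and
# high contrast. Parameter values (r_max, c50, n) are hypothetical.

def naka_rushton(c, r_max=100.0, c50=0.3, n=2.0):
    """Firing rate as a saturating function of contrast c in [0, 1]."""
    return r_max * c**n / (c**n + c50**n)

def gain(c, dc=1e-4):
    """Numerical derivative: change in response per unit contrast."""
    return (naka_rushton(c + dc) - naka_rushton(c - dc)) / (2 * dc)

low, mid, high = gain(0.05), gain(0.3), gain(0.9)
assert mid > low and mid > high  # gain is largest at intermediate contrast
```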

14.
Cross-modal plasticity refers to the recruitment of cortical regions involved in the processing of one modality (e.g. vision) for processing other modalities (e.g. audition). The principles determining how and where cross-modal plasticity occurs remain poorly understood. Here, we investigate these principles by testing responses to auditory motion in visual motion area MT+ of congenitally blind and sighted individuals. Replicating previous reports, we find that MT+ as a whole shows a strong and selective response to auditory motion in congenitally blind but not sighted individuals, suggesting that the emergence of this univariate response depends on experience. Importantly, however, multivoxel pattern analyses showed that MT+ contained information about different auditory motion conditions in both blind and sighted individuals. These results were specific to MT+ and not found in early visual cortex. Basic sensitivity to auditory motion in MT+ is thus experience-independent, which may be a basis for the region's strong cross-modal recruitment in congenital blindness.

15.
Change blindness--our inability to detect large changes in natural scenes when saccades, blinks and other transients interrupt visual input--seems to contradict psychophysical evidence for our exquisite sensitivity to contrast changes. Can the type of effects described as 'change blindness' be observed with simple, multi-element stimuli, amenable to psychophysical analysis? Such stimuli, composed of five mixed contrast elements, elicited a striking increase in contrast increment thresholds compared to those for an isolated element. Cue presentation prior to the stimulus substantially reduced thresholds, as for change blindness with natural scenes. On one hand, explanations for change blindness based on abstract and sketchy representations in short-term visual memory seem inappropriate for this low-level image property of contrast where there is ample evidence for exquisite performance on memory tasks. On the other hand, the highly increased thresholds for mixed contrast elements, and the decreased thresholds when a cue is present, argue against any simple early attentional or sensory explanation for change blindness. Thus, psychophysical results for very simple patterns cannot straightforwardly predict results even for the slightly more complicated patterns studied here.

16.
Visual fusion is the process in which differing but compatible binocular information is transformed into a unified percept. Even though this is at the basis of binocular vision, the underlying neural processes are, as yet, poorly understood. In our study we therefore aimed to investigate neural correlates of visual fusion. To this end, we presented binocularly compatible, fusible (BF), and incompatible, rivaling (BR) stimuli, as well as an intermediate stimulus type containing both binocularly fusible and monocular, incompatible elements (BFR). Comparing BFR stimuli with BF and BR stimuli, respectively, we were able to disentangle brain responses associated with either visual fusion or rivalry. By means of functional magnetic resonance imaging, we measured brain responses to these stimulus classes in the visual cortex, and investigated them in detail at various retinal eccentricities. Compared with BF stimuli, the response to BFR stimuli was elevated in visual cortical areas V1 and V2, but not in V3 and V4 – implying that the response to monocular stimulus features decreased from V1 to V4. Compared to BR stimuli, the response to BFR stimuli decreased with increasing eccentricity, specifically within V3 and V4. Taken together, it seems that although the processing of exclusively monocular information decreases from V1 to V4, the processing of binocularly fused information increases from earlier to later visual areas. Our findings suggest the presence of an inhibitory neural mechanism which, depending on the presence of fusion, acts differently on the processing of monocular information.

17.
Hasson U, Skipper JI, Nusbaum HC, Small SL. Neuron 2007, 56(6):1116-1126
Is there a neural representation of speech that transcends its sensory properties? Using fMRI, we investigated whether there are brain areas where neural activity during observation of sublexical audiovisual input corresponds to a listener's speech percept (what is "heard") independent of the sensory properties of the input. A target audiovisual stimulus was preceded by stimuli that (1) shared the target's auditory features (auditory overlap), (2) shared the target's visual features (visual overlap), or (3) shared neither the target's auditory nor visual features but were perceived as the target (perceptual overlap). In two left-hemisphere regions (pars opercularis, planum polare), the target evoked less activity when it was preceded by the perceptually overlapping stimulus than when preceded by stimuli that shared one of its sensory components. This pattern of neural facilitation indicates that these regions code sublexical speech at an abstract level corresponding to that of the speech percept.

18.
Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were given a 2-, 3- or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

19.

Background

A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices.

Methods

Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxygenated hemoglobin (oxy-Hb) concentration was measured in the respective sensory cortex using near-infrared spectroscopy.

Results

Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased, when the neck was in a flexed position. Oxy-Hb concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position; the left visual cortex receives the input from the right visual hemi-field. In addition, oxy-Hb concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position.

Conclusions

Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modality-specific sensory projections to the cerebral cortex and inter-hemispheric connections.

20.
The data concerning the cephalic phase of insulin secretion (CPIS) in human obesity are controversial. We investigated the effect of a variety of sensory challenges on CPIS in 17 non-diabetic obese patients (four males, 13 females, mean age 41.1 years, mean BMI 38.7). Water, saccharin, and lemon juice were used as oral stimuli, and a complete meal was simply presented as visual and olfactory stimulation. Twelve healthy normal-weight subjects (four men, eight women, mean age 39.9, mean BMI 22.5) also underwent oral stimulation as controls, and the patients who underwent the sight and smell stimulations were also tested for pancreatic polypeptide (PP) changes in order to verify the occurrence of a truly cephalic reflex during the test. Insulin levels were measured before and after each stimulation (every min for the first 5 min, and then after 10, 20, and 30 min). None of the stimuli (saccharin, lemon juice or water retained in the mouth for 2 min and then spat out; the combined and separate sight and smell of a meal for 2 min) led to a significant increase in insulin in the obese patients (except in the case of one woman after oral stimulation). The oral stimuli led to a variable CPIS in one female and three male controls. Despite the absence of CPIS, the five obese patients undergoing all three sensory stimulations related to the meal (combined sight and smell, sight alone and smell alone) showed an early and significant increase in plasma PP concentrations within the first 3 min; this was more pronounced after the combined than after the separate exposure. Although only preliminary, these results underline the variability and substantial lack of CPIS in obese patients, thus suggesting that it can be considered a relatively rare and irrelevant event even in the presence of a true brain-mediated reflex revealed by the rapid and consistent increase in PP found in our experiments.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号