Similar Articles
 20 similar articles found (search time: 46 ms)
1.
This study demonstrates the ability of blind (previously sighted) and blindfolded (sighted) subjects to reconstruct and identify a number of visual targets transformed into equivalent musical representations. Visual images are deconstructed through a process that selectively segregates different features of the image into separate packages. These are then encoded in sound and presented as a polyphonic melody resembling a Baroque fugue with many voices, allowing subjects to analyse the component voices selectively in combination, or separately in sequence, and thereby to bind the different features of the object into a single recognizable mental percept. The visual targets used in this study included a variety of geometrical figures, simple high-contrast line drawings of man-made objects, and natural and urban scenes, all translated into sound and presented to the subject in polyphonic musical form.

2.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics, and machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the sufficiently rich representation necessary to account for perceptual judgments of timbre by human listeners, as well as for recognition of musical instruments.
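The idea that temporal envelope cues support instrument identification across pitch can be illustrated with a deliberately simplified sketch. The harmonic `tone` generator, the envelope features, and the distance comparison below are illustrative stand-ins invented for this demo, not the STRF front end or the nonlinear classifier of the study:

```python
import numpy as np

def tone(f0, attack, decay, sr=8000, dur=0.5):
    """Harmonic tone (5 partials) shaped by an exponential attack/decay envelope."""
    t = np.arange(int(sr * dur)) / sr
    env = (1 - np.exp(-t / attack)) * np.exp(-t / decay)
    return env * sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))

def envelope_features(x, frame=256, n_bins=12):
    """Normalized coarse temporal envelope: frame energies resampled to a fixed
    length -- a crude stand-in for the temporal half of a spectro-temporal
    representation."""
    n = len(x) // frame
    env = (x[:n * frame].reshape(n, frame) ** 2).mean(axis=1)
    env = env[np.linspace(0, n - 1, n_bins).astype(int)]
    return env / (env.max() + 1e-12)
```

Because the envelope is independent of the carrier frequency, features learned from tones at one pitch match test tones of the same "instrument" at another pitch, echoing the pitch-invariant classification reported above.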

3.
In a stereoscopic system both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ("disparities") which are spatially evaluated in order to retrieve depth information. We show that two related algorithmic versions can be designed which recover disparity. Both approaches are based on the comparison of filter outputs from filtering the left and the right image: the difference of the phase components between left and right filter responses encodes the disparity. One approach uses regular Gabor filters and computes the spatial phase differences in a conventional way, as first described by Sanger in 1988. Novel to this approach, however, is that we formulate it in a way which is fully compatible with neural operations in the visual cortex. The second approach exploits the apparently paradoxical similarity between the analysis of visual disparities and the determination of the azimuth of a sound source. Animals determine the direction of a sound from the temporal delay between the left and right ear signals. Similarly, in our second approach we transpose the spatially defined problem of disparity analysis into the temporal domain and utilize two resonators, implemented in the form of causal (electronic) filters, to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits video real-time analysis of stereo image sequences (see movies at http://www.neurop.ruhr-uni-bochum.de/Real-Time-Stereo), and an FPGA-based PC board has been developed which performs stereo analysis at full PAL resolution in video real time. An ASIC chip will be available in March 2000.
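The first, Sanger-style approach can be sketched in a few lines: filter both views with a complex Gabor filter and read disparity from the local phase difference. This is a minimal 1-D illustration, not the cortex-compatible or real-time electronic implementation of the abstract; `freq` and `sigma` are arbitrary illustrative parameters:

```python
import numpy as np

def gabor_phase(signal, freq, sigma):
    """Local phase from convolution with a complex Gabor filter."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    gabor = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.angle(np.convolve(signal, gabor, mode="same"))

def disparity_from_phase(left, right, freq=0.1, sigma=8):
    """Disparity estimate: left/right phase difference divided by the filter's
    spatial frequency, with the difference wrapped to (-pi, pi]."""
    dphi = gabor_phase(left, freq, sigma) - gabor_phase(right, freq, sigma)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
    return dphi / (2 * np.pi * freq)
```

Shifting a band-limited test signal by a couple of pixels and taking the median estimate away from the borders recovers the shift, as long as the disparity stays below half the filter's wavelength.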

4.
Can video images imitate real stimuli in animal behaviour experiments?
The use of video images in place of natural stimuli in animal behaviour experiments is reviewed. Unlike most other artificial means of stimulus presentation, video stimuli can depict complex moving objects such as other animals, preserving the temporal and spatial patterns of movement precisely, as well as colour and sounds, for repeated playback. Computer editing can give flexibility and control over all elements of the stimulus. A variety of limitations of video image presentation are also considered. Televisions and video monitors are designed with human vision in mind, and some non-human animals that differ in aspects of visual processing such as colour vision, critical flicker-fusion threshold, depth perception and visual acuity may perceive video images differently from humans. The failure of video stimuli to interact with subjects can be a drawback for some studies. For video to be useful, it is important to confirm that the subject animal responds to the image in a way comparable to the real stimulus, and the criteria used to assess this are discussed. Finally, the contribution made by video studies to date to the understanding of animal visual responses is considered, and recommendations for future uses of video are made.

5.
Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of the representation of spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. that abstract codes represent spatial relations, predicting no activation differences between blind and sighted. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations that does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is therefore the amount of functional reorganization during language processing in our blind participants, so the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial content in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations.

6.
Colour and greyscale (black and white) pictures look different to us, but it is not clear whether the difference in appearance is a consequence of the way our visual system uses colour signals or a by-product of our experience. In principle, colour images are qualitatively different from greyscale images because they make it possible to use different processing strategies. Colour signals provide important cues for segmenting the image into areas that represent different objects and for linking together areas that represent the same object. If this property of colour signals is exploited in visual processing we would expect colour stimuli to look different, as a class, from greyscale stimuli. We would also expect that adding colour signals to greyscale signals should change the way that those signals are processed. We have investigated these questions in behavioural and in physiological experiments. We find that male marmosets (all of which are dichromats) rapidly learn to distinguish between colour and greyscale copies of the same images. The discrimination transfers to new image pairs, to new colours and to image pairs in which the colour and greyscale images are spatially different. We find that, in a proportion of neurons recorded in the marmoset visual cortex, colour-shifts in opposite directions produce similar enhancements of the response to a luminance stimulus. We conclude that colour is, both behaviourally and physiologically, a distinctive property of images.

7.
Compared to most other forms of visually-guided motor activity, drawing is unique in that it "leaves a trail behind" in the form of the emanating image. We took advantage of an MRI-compatible drawing tablet in order to examine both the motor production and perceptual emanation of images. Subjects participated in a series of mark-making tasks in which they were cued to draw geometric patterns on the tablet's surface. The critical comparison was between when visual feedback was displayed (image generation) versus when it was not (no image generation). This contrast revealed an occipito-parietal stream involved in motion-based perception of the emerging image, including areas V5/MT+, LO, V3A, and the posterior part of the intraparietal sulcus. Interestingly, when subjects passively viewed animations of visual patterns emerging on the projected surface, all of the sensorimotor network involved in drawing was strongly activated, with the exception of the primary motor cortex. These results argue that the origin of the human capacity to draw and write involves not only motor skills for tool use but also motor-sensory links between drawing movements and the visual images that emanate from them in real time.

8.
By learning to discriminate among visual stimuli, human observers can become experts at specific visual tasks. The same is true for Rhesus monkeys, the major animal model of human visual perception. Here, we systematically compare how humans and monkeys solve a simple visual task. We trained humans and monkeys to discriminate between the members of small natural-image sets. We employed the "Bubbles" procedure to determine the stimulus features used by the observers. On average, monkeys used image features drawn from a diagnostic region covering about 7% +/- 2% of the images. Humans were able to use image features drawn from a much larger diagnostic region covering on average 51% +/- 4% of the images. For both species, however, only about 2% of the image needed to be visible within the diagnostic region on any individual trial for correct performance. We characterize the low-level image properties of the diagnostic regions and discuss individual differences among the monkeys. Our results reveal that monkeys base their behavior on confined image patches and essentially ignore a large fraction of the visual input, whereas humans are able to gather visual information with greater flexibility from large image regions.
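The "Bubbles" procedure can be sketched as follows: on each trial the image is revealed only through random Gaussian apertures, and the revealing masks of correct trials are accumulated (incorrect trials subtract) to estimate the diagnostic region. The simulated observer below, which answers correctly only when one "diagnostic" pixel is revealed, and all sizes are illustrative assumptions, not the parameters of the study:

```python
import numpy as np

def bubbles_diagnostic_map(shape, n_trials, respond, n_bubbles=10, sigma=3.0, seed=0):
    """Sum the revealing masks of correct trials and subtract those of
    incorrect trials; diagnostic regions accumulate positive weight."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    dmap = np.zeros(shape)
    for _ in range(n_trials):
        mask = np.zeros(shape)
        for y, x in zip(rng.integers(0, shape[0], n_bubbles),
                        rng.integers(0, shape[1], n_bubbles)):
            mask += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        mask = np.minimum(mask, 1.0)  # revealed transparency cannot exceed 1
        dmap += mask if respond(mask) else -mask
    return dmap
```

Running a few hundred trials with an observer that needs pixel (8, 24) visible makes that location stand out positively in the resulting map.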

9.
Many structural and functional brain alterations accompany blindness, with substantial individual variation in these effects. In normally sighted people, there is correlated individual variation in some visual pathway structures. Here we examined if the changes in brain anatomy produced by blindness alter the patterns of anatomical variation found in the sighted. We derived eight measures of central visual pathway anatomy from a structural image of the brain from 59 sighted and 53 blind people. These measures showed highly significant differences in mean size between the sighted and blind cohorts. When we examined the measurements across individuals within each group we found three clusters of correlated variation, with V1 surface area and pericalcarine volume linked, and independent of the thickness of V1 cortex. These two clusters were in turn relatively independent of the volumes of the optic chiasm and lateral geniculate nucleus. This same pattern of variation in visual pathway anatomy was found in the sighted and the blind. Anatomical changes within these clusters were graded by the timing of onset of blindness, with those subjects with a post-natal onset of blindness having alterations in brain anatomy that were intermediate to those seen in the sighted and congenitally blind. Many of the blind and sighted subjects also contributed functional MRI measures of cross-modal responses within visual cortex, and a diffusion tensor imaging measure of fractional anisotropy within the optic radiations and the splenium of the corpus callosum. We again found group differences between the blind and sighted in these measures. The previously identified clusters of anatomical variation were also found to be differentially related to these additional measures: across subjects, V1 cortical thickness was related to cross-modal activation, and the volume of the optic chiasm and lateral geniculate was related to fractional anisotropy in the visual pathway. 
Our findings show that several of the structural and functional effects of blindness may be reduced to a smaller set of dimensions. It also seems that the changes in the brain that accompany blindness are on a continuum with normal variation found in the sighted.

10.
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have addressed multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high-resolution fMRI and MVPA methods. More importantly, we decoded multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications, MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and to classify unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but also across subjects. However, across-subject variation affects classification performance more than within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories.
Further, we show that across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and, in turn, the proportion of signal relevant to sound categorization increases.
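The benefit of averaging over items can be reproduced in a toy simulation. A nearest-centroid classifier stands in for the SVM, and the Gaussian "voxel" patterns, item counts, and noise level are arbitrary illustrative choices, not properties of the study's data:

```python
import numpy as np

def nearest_centroid_accuracy(train_X, train_y, test_X, test_y):
    """Label each test pattern with the class of the nearest training centroid."""
    classes = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    d2 = ((test_X[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return (classes[np.argmin(d2, axis=1)] == test_y).mean()
```

Simulating three sound categories with heavy item-to-item noise, classifying item-averaged test patterns is at least as accurate as classifying single items, because averaging shrinks the item noise by roughly the square root of the number of items.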

11.
The middle temporal complex (MT/MST) is a brain region specialized for the perception of motion in the visual modality. However, this specialization is modified by visual experience: after long-standing blindness, MT/MST responds to sound. Recent evidence also suggests that the auditory response of MT/MST is selective for motion. The developmental time course of this plasticity is not known. To test for a sensitive period in MT/MST development, we used fMRI to compare MT/MST function in congenitally blind, late-blind, and sighted adults. MT/MST responded to sound in congenitally blind adults, but not in late-blind or sighted adults, and not in an individual who lost his vision between the ages of 2 and 3 years. All blind adults had reduced functional connectivity between MT/MST and other visual regions. Functional connectivity was increased between MT/MST and lateral prefrontal areas in congenitally blind relative to sighted and late-blind adults. These data suggest that early blindness affects the function of feedback projections from prefrontal cortex to MT/MST. We conclude that there is a sensitive period for visual specialization in MT/MST. During typical development, early visual experience either maintains or creates a vision-dominated response. Once established, this response profile is not altered by long-standing blindness.

12.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that AOAs were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

13.
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, sizes, spacings, and resolutions. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based "pollen spotting" model to segment pollen grains from the slide background. We next tested ARLO's ability to reconstruct black-to-white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems.
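The idea of simple pixel-grid features taken at several resolutions can be illustrated with a minimal sketch. The multi-resolution block-mean features and the synthetic two-class "pollen" images in the usage below are invented for the demo; this is not the ARLO implementation:

```python
import numpy as np

def grid_features(img, cell):
    """Mean intensity over a grid of cell x cell blocks."""
    h, w = img.shape
    img = img[:h - h % cell, :w - w % cell]
    return img.reshape(img.shape[0] // cell, cell, -1, cell).mean(axis=(1, 3)).ravel()

def multires_features(img, cells=(4, 8)):
    """Concatenate grid features at several resolutions, standing in for
    ARLO's search over feature sizes and spacings."""
    return np.concatenate([grid_features(img, c) for c in cells])
```

With two synthetic classes whose texture sits in different halves of the image, class centroids in this feature space separate new examples easily, which is the sense in which very simple representations can carry a hard classification.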

14.
15.
Visual experience plays an important role in the development of the visual cortex; however, recent functional imaging studies have shown that functional organization is preserved in several higher-tier visual areas in congenitally blind subjects, indicating that the maturation of visual areas depends unequally on visual experience. In this study, we aimed to validate this hypothesis using a multimodality MRI approach. We found that increased cortical thickness in the congenitally blind was present in the early visual areas and absent in the higher-tier ones, suggesting that the structural development of the visual cortex depends hierarchically on visual experience. In congenitally blind subjects, the decreased resting-state functional connectivity with the primary somatosensory cortex was more prominent in the early visual areas than in the higher-tier ones, and more pronounced in the ventral stream than in the dorsal one, suggesting that the development of the functional organization of the visual cortex also depends differentially on visual experience. Moreover, congenitally blind subjects showed normal or increased functional connectivity between ipsilateral higher-tier and early visual areas, suggesting an indirect corticocortical pathway through which somatosensory information can reach the early visual areas. These findings support our hypothesis that the development of visual areas depends differentially on visual experience.

16.
Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.

17.
The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent of a specific sensory modality or sensory experience. In the present study, we wished to determine to what extent this distributed and 'more abstract' representation of action is truly supramodal, i.e. shares a common coding across sensory modalities. To this aim, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-made actions. Multivoxel pattern analysis-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as 'action' the pattern of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with a multivoxel pattern analysis-based classifier in both sighted and blind individuals, independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.

18.
The study of blind individuals provides insight into the brain reorganization and behavioral compensations that occur following sensory deprivation. While behavioral studies have yielded conflicting results in terms of performance levels within the remaining senses, deafferentation of visual cortical areas through peripheral blindness results in clear neuroplastic changes. Most striking is the activation of occipital cortex in response to auditory and tactile stimulation. Indeed, parts of the "unimodal" visual cortex are recruited by other sensory modalities to process sensory information in a functionally relevant manner. In addition, a larger area of the sensorimotor cortex is devoted to the representation of the reading finger in blind Braille readers. The "visual" function of the deafferented occipital cortex is also altered, where transcranial magnetic stimulation-induced phosphenes can be elicited in only 20% of blind subjects. The neural mechanisms underlying these changes remain elusive, but recent data showing rapid cross-modal plasticity in blindfolded, sighted subjects argue against the establishment of new connections to explain cross-modal interactions in the blind. Rather, latent pathways that participate in multisensory percepts in sighted subjects might be unmasked and may be potentiated in the event of complete loss of visual input. These issues have important implications for the development of visual prostheses aimed at restoring some degree of vision in the blind.

19.
We report a novel effect in which the visual perception of eye-gaze and arrow cues changes the way we perceive sound. In our experiments, subjects first saw an arrow or gazing face, and then heard a brief sound originating from one of six locations. Perceived sound origins were shifted in the direction indicated by the arrows or eye-gaze. This perceptual shift was equivalent for both arrows and gazing faces and was unaffected by facial expression, consistent with a generic, supramodal attentional influence by exogenous cues.

20.