Similar Documents
 20 similar documents found; search time 15 ms
1.
Cumulative psychophysical evidence suggests that the shape of closed contours is analysed by means of their radial frequency components (RFC). However, neurophysiological evidence for RFC-based representations is still missing. We investigated the representation of radial frequency in the human visual cortex with functional magnetic resonance imaging. We parametrically varied the radial frequency, amplitude and local curvature of contour shapes. The stimuli evoked clear responses across visual areas in the univariate analysis, but the response magnitude did not depend on radial frequency or local curvature. Searchlight-based, multivariate representational similarity analysis revealed RFC-specific response patterns in areas V2d, V3d, V3AB, and IPS0. Interestingly, RFC-specific representations were not found in hV4 or LO, the areas traditionally associated with visual shape analysis. The modulation amplitude of the shapes did not affect the responses in any visual area. Representations related to local curvature, spatial-frequency spectrum and contrast energy were found across visual areas, but without the visual-area specificity found for RFC. The results suggest that the radial frequency of a closed contour is one of the dimensions of cortical shape analysis, represented in early and mid-level visual areas.
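The radial frequency parametrization this abstract refers to can be sketched as follows. This is a minimal illustration of how an RF contour is typically defined, not the authors' stimulus code; the function and parameter names are assumptions.

```python
import math

def rf_contour(radial_freq, amplitude, base_radius=1.0, phase=0.0, n_points=360):
    """Sample a closed radial-frequency (RF) contour.

    The radius is modulated sinusoidally around a base circle:
        r(theta) = base_radius * (1 + amplitude * sin(radial_freq * theta + phase))
    """
    points = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        r = base_radius * (1 + amplitude * math.sin(radial_freq * theta + phase))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# An RF3 pattern (three "lobes") with 10% modulation amplitude:
shape = rf_contour(radial_freq=3, amplitude=0.1)
```

Setting `amplitude=0` recovers a circle, so radial frequency and modulation amplitude can be varied parametrically and independently, as in the study.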

2.
Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling, we ensured that participants could haptically discriminate all objects equally well. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training led to between-category expansion, resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape. This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership.

3.
Sigman M, Pan H, Yang Y, Stern E, Silbersweig D, Gilbert CD. Neuron, 2005, 46(5): 823–835
Learning in shape identification led to global changes in activation across the entire visual pathway, as revealed with whole-brain fMRI. Following extensive training in a shape identification task, brain activity associated with trained shapes relative to the untrained shapes showed: (1) an increased level of activity in retinotopic cortex (RC), (2) a decrease in activation of the lateral occipital cortex (LO), and (3) a decrease in the dorsal attentional network. In addition, RC activations became more correlated (and LO activation, less correlated) with performance. When comparing target-present and target-absent trials within the trained condition, we observed a similar decrease in the dorsal attentional network but not in the visual cortices. These findings indicate a large-scale reorganization of activity in the visual pathway as a result of learning, with the RC becoming more involved (and the LO, less involved) and that these changes are triggered in a top-down manner depending on the perceptual task performed.

4.
Our understanding of multisensory integration has advanced because of recent functional neuroimaging studies of three areas in human lateral occipito-temporal cortex: superior temporal sulcus, area LO and area MT (V5). Superior temporal sulcus is activated strongly in response to meaningful auditory and visual stimuli, but responses to tactile stimuli have not been well studied. Area LO shows strong activation in response to both visual and tactile shape information, but not to auditory representations of objects. Area MT, an important region for processing visual motion, also shows weak activation in response to tactile motion, and a signal that drops below resting baseline in response to auditory motion. Within superior temporal sulcus, a patchy organization of regions is activated in response to auditory, visual and multisensory stimuli. This organization appears similar to that observed in polysensory areas in macaque superior temporal sulcus, suggesting that it is an anatomical substrate for multisensory integration. A patchy organization might also be a neural mechanism for integrating disparate representations within individual sensory modalities, such as representations of visual form and visual motion.

5.
Hierarchical processing of tactile shape in the human brain
It is not known exactly which cortical areas compute somatosensory representations of shape. This was investigated using positron emission tomography and cytoarchitectonic mapping. Volunteers discriminated shapes by passive or active touch, brush velocity, edge length, curvature, and roughness. Discrimination of shape by active touch, as opposed to passive touch, activated the right anterior lobe of the cerebellum only. Areas 3b and 1 were activated by all stimuli. Area 2 was activated with a preference for surface curvature changes and shape stimuli. The anterior part of the supramarginal gyrus (ASM) and the cortex lining the intraparietal sulcus (IPA) were activated by active and passive shape discrimination, but not by other mechanical stimuli. We suggest, based on these findings, that somatosensory representations of shape are computed by areas 3b, 1, 2, IPA, and ASM, in this hierarchical order.

6.
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching, and there was no interaction between the cost of size changes and the direction of transfer. Together, the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations.

7.
Frequency modulation (FM) is a basic constituent of vocalisation in many animals as well as in humans. In human speech, short rising and falling FM-sweeps of around 50 ms duration, called formant transitions, characterise individual speech sounds. There are two representations of FM in the ascending auditory pathway: a spectral representation, holding the instantaneous frequency of the stimuli; and a sweep representation, consisting of neurons that respond selectively to FM direction. To date, computational models have used feedforward mechanisms to explain FM encoding. However, from neuroanatomy we know that there are massive feedback projections in the auditory pathway. Here, we found that a classical FM-sweep perceptual effect, the sweep pitch shift, cannot be explained by standard feedforward processing models. We hypothesised that the sweep pitch shift is caused by a predictive feedback mechanism. To test this hypothesis, we developed a novel model of FM encoding incorporating a predictive interaction between the sweep and the spectral representation. The model was designed to encode sweeps of the duration, modulation rate, and modulation shape of formant transitions. It fully accounted for experimental data that we acquired in a perceptual experiment with human participants, as well as previously published experimental results. We also designed a new class of stimuli for a second perceptual experiment to further validate the model. Combined, our results indicate that predictive interaction between the frequency-encoding and direction-encoding neural representations plays an important role in the neural processing of FM. In the brain, this mechanism is likely to occur at early stages of the processing hierarchy.
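An FM sweep of the kind described, a short formant-transition-like chirp, can be synthesized by integrating a linearly changing instantaneous frequency. This is a generic illustration of the stimulus class, not the authors' stimulus-generation code; parameter values are assumptions.

```python
import math

def fm_sweep(f_start, f_end, duration=0.05, sample_rate=44100):
    """Synthesize a linear FM sweep (chirp).

    The instantaneous frequency moves linearly from f_start to f_end over
    `duration` seconds; the waveform phase is the running integral of that
    frequency, accumulated sample by sample.
    """
    n = int(duration * sample_rate)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / sample_rate
        f_inst = f_start + (f_end - f_start) * t / duration  # Hz at time t
        phase += 2 * math.pi * f_inst / sample_rate
        samples.append(math.sin(phase))
    return samples

# A 50 ms rising sweep, on the scale of a formant transition:
sweep = fm_sweep(f_start=1000.0, f_end=2000.0)
```

A falling sweep is obtained simply by swapping `f_start` and `f_end`, which is the direction contrast that sweep-selective neurons are said to encode.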

8.
Sixteen experimentally naive adult Barbary doves (S. risoria) learned a successive discrimination between a simple and a complex shape and were then tested for transfer of training in a generalization test with seven unfamiliar shapes. It was found that the degree of equivalence between shapes was correlated with two physical measures of shape based on contour length and on the number of sides of the shape. These results are in general agreement with other comparative data including those for humans. Results of the generalization test showed an ‘asymmetrical’ response in that judgments of shape similarity were not elicited with equal frequency in all subjects. This was interpreted as evidence for an internal standard in that subjects compared test stimuli to the positive training shape.

9.
Mammals have adapted to a variety of natural environments from underwater to aerial and these different adaptations have affected their specific perceptive and cognitive abilities. This study used a computer-controlled touchscreen system to examine the visual discrimination abilities of horses, particularly regarding size and shape, and compared the results with those from chimpanzee, human and dolphin studies. Horses were able to discriminate a difference of 14% in circle size but showed worse discrimination thresholds than chimpanzees and humans; these differences cannot be explained by visual acuity. Furthermore, the present findings indicate that all species use length cues rather than area cues to discriminate size. In terms of shape discrimination, horses exhibited perceptual similarities among shapes with curvatures, vertical/horizontal lines and diagonal lines, and the relative contributions of each feature to perceptual similarity in horses differed from those for chimpanzees, humans and dolphins. Horses pay more attention to local components than to global shapes.

10.
11.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon”, a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the sung syllable ‘la’ on original pitches (melody). The auditory stimuli were designed to have equivalent familiarity to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.

12.
The question of how local image features on the retina are integrated into perceived global shapes is central to our understanding of human visual perception. Psychophysical investigations have suggested that the emergence of a coherent visual percept, or a "good-Gestalt", is mediated by the perceptual organization of local features based on their similarity. However, the neural mechanisms that mediate unified shape perception in the human brain remain largely unknown. Using human fMRI, we demonstrate that not only higher occipitotemporal but also early retinotopic areas are involved in the perceptual organization and detection of global shapes. Specifically, these areas showed stronger fMRI responses to global contours consisting of collinear elements than to patterns of randomly oriented local elements. More importantly, decreased detection performance and fMRI activations were observed when misalignment of the contour elements disturbed the perceptual coherence of the contours. However, grouping of the misaligned contour elements by disparity resulted in increased performance and fMRI activations, suggesting that similar neural mechanisms may underlie grouping of local elements to global shapes by different visual features (orientation or disparity). Thus, these findings provide novel evidence for the role of both early feature integration processes and higher stages of visual analysis in coherent visual perception.

13.
Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
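The two factors the abstract names, summed similarity and inter-item homogeneity, can be combined in an exemplar-style sketch of a recognition model. This is a simplified illustration under assumed choices (an exponential similarity kernel on a one-dimensional stimulus axis, a single additive weight `w_homogeneity`), not the authors' fitted model.

```python
import math

def similarity(a, b, tau=1.0):
    """Exponential similarity kernel on a 1-D stimulus dimension."""
    return math.exp(-abs(a - b) / tau)

def recognition_strength(probe, memory_list, w_homogeneity=0.5):
    """Evidence that `probe` is 'old', combining:
      (1) summed similarity of the probe to every remembered item, and
      (2) inter-item homogeneity: mean pairwise similarity within the list.
    """
    summed = sum(similarity(probe, item) for item in memory_list)
    pairs = [(a, b) for i, a in enumerate(memory_list)
             for b in memory_list[i + 1:]]
    homogeneity = (sum(similarity(a, b) for a, b in pairs) / len(pairs)
                   if pairs else 0.0)
    return summed + w_homogeneity * homogeneity
```

In this sketch, a probe that matches a studied item yields a higher strength than a distant lure against the same list, while the homogeneity term shifts strength for all probes of a given list, which is how the two effects can be dissociated.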

14.
When dealing with natural scenes, sensory systems have to process an often messy and ambiguous flow of information. A stable perceptual organization nevertheless has to be achieved in order to guide behavior. The neural mechanisms involved can be highlighted by intrinsically ambiguous situations. In such cases, bistable perception occurs: distinct interpretations of the unchanging stimulus alternate spontaneously in the mind of the observer. Bistable stimuli have been used extensively for more than two centuries to study visual perception. Here we demonstrate that bistable perception also occurs in the auditory modality. We compared the temporal dynamics of percept alternations observed during auditory streaming with those observed for visual plaids and the susceptibilities of both modalities to volitional control. Strong similarities indicate that auditory and visual alternations share common principles of perceptual bistability. The absence of correlation across modalities for subject-specific biases, however, suggests that these common principles are implemented at least partly independently across sensory modalities. We propose that visual and auditory perceptual organization could rely on distributed but functionally similar neural competition mechanisms aimed at resolving sensory ambiguities.

15.
Human observers tend to group oriented line segments into full contours if they follow the Gestalt rule of 'good continuation'. It is commonly assumed that contour grouping emerges automatically in early visual cortex. In contrast, recent work in animal models suggests that contour grouping requires learning and thus involves top-down control from higher brain structures. Here we explore mechanisms of top-down control in perceptual grouping by investigating synchronicity within EEG oscillations. Human participants saw two micro-Gabor arrays in a random order, with the task to indicate whether the first (S1) or the second stimulus (S2) contained a contour of collinearly aligned elements. Contour S1, compared with non-contour S1, produced a larger posterior post-stimulus beta power (15–21 Hz). Contour S2 was associated with a pre-stimulus decrease in posterior alpha power (11–12 Hz) and in fronto-posterior theta (4–5 Hz) phase couplings, but not with a post-stimulus increase in beta power. The results indicate that subjects used prior knowledge from S1 processing for S2 contour grouping. Expanding previous work on theta oscillations, we propose that long-range theta synchrony shapes neural responses to perceptual groupings by regulating lateral inhibition in early visual cortex.

16.
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within, a working memory representation. In the current fMRI study, participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD activity (V1-V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings.

17.
18.
We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as those common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.

19.
Categorical perception is a process by which a continuous stimulus space is partitioned to represent discrete sensory events. Early experience has been shown to shape categorical perception and enlarge cortical representations of experienced stimuli in the sensory cortex. The present study examines the hypothesis that enlargement in cortical stimulus representations is a mechanism of categorical perception. Perceptual discrimination and identification behaviors were analyzed in model auditory cortices that incorporated sound exposure-induced plasticity effects. The model auditory cortex with over-representations of specific stimuli exhibited categorical perception behaviors for those specific stimuli. These results indicate that enlarged stimulus representations in the sensory cortex may be a mechanism for categorical perceptual learning.
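The core intuition of this abstract, that over-representing a stimulus region makes differences in that region easier to discriminate, can be illustrated with a toy population of tuned units. This is a deliberately minimal sketch, not the authors' cortical model; the tuning width, unit counts, and the Euclidean-distance readout are all assumptions.

```python
import math

def population_response(stimulus, preferred, sigma=0.15):
    """Gaussian-tuned responses of a unit population to a stimulus
    on a normalized [0, 1] dimension."""
    return [math.exp(-(stimulus - p) ** 2 / (2 * sigma ** 2)) for p in preferred]

def discriminability(s1, s2, preferred):
    """Euclidean distance between the two population response patterns,
    used here as a simple proxy for perceptual discriminability."""
    r1 = population_response(s1, preferred)
    r2 = population_response(s2, preferred)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))

uniform = [i / 19 for i in range(20)]  # evenly tiled stimulus dimension
# Exposure-induced plasticity: extra units recruited near a putative
# category boundary at 0.5 (an assumption of this sketch).
expanded = uniform + [0.45, 0.475, 0.5, 0.525, 0.55]
```

With the expanded population, the same physical step across the over-represented region produces a larger change in the population pattern than it does in the uniform population, which is the proposed mechanism for categorical discrimination peaks at the boundary.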

20.
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

A combination of psychophysics, computational modelling and fMRI reveals novel insights into how the brain controls the binding of information across the senses, such as the voice and lip movements of a speaker.
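The Bayesian causal inference scheme the abstract above describes can be sketched numerically: sensory estimates are fused in proportion to their reliabilities when a common cause is likely, and kept separate otherwise. This is a simplified, generic sketch (flat spatial prior, so the independent-cause likelihood is approximated by a constant `l_indep`), not the authors' fitted model.

```python
import math

def fuse(x_a, x_v, var_a, var_v):
    """Reliability-weighted fusion: optimal if both signals share one cause."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    return w_a * x_a + (1.0 - w_a) * x_v

def p_common(x_a, x_v, var_a, var_v, prior=0.5, l_indep=0.01):
    """Posterior probability of a common cause. The common-cause likelihood
    falls off with audiovisual disparity; l_indep stands in for the
    independent-cause likelihood under a flat spatial prior (an assumption
    of this sketch)."""
    var_sum = var_a + var_v
    l_common = (math.exp(-(x_a - x_v) ** 2 / (2 * var_sum))
                / math.sqrt(2 * math.pi * var_sum))
    return l_common * prior / (l_common * prior + l_indep * (1 - prior))

def auditory_estimate(x_a, x_v, var_a, var_v):
    """Model-averaged auditory location: the fused and segregated estimates
    weighted by the causal posterior."""
    pc = p_common(x_a, x_v, var_a, var_v)
    return pc * fuse(x_a, x_v, var_a, var_v) + (1.0 - pc) * x_a
```

At small disparities the auditory report is captured by the more reliable visual signal; at large disparities the causal posterior collapses and the estimate reverts to the auditory measurement. Prestimulus attention, in the study's terms, would act on the sensory variances feeding this computation.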


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号