Similar articles
20 similar articles retrieved (search time: 15 ms)
1.
Jiang Y, He S. Current Biology 2006, 16(20): 2023-2029
Perceiving faces is critical for social interaction. Evidence suggests that different neural pathways may be responsible for processing face identity and expression information. By using functional magnetic resonance imaging (fMRI), we measured brain responses when observers viewed neutral, fearful, and scrambled faces, either visible or rendered invisible through interocular suppression. The right fusiform face area (FFA), the right superior temporal sulcus (STS), and the amygdala responded strongly to visible faces. However, when face images became invisible, activity in FFA to both neutral and fearful faces was much reduced, although still measurable; activity in the STS was robust only to invisible fearful faces but not to neutral faces. Activity in the amygdala was equally strong in both the visible and invisible conditions to fearful faces but much weaker in the invisible condition for the neutral faces. In the invisible condition, amygdala activity was highly correlated with that of the STS but not with FFA. The results in the invisible condition support the existence of dissociable neural systems specialized for processing facial identity and expression information. When images are invisible, cortical responses may reflect primarily feed-forward visual-information processing and thus allow us to reveal the distinct functions of FFA and STS.

2.
Smooth pursuit eye movements provide a good model system for cerebellar studies of complex motor control in monkeys. First, the pursuit system exhibits predictive control along complex trajectories and this control improves with training. Second, the flocculus/paraflocculus region of the cerebellum appears to generate this control. Lesions impair pursuit and neural activity patterns are closely related to eye motion during complex pursuit. Importantly, neural responses lead eye motion during predictive pursuit and lag eye motion during non-predictable target motions that require visual control. The idea that flocculus/paraflocculus predictive control is non-visual is also supported by a lack of correlation between neural activity and retinal image motion during pursuit. Third, biologically accurate neural network models of the flocculus/paraflocculus allow the exploration and testing of pursuit mechanisms. Our current model can generate predictive control without visual input in a manner that is compatible with the extensive experimental data available for this cerebellar system. Similar types of non-visual cerebellar control are likely to facilitate the wide range of other skilled movements that are observed.

3.
Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations.

4.
Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals, respectively, in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflaged animals.
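The motion-streak cue invoked above can be illustrated with a toy temporal-integration sketch (all numbers here are arbitrary illustrations, not values from the study): when a moving dot is summed over a fixed integration window, fast motion smears into a longer oriented trace, which orientation-selective filters can read out as the motion axis.

```python
def streak_extent(speed_px_per_frame, n_frames=10, start_x=16):
    """Spatial extent of the trace left by a horizontally moving dot
    after temporal integration over n_frames frames (a toy model of
    a motion streak; parameters are arbitrary)."""
    xs = [start_x + t * speed_px_per_frame for t in range(n_frames)]
    return max(xs) - min(xs)

# Within the same integration window, a fast dot (3 px/frame) smears
# into a streak three times longer than a slow dot (1 px/frame),
# giving orientation-selective neurons a spatial cue to the motion axis.
print(streak_extent(1), streak_extent(3))  # → 9 27
```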

5.
Direction-sensitive partitioning of the honeybee optomotor system
ABSTRACT. The horizontal motion-detecting system controlling optomotor head-turning behaviour in honeybees, Apis mellifera, was found to be partitioned into two separate subsystems. Each subsystem is direction-specific such that visual stimulation in the preferred direction elicited a high level of responses that correctly followed the movement, whereas stimulation in the non-preferred direction resulted in response levels comparable to or lower than those for blinded controls. The results indicate that medial eye regions are specialized for the detection of posterior-to-anterior movements and lateral regions are specialized for detecting anterior-to-posterior motion. A model suggesting possible neural correlates for this functional subdivision of the optomotor response is proposed.

6.
In primates, tracking eye movements help vision by stabilising the images of a moving object of interest onto the retinas. This sensorimotor transformation involves several stages of motion processing, from the local measurement of one-dimensional luminance changes up to the integration of first- and higher-order local motion cues into a global two-dimensional motion immune to antagonistic motions arising from the surround. The dynamics of this surface motion segmentation is reflected in the various components of the tracking responses, and its underlying neural mechanisms can be correlated with behaviour at both single-cell and population levels. I review a series of behavioural studies which demonstrate that the neural representation driving eye movements evolves over time from a fast vector average of the outputs of linear and non-linear spatio-temporal filtering to a slower, progressively computed, accurate solution for global motion. Because of the sensitivity of the earliest ocular following responses to binocular disparity, antagonistic visual motion from surfaces located at different depths is filtered out. Thus, global motion integration is restricted to the depth plane of the object to be tracked. Similar dynamics were found at the level of monkey extra-striate areas MT and MST, and I suggest that several parallel pathways along the motion stream, albeit with different latencies, are involved in building up this accurate surface motion representation. After 200-300 ms, most of the computational problems of early motion processing (aperture problem, motion integration, motion segmentation) are solved and the eye velocity matches the global object velocity to maintain a clear and steady retinal image.

7.
Brain responses to the acquired moral status of faces
Singer T, Kiebel SJ, Winston JS, Dolan RJ, Frith CD. Neuron 2004, 41(4): 653-662
We examined whether neural responses associated with judgments of socially relevant aspects of the human face extend to stimuli that acquire their significance through learning in a meaningful interactive context, specifically reciprocal cooperation. During fMRI, subjects made gender judgments on faces of people who had been introduced as fair (cooperators) or unfair (defectors) players through repeated play of a sequential Prisoner's Dilemma game. To manipulate moral responsibility, players were introduced as either intentional or nonintentional agents. Our behavioral (likability ratings and memory performance) as well as our imaging data confirm the saliency of social fairness for human interactions. Relative to neutral faces, faces of intentional cooperators engendered increased activity in left amygdala, bilateral insula, fusiform gyrus, STS, and reward-related areas. Our data indicate that rapid learning regarding the moral status of others is expressed in altered neural activity within a system associated with social cognition.

8.
It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly.

9.
Peelen MV, Wiggett AJ, Downing PE. Neuron 2006, 49(6): 815-822
Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.

10.
Hietanen JK, Nummenmaa L. PLoS ONE 2011, 6(11): e24408
Recent event-related potential studies have shown that the occipitotemporal N170 component--best known for its sensitivity to faces--is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior.

11.
The complexity of nervous systems alters the evolvability of behaviour. Complex nervous systems are phylogenetically constrained; nevertheless, particular species-specific behaviours have repeatedly evolved, suggesting a predisposition towards those behaviours. Independently evolved behaviours in animals that share a common neural architecture are generally produced by homologous neural structures, homologous neural pathways and, in the case of some invertebrates, even homologous identified neurons. Such parallel evolution has been documented in the chromatic sensitivity of visual systems, motor behaviours and complex social behaviours such as pair-bonding. The appearance of homoplasious behaviours produced by homologous neural substrates suggests that there might be features of these nervous systems that favoured the repeated evolution of particular behaviours. Neuromodulation may be one such feature because it allows anatomically defined neural circuitry to be re-purposed. The developmental, genetic and physiological mechanisms that contribute to nervous system complexity may also bias the evolution of behaviour, thereby affecting the evolvability of species-specific behaviour.

12.
Zhou H, Desimone R. Neuron 2011, 70(6): 1205-1217
When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target.

13.
Pei YC, Hsiao SS, Craig JC, Bensmaia SJ. Neuron 2011, 69(3): 536-547
How are local motion signals integrated to form a global motion percept? We investigate the neural mechanisms of tactile motion integration by presenting tactile gratings and plaids to the fingertips of monkeys, using the tactile analogue of a visual monitor and recording the responses evoked in somatosensory cortical neurons. The perceived directions of the gratings and plaids are measured in parallel psychophysical experiments. We identify a population of somatosensory neurons that exhibit integration properties comparable to those induced by analogous visual stimuli in area MT and find that these neural responses account for the perceived direction of the stimuli across all stimulus conditions tested. The preferred direction of the neurons and the perceived direction of the stimuli can be predicted from the weighted average of the directions of the individual stimulus features, highlighting that the somatosensory system implements a vector average mechanism to compute tactile motion direction that bears striking similarities to its visual counterpart.
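The vector-average rule invoked here can be sketched as a weighted circular mean of the component directions (a minimal illustration; the example directions and weights are hypothetical, not values from the study):

```python
import math

def vector_average_direction(directions_deg, weights):
    """Predicted perceived direction as the weighted vector average of
    component feature directions (degrees); result is in (-180, 180]."""
    x = sum(w * math.cos(math.radians(d)) for d, w in zip(directions_deg, weights))
    y = sum(w * math.sin(math.radians(d)) for d, w in zip(directions_deg, weights))
    return math.degrees(math.atan2(y, x))

# A plaid made of two equally weighted components drifting at 60° and 300°
# is predicted to be perceived as moving at their vector average, 0°.
print(round(vector_average_direction([60, 300], [1.0, 1.0])))  # → 0
```

Unequal weights bias the average toward the stronger component, which is the sense in which the rule predicts both neuronal preferred directions and perceived directions from feature strengths.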

14.
Visual stability     
Our vision remains stable even though the movements of our eyes, head and bodies create a motion pattern on the retina. One of the most important, yet basic, feats of the visual system is to correctly determine whether this retinal motion is owing to real movement in the world or rather our own self-movement. This problem has occupied many great thinkers, such as Descartes and Helmholtz, at least since the time of Alhazen. This theme issue brings together leading researchers from animal neurophysiology, clinical neurology, psychophysics and cognitive neuroscience to summarize the state of the art in the study of visual stability. Recently, there has been significant progress in understanding the limits of visual stability in humans and in identifying many of the brain circuits involved in maintaining a stable percept of the world. Clinical studies and new experimental methods, such as transcranial magnetic stimulation, now make it possible to test the causal role of different brain regions in creating visual stability and also allow us to measure the consequences when the mechanisms of visual stability break down.  
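The classical solution to this problem, going back to Helmholtz, is that an efference copy of the self-movement command cancels the retinal motion it causes; a one-line sketch (signs and units are purely illustrative):

```python
def perceived_world_motion(retinal_motion, efference_copy):
    """Efference-copy cancellation: retinal motion caused by a
    self-generated eye movement is offset by a copy of the motor
    command, leaving only genuine world motion (deg/s, illustrative)."""
    return retinal_motion + efference_copy

# A 5 deg/s rightward eye movement over a stationary scene produces
# 5 deg/s leftward retinal motion; adding the efference copy (+5)
# yields 0, so the world is correctly perceived as stable.
print(perceived_world_motion(-5.0, +5.0))  # → 0.0
```

When the cancellation fails (as in some of the clinical cases discussed in this theme issue), the same retinal motion is misattributed to movement of the world.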

15.
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours.

16.
Experimental studies have shown that responses of ventral intraparietal area (VIP) neurons specialize in head movements and the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, smooth pursuit eye movements, and passive and active movements of the head. This study demonstrates mathematical structure on a higher organizational level created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as those of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies or is responsive to a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys. 
Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.

17.
Some cortical circuit models study the mechanisms of the transforms from visual inputs to neural responses. They model neural properties such as feature tunings, pattern sensitivities, and how they depend on intracortical connections and contextual inputs. Other cortical circuit models are more concerned with computational goals of the transform from visual inputs to neural responses, or the roles of the neural responses in the visual behavior. The appropriate complexity of a cortical circuit model depends on the question asked. Modeling neural circuits of many interacting hypercolumns is a necessary challenge, which is providing insights to cortical computations, such as visual saliency computation, and linking physiology with global visual cognitive behavior such as bottom-up attentional selection.

18.
Stratton P, Milford M, Wyeth G, Wiles J. PLoS ONE 2011, 6(10): e25687
The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements are able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that 'grounding' of modelled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and those animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.
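The calibration idea can be reduced to a one-parameter sketch (a deliberately simplified stand-in for the paper's spiking-network mechanisms; the mis-calibration factor, learning rate, and rotation values are all hypothetical): during warm-up movements, visual feedback supplies the true rotation, and an error-driven rule tunes the gain applied to the internal (vestibular) rotation signal until the two agree, after which heading can be tracked in darkness from the internal signal alone.

```python
def calibrate_gain(true_rotations, sensed_rotations,
                   gain=0.5, lr=0.005, epochs=200):
    """Error-driven (delta-rule) calibration of the gain that maps an
    internal rotation signal onto true head rotation, using visually
    grounded 'warm-up' movements as the teaching signal."""
    for _ in range(epochs):
        for true_dt, sensed_dt in zip(true_rotations, sensed_rotations):
            error = true_dt - gain * sensed_dt
            gain += lr * error * sensed_dt
    return gain

# Hypothetical scenario: the internal sensor under-reports every
# rotation by a factor of 2; calibration recovers the corrective gain.
true_r = [10.0, -5.0, 20.0]     # deg, from visual feedback
sensed_r = [5.0, -2.5, 10.0]    # deg, internal signal
print(round(calibrate_gain(true_r, sensed_r), 3))  # → 2.0
```

In this toy version, omitting the warm-up samples simply leaves the gain at its initial, inaccurate value, echoing the paper's finding that calibration breaks down when certain movements are omitted.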

19.
Beauchamp MS, Lee KE, Haxby JV, Martin A. Neuron 2002, 34(1): 149-159
We tested the hypothesis that different regions of lateral temporal cortex are specialized for processing different types of visual motion by studying the cortical responses to moving gratings and to humans and manipulable objects (tools and utensils) that were either stationary or moving with natural or artificially generated motions. Segregated responses to human and tool stimuli were observed in both ventral and lateral regions of posterior temporal cortex. Relative to ventral cortex, lateral temporal cortex showed a larger response for moving compared with static humans and tools. Superior temporal cortex preferred human motion, and middle temporal gyrus preferred tool motion. A greater response was observed in STS to articulated compared with unarticulated human motion. Specificity for different types of complex motion (in combination with visual form) may be an organizing principle in lateral temporal cortex.

20.
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
