Similar Documents
20 similar documents found.
1.
Gaze following in human infants depends on communicative signals
Humans are extremely sensitive to ostensive signals, like eye contact or having their name called, that indicate someone's communicative intention toward them [1-3]. Infants also pay attention to these signals [4-6], but it is unknown whether they appreciate their significance in the initiation of communicative acts. In two experiments, we employed video presentations of an actor turning toward one of two objects and recorded infants' gaze-following behavior [7-13] with eye-tracking techniques [11, 12]. We found that 6-month-old infants followed the adult's gaze (a potential communicative-referential signal) toward an object only when such an act was preceded by ostensive cues such as direct gaze (experiment 1) or infant-directed speech (experiment 2). This link between the presence of ostensive signals and gaze following suggests that the behavior serves a functional role in helping infants respond effectively to referential communication directed to them. Whereas gaze following in many nonhuman species supports social information gathering [14-18], in humans it initially appears to reflect the expectation of a more active, communicative role from the information source.

2.
The posterior parietal cortex has long been considered an "association" area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.
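The gain-field mechanism mentioned above can be made concrete with a small numerical sketch. The following illustrates the general idea only, not the author's model, and all parameter values are arbitrary: each unit's Gaussian retinotopic tuning is multiplicatively scaled by eye position, and a linear readout of such a population can recover head-centred location (retinal position plus eye position).

```python
# Minimal sketch of a gain-field population (illustrative assumptions only):
# Gaussian retinal tuning whose amplitude is scaled by a linear eye-position
# gain; a linear readout then recovers head-centred location.
import numpy as np

rng = np.random.default_rng(0)

n_units = 60
centres = rng.uniform(-40, 40, n_units)      # retinotopic tuning centres (deg)
slopes = rng.uniform(-0.02, 0.02, n_units)   # eye-position gain slopes (1/deg)

def population_response(retinal, eye):
    """Gaussian retinal tuning multiplied by a linear eye-position gain."""
    tuning = np.exp(-0.5 * ((retinal - centres) / 10.0) ** 2)
    gain = 1.0 + slopes * eye                # the gain field scales amplitude only
    return tuning * gain

# Sample many (retinal, eye) combinations and fit a linear readout.
retinal = rng.uniform(-30, 30, 2000)
eye = rng.uniform(-20, 20, 2000)
R = np.array([population_response(r, e) for r, e in zip(retinal, eye)])
head_centred = retinal + eye                 # target: location w.r.t. the head

w, *_ = np.linalg.lstsq(R, head_centred, rcond=None)
print("readout correlation:", np.corrcoef(R @ w, head_centred)[0, 1])
```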

3.
Little is known about the brain mechanisms involved in word learning during infancy and in second-language acquisition, or about the way these new words become stable representations that sustain language processing. In several studies we have adopted the human simulation perspective, studying the effects of brain lesions and combining different neuroimaging techniques, such as event-related potentials and functional magnetic resonance imaging, in order to examine the language learning (LL) process. In the present article, we review this evidence, focusing on how different brain signatures relate to (i) the extraction of words from speech, (ii) the discovery of their embedded grammatical structure, and (iii) how meaning derived from verbal contexts can inform us about the cognitive mechanisms underlying the learning process. We compile these findings and frame them in an integrative neurophysiological model that tries to delineate the major neural networks that might be involved in the initial stages of LL. Finally, we propose that LL simulations can help us understand natural language processing and how recovery from language disorders in infants and adults can be accomplished.

4.
5.
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.

6.
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

A combination of psychophysics, computational modelling and fMRI reveals novel insights into how the brain controls the binding of information across the senses, such as the voice and lip movements of a speaker.
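The Bayesian causal inference model referred to above has a well-known formulation (in the spirit of Koerding et al., 2007). The sketch below is illustrative, with arbitrary parameter values rather than those fitted in the study: under a common cause the optimal estimate is the reliability-weighted average of the auditory and visual cues; under independent causes each cue is combined only with the spatial prior; the final estimate weights the two by the posterior probability of a common cause.

```python
# Illustrative sketch of Bayesian causal inference for audiovisual
# localisation; parameter values are arbitrary, not those from the study.
import numpy as np

def bci_auditory_estimate(x_a, x_v, sigma_a=4.0, sigma_v=1.5,
                          sigma_p=15.0, p_common=0.5):
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    denom = va*vv + va*vp + vv*vp

    # Likelihood of both measurements under a common cause (C=1) and under
    # independent causes (C=2), with a spatial prior N(0, vp).
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / denom) / (2*np.pi*np.sqrt(denom))
    like_c2 = (np.exp(-0.5 * x_a**2 / (va + vp)) / np.sqrt(2*np.pi*(va + vp))
               * np.exp(-0.5 * x_v**2 / (vv + vp)) / np.sqrt(2*np.pi*(vv + vp)))

    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Conditional estimates: reliability-weighted fusion vs. segregation.
    s_fused = (x_a/va + x_v/vv) / (1/va + 1/vv + 1/vp)
    s_auditory = (x_a/va) / (1/va + 1/vp)

    # Model averaging: weight the two estimates by the causal posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_auditory, post_c1

est, p = bci_auditory_estimate(x_a=10.0, x_v=4.0)
print(f"P(common cause) = {p:.2f}, auditory estimate = {est:.1f} deg")
```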

7.
Understanding the evolution of animal signals has to include consideration of the structure of signal and noise, and of the sensory mechanisms that detect the signals. Considerable progress has been made in understanding sounds and colour signals; however, the degree to which movement-based signals are constrained by the particular patterns of environmental image motion is poorly understood. Here we have quantified the image motion generated by wind-blown plants at 12 sites in the coastal habitat of the Australian lizard Amphibolurus muricatus. Sampling across different plant communities and meteorological conditions revealed distinct image motion environments. At all locations, image motion became more directional and apparent speed increased as wind speeds increased. The magnitude of these changes and the spatial distribution of image motion, however, varied between locations, probably as a function of plant structure and topographic location. In addition, we show that background motion noise depends strongly on the particular depth structure of the environment, and argue that such microhabitat differences suggest specific strategies to preserve signal efficacy. Movement-based signals and motion-processing mechanisms, therefore, may reveal the same kind of habitat-specific structural variation that we see for signals in other modalities.
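The abstract does not spell out the measurement pipeline, but as a hedged illustration, the directionality and apparent speed of image motion can be quantified from a field of optic-flow vectors using standard circular statistics (the mean resultant length) and the mean flow speed. The flow distributions below are synthetic stand-ins, not the paper's data.

```python
# Hedged sketch: quantifying apparent speed and directionality of image
# motion from synthetic optic-flow vectors (stand-in distributions).
import numpy as np

rng = np.random.default_rng(2)
angles = rng.vonmises(mu=0.0, kappa=2.0, size=500)   # flow directions (rad)
speeds = rng.gamma(shape=2.0, scale=1.5, size=500)   # flow speeds (deg/s)

apparent_speed = speeds.mean()
# Mean resultant length: 0 = uniform directions, 1 = perfectly aligned.
directionality = np.abs(np.mean(np.exp(1j * angles)))

print(f"mean speed = {apparent_speed:.2f} deg/s, R = {directionality:.2f}")
```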

8.
Empathy reflects the natural ability to perceive and be sensitive to the emotional states of others, coupled with a motivation to care for their well-being. It has evolved in the context of parental care for offspring, as well as within kinship bonds, to help facilitate group living. In this paper, we integrate the perspectives of evolution, animal behaviour, developmental psychology, and social and clinical neuroscience to elucidate our understanding of the proximate mechanisms underlying empathy. We focus, in particular, on processing of signals of distress and need, and their relation to prosocial behaviour. The ability to empathize, both in animals and humans, mediates prosocial behaviour when sensitivity to others' distress is paired with a drive towards their welfare. Disruption or atypical development of the neural circuits that process distress cues and integrate them with decision value leads to callous disregard for others, as is the case in psychopathy. The realization that basic forms of empathy exist in non-human animals is crucial for gaining new insights into the underlying neurobiological and genetic mechanisms of empathy, enabling translation towards therapeutic and pharmacological interventions.

9.
Recent evidence suggests that preverbal infants' gaze following can be triggered only if an actor's head turn is preceded by the expression of communicative intent [1]. Such connectedness between ostensive and referential signals may be uniquely human, enabling infants to effectively respond to referential communication directed to them. In the light of increasing evidence of dogs' social communicative skills [2], an intriguing question is whether dogs' responsiveness to human directional gestures [3] is associated with the situational context in an infant-like manner. Borrowing a method used in infant studies [1], dogs watched video presentations of a human actor turning toward one of two objects, and their eye-gaze patterns were recorded with an eye tracker. Results show a higher tendency of gaze following in dogs when the human's head turning was preceded by the expression of communicative intent (direct gaze, addressing). This is the first evidence to show that (1) eye-tracking techniques can be used for studying dogs' social skills and (2) the exploitation of human gaze cues depends on the communicatively relevant pattern of ostensive and referential signals in dogs. Our findings give further support to the existence of a functionally infant-analog social competence in this species.

10.
Decoding the hierarchical structure of information processing, the cortical response mechanisms, and the patterns of functional connectivity involved in speech processing is a central concern of neurolinguistics. Following the temporal order of speech information processing, this cognitive process can be divided into three stages: spectrotemporal analysis of primary acoustic signals, phonemic processing, and lexical-semantic processing. The neural mechanisms of each stage have been studied extensively and in depth, but the various theoretical models and hypotheses remain difficult to integrate, so a systematic synthesis is needed. Organized around these three stages, and focusing on electrophysiological paradigms, this article reviews current research on the neural bases of each stage, including cortical mapping, neural oscillation patterns, and event-related response mechanisms, with the aim of providing a reference for further studies of how speech signals are processed and represented in the human brain.

11.
Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

12.
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to different visual processing and world representations for conscious perception than for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical, and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye-movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways, built from linear combinations of primary visual cortex (V1) receptive fields, by making the simulated individuals' probability of survival depend on their perceptual accuracy in finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependences of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations, showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions that resulted in partial or no convergence were an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye-movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and that has interconnected perception and action neural pathways.
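As a rough illustration of what an ideal Bayesian searcher involves (a deliberate simplification, not the authors' implementation; the eccentricity falloff and all numbers below are assumptions), the sketch collects noisy template responses at candidate locations, with reliability falling off with retinal eccentricity from the current fixation, updates a posterior over target location after each fixation, and fixates the current maximum a posteriori location.

```python
# Simplified MAP searcher sketch (assumed parameters, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)

locs = np.linspace(-10, 10, 21)        # candidate target locations (deg)
target = 6.0                           # true target position
log_post = np.zeros_like(locs)         # uniform prior
fix = 0.0                              # start fixating the centre

for t in range(6):
    ecc = np.abs(locs - fix)
    dprime = 3.0 / (1.0 + 0.5 * ecc)   # detectability falls with eccentricity

    # Template responses: unit-variance noise, mean dprime at the target.
    resp = rng.normal(0.0, 1.0, locs.size)
    ti = np.argmin(np.abs(locs - target))
    resp[ti] += dprime[ti]

    # Log-likelihood of the response vector under "target at location i"
    # reduces to resp[i]*dprime[i] - dprime[i]**2/2 for Gaussian noise.
    log_post += resp * dprime - dprime**2 / 2.0
    fix = locs[np.argmax(log_post)]    # MAP fixation rule (simplified)

post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"final fixation at {fix:+.1f} deg, P(target there) = {post.max():.2f}")
```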

13.
Accurate perception of self-motion and of object speed is crucial for successful interaction with the world. The context in which we make such speed judgments has a profound effect on their accuracy. Misperceptions of motion speed caused by context can have drastic consequences in real-world situations, but they also reveal much about the underlying mechanisms of motion perception. Here we show that motion signals suppressed from awareness can warp simultaneous conscious speed perception. In Experiment 1, we measured global speed discrimination thresholds using an annulus of 8 local Gabor elements. We show that physically removing local elements from the array attenuated global speed discrimination. However, removing awareness of the local elements had only a small effect on speed discrimination. That is, unconscious local motion elements contributed to global conscious speed perception. In Experiment 2, we measured the global speed of the moving Gabor patterns when half the elements moved at different speeds. We show that global speed averaging occurred regardless of whether local elements were removed from awareness, such that the speed of invisible elements continued to be averaged together with that of the visible elements to determine the global speed. These data suggest that contextual motion signals outside of awareness can both boost and warp our experience of motion speed, and that such pooling of motion signals occurs before the conscious extraction of the surround's motion speed.
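A worked example of the speed-averaging prediction, with illustrative numbers that are not taken from the study:

```python
# If the global percept pools all local element speeds, suppressing half
# of the elements from awareness should not change the pooled estimate.
visible = [2.0, 2.0, 2.0, 2.0]       # deg/s, consciously seen Gabors
suppressed = [4.0, 4.0, 4.0, 4.0]    # deg/s, rendered invisible

pooled = sum(visible + suppressed) / len(visible + suppressed)
print(pooled)  # 3.0 deg/s -- faster than the visible elements alone (2.0)
```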

14.
Infants are known to possess two different cognitive systems for encoding numerical information. The first system encodes approximate numerosities, has no known upper limit, and is functional from birth. The second system relies on infants' ability to track up to 3 objects in parallel, and enables them to represent exact numerosity for such small sets. It is unclear, however, whether infants are able to represent numerosities from both ranges in a common format. In various studies, infants failed to discriminate a small vs. a large numerosity (e.g., 2 vs. 4, 3 vs. 6), although more recent studies have presented evidence that infants can succeed at these discriminations in some situations. Here, we used a transfer paradigm between the tactile and visual modalities in 5-month-olds, assuming that such a cross-modal paradigm may promote access to abstract representations of numerosities that are continuous across the small and large ranges. Infants were first familiarized with 2 to 4 objects in the tactile modality, and subsequently tested for their preference between 2 vs. 4, or 3 vs. 6, visual objects. Results were mixed, with only partial evidence that infants transferred numerical information across modalities. Implications for 5-month-old infants' ability to represent small and large numerosities in a single format or in separate formats are discussed.
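For context, the standard model of the approximate number system assumes scalar variability: a numerosity n is represented as a Gaussian with mean n and standard deviation wn, so discriminability depends on the ratio of the two numerosities rather than on their absolute values. The sketch below, with an assumed Weber fraction (w = 0.3 is a typical adult value, not one from this study), shows why 2 vs. 4 and 3 vs. 6 are equally hard for this system, which is why failures specific to small-vs-large pairs implicate the interplay of the two systems rather than approximate-number noise alone.

```python
# Scalar-variability sketch of the approximate number system (assumed w).
import math

def ans_dprime(n1, n2, w=0.3):
    return abs(n1 - n2) / (w * math.sqrt(n1**2 + n2**2))

for a, b in [(2, 4), (3, 6), (4, 8), (2, 3)]:
    print(f"{a} vs {b}: d' = {ans_dprime(a, b):.2f}")
# 2 vs 4, 3 vs 6 and 4 vs 8 share a 1:2 ratio and hence the same d', so
# failures specific to small-vs-large pairs are not explained by ANS noise.
```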

15.
Object recognition is achieved through neural mechanisms reliant on the activity of distributed, coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process: while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200–400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object-model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

16.
Animal actions are almost universally constrained by the bilateral body-plan. For example, the direction of travel tends to be constrained by the orientation of the animal's anteroposterior axis. Hence, an animal's behaviour can reliably guide the identification of its front and back, and its orientation can reliably guide action prediction. We examine the hypothesis that the evolutionarily ancient relation between anteroposterior body-structure and behaviour guides our cognitive processing of agents and their actions. In a series of studies, we demonstrate that, after limited exposure, human infants as young as six months of age spontaneously encode a novel agent as having a certain axial direction with respect to its actions and rely on it when anticipating the agent's further behaviour. We found that such encoding is restricted to objects exhibiting cues of agency and does not depend on generalization from features of familiar animals. Our research offers a new tool for investigating the perception of animate agency and supports the proposal that the underlying cognitive mechanisms have been shaped by basic biological adaptations in humans.

17.
Previous studies show that the congruency sequence effect can result both from the conflict adaptation effect (CAE) and from feature integration, which can be observed as the repetition priming effect (RPE) or the feature overlap effect (FOE) depending on experimental conditions. Evidence from neuroimaging studies suggests a close correlation between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alerting function of attention correlated with the CAE and the RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether the alerting function of attention correlated with the CAE and the FOE. In Experiment 1, correlational analysis revealed a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE. Moreover, a significant negative correlation existed between the CAE and the RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but neither the correlation between alertness and the FOE nor that between the CAE and the FOE was significant. These results suggest that alertness can modulate conflict adaptation and feature integration in opposite ways. Participants with high alerting levels may tend to use a top-down cognitive processing strategy, whereas participants with low alerting levels tend to use a bottom-up processing strategy.
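The indices involved have simple operational definitions. The sketch below uses hypothetical reaction times, not data from the study, to show how the CAE and the ANT alerting score are typically computed and then correlated across participants.

```python
# Illustrative computation with hypothetical RTs. Condition codes: first
# letter = previous-trial congruency, second = current-trial congruency.
import numpy as np

rt = {"cC": 520.0, "cI": 600.0, "iC": 545.0, "iI": 585.0}  # mean RTs (ms)

# Conflict adaptation: the congruency effect shrinks after incongruent trials.
cae = (rt["cI"] - rt["cC"]) - (rt["iI"] - rt["iC"])        # 80 - 40 = 40 ms

# ANT alerting score: no-cue RT minus double-cue RT (hypothetical values).
alerting = 610.0 - 565.0                                   # 45 ms
print(f"CAE = {cae:.0f} ms, alerting = {alerting:.0f} ms")

# The reported analysis then correlates such indices across participants:
cae_scores = np.array([35.0, 42.0, 28.0, 50.0, 38.0, 45.0])    # made-up sample
alert_scores = np.array([40.0, 48.0, 30.0, 55.0, 41.0, 50.0])
print("r =", round(float(np.corrcoef(alert_scores, cae_scores)[0, 1]), 2))
```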

18.

Background

Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D partially occludes surfaces, depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces.

Methodology/Principal Findings

In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and, at the same time, to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit, so that their interactions can generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection and demonstrate that our approach outperforms simple feedforward approaches.

Conclusions/Significance

A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency. Unlike previous proposals, which treat localized junction configurations as 2D image features, we link them to mechanisms of apparent surface segregation. As a consequence, we demonstrate how junctions can change their perceptual representation depending on the scene context and the spatial configuration of boundary fragments.
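The abstract does not name the standard computer-vision baseline used for comparison. A common purely feedforward choice is a Harris-style detector, in which junctions and corners are points where the local structure tensor has two large eigenvalues (oriented contrast energy in more than one direction). The minimal sketch below, with assumed parameters, illustrates that kind of baseline on a synthetic T-junction.

```python
# Harris-style junction/corner baseline (illustrative; assumed parameters).
import numpy as np

def harris_response(img, k=0.05, win=3):
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box_blur(a):                   # crude local averaging window
        pad = np.pad(a, win, mode="edge")
        out = np.zeros_like(a)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += pad[win + dy:win + dy + a.shape[0],
                           win + dx:win + dx + a.shape[1]]
        return out / (2 * win + 1) ** 2

    sxx, syy, sxy = box_blur(ixx), box_blur(iyy), box_blur(ixy)
    det = sxx * syy - sxy ** 2         # product of the tensor's eigenvalues
    trace = sxx + syy                  # sum of the tensor's eigenvalues
    return det - k * trace ** 2        # large where both eigenvalues are large

# A synthetic T-junction: a horizontal edge occluding a vertical bar.
img = np.zeros((40, 40))
img[:20, :] = 1.0                      # occluding surface (top half)
img[20:, 18:22] = 1.0                  # vertical bar ending at the edge
r = harris_response(img)
print("strongest responses at:", np.argwhere(r > 0.5 * r.max())[:4])
```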

19.
Retinal networks must adapt constantly to best present the ever-changing visual world to the brain. Here we test the hypothesis that adaptation is the result of different mechanisms at several synaptic connections within the network. In a companion paper (Part I), we showed that adaptation in the photoreceptors (R1–R6) and large monopolar cells (LMCs) of the Drosophila eye improves sensitivity to under-represented signals in seconds by enhancing both the amplitude and the frequency distribution of LMCs' voltage responses to repeated naturalistic contrast series. In this paper, we show that such adaptation needs both the light-mediated conductance and the feedback-mediated synaptic conductance. A faulty feedforward pathway in histamine receptor mutant flies speeds up the LMC output, mimicking extreme light adaptation. A faulty feedback pathway from L2 LMCs to photoreceptors slows down the LMC output, mimicking dark adaptation. These results underline the importance of network adaptation for efficient coding, and as a mechanism for selectively regulating the size and speed of signals in neurons. We suggest that the concerted action of many different mechanisms and neural connections is responsible for adaptation to visual stimuli. Further, our results demonstrate the need for detailed circuit reconstructions, like that of the Drosophila lamina, to understand how networks process information.

20.
The early gesturing of six bonobos, eight chimpanzees, three gorillas, and eight orangutans was systematically documented using focal animal sampling. Apes were observed during their first 20 months of life in an effort to investigate: (i) the onset of gesturing; (ii) the order in which signals of different sensory modalities appear; (iii) the extent to which infants make use of these modalities in their early signaling; and (iv) the behavioral contexts in which signals are employed. Orangutans differed from the African ape species in important gestural characteristics. Most notably, they showed the latest gestural onset and were more likely to use their early signals in food-related interactions. Tactile and visual signals appeared similarly early across all four species. In African apes, however, visual signaling gained prominence over time while tactile signaling decreased. These findings suggest that motor ability, which encourages independence from caregivers, is an important antecedent, among others, of gestural onset and development, a finding that warrants further investigation.
