Similar Articles
A total of 20 similar articles were retrieved (search time: 31 ms).
1.
Recent studies have shown that infants’ face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants’ ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e., a hat. Using a visual habituation task, we carried out three experiments in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e., the habituation phase) and face recognition (i.e., the test phase). An eye-tracker was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants’ face recognition was not affected by the presence of the external element when the type of hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants’ ability to recognize the invariant aspects of a face was also preserved when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat captured the infants’ attention, interfering with the recognition process and preventing the infants’ preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment.

2.
Previous studies have shown that one’s prior beliefs have a strong effect on perceptual decision-making and attentional processing. The present study extends these findings by investigating how individual differences in paranormal and conspiracy beliefs are related to perceptual and attentional biases. Two field studies were conducted in which visitors of a paranormal fair completed a perceptual decision-making task (i.e., the face/house categorization task; Experiment 1) or a visual attention task (i.e., the global/local processing task; Experiment 2). In the first experiment, it was found that skeptics, compared to believers, more often incorrectly categorized ambiguous face stimuli as representing a house, indicating that disbelief rather than belief in the paranormal is driving the bias observed for the categorization of ambiguous stimuli. In the second experiment, it was found that skeptics showed a classical ‘global-to-local’ interference effect, whereas believers in conspiracy theories were characterized by a stronger ‘local-to-global’ interference effect. The present study shows that individual differences in paranormal and conspiracy beliefs are associated with perceptual and attentional biases, thereby extending the growing body of work in this field indicating effects of cultural learning on basic perceptual processes.

3.
The sense of touch provides fundamental information about the surrounding world, and feedback about our own actions. Although touch is very important during the earliest stages of life, to date no study has investigated infants’ abilities to process visual stimuli implying touch. This study explores the developmental origins of the ability to visually recognize touching gestures involving others. Looking times and orienting responses were measured in a visual preference task, in which participants were simultaneously presented with two videos depicting a touching and a no-touching gesture involving human body parts (face, hand) and/or an object (spoon). In Experiment 1, 2-day-old newborns and 3-month-old infants viewed two videos: in one video a moving hand touched a static face, in the other the moving hand stopped before touching it. Results showed that only 3-month-olds, but not newborns, differentiated the touching from the no-touching gesture, displaying a preference for the former over the latter. To test whether newborns could manifest a preferential visual response when the touched body part is different from the face, in Experiment 2 newborns were presented with touching/no-touching gestures in which a hand or an inanimate object (i.e., a spoon) moved towards a static hand. Newborns were able to discriminate a hand-to-hand touching gesture, but they did not manifest any preference for the object-to-hand touch. The present findings speak in favour of an early ability to visually recognize touching gestures involving the interaction between human body parts.

4.
Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person’s gaze to that same object. That is, contrary to ‘gaze following’, attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that ‘follows’ the participant’s gaze is also associated with self-reported autism-like traits. We propose that this gaze-leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one’s gaze has been followed, in order to establish ‘shared attention’ and maintain the ongoing interaction.

5.
We respond more quickly to our own face than to other faces, but there is debate over whether this is connected to attention-grabbing properties of the self-face. In two experiments, we investigate whether the self-face selectively captures attention, and the attentional conditions under which this might occur. In both experiments, we examined whether different types of face (self, friend, stranger) provide differential levels of distraction when processing self, friend and stranger names. In Experiment 1, an image of a distractor face appeared centrally – inside the focus of attention – behind a target name, with the faces either upright or inverted. In Experiment 2, distractor faces appeared peripherally – outside the focus of attention – in the left or right visual field, or bilaterally. In both experiments, self-name recognition was faster than other-name recognition, suggesting a self-referential processing advantage. The presence of the self-face did not cause more distraction in the naming task compared to other types of face, either when presented inside (Experiment 1) or outside (Experiment 2) the focus of attention. Distractor faces had different effects across the two experiments: when presented inside the focus of attention (Experiment 1), self and friend images facilitated self and friend naming, respectively. This was not true for stranger stimuli, suggesting that faces must be robustly represented to facilitate name recognition. When presented outside the focus of attention (Experiment 2), no facilitation occurred. Instead, we report an interesting distraction effect caused by friend faces when processing strangers’ names. We interpret this as a “social importance” effect, whereby we may be tuned to pick out and pay attention to familiar friend faces in a crowd. We conclude that any speed-of-processing advantages observed in the self-face processing literature are not driven by automatic attention capture.

6.
Recent studies have provided evidence that labeling can influence the outcome of infants’ visual categorization. However, what exactly happens during learning remains unclear. Using eye-tracking, we examined infants’ attention to object parts during learning. Our analysis of looking behaviors during learning provides insights that go beyond merely observing the learning outcome. Both labeling and non-labeling phrases facilitated category formation in 12-month-olds but not 8-month-olds (Experiment 1). Non-linguistic sounds did not produce this effect (Experiment 2). Detailed analyses of infants’ looking patterns during learning revealed that only infants who heard labels exhibited a rapid focus on the object part that successive exemplars had in common. Although other linguistic stimuli may also be beneficial for learning, it is therefore concluded that labels have a unique impact on categorization.

7.
Huang TR, Watanabe T. PLoS ONE. 2012;7(4):e35946
Attention plays a fundamental role in visual learning and memory. One well-established principle of visual attention is that the harder a central task is, the more attentional resources are used to perform it and, because attentional capacity is limited, the less attention is allocated to peripheral processing. Here we show that this principle holds true in a dual-task setting but not in a paradigm of task-irrelevant perceptual learning. In Experiment 1, eight participants were asked to identify either bright or dim number targets at the screen center and to remember concurrently presented scene backgrounds. Their recognition performance for scenes paired with dim/hard targets was worse than that for scenes paired with bright/easy targets. In Experiment 2, eight participants were asked to identify either bright or dim letter targets at the screen center while task-irrelevant coherent motion was concurrently presented in the background. After five days of training on letter identification, participants’ motion sensitivity improved for the direction paired with hard/dim targets but not for the direction paired with easy/bright targets. Taken together, these results suggest that task-irrelevant stimuli are not subject to the attentional control mechanisms that govern task-relevant stimuli.

8.
Radial expanding optic flow is a visual consequence of forward locomotion. Presented on screen, it generates illusory forward self-motion, pointing to a close interrelation between vision and gait. As parkinsonian gait in particular is vulnerable to external stimuli, effects of optic flow on motor-related cerebral circuitry were explored with functional magnetic resonance imaging in healthy controls (HC) and patients with Parkinson’s disease (PD). Fifteen HC and 22 PD patients, of whom 7 experienced freezing of gait (FOG), watched wide-field flow, its interruption by narrowing or deceleration, and equivalent control conditions with static dots. Statistical parametric mapping revealed that wide-field flow interruption evoked activation of the (pre-)supplementary motor area (SMA) in HC, which was decreased in PD. During wide-field flow, dorsal occipito-parietal activations were reduced in PD relative to HC, with stronger functional connectivity between right visual motion area V5, pre-SMA and cerebellum (in PD without FOG). Non-specific ‘changes’ in stimulus patterns activated dorsolateral fronto-parietal regions and the fusiform gyrus. This attention-associated network was more strongly activated in HC than in PD. PD patients thus appeared compromised in recruiting medial frontal regions facilitating internally generated virtual locomotion when visual motion support falls away. Reduced dorsal visual and parietal activations during wide-field optic flow in PD were explained by impaired feedforward visual and visuomotor processing within a magnocellular (visual motion) functional chain. Compensation of impaired feedforward processing by distant fronto-cerebellar circuitry in PD is consistent with motor responses to visual motion stimuli being either too strong or too weak. The ‘change’-related activations pointed to covert (stimulus-driven) attention.

9.
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location for guiding attention exogenously. In a visual input containing many bars, one of which is saliently different from all the others (which are identical to each other), the saliency of the singleton’s location can be measured by the shortness of the reaction time in a visual search for the singleton. The hypothesis quantitatively predicts the whole distribution of reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to the color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention, so that they can be devoted to other functions such as visual decoding and endogenous attention.
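The abstract does not spell out how the parameter-free prediction is computed. A simple way to realize a prediction of this kind, consistent with the stated assumption that V1 neurons are tuned to only one of the three features, is a race (minimum) rule: on each simulated trial, the predicted reaction time for the triple-feature singleton is the fastest of three reaction times drawn from the observed single-feature distributions. The Python sketch below illustrates that idea; it is not the authors' actual analysis, and the example data are hypothetical.

```python
import numpy as np

def predict_triple_feature_rts(rt_color, rt_orientation, rt_motion,
                               n_samples=100_000, rng=None):
    """Race-model (minimum-rule) prediction of the RT distribution for a
    singleton unique in color, orientation, AND motion direction.

    Each predicted RT is the minimum of three RTs resampled independently
    from the observed single-feature RT distributions, as if three pools of
    feature-specific V1 neurons raced to signal the singleton. Inputs are
    1-D arrays of observed RTs (ms); there are no free parameters.
    """
    rng = np.random.default_rng(rng)
    draws = np.stack([
        rng.choice(rt_color, size=n_samples, replace=True),
        rng.choice(rt_orientation, size=n_samples, replace=True),
        rng.choice(rt_motion, size=n_samples, replace=True),
    ])
    return draws.min(axis=0)  # predicted RT distribution

if __name__ == "__main__":
    # Hypothetical single-feature singleton RTs, for illustration only.
    rng = np.random.default_rng(0)
    rt_c = 400 + rng.gamma(shape=4.0, scale=40.0, size=500)  # color singletons
    rt_o = 450 + rng.gamma(shape=4.0, scale=45.0, size=500)  # orientation singletons
    rt_m = 430 + rng.gamma(shape=4.0, scale=42.0, size=500)  # motion singletons
    predicted = predict_triple_feature_rts(rt_c, rt_o, rt_m, rng=rng)
    print(f"median predicted triple-feature RT: {np.median(predicted):.0f} ms")
```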

10.

Background

Human core body temperature is kept quasi-constant regardless of varying thermal environments. It is well known that physiological thermoregulatory systems are under the control of central and peripheral sensory organs that are sensitive to thermal energy. If these systems respond incorrectly to non-thermal stimuli, human homeostasis may be disturbed.

Methods

Fifteen participants viewed video images evoking hot or cold impressions in a thermally constant environment. Cardiovascular indices were recorded during the experiments. Correlations between the ‘hot-cold’ impression scores and cardiovascular indices were calculated.

Results

Changes in heart rate, cardiac output, and total peripheral resistance were significantly correlated with the ‘hot-cold’ impression scores, and the tendencies were similar to those observed in actual thermal environments corresponding to the impressions.

Conclusions

The present results suggest that visual information without any thermal energy can affect physiological thermoregulatory systems, at least superficially. To prevent such ‘virtual’ environments from disturbing human homeostasis, further study and closer attention are needed.

11.
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimulus presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation.

12.

Background

In ecological situations, threatening stimuli often emerge from the peripheral visual field. Such threatening signals must trigger rapid attention to the periphery to allow a fast and adaptive motor reaction. Several lines of evidence converge on the hypothesis that peripherally presented danger can trigger a fast arousal network that is potentially independent of conscious awareness.

Methodology/Principal Findings

In the present MEG study, the spatio-temporal dynamics of the neural processing of danger-related stimuli were explored as a function of stimulus position in the visual field. Fearful and neutral faces were briefly presented in the central or peripheral visual field and were followed by target face stimuli. An event-related beamformer source analysis model was applied in three time windows following the first face presentations: 80 to 130 ms, 140 to 190 ms, and 210 to 260 ms. The frontal lobe and the medial part of the right temporal lobe, including the amygdala, responded at latencies as short as 80 ms to fear occurring in peripheral vision. For central presentation, fearful faces evoked the classical neuronal activity along the occipito-temporal visual pathway between 140 and 190 ms.

Conclusions

Thus, the high spatio-temporal resolution of MEG revealed a fast response of a network involving medial temporal and frontal structures in the processing of fear-related stimuli occurring unconsciously in the peripheral visual field. Whereas centrally presented stimuli are precisely processed by the ventral occipito-temporal cortex, danger-related stimuli appearing in the peripheral visual field are more effective at producing a fast automatic alert response, possibly conveyed by subcortical structures.

13.

Background

Subjects with Attention-Deficit Hyperactivity Disorder (ADHD) are abnormally distractible by stimuli outside the intended focus of attention. This control deficit could be due to a primary reduction of attentional capacities or, for example, to overshooting orienting responses to unexpected events. Here, we aimed to identify disease-related abnormalities of novelty processing and therefore studied event-related potentials (ERPs) to such stimuli in adult ADHD patients compared to healthy subjects.

Methods

Fifteen unmedicated subjects with ADHD and fifteen matched controls performed a visual oddball task (OT) during simultaneous EEG recording. A target stimulus, to which a motor response was required, and non-target stimuli, which did not demand a specific reaction, were presented in random order. The target and most non-target stimuli were presented repeatedly, but some non-target stimuli occurred only once (‘novels’). These unique stimuli were either ‘relative novels’, with which a meaning could be associated, or ‘complete novels’, if no association was available.

Results

In frontal recordings, a positive component with a peak latency of about 400 ms was maximal after novels. In healthy subjects, this novelty-P3 (or ‘orienting response’) was of higher magnitude after complete than after relative novels, whereas the patients showed a uniformly high frontal responsivity to both types of novels. Moreover, ADHD patients tended to show smaller centro-parietal P3 responses after target signals and, at the behavioural level, responded more slowly than controls.

Conclusion

The results demonstrate abnormal novelty processing in adult subjects with ADHD. In controls, the ERP pattern indicates that the allocation of meaning modulates the processing of new stimuli. In ADHD, however, such a modulation was not evident; instead, even familiar stimuli that were new only in the given context were treated as complete novels. We propose that disturbed semantic processing of new stimuli may constitute a mechanism for excessive orienting to commonly negligible stimuli in ADHD.

14.
Motion stimuli in one visual hemifield activate human primary visual areas of the contralateral side, but suppress activity of the corresponding ipsilateral regions. While hemifield motion is rare in everyday life, motion in both hemifields occurs regularly whenever we move. Consequently, during motion, primary visual regions should simultaneously receive excitatory and inhibitory inputs. A comparison of primary and higher visual cortex activations induced by bilateral and unilateral motion stimuli has, however, been lacking until now. Many motion studies have focused on the MT+ complex in the parieto-occipito-temporal cortex. In single human subjects, MT+ has been subdivided into area MT, which was activated by motion stimuli in the contralateral visual field, and area MST, which responded to motion in both the contra- and ipsilateral field. In this study we investigated the cortical activation when excitatory and inhibitory inputs interfere with each other in primary visual regions, and we present for the first time group results for the MT+ subregions, allowing for comparisons with the group results of other motion-processing studies. Using functional magnetic resonance imaging (fMRI), we investigated whole-brain activations in a large group of healthy humans by applying optic flow stimuli in and near the visual field centre, and performed a second-level analysis. Primary visual areas were activated exclusively by motion in the contralateral field but, to our surprise, not by central flow fields. Inhibitory inputs to primary visual regions appear to cancel simultaneously occurring excitatory inputs during central flow field stimulation. Within MT+ we identified two subregions. Putative area MST (pMST) was activated by ipsi- and contralateral stimulation and was located in the anterior part of MT+. The second subregion was located in the more posterior part of MT+ (putative area MT, pMT).

15.

Objective

To develop new standardized eye-tracking-based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm.

Method

Eye-tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as the central stimulus and a geometric shape as the lateral stimulus. The data were analyzed using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants’ gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. lateral stimulus).
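The abstract does not give the computational details of these measures. The sketch below illustrates, for a single trial, how a saccadic reaction time, a disengagement indicator, and a cumulative central-stimulus preference could be derived from fixation-level data; the column names and data format are assumptions for illustration, not the authors' implementation.

```python
import pandas as pd

def gaze_metrics(fixations: pd.DataFrame) -> dict:
    """Illustrative gaze-dynamics measures for one face-distractor trial.

    `fixations` is assumed to contain one row per fixation with columns:
      aoi         -- 'central' or 'lateral' (area of interest)
      onset_ms    -- fixation onset relative to lateral-stimulus onset
      duration_ms -- fixation duration
    """
    lateral = fixations[fixations["aoi"] == "lateral"]
    # Conventional measure: latency of the first look to the lateral stimulus.
    srt = float(lateral["onset_ms"].min()) if not lateral.empty else None

    # Cumulative measure: proportion of looking time on the central stimulus.
    looking = fixations.groupby("aoi")["duration_ms"].sum()
    central_ms = looking.get("central", 0.0)
    lateral_ms = looking.get("lateral", 0.0)
    total = central_ms + lateral_ms
    central_preference = central_ms / total if total > 0 else float("nan")

    return {
        "saccadic_rt_ms": srt,                     # initial disengagement latency
        "disengaged": srt is not None,             # per-trial disengagement indicator
        "central_preference": central_preference,  # cumulative allocation of attention
    }
```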

Results

The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to a cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions than for non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants.

Conclusion

The results suggest that eye-tracking-based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals.

Significance

Standardized measures of early-developing face preferences may have the potential to become surrogate biomarkers of neurocognitive and social development.

16.
17.
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises under these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered immediately afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
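The abstract does not specify how the simultaneity judgments were summarized. A common way to quantify temporal realignment in such designs is to fit a Gaussian to the proportion of 'simultaneous' responses as a function of stimulus onset asynchrony (SOA) and to compare the fitted peak, the point of subjective simultaneity (PSS), across exposure conditions. The sketch below illustrates that approach with hypothetical data; it is not necessarily the analysis used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, pss, width, amplitude):
    """Proportion of 'simultaneous' responses as a function of SOA (ms).
    Positive SOA = vision leading; the peak location is the PSS."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

def fit_pss(soas, p_simultaneous):
    """Fit the Gaussian and return the point of subjective simultaneity (ms)."""
    p0 = [0.0, 150.0, 1.0]  # initial guesses: PSS, width, amplitude
    params, _ = curve_fit(gaussian, soas, p_simultaneous, p0=p0)
    return params[0]

# Hypothetical group data: proportion of 'simultaneous' responses per SOA.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
baseline = np.array([0.15, 0.40, 0.75, 0.90, 0.80, 0.45, 0.20])
after_vision_leading = np.array([0.10, 0.30, 0.65, 0.88, 0.88, 0.60, 0.30])

shift = fit_pss(soas, after_vision_leading) - fit_pss(soas, baseline)
print(f"PSS shift after exposure to vision-leading asynchrony: {shift:+.0f} ms")
```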

18.
Scheller E, Büchel C, Gamer M. PLoS ONE. 2012;7(7):e41792
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while their eye movements were recorded. In Experiment 1, participants had to perform an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow further information to be extracted from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and may be impaired in a number of clinical conditions such as autism or social anxiety disorders.

19.

Background

Converging evidence from different species indicates that some newborn vertebrates, including humans, have visual predispositions to attend to the head region of animate creatures. It has been claimed that newborn preferences for faces are domain-relevant and similar in different species. One of the most common criticisms of the work supporting domain-relevant face biases in human newborns is that, in most studies, the newborns already have several hours of visual experience when tested. This issue can be addressed by testing newly hatched face-naïve chicks (Gallus gallus), whose preferences can be assessed prior to any visual experience with faces.

Methods

In the present study, for the first time, we test the prediction that both newly hatched chicks and human newborns will demonstrate similar preferences for face stimuli over spatial-frequency-matched structured noise. Chicks and babies were tested using stimuli identical for the two species. Chicks underwent a spontaneous preference task, in which they had to approach one of two stimuli simultaneously presented at the ends of a runway. Human newborns participated in a preferential looking task.

Results and Significance

We observed a significant preference for orienting toward the face stimulus in both species. Further, human newborns spent more time looking at the face stimulus, and chicks preferentially approached and stood near the face stimulus. These results confirm the view that widely diverging vertebrates possess similar domain-relevant biases toward faces shortly after hatching or birth, and provide a behavioural basis for comparison with neuroimaging studies using similar stimuli.

20.
Spatial interactions between consecutive movements are often attributed to inhibition of return (IOR), a phenomenon in which responses to previously signalled locations are slower than responses to unsignalled locations. In two experiments using peripheral target signals offset by 0°, 90°, or 180°, we show that consecutive saccadic (Experiment 1) and reaching (Experiment 3) responses exhibit a monotonic pattern of reaction times consistent with the currently established spatial distribution of IOR. In contrast, in two experiments with central target signals (i.e., arrowheads pointing at target locations), we find a non-monotonic pattern of reaction times for saccades (Experiment 2) and reaching movements (Experiment 4). The difference in the patterns of results demonstrates different behavioral effects that depend on signal type. The pattern of results observed for central stimuli is consistent with a model in which neural adaptation occurs within motor networks encoding movement direction in a distributed manner.
