Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulating it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations within which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.

2.
Poghosyan V, Ioannides AA. Neuron. 2008;58(5):802-813.
A fundamental question about the neural correlates of attention concerns the earliest sensory processing stage that it can affect. We addressed this issue by recording magnetoencephalography (MEG) signals while subjects performed detection tasks that required spatial or nonspatial attention in the auditory or visual modality. Using distributed source analysis of the MEG signals, we found that, contrary to previous studies that used equivalent current dipole (ECD) analysis, spatial attention enhanced the initial feedforward response in the primary visual cortex (V1) at 55-90 ms. We also found attentional modulation of putative primary auditory cortex (A1) activity at 30-50 ms. Furthermore, we reproduced our findings using ECD modeling guided by the results of the distributed source analysis, and we suggest a reason why earlier studies using ECD analysis failed to identify the modulation of the earliest V1 activity.

3.
Attention is intrinsic to our perceptual representations of sensory inputs. Best characterized in the visual domain, it is typically depicted as a spotlight moving over a saliency map that topographically encodes the strengths of visual features and feedback modulations over the visual scene. By introducing smells into two well-established attentional paradigms, the dot-probe and visual-search paradigms, we find that a smell reflexively directs attention to the congruent visual image and facilitates visual search for that image without the mediation of visual imagery. Furthermore, this effect is independent of, and can override, top-down bias. We thus propose that smell quality acts as an object feature whose presence enhances the perceptual saliency of that object, thereby guiding the spotlight of visual attention. Our findings provide robust empirical evidence for a multimodal saliency map that weighs not only visual but also olfactory inputs.
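To make the proposal concrete, here is a minimal sketch (not the authors' model) of how an olfactory input might be folded into a standard winner-take-all saliency map: normalized visual feature-contrast maps are averaged, and locations occupied by an odor-congruent object receive an additive boost before the most salient location is selected. The feature maps, the odor-congruence mask and the weighting are illustrative assumptions.

```python
import numpy as np

def multimodal_saliency(visual_feature_maps, odor_congruence_mask, odor_weight=0.5):
    """Combine normalized visual feature-contrast maps with an olfactory boost."""
    def norm(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-12)    # rescale each map to [0, 1]
    visual = sum(norm(m) for m in visual_feature_maps) / len(visual_feature_maps)
    saliency = visual + odor_weight * odor_congruence_mask    # odor-congruent locations get a boost
    return saliency, np.unravel_index(np.argmax(saliency), saliency.shape)

# Toy scene: two equally salient visual blobs; the smell is congruent with the right-hand one.
scene = np.zeros((64, 64))
scene[20, 16] = scene[20, 48] = 1.0
odor_mask = np.zeros_like(scene)
odor_mask[20, 48] = 1.0
smap, winner = multimodal_saliency([scene], odor_mask)
print(winner)   # the attentional spotlight lands on the odor-congruent location (20, 48)
```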

4.
Previous work has demonstrated that upcoming saccades influence visual and auditory performance even for stimuli presented before the saccade is executed. These studies suggest a close relationship between saccade generation and visual/auditory attention. Furthermore, they provide support for Rizzolatti et al.'s premotor model of attention, which holds that the same circuits involved in motor programming are also responsible for shifts in covert orienting (shifting attention without moving the eyes or changing posture). In a series of experiments, we demonstrate that saccade programming also affects tactile perception. Participants made speeded saccades to the left or right side as well as tactile discriminations of up versus down. The first experiment demonstrates that participants were reliably faster at responding to tactile stimuli near the location of upcoming saccades. In the second experiment, participants crossed their hands, and we found that the effect occurs in visual space (rather than in early representations of touch). In the third experiment, the tactile events usually occurred on the side opposite the upcoming eye movement. The benefit at the saccade target location vanished, suggesting that this shift is not obligatory but can be vetoed on the basis of expectation.

5.
When confronted with complex visual scenes in daily life, how do we know which visual information represents our own hand? We investigated the cues used to assign visual information to one's own hand. Wrist tendon vibration elicits an illusory sensation of wrist movement. The intensity of this illusion attenuates when the actual motionless hand is visually presented. Testing what kind of visual stimuli attenuate this illusion will elucidate factors contributing to visual detection of one's own hand. The illusion was reduced when a stationary object was shown, but only when participants knew it was controllable with their hands. In contrast, the visual image of their own hand attenuated the illusion even when participants knew that it was not controllable. We suggest that long-term knowledge about the appearance of the body and short-term knowledge about controllability of a visual object are combined to robustly extract our own body from a visual scene.

6.
Is our visual experience of the world graded or dichotomous? Opposite pre-theoretical intuitions apply in different cases. For instance, when looking at a scene, one has a distinct sense that our experience has a graded character: one cannot say that there is no experience of contents that fall outside the focus of attention, but one cannot say that there is full awareness of such contents either. By contrast, when performing a visual detection task, our sense of having perceived the stimulus or not exhibits a more dichotomous character. Such issues have recently been the object of intense debate because different theoretical frameworks make different predictions about the graded versus dichotomous character of consciousness. Here, we review both the relevant empirical findings and the associated theories (i.e. local recurrent processing versus global neural workspace theory). Next, we attempt to reconcile such contradictory theories by suggesting that level of processing is an often-ignored but highly relevant dimension through which we can cast a novel look at existing empirical findings. Thus, using a range of different stimuli, tasks and subjective scales, we show that processing low-level, non-semantic content results in graded visual experience, whereas processing high-level semantic content is experienced in a more dichotomous manner. We close by comparing our perspective with existing proposals, focusing in particular on the partial awareness hypothesis.

7.
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.

8.
Previous studies in monkeys and humans have revealed neural correlates and perceptual consequences of feature-based attention. In this issue of Neuron, two brain-imaging studies, from Serences and Boynton and from Liu et al., bridge the gap between single neurons and behavior by demonstrating a highly functional attention system that acts on neural representations of our visual world, enhancing the processing of the currently attended set of features at the expense of information about less relevant aspects.

9.
1. Neurotrophins are strong candidates for relating electrical activity to molecular changes in activity-dependent phenomena. They exert their action by binding to specific tyrosine-kinase receptors, the Trk receptors. Trk distribution must therefore be considered in order to better understand the role of neurotrophins in the central nervous system (CNS). We focused our attention on the receptors for brain-derived neurotrophic factor (BDNF), the TrkB receptors, during development of the rat visual cortex, since this neurotrophin has been shown to play an important role in visual system development and plasticity. 2. We investigated the full-length form of the TrkB receptor, considering both its total amount and its cellular distribution. To address this issue we used an antibody that recognizes the full-length form of TrkB, in both Western blot and immunohistochemistry. 3. We found that the expression of the TrkB receptor increases during development but is not affected by visual experience, since dark-reared animals show the same protein level and pattern of TrkB expression as age-matched, normally reared controls.

10.
We appear to be unaware of large changes in our visual scene if our attention is temporarily diverted. This suggests that the rich, complete visual scene that we appear to have may be just an illusion.

11.
Attention is crucial for visual perception because it allows the visual system to effectively use its limited resources by selecting behaviorally and cognitively relevant stimuli from the large amount of information impinging on the eyes. Reflexive, stimulus-driven attention is essential for successful interactions with the environment because it can, for example, speed up responses to life-threatening events. It is commonly believed that exogenous attention operates in the retinotopic coordinates of the early visual system. Here, using a novel experimental paradigm [1], we show that a nonretinotopic cue improves both accuracy and reaction times in a visual search task. Furthermore, the influence of the cue is limited both in space and time, a characteristic typical of exogenous cueing. These and other recent findings show that many more aspects of vision are processed nonretinotopically than previously thought.

12.
Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations during administration of a cholinergic agonist (physostigmine) while humans performed a spatial visual attention task. The cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect that correlated with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex.

13.
Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity.
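As a hedged illustration of the kind of trial-wise EEG-fMRI coupling analysis described here (not the authors' pipeline), one can extract single-trial alpha power from a posterior EEG channel with a Hilbert envelope and correlate it across trials with single-trial BOLD amplitude estimates from an object-selective region. The channel, sampling rate, trial counts and the way the BOLD amplitudes were obtained are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def single_trial_alpha_power(eeg_trials, fs, band=(8.0, 13.0)):
    """eeg_trials: (n_trials, n_samples) array from one posterior channel."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg_trials, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))      # instantaneous alpha amplitude
    return envelope.mean(axis=-1) ** 2                 # one power value per trial

# Hypothetical inputs: 100 trials of EEG at 500 Hz plus matching single-trial BOLD amplitudes.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((100, 1000))
bold = rng.standard_normal(100)
alpha = single_trial_alpha_power(eeg, fs=500.0)
r = np.corrcoef(alpha, bold)[0, 1]                     # contralateral alpha would be expected to correlate negatively
print(f"alpha-BOLD correlation: r = {r:.2f}")
```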

14.
Where we allocate our visual spatial attention depends upon a continual competition between internally generated goals and external distractions. Recently it was shown that single neurons in the macaque lateral intraparietal area (LIP) can predict the amount of time a distractor can shift the locus of spatial attention away from a goal. We propose that this remarkable dynamical correspondence between single neurons and attention can be explained by a network model in which generically high-dimensional firing-rate vectors rapidly decay to a single mode. We find direct experimental evidence for this model, not only in the original attentional task, but also in a very different task involving perceptual decision making. These results confirm a theoretical prediction that slowly varying activity patterns are proportional to spontaneous activity, pose constraints on models of persistent activity, and suggest a network mechanism for the emergence of robust behavioral timing from heterogeneous neuronal populations.
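A minimal sketch of the network idea described here, under assumed parameters rather than the published model: in a linear firing-rate network whose modes all decay, an arbitrary high-dimensional initial state quickly becomes proportional to the slowest-decaying eigenmode, so late activity from very different initial conditions collapses onto a single pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                              # number of model neurons

# Random symmetric connectivity, rescaled so every mode of (W - I) decays
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
W *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(W)))

dt, tau, steps = 0.001, 0.02, 3000

def simulate(r0):
    r = r0.copy()
    for _ in range(steps):                          # Euler integration of tau * dr/dt = -r + W @ r
        r += (dt / tau) * (-r + W @ r)
    return r

# Two unrelated high-dimensional initial states collapse onto the same slow mode
r_a = simulate(rng.standard_normal(n))
r_b = simulate(rng.standard_normal(n))
alignment = abs(r_a @ r_b) / (np.linalg.norm(r_a) * np.linalg.norm(r_b))
print(f"alignment of the two late activity patterns: {alignment:.3f}")   # close to 1
```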

15.
Visual performance and visual interactions in pelagic animals are notoriously hard to investigate because of our restricted access to the habitat. The pelagic visual world is also dramatically different from benthic or terrestrial habitats, and our intuition is less helpful in understanding vision in unfamiliar environments. Here, we develop a computational approach to investigate visual ecology in the pelagic realm. Using information on eye size, key retinal properties, optical properties of the water and radiance, we develop expressions for calculating the visual range for detection of important types of pelagic targets. We also briefly apply the computations to a number of central questions in pelagic visual ecology, such as the relationship between eye size and visual performance, the maximum depth at which daylight is useful for vision, visual range relations between prey and predators, counter-illumination and the importance of various aspects of retinal physiology. We also argue that our present addition to computational visual ecology can be developed further, and that a computational approach offers plenty of unused potential for investigations of visual ecology in both aquatic and terrestrial habitats.
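A heavily simplified sketch of the kind of visual-range computation described here (the published expressions additionally account for eye size, photoreceptor noise and background radiance; the attenuation coefficient, inherent contrast and contrast threshold below are assumed values): for horizontal viewing, the apparent contrast of an extended target decays roughly exponentially with viewing distance through water, and the visual range is the distance at which it falls to the observer's contrast threshold.

```python
import math

def visual_range(c_beam, inherent_contrast, contrast_threshold):
    """Distance (m) at which apparent contrast drops to the detection threshold.

    Assumes horizontal viewing, so apparent contrast follows
    C(r) = C0 * exp(-c * r) (Duntley-style attenuation); eye-specific limits
    such as photon noise and spatial resolution are ignored in this sketch.
    """
    return math.log(abs(inherent_contrast) / contrast_threshold) / c_beam

# Assumed example values: clear oceanic water (beam attenuation c ~ 0.1 per m),
# a dark target (C0 = -0.9) and a behavioural contrast threshold of 2%.
print(f"{visual_range(0.1, -0.9, 0.02):.1f} m")   # roughly 38 m sighting distance
```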

16.
David SV, Hayden BY, Mazer JA, Gallant JL. Neuron. 2008;59(3):509-521.
Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.
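To make the contrast between the two attention effects concrete, here is a hedged toy model (not the fitted V4 model): spatial attention scales the baseline and gain of a Gaussian orientation-tuning curve without moving its peak, whereas feature-based attention is modeled as pulling the tuning peak part-way toward the orientation of the search target, as a matched filter would. All parameter values are illustrative.

```python
import numpy as np

def tuning(theta, pref, baseline=2.0, gain=10.0, bandwidth=20.0):
    """Gaussian orientation tuning curve (spikes/s) for stimulus orientation theta (deg)."""
    return baseline + gain * np.exp(-0.5 * ((theta - pref) / bandwidth) ** 2)

theta = np.linspace(-90, 90, 181)
pref, target_ori = 0.0, 40.0                     # neuron prefers 0 deg; search target is at 40 deg

unattended = tuning(theta, pref)
spatial_attention = tuning(theta, pref, baseline=2.5, gain=13.0)    # scaled, peak unchanged
shift = 0.3 * (target_ori - pref)                                   # partial pull toward the target
feature_attention = tuning(theta, pref + shift)                     # matched-filter-like tuning shift

print(theta[np.argmax(spatial_attention)], theta[np.argmax(feature_attention)])   # 0.0 vs 12.0
```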

17.
Even though it is generally agreed that face stimuli constitute a special class of stimuli that are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias toward selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. These debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is tuned not only to the low-level features that make up a face but also to its meaning.

18.
Our everyday conscious experience of the visual world is fundamentally shaped by the interaction of overt visual attention and object awareness. Although the principal impact of both components is undisputed, it is still unclear how they interact. Here we recorded eye movements preceding and following conscious object recognition, collected during free inspection of ambiguous and corresponding unambiguous stimuli. Using this paradigm, we demonstrate that fixations recorded prior to object awareness predict the later recognized object identity, and that subjects accumulate more evidence consistent with their eventual percept than with the alternative. The timing of awareness was verified with a reaction-time-based correction method and with changes in pupil dilation. Control experiments, in which we manipulated the initial locus of visual attention, confirm a causal influence of overt attention on the subsequent outcome of object perception. The current study thus demonstrates that distinct patterns of overt attentional selection precede object awareness, and it thereby builds directly on recent electrophysiological findings suggesting two distinct neuronal mechanisms underlying the two phenomena. Our results emphasize the crucial importance of overt visual attention in the formation of our conscious experience of the visual world.

19.
The human brain receives a vast amount of visual information at every moment. Because its processing capacity is limited, allocating attention to relevant information across a large visual field while suppressing distracting, irrelevant information is essential for goal-directed behavior. This selective and active processing of visual information in the service of current goals is called visual attention, and it comprises two distinct functions: top-down attention and bottom-up attention. Because neural oscillations in the brain's electrical activity play an important role in cognitive processing, previous reviews have addressed the close relationship between visual attention and neural oscillations, but they have not distinguished how the different attentional functions relate to oscillations. This article systematically surveys the relationship between the different attentional functions and neural oscillations. Theta-band oscillations over frontoparietal regions reflect top-down cognitive control, whereas theta oscillations in posterior regions are associated with bottom-up attention. Lateralization of alpha oscillations over parieto-occipital regions supports the allocation of attention, and large-scale synchronization in the alpha band mediates the top-down influence of attention on visual cortex. Beta oscillations mediate the interaction between top-down and bottom-up information, serving as an information carrier that facilitates visual processing. Gamma oscillations, in turn, may be related to the integration of top-down and bottom-up attention. This review surveys the current state of research on the relationship between visual attention functions and neural oscillations, aiming to reveal the role of different oscillatory activities in specific attentional functions.

20.
Our nervous system is confronted with a barrage of sensory stimuli, but neural resources are limited and not all stimuli can be processed to the same extent. Mechanisms exist to bias attention toward particularly salient events, thereby providing a weighted representation of our environment. Our understanding of these mechanisms is still limited, but theoretical models can replicate such a weighting of sensory inputs and provide a basis for understanding the underlying principles. Here, we describe such a model for the auditory system: an auditory saliency map. We experimentally validate the model on natural acoustical scenarios, demonstrating that it reproduces human judgments of auditory saliency and predicts the detectability of salient sounds embedded in noisy backgrounds. It also predicts the natural orienting behavior of naive macaque monkeys to the same salient stimuli. The structure of the suggested model is identical to that of successfully used visual saliency maps. Hence, we conclude that saliency is determined either by implementing similar mechanisms in different unisensory pathways or by the same mechanism in multisensory areas. In either case, our results demonstrate that different primate sensory systems rely on common principles for extracting relevant sensory events.
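Below is a hedged sketch of a saliency map with the same center-surround structure as the visual (Itti-Koch-style) models referred to here, applied to a time-frequency representation of sound. The choice of features (intensity, spectral contrast, temporal contrast), the filter scales and the normalization are assumptions rather than the published model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def auditory_saliency(spectrogram):
    """Center-surround saliency over a (frequency x time) spectrogram, Itti-Koch style."""
    def center_surround(feature):
        cs = np.abs(gaussian_filter(feature, 1) - gaussian_filter(feature, 8))
        return (cs - cs.min()) / (cs.max() - cs.min() + 1e-12)   # normalize each feature map

    intensity = spectrogram
    spectral_contrast = np.abs(sobel(spectrogram, axis=0))       # changes across frequency
    temporal_contrast = np.abs(sobel(spectrogram, axis=1))       # changes across time
    maps = [center_surround(f) for f in (intensity, spectral_contrast, temporal_contrast)]
    return sum(maps) / len(maps)

# Toy scene: broadband background noise with one brief, loud tone embedded in it.
rng = np.random.default_rng(2)
spec = rng.random((128, 400)) * 0.2
spec[60:64, 200:210] += 1.0                                      # the salient event
sal = auditory_saliency(spec)
print(np.unravel_index(np.argmax(sal), sal.shape))               # peak falls on the embedded tone
```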
