Similar Articles
20 similar articles found.
1.
The visual brain consists of many different visual areas, which are functionally specialized to process and perceive different attributes of the visual scene. However, the time taken to process different attributes varies; consequently, we see some attributes before others. It follows that there is a perceptual asynchrony and hierarchy in visual perception. Because perceiving an attribute is tantamount to becoming conscious of it, it follows that we become conscious of different attributes at different times. Visual consciousness is therefore distributed in time. Given that we become conscious of different visual attributes because of activity at different, functionally specialized, areas of the visual brain, it follows that visual consciousness is also distributed in space. Therefore, visual consciousness is not a single unified entity, but consists of many microconsciousnesses.

2.
The fruit fly Drosophila melanogaster has a sophisticated visual system and exhibits complex visual behaviors. Visual responses, vision processing and higher cognitive processes in Drosophila have been studied extensively. However, little is known about whether the retinal location of visual stimuli can affect fruit fly performance in various visual tasks. We tested the response of wild-type Berlin flies to visual stimuli at several vertical locations. Three paradigms were used in our study: visual operant conditioning, visual object fixation and optomotor response. We observed an acute zone for visual feature memorization in the upper visual field when visual patterns were presented with a black background. However, when a white background was used, the acute zone was in the lower visual field. Similar to visual feature memorization, the best locations for visual object fixation and optomotor response to a single moving stripe were in the lower visual field with a white background and the upper visual field with a black background. The preferred location for the optomotor response to moving gratings was around the equator of the visual field. Our results suggest that different visual processing pathways are involved in different visual tasks and that there is a certain degree of overlap between the pathways for visual feature memorization, visual object fixation and optomotor response.

3.
The proximity of visual landmarks impacts reaching performance
The control of goal-directed reaching movements is thought to rely upon egocentric visual information derived from the visuomotor networks of the dorsal visual pathway. However, recent research (Krigolson and Heath, 2004) suggests it is also possible to make allocentric comparisons between a visual background and a target object to facilitate reaching accuracy. Here we sought to determine if the effectiveness of these allocentric comparisons is reduced as distance between a visual background and a target object increases. To accomplish this, participants completed memory-guided reaching movements to targets presented in an otherwise empty visual background or positioned within a proximal, medial, or distal visual background. Our results indicated that the availability of a proximal or medial visual background reduced endpoint variability relative to reaches made without a visual background. Interestingly, we found that endpoint variability was not reduced when participants reached to targets framed within a distal visual background. Such findings suggest that allocentric visual information is used to facilitate reaching performance; however, the fidelity with which such cues are used appears linked to the proximity of veridical target location. Importantly, these data also suggest that information from both the dorsal and ventral visual streams can be integrated to facilitate the online control of reaching movements.

4.
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual information, is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex.
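A toy sketch of the representational-similarity logic behind this kind of multivariate analysis (simulated data only; this is not the authors' pipeline, and all numbers are invented): if a population's pairwise response similarities are driven by shape, objects sharing a shape prototype should be closer to each other than to members of another shape cluster, regardless of semantic labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population responses: 20 "neurons", 10 objects.
# Objects 0-4 share one shape prototype, objects 5-9 another,
# so similarity structure here is shape-driven by construction.
n_neurons = 20
proto_a = rng.normal(size=n_neurons)
proto_b = rng.normal(size=n_neurons)
responses = np.stack(
    [proto_a + 0.3 * rng.normal(size=n_neurons) for _ in range(5)]
    + [proto_b + 0.3 * rng.normal(size=n_neurons) for _ in range(5)]
)

# Representational dissimilarity matrix: 1 - Pearson correlation.
rdm = 1.0 - np.corrcoef(responses)

# Mean dissimilarity within vs. between the two shape clusters.
within = np.mean(
    [rdm[i, j] for i in range(5) for j in range(5) if i != j]
    + [rdm[i, j] for i in range(5, 10) for j in range(5, 10) if i != j]
)
between = np.mean([rdm[i, j] for i in range(5) for j in range(5, 10)])
print(within < between)  # shape similarity dominates the representation
```

In the study's setting, the analogous test asks whether an apparent semantic clustering survives once this kind of shape-driven similarity is accounted for.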

5.
A number of recent studies have demonstrated superior visual processing when the information is distributed across the left and right visual fields than if the information is presented in a single hemifield (the bilateral field advantage). This effect is thought to reflect independent attentional resources in the two hemifields and the capacity of the neural responses to the left and right hemifields to process visual information in parallel. Here, we examined whether a bilateral field advantage can also be observed in a high-level visual task that requires the information from both hemifields to be combined. To this end, we used a visual enumeration task, which requires the assimilation of separate visual items into a single quantity, in which the to-be-enumerated items were either presented in one hemifield or distributed between the two visual fields. We found that enumerating large numbers (>4 items), but not small numbers (<4 items), exhibited the bilateral field advantage: enumeration was more accurate when the visual items were split between the left and right hemifields than when they were all presented within the same hemifield. Control experiments further showed that this effect could not be attributed to a horizontal alignment advantage of the items in the visual field, or to a retinal stimulation difference between the unilateral and bilateral displays. These results suggest that a bilateral field advantage can arise when the visual task involves inter-hemispheric integration. This is in line with previous research and theory indicating that, when the visual task is attentionally demanding, parallel processing by the neural responses to the left and right hemifields can expand the capacity of visual information processing.

6.
Attention is intrinsic to our perceptual representations of sensory inputs. Best characterized in the visual domain, it is typically depicted as a spotlight moving over a saliency map that topographically encodes strengths of visual features and feedback modulations over the visual scene. By introducing smells to two well-established attentional paradigms, the dot-probe and the visual-search paradigms, we find that a smell reflexively directs attention to the congruent visual image and facilitates visual search of that image without the mediation of visual imagery. Furthermore, such an effect is independent of, and can override, top-down bias. We thus propose that smell quality acts as an object feature whose presence enhances the perceptual saliency of that object, thereby guiding the spotlight of visual attention. Our discoveries provide robust empirical evidence for a multimodal saliency map that weighs not only visual but also olfactory inputs.

7.
There is evidence that visual stimuli used to signal drug delivery in self-administration procedures have primary reinforcing properties, and that drugs of abuse enhance the reinforcing properties of such stimuli. Here, we explored the relationships between locomotor activity, responding for a visual stimulus, and self-administration of methamphetamine (METH). Rats were classified as high or low responders based on activity levels in a novel locomotor chamber and were subsequently tested for responding to produce a visual stimulus followed by self-administration of a low dose of METH (0.025 mg/kg/infusion) paired with the visual stimulus. High responder rats responded more for the visual stimulus than low responder rats indicating that the visual stimulus was reinforcing and that operant responding for a visual stimulus has commonalities with locomotor activity in a novel environment. Similarly, high responder rats responded more for METH paired with a visual stimulus than low responder rats. Because of the reinforcing properties of the visual stimulus, it was not possible to determine if the rats were responding to produce the visual stimulus, METH or the combination. We speculate that responding to produce sensory reinforcers may be a measure of sensation seeking. These results indicate that visual stimuli have unconditioned reinforcing effects which may have a significant role in acquisition of drug self-administration, a role that is not yet well understood.

8.
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span) predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short-term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short-term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children.
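A rough sketch of how the two Theory of Visual Attention parameters mentioned above interact (a simplified illustration with invented numbers, not the authors' fitting procedure): processing speed C governs how quickly letters are encoded during a brief exposure, while short-term memory capacity K caps how many can be reported.

```python
import math

# Simplified TVA-style model (assumed simplification): each of
# n_letters races to be encoded at rate C / n_letters during the
# exposure; at most K letters fit in visual short-term memory.
def expected_report(n_letters, C, K, exposure):
    p_encoded = 1.0 - math.exp(-(C / n_letters) * exposure)
    return min(n_letters * p_encoded, K)

# At brief exposures, performance tracks processing speed C...
print(round(expected_report(n_letters=6, C=40, K=3.5, exposure=0.1), 2))
# ...while at long exposures the capacity K caps the score,
# mirroring the dissociation reported in the abstract.
print(expected_report(n_letters=6, C=40, K=3.5, exposure=2.0))  # 3.5
```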

9.
The persistences of vision
Human observers continue to experience a visual stimulus for some time after the offset of that stimulus. The neural activity evoked by a visual stimulus continues for some time after its offset. The information extracted from a visual stimulus continues to be registered in a visual form of memory ('iconic memory') for some time after its offset. We may thus distinguish three distinct senses in which a visual stimulus may be said to persist after its physical offset: there is phenomenological persistence, neural persistence and informational persistence. Various assumptions have been made about the relation between these forms of visual persistence. The most frequent assumption is that they correspond simply to three different methods for studying a single entity. Detailed consideration of what is known about the properties of these three forms of persistence suggests, however, that this assumption is not correct. It can reasonably be proposed that visible persistence is the phenomenological correlate of neural persistence occurring at various stages of the visual system: photoreceptors, ganglion cells and the stereopsis system. Iconic memory, on the other hand, does not correspond to visible persistence, nor to neural persistence in any stage of the visual system. Recent work, in fact, suggests that iconic memory is a property of some relatively late stage in the visual information-processing system, rather than being a peripheral sensory buffer store. This suggestion raises some fundamental theoretical issues concerning the psychology of visual perception, issues with which cognitive psychology has yet to come to grips.

10.
Cone visual pigments
Cone visual pigments are visual opsins that are present in vertebrate cone photoreceptor cells and act as photoreceptor molecules responsible for photopic vision. Like the rod visual pigment rhodopsin, which is responsible for scotopic vision, cone visual pigments contain the chromophore 11-cis-retinal, which undergoes cis–trans isomerization resulting in the induction of conformational changes of the protein moiety to form a G protein-activating state. There are multiple types of cone visual pigments with different absorption maxima, which are the molecular basis of color discrimination in animals. Cone visual pigments form a phylogenetic sister group with non-visual opsin groups such as pinopsin, VA opsin, parapinopsin and parietopsin groups. Cone visual pigments diverged into four groups with different absorption maxima, and the rhodopsin group diverged from one of the four groups of cone visual pigments. The photochemical behavior of cone visual pigments is similar to that of pinopsin but considerably different from those of other non-visual opsins. G protein activation efficiency of cone visual pigments is also comparable to that of pinopsin but higher than that of the other non-visual opsins. Recent measurements with sufficient time-resolution demonstrated that G protein activation efficiency of cone visual pigments is lower than that of rhodopsin, which is one of the molecular bases for the lower amplification of cones compared to rods. In this review, the uniqueness of cone visual pigments is shown by comparison of their molecular properties with those of non-visual opsins and rhodopsin. This article is part of a Special Issue entitled: Retinal Proteins — You can teach an old dog new tricks.

11.
Keller GB, Bonhoeffer T, Hübener M. Neuron 2012;74(5):809-815
Studies in anesthetized animals have suggested that activity in early visual cortex is mainly driven by visual input and is well described by a feedforward processing hierarchy. However, evidence from experiments on awake animals has shown that both eye movements and behavioral state can strongly modulate responses of neurons in visual cortex, although the functional significance of this modulation remains elusive. Using visual-flow feedback manipulations during locomotion in a virtual reality environment, we found that responses in layer 2/3 of mouse primary visual cortex are strongly driven by locomotion and by mismatch between actual and expected visual feedback. These data suggest that processing in visual cortex may be based on predictive coding strategies that use motor-related and visual input to detect mismatches between predicted and actual visual feedback.
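The predictive-coding idea above can be caricatured in a few lines (a toy sketch with invented numbers, not the authors' model): a mismatch unit compares the visual flow predicted from locomotion speed with the actual flow, and fires when feedback is withheld while the animal runs.

```python
# Toy mismatch unit: predicted visual flow is proportional to
# running speed (the gain is an assumed free parameter); the unit
# responds to rectified prediction error, as in predictive coding.
def mismatch_response(running_speed, visual_flow, gain=1.0):
    predicted_flow = gain * running_speed
    return max(0.0, predicted_flow - visual_flow)

# Closed-loop condition: flow matches locomotion, no mismatch.
print(mismatch_response(running_speed=2.0, visual_flow=2.0))  # 0.0
# Feedback halted during running: strong mismatch signal.
print(mismatch_response(running_speed=2.0, visual_flow=0.0))  # 2.0
```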

12.
Haynes JD, Driver J, Rees G. Neuron 2005;46(5):811-821
Identifying the neural basis of visibility is central to understanding conscious visual perception. Visibility of basic features such as brightness is often thought to reflect activity in just early visual cortex. But here we show under metacontrast masking that fMRI activity in stimulus-driven areas of early visual cortex did not reflect parametric changes in the visibility of a brightness stimulus. The psychometric visibility function was instead correlated with activity in later visual regions plus parieto-frontal areas and, surprisingly, with activity in primary visual cortex representations of the unstimulated surround. Critically, decreased stimulus visibility was associated with a regionally specific decoupling between early visual cortex and higher visual areas. This provides evidence that dynamic changes in effective connectivity can closely reflect visual perception.

13.
It has been suggested that attentional resolution is greater in the lower than in the upper visual field. As there is no corresponding asymmetry between the areas in the primary visual cortex where the input from upper and lower visual fields is processed, an 'attentional filter' has been proposed to act in one or more higher visual cortical areas in order to constrict the availability of visual information to the level of awareness. To investigate this, a visual search array was presented to the entire visual field and reaction times from upper and lower visual fields compared. In a second experiment, subjects were trained in detecting targets in different visual fields. There was no significant difference between reaction times for targets presented in either upper or lower visual fields when the array was presented to the entire visual field. However, when the array was restricted to either the upper or lower visual fields, reaction times were significantly slower for detection in the upper visual field.

14.
Feedback contributions to visual awareness in human occipital cortex
It has traditionally been assumed that processing within the visual system proceeds in a bottom-up, feedforward manner from retina to higher cortical areas. In addition to feedforward processing, it is now clear that there are also important contributions to sensory encoding that rely upon top-down, feedback (reentrant) projections from higher visual areas to lower ones. By utilizing transcranial magnetic stimulation (TMS) in a metacontrast masking paradigm, we addressed whether feedback processes in early visual cortex play a role in visual awareness. We show that TMS of visual cortex, when timed to produce visual suppression of an annulus serving as a metacontrast mask, induces recovery of an otherwise imperceptible disk. In addition to producing disk recovery, TMS suppression of an annulus was greater when a disk preceded it than when an annulus was presented alone. This latter result suggests that there are effects of the disk on the perceptibility of the subsequent mask that are additive and are revealed with TMS of the visual cortex. These results demonstrate spatial and temporal interactions of conscious vision in visual cortex and suggest that a prior visual stimulus can influence subsequent perception at early stages of visual encoding via feedback projections.

15.
Visual attention: the where, what, how and why of saliency
Attention influences the processing of visual information even in the earliest areas of primate visual cortex. There is converging evidence that the interaction of bottom-up sensory information and top-down attentional influences creates an integrated saliency map, that is, a topographic representation of relative stimulus strength and behavioral relevance across visual space. This map appears to be distributed across areas of the visual cortex, and is closely linked to the oculomotor system that controls eye movements and orients the gaze to locations in the visual scene characterized by high salience.
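The integrated saliency map described above can be sketched computationally (a minimal illustration with assumed feature maps and weights, not a model from the review): bottom-up feature maps are scaled by top-down relevance weights and summed into one topographic map, whose peak is the candidate gaze target.

```python
import numpy as np

# Minimal integrated saliency map: weight each bottom-up feature
# map by a top-down relevance factor, then sum topographically.
def saliency_map(feature_maps, top_down_weights):
    return sum(w * m for m, w in zip(feature_maps, top_down_weights))

h, w = 4, 4
color = np.zeros((h, w)); color[1, 2] = 1.0    # a colour singleton
motion = np.zeros((h, w)); motion[3, 0] = 0.5  # a weak motion transient

# Task relevance up-weights colour; the singleton wins the map, and
# the oculomotor system would orient gaze to the peak location.
s = saliency_map([color, motion], top_down_weights=[2.0, 1.0])
loc = tuple(int(i) for i in np.unravel_index(np.argmax(s), s.shape))
print(loc)  # (1, 2)
```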

16.
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously: colour leads form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.

17.
Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

18.
How our vision remains stable in spite of the interruptions produced by saccadic eye movements has been a repeatedly revisited perceptual puzzle. The major hypothesis is that a corollary discharge (CD) or efference copy signal provides information that the eye has moved, and this information is used to compensate for the motion. There has been progress in the search for neuronal correlates of such a CD in the monkey brain, the best animal model of the human visual system. In this article, we briefly summarize the evidence for a CD pathway to frontal cortex, and then consider four questions on the relation of neuronal mechanisms in the monkey brain to stable visual perception. First, how can we determine whether the neuronal activity is related to stable visual perception? Second, is the activity a possible neuronal correlate of the proposed transsaccadic memory hypothesis of visual stability? Third, are the neuronal mechanisms modified by visual attention and does our perceived visual stability actually result from neuronal mechanisms related primarily to the central visual field? Fourth, does the pathway from superior colliculus through the pulvinar nucleus to visual cortex contribute to visual stability through suppression of the visual blur produced by saccades?

19.
Multisensory integration is a common feature of the mammalian brain that allows it to deal more efficiently with the ambiguity of sensory input by combining complementary signals from several sensory sources. Growing evidence suggests that multisensory interactions can occur as early as primary sensory cortices. Here we present incompatible visual signals (orthogonal gratings) to each eye to create visual competition between monocular inputs in primary visual cortex where binocular combination would normally take place. The incompatibility prevents binocular fusion and triggers an ambiguous perceptual response in which the two images are perceived one at a time in an irregular alternation. One key function of multisensory integration is to minimize perceptual ambiguity by exploiting cross-sensory congruence. We show that a haptic signal matching one of the visual alternatives helps disambiguate visual perception during binocular rivalry by both prolonging the dominance period of the congruent visual stimulus and by shortening its suppression period. Importantly, this interaction is strictly tuned for orientation, with a mismatch as small as 7.5° between visual and haptic orientations sufficient to annul the interaction. These results support two important conclusions: first, that vision and touch interact at early levels of visual processing where interocular conflicts are first detected and orientation tunings are narrow, and second, that haptic input can influence visual signals outside of visual awareness, bringing a stimulus made invisible by binocular rivalry suppression back to awareness sooner than would occur without congruent haptic input.

20.
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left visual field. In contrast, auditory stimuli improved second target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was observed in perceptual processing, depending on the hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.
