Similar documents (20 results)
1.

Background

A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.

Methodology/Principal Findings

We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.

Conclusions/Significance

These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.

2.
Harris IM, Dux PE, Benito CT, Leek EC. PLoS One. 2008;3(5):e2256.

Background

An ongoing debate in the object recognition literature centers on whether the shape representations used in recognition are coded in an orientation-dependent or orientation-invariant manner. In this study, we asked whether the nature of the object representation (orientation-dependent vs orientation-invariant) depends on the information-processing stages tapped by the task.

Methodology/Principal Findings

We employed a repetition priming paradigm in which briefly presented masked objects (primes) were followed by an upright target object which had to be named as rapidly as possible. The primes were presented for variable durations (ranging from 16 to 350 ms) and in various image-plane orientations (from 0° to 180°, in 30° steps). Significant priming was obtained for prime durations above 70 ms, but not for prime durations of 16 ms and 47 ms, and did not vary as a function of prime orientation. In contrast, naming the same objects that served as primes resulted in orientation-dependent reaction time costs.

Conclusions/Significance

These results suggest that initial processing of object identity is mediated by orientation-independent information and that orientation costs in performance arise when objects are consolidated in visual short-term memory in order to be reported.

3.

Background

In the human visual system, different attributes of an object, such as shape, color, and motion, are processed separately in different areas of the brain. This raises a fundamental question: how are these attributes integrated to produce a unified perception and a specific response? This “binding problem” is computationally difficult because all attributes are assumed to be bound together to form a single object representation. However, there is no firm evidence that such representations exist for general objects.

Methodology/Principal Findings

Here we propose a paired-attribute model in which cognitive processes are based on multiple representations of paired attributes. In line with the model's prediction, we found that multiattribute stimuli can produce an illusory perception of a multiattribute object arising from erroneous integration of attribute pairs, implying that object recognition is based on parallel perception of paired attributes. Moreover, in a change-detection task, a feature change in a single attribute frequently caused an illusory perception of change in another attribute, suggesting that multiple pairs of attributes are stored in memory.

Conclusions/Significance

The paired-attribute model can account for some novel illusions and controversial findings on binocular rivalry and short-term memory. Our results suggest that many cognitive processes are performed at the level of paired attributes rather than integrated objects, which greatly simplifies the binding problem.
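The pair-based storage idea can be illustrated with a toy sketch (an illustration of the concept only, not the authors' model; the attribute names and values are hypothetical). When an object is held as attribute pairs, changing one attribute invalidates pairs that also involve unchanged attributes, which can masquerade as a change elsewhere, mirroring the illusory change reports:

```python
def attribute_pairs(obj):
    """All unordered pairs of (attribute, value) items of an object."""
    items = sorted(obj.items())
    return {frozenset({items[i], items[j]})
            for i in range(len(items)) for j in range(i + 1, len(items))}

# Hypothetical stimulus: one object with three attributes.
memorized = {"shape": "circle", "color": "red", "motion": "up"}
probe = {"shape": "circle", "color": "green", "motion": "up"}  # only color changed

memory = attribute_pairs(memorized)
probe_items = set(probe.items())
violated = [p for p in memory if not p <= probe_items]

# Attributes implicated in any violated pair: 'shape' and 'motion' are
# flagged even though they did not change, mimicking an illusory change
# in an attribute other than the one that actually changed.
flagged = {attr for pair in violated for attr, _ in pair}
```

Because every pair containing the old color is violated, the unchanged attributes that were paired with it get flagged too, which is the signature the change-detection experiment reports.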

4.

Background

Integration of information streams into a unitary representation is an important task of our cognitive system. Within working memory, the medial temporal lobe (MTL) has been conceptually linked to the maintenance of bound representations. In a previous fMRI study, we have shown that the MTL is indeed more active during working-memory maintenance of spatial associations as compared to non-spatial associations or single items. There are two explanations for this result: either the mere presence of the spatial component activates the MTL, or the MTL is recruited to bind associations between neurally non-overlapping representations.

Methodology/Principal Findings

The current fMRI study investigates this issue further by directly comparing intrinsic intra-item binding (object/colour), extrinsic intra-item binding (object/location), and inter-item binding (object/object). The three binding conditions resulted in differential activation of brain regions. Specifically, we show that the MTL is important for establishing extrinsic intra-item associations and inter-item associations, in line with the notion that binding of information processed in different brain regions depends on the MTL.

Conclusions/Significance

Our findings indicate that different forms of working-memory binding rely on specific neural structures. In addition, these results extend previous reports indicating that the MTL is implicated in working-memory maintenance, challenging the classic distinction between short-term and long-term memory systems.

5.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
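For intuition, the geometric rule above (an object topples once its centre of mass passes beyond the support edge) can be written down for an idealized uniform block. This is a generic physics sketch, not the study's stimuli or analysis:

```python
import math

def critical_angle_deg(width, height):
    """Tilt (degrees) at which a uniform rectangular block topples: the
    angle at which its centre of mass passes over the pivot edge."""
    return math.degrees(math.atan2(width / 2, height / 2))

tall = critical_angle_deg(1.0, 4.0)   # narrow block: topples at ~14 degrees
squat = critical_angle_deg(4.0, 1.0)  # squat block: stable up to ~76 degrees

# If perceived gravity is biased by b degrees toward the fall direction
# (as under whole-body tilt), the *perceived* critical angle shifts by
# roughly b, which is the direction of bias the study reports.
```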

6.

Background

This study explored whether the high-resolution representations created by visual working memory (VWM) are constructed in a coarse-to-fine or all-or-none manner. The coarse-to-fine hypothesis suggests that coarse information precedes detailed information in entering VWM and that its resolution increases along with the processing time of the memory array, whereas the all-or-none hypothesis claims that either both enter into VWM simultaneously, or neither does.

Methodology/Principal Findings

We tested the two hypotheses by asking participants to remember two or four complex objects. An ERP component, contralateral delay activity (CDA), was used as the neural marker. CDA is higher for four objects than for two objects when coarse information is primarily extracted; yet, this CDA difference vanishes when detailed information is encoded. Experiment 1 manipulated the comparison difficulty of the task under a 500-ms exposure time to determine a condition in which the detailed information was maintained. No CDA difference was found between two and four objects, even in an easy-comparison condition. Thus, Experiment 2 manipulated the memory array's exposure time under the easy-comparison condition and found a significant CDA difference at 100 ms while replicating Experiment 1's results at 500 ms. In Experiment 3, the 500-ms memory array was blurred to block the detailed information; this manipulation reestablished a significant CDA difference.

Conclusions/Significance

These findings suggest that the creation of high-resolution representations in VWM is a coarse-to-fine process.

7.

Background

Classic work on visual short-term memory (VSTM) suggests that people store a limited amount of items for subsequent report. However, when human observers are cued to shift attention to one item in VSTM during retention, it seems as if there is a much larger representation, which keeps additional items in a more fragile VSTM store. Thus far, it is not clear whether the capacity of this fragile VSTM store indeed exceeds the traditional capacity limits of VSTM. The current experiments address this issue and explore the capacity, stability, and duration of fragile VSTM representations.

Methodology/Principal Findings

We presented cues in a change-detection task either just after offset of the memory array (iconic cue), 1,000 ms after offset of the memory array (retro-cue), or after onset of the probe array (post-cue). We observed three stages in visual information processing: 1) iconic memory with unlimited capacity; 2) a fragile VSTM store lasting about four seconds, with a capacity at least a factor of two higher than that of 3) the robust and capacity-limited form of VSTM. Iconic memory seemed to depend on the strength of the positive after-image resulting from the memory display and was virtually absent under conditions of isoluminance or when intervening light masks were presented. This suggests that iconic memory is driven by prolonged retinal activation beyond stimulus duration. Fragile VSTM representations were not affected by light masks, but were completely overwritten by irrelevant pattern masks that spatially overlapped the memory array.

Conclusions/Significance

We find that immediately after a stimulus has disappeared from view, subjects can still access information from iconic memory because they can see an after-image of the display. After that period, human observers can still access a substantial, but somewhat more limited amount of information from a high-capacity, but fragile VSTM that is overwritten when new items are presented to the eyes. What is left after that is the traditional VSTM store, with a limit of about four objects. We conclude that human observers store more sustained representations than is evident from standard change detection tasks and that these representations can be accessed at will.
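Capacity estimates in change-detection tasks of this kind are commonly summarized with Cowan's K, K = N × (hit rate − false-alarm rate), where N is the set size. A minimal sketch with illustrative numbers only (the hit and false-alarm rates below are assumptions, not the paper's data):

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for a change-detection task."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers only: better performance in a retro-cue condition
# translates into a higher capacity estimate than in a post-cue condition
# that probes only the durable VSTM store.
post_cue = cowans_k(8, hit_rate=0.65, false_alarm_rate=0.15)   # 4.0 items
retro_cue = cowans_k(8, hit_rate=0.90, false_alarm_rate=0.10)  # 6.4 items
```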

8.

Background

Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50, 100, 200 or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated and up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when stimulating the motor cortex in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that brain motor areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and thus may contribute to the perception of the boundary of peripersonal space.

9.

Background

Experience can alter how objects are represented in the visual cortex. But experience can take different forms. It is unknown whether the kind of visual experience systematically alters the nature of visual cortical object representations.

Methodology/Principal Findings

We take advantage of different training regimens found to produce qualitatively different types of perceptual expertise behaviorally in order to contrast the neural changes that follow different kinds of visual experience with the same objects. Two groups of participants went through training regimens that required either subordinate-level individuation or basic-level categorization of a set of novel, artificial objects, called “Ziggerins”. fMRI activity of a region in the right fusiform gyrus increased after individuation training and was correlated with the magnitude of configural processing of the Ziggerins observed behaviorally. In contrast, categorization training caused distributed changes, with increased activity in the medial portion of the ventral occipito-temporal cortex relative to more lateral areas.

Conclusions/Significance

Our results demonstrate that the kind of experience with a category of objects can systematically influence how those objects are represented in visual cortex. The demands of prior learning experience therefore appear to be one factor determining the organization of activity patterns in visual cortex.

10.
Hartshorne JK. PLoS One. 2008;3(7):e2716.

Background

Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated.

Methodology/Principal Findings

Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%.

Conclusions/Significance

This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.

11.

Background

A large body of evidence suggests impaired context processing in schizophrenia. Here we propose that this impairment arises from defective integration of mediotemporal ‘what’ and ‘where’ routes, carrying object and spatial information to the hippocampus.

Methodology and Findings

We have previously shown, in a mediotemporal lobe (MTL) model, that the abnormal connectivity between MTL regions observed in schizophrenia can explain the episodic memory deficits associated with the disorder. Here we show that the same neuropathology leads to several context processing deficits observed in patients with schizophrenia: 1) failure to choose subordinate stimuli over dominant ones when the former fit the context, 2) decreased contextual constraints in memory retrieval, as reflected in increased false alarm rates and 3) impaired retrieval of contextual information in source monitoring. Model analyses show that these deficits occur because the ‘schizophrenic MTL’ forms fragmented episodic representations, in which objects are overrepresented at the expense of spatial contextual information.

Conclusions and Significance

These findings highlight the importance of MTL neuropathology in schizophrenia, demonstrating that it may underlie a broad spectrum of deficits, including context processing and memory impairments. It is argued that these processing deficits may contribute to central schizophrenia symptoms such as contextually inappropriate behavior, associative abnormalities, conversational drift, concreteness and delusions.

12.

Background

Schizophrenia is associated with impaired object perception, but it is unclear how this affects higher cognitive functions, whether the impairment is already present soon after the onset of psychosis, and whether it is specific to schizophrenia-related psychosis. We therefore tested the hypothesis that, because schizophrenia is associated with impaired object perception, schizophrenia patients should differ from healthy controls in shifting attention between objects. To test this hypothesis, we used a task that allowed us to observe space-based and object-based covert orienting of attention separately. To examine whether impairment of object-based visual attention is related to higher-order cognitive functions, standard neuropsychological tests were also administered.

Method

Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects.

Results

Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group.

Conclusions

Deficits of object perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients differed only in having abnormally slow right-to-left visual field reorienting. Deficits of object perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory.

13.
Gao Z, Li J, Yin J, Shen M. PLoS One. 2010;5(12):e14273.

Background

The processing mechanisms of visual working memory (VWM) have been extensively explored in the recent decade. However, how perceptual information is extracted into VWM remains largely unclear. The current study investigated this issue by testing whether perceptual information is extracted into VWM in an integrated-object manner, so that all irrelevant information is extracted (object hypothesis); in a feature-based manner, so that only target-relevant information is extracted (feature hypothesis); or in a manner analogous to processing in visual perception (analogy hypothesis).

Methodology/Principal Findings

High-discriminable information, which is processed at the parallel stage of visual perception, and fine-grained information, which is processed via focal attention, were selected as representatives of perceptual information. The analogy hypothesis predicts that whereas high-discriminable information is extracted into VWM automatically, fine-grained information will be extracted only if it is task-relevant. By manipulating the information type of the irrelevant dimension in a change-detection task, we found that performance was affected and the ERP component N270 was enhanced if a change between the probe and the memorized stimulus consisted of irrelevant high-discriminable information, but not if it consisted of irrelevant fine-grained information.

Conclusions/Significance

We conclude that dissociated extraction mechanisms exist in VWM for information resolved via dissociated processes in visual perception (at least for the information tested in the current study), supporting the analogy hypothesis.

14.

Background

Since the pioneering study by Rosch and colleagues in the 1970s, it has been commonly agreed that basic-level perceptual categories (dog, chair…) are accessed faster than superordinate ones (animal, furniture…). Nevertheless, the speed at which objects presented in natural images can be processed in a rapid go/no-go superordinate visual categorization task has challenged this “basic level advantage”.

Principal Findings

Using the same task, we compared human processing speed when categorizing natural scenes as containing either an animal (superordinate level) or a specific animal (bird or dog, basic level). Human subjects require an additional 40–65 ms to decide whether an animal is a bird or a dog, and most errors are induced by non-target animals. Indeed, processing time is tightly linked with the type of non-target objects. Without any exemplar of the same superordinate category to ignore, the basic-level category is accessed as fast as the superordinate category, whereas the presence of animal non-targets induces both an increase in reaction time and a decrease in accuracy.

Conclusions and Significance

These results support the parallel distributed processing (PDP) theory and may reconcile recently published controversial findings. The visual system can quickly access a coarse/abstract visual representation that allows fast decisions for superordinate categorization of objects, but additional time-consuming visual analysis would be necessary for a decision at the basic level based on more detailed representations.

15.
16.

Background

Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D partially occludes surfaces, depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces.

Methodology/Principal Findings

In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and, at the same time, to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit so that they interact to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection to demonstrate that our approach outperforms simple feedforward computation-based approaches.

Conclusions/Significance

A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency. Unlike previous proposals which treat localized junction configurations as 2D image features, we link them to mechanisms of apparent surface segregation. As a consequence, we demonstrate how junctions can change their perceptual representation depending on the scene context and the spatial configuration of boundary fragments.
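The junction read-out can be caricatured without any recurrent dynamics: mark locations whose neighbourhood contains both a horizontally and a vertically oriented luminance contrast, as at a T-junction. The sketch below is a deliberately simplified stand-in, closer to the feedforward baseline the authors compare against than to their V1/V2 model:

```python
def gradients(img):
    """Central-difference luminance gradients of a 2D list of floats."""
    h, w = len(img), len(img[0])
    gy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2
           for x in range(w)] for y in range(h)]
    gx = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2
           for x in range(w)] for y in range(h)]
    return gy, gx

def junction_map(img, r=2, thresh=0.1):
    """1 where a (2r+1)x(2r+1) window holds both edge orientations, else 0."""
    gy, gx = gradients(img)
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = [(yy, xx) for yy in range(y - r, y + r + 1)
                   for xx in range(x - r, x + r + 1)]
            has_h = any(abs(gy[yy][xx]) > thresh for yy, xx in win)
            has_v = any(abs(gx[yy][xx]) > thresh for yy, xx in win)
            out[y][x] = int(has_h and has_v)
    return out

# Synthetic scene: a dark upper half over two lower regions of different
# luminance, so a horizontal edge meets a vertical edge in a T-junction
# near (row 10, column 10).
img = ([[0.0] * 21 for _ in range(10)] +
       [[0.5] * 10 + [1.0] * 11 for _ in range(11)])
jmap = junction_map(img)
```

Windows lying on a single edge contain only one contrast orientation and are rejected; only windows straddling the junction fire, which is the sense in which junctions are "read out" from local boundary responses.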

17.
18.

Background

Because pain often signals the occurrence of potential tissue damage, a nociceptive stimulus has the capacity to involuntarily capture attention and take priority over other sensory inputs. Whether distraction by nociception actually occurs may depend upon the cognitive characteristics of the ongoing activities. The present study tested the role of working memory in controlling the attentional capture by nociception.

Methodology and Principal Findings

Participants performed visual discrimination and matching tasks in which visual targets were shortly preceded by a tactile distracter. The two tasks were chosen because of the different effects the involvement of working memory produces on performance, in order to dissociate the specific role of working memory in the control of attention from the effect of general resource demands. Occasionally (i.e. 17% of the trials), tactile distracters were replaced by a novel nociceptive stimulus in order to distract participants from the visual tasks. Indeed, in the control conditions (no working memory), reaction times to visual targets were increased when the target was preceded by a novel nociceptive distracter as compared to the target preceded by a frequent tactile distracter, suggesting attentional capture by the novel nociceptive stimulus. However, when the task required an active rehearsal of the visual target in working memory, the novel nociceptive stimulus no longer induced a lengthening of reaction times to visual targets, indicating a reduction of the distraction produced by the novel nociceptive stimulus. This effect was independent of the overall task demands.

Conclusion and Significance

Loading working memory with pain-unrelated information may reduce the ability of nociceptive input to involuntarily capture attention, and shields cognitive processing from nociceptive distraction. An efficient control of attention over pain is best guaranteed by the ability to maintain active goal priorities during achievement of cognitive activities and to keep pain-related information out of task settings.

19.
It is clear that humans have mental representations of their spatial environments and that these representations are useful, if not essential, in a wide variety of cognitive tasks, such as identifying landmarks and objects, guiding actions and navigation, and directing spatial awareness and attention. Determining the properties of mental representation has long been a contentious issue (see Pinker, 1984). One method of probing the nature of human representation is to study the extent to which representation can surpass or go beyond the visual (or sensory) experience from which it derives. From a strictly empiricist standpoint, what is not sensed cannot be represented, except as a combination of things that have been experienced. But perceptual experience is always limited by our view of the world and the properties of our visual system. It is therefore not surprising when human representation is found to be highly dependent on the initial viewpoint of the observer and on any shortcomings thereof. However, representation is not a static entity; it evolves with experience. The recent debate over whether human representation of objects is view-dependent or view-invariant may simply be a discussion of how much information is available in the retinal image during experimental tests and whether this information is sufficient for the task at hand. Here we review an approach to the study of the development of human spatial representation under realistic problem-solving scenarios. This is facilitated by the use of realistic virtual environments, exploratory learning and redundancy in visual detail.

20.

Background

A recent modeling study by the authors predicted that contextual information is poorly integrated into episodic representations in schizophrenia, and that this is a main cause of the retrieval deficits seen in schizophrenia.

Methodology/Principal Findings

We have tested this prediction in patients with first-episode schizophrenia and matched controls. The benefit from contextual cues in retrieval was strongly reduced in patients. On the other hand, retrieval based on item cues was spared.

Conclusions/Significance

These results suggest that reduced integration of context information into episodic representations is a core deficit in schizophrenia and one of the main causes of episodic memory impairment.
