Similar Documents
20 similar documents found.
1.
Recently, it has been demonstrated that objects held in working memory can influence rapid oculomotor selection. This has been taken as evidence that perceptual salience can be modified by active working memory representations. The goal of the present study was to examine whether these results could also be caused by feature-based priming. In two experiments, participants were asked to saccade to a target line segment of a certain orientation that was presented together with a to-be-ignored distractor. Both objects were given a task-irrelevant color that varied per trial. In a secondary task, a color had to be memorized, and that color could either match the color of the target, match the color of the distractor, or match the color of neither object in the search task. The memory task was completed either after the search task (Experiment 1) or before it (Experiment 2). The results showed that in both experiments the memorized color biased oculomotor selection. Eye movements were more frequently drawn towards objects that matched the memorized color, irrespective of whether the memory task was completed after (Experiment 1) or before (Experiment 2) the search task. This bias was particularly prevalent in short-latency saccades. The results show that early oculomotor selection performance is affected not only by properties that are actively maintained in working memory but also by those previously memorized. Both working memory and feature priming can cause early biases in oculomotor selection.

2.
The present study investigated the neural processes underlying “same” and “different” judgments for two simultaneously presented objects that varied on one or both of two dimensions: color and shape. Participants judged whether the two objects were “same” or “different” on either the color dimension (color task) or the shape dimension (shape task). The unattended irrelevant dimension of the objects was either congruent (same-same; different-different) or incongruent (same-different). ERP data showed a main effect of color congruency in the time window 190–260 ms post-stimulus presentation and a main effect of shape congruency in the time window 220–280 ms post-stimulus presentation in both color and shape tasks. The interaction between color and shape congruency in the ERP data occurred in a later time window than the two main effects, indicating that mismatches in task-relevant and task-irrelevant dimensions were processed automatically and independently before a response was selected. The fact that the interference of the task-irrelevant dimension occurred after mismatch detection supports a confluence model of processing.

3.
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching, and there was no interaction between the cost of size changes and direction of transfer. Together, the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations.

4.
Men and women differ in their ability to solve spatial problems. There are two possible proximate explanations for this: (i) men and women differ in the kind (and value) of information they use and/or (ii) their cognitive abilities differ with respect to spatial problems. Using a simple computerized task which could be solved either by choosing an object based on what it looked like, or by its location, we found that the women relied on the object's visual features to solve the task, while the men used both visual and location information. There were no differences between the sexes in memory for the visual features of the objects, but women were poorer than men at remembering the locations of objects.

5.
While sensory processes are tuned to particular features, such as an object's specific location, color or orientation, visual working memory (vWM) is assumed to store information using representations that generalize over a feature dimension. Additionally, current vWM models presume that different features or objects are stored independently. On the other hand, configurational effects, when observed, are supposed to mainly reflect encoding strategies. We show that the location of the target, relative to the display center and boundaries, and overall memory load influenced recall precision, indicating that, like sensory processes, capacity-limited vWM resources are spatially tuned. When recalling one of three memory items, the target's distance from the display center was overestimated, similar to the error when only one item was memorized, but its distance from the memory items' average position was underestimated, showing that not only individual memory items' positions but also the global configuration of the memory array may be stored. Finally, presenting the non-target items at recall, consequently providing landmarks and configurational information, improved precision and accuracy of target recall. Similarly, when the non-target items were translated at recall, relative to their position in the initial display, a parallel displacement of the recalled target was observed. These findings suggest that fine-grained spatial information in vWM is represented in local maps whose resolution varies with distance from landmarks, such as the display center, while coarse representations are used to store the memory array configuration. Both these representations are updated at the time of recall.

6.
Selecting and remembering visual information is an active and competitive process. In natural environments, representations are tightly coupled to task. Objects that are task-relevant are remembered better due to a combination of increased selection for fixation and strategic control of encoding and/or retaining viewed information. However, it is not understood how physically manipulating objects when performing a natural task influences priorities for selection and memory. In this study, we compare priorities for selection and memory when actively engaged in a natural task with first-person observation of the same object manipulations. Results suggest that active manipulation of a task-relevant object results in a specific prioritization for object position information compared with other properties and compared with action observation of the same manipulations. Experiment 2 confirms that this spatial prioritization is likely to arise from manipulation rather than from differences in spatial representation between real environments and the movies used for action observation. Thus, our findings imply that physical manipulation of task-relevant objects results in a specific prioritization of spatial information about task-relevant objects, possibly coupled with strategic de-prioritization of colour memory for irrelevant objects.

7.
Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via γ-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and between the lexical area and the feature areas, are trained via a time-dependent Hebbian rule during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. Looking ahead, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).
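The core association step of such a model can be illustrated with a toy sketch. This is only an illustrative reading of a Hebbian feature-word link, not the authors' actual network (which uses neural oscillators and γ-band synchrony); all function names and parameters here are assumptions.

```python
# Toy Hebbian association between binary feature units and lexical units.
# Illustrative sketch only: names (train_hebbian, retrieve_word) and the
# rate-based update are assumptions, not the paper's oscillator model.

def train_hebbian(objects, words, n_features, n_words, lr=1.0):
    """Build a weight matrix linking feature units to lexical units.

    objects: list of 0/1 feature vectors; words: list of word indices.
    W[w][f] is strengthened whenever word w and feature f co-occur.
    """
    W = [[0.0] * n_features for _ in range(n_words)]
    for feats, w in zip(objects, words):
        for f, active in enumerate(feats):
            if active:
                W[w][f] += lr  # Hebbian: co-active units are linked
    return W

def retrieve_word(W, feats):
    """Return the lexical unit most activated by a (possibly partial) object."""
    scores = [sum(row[f] * feats[f] for f in range(len(feats))) for row in W]
    return max(range(len(scores)), key=lambda i: scores[i])

# Two toy objects, each a bundle of binary features, paired with word 0 / word 1.
objects = [[1, 1, 0, 0], [0, 0, 1, 1]]
words = [0, 1]
W = train_hebbian(objects, words, n_features=4, n_words=2)

# Retrieval works even from incomplete sensory input (one feature missing).
print(retrieve_word(W, [1, 0, 0, 0]))  # → 0
```

The last line mimics the paper's "incomplete information" result: a single surviving feature still activates the correct word, because retrieval is a weighted match rather than an exact lookup.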

8.
It has been shown that fluid intelligence (gf) is fundamental to overcoming interference from information of a previously encoded item along a task-relevant domain. However, the biasing effect of task-irrelevant dimensions, as well as its relation with gf, is still unclear. The present study aimed at clarifying these issues. Gf was assessed in 60 healthy subjects. In a different session, the same subjects performed two versions (letter-detection and spatial) of a three-back working memory task with a set of physically identical stimuli (letters) presented at different locations on the screen. In the letter-detection task, volunteers were asked to match stimuli on the basis of their identity, whereas in the spatial task they were required to match items on their locations. Cross-domain bias was manipulated by pseudorandomly inserting a match between the current and the three-back items on the irrelevant domain. Our findings showed that a task-irrelevant feature of a salient stimulus can actually bias the ongoing performance. We revealed that, at trials in which the current and the three-back items matched on the irrelevant domain, group accuracy was lower (interference). On the other hand, at trials in which the two items matched on both the relevant and irrelevant domains, the group showed an enhancement of performance (facilitation). Furthermore, we demonstrated that individual differences in fluid intelligence covary with the ability to override cross-domain interference, in that higher-gf subjects showed better performance at interference trials than low-gf subjects. Altogether, our findings suggest that stimulus features irrelevant to the task can affect cognitive performance along the relevant domain and that gf plays an important role in protecting relevant memory contents from the hampering effect of such a bias.

9.
Is object search mediated by object-based or image-based representations?
Newell FN, Brown V, Findlay JM. Spatial Vision, 2004, 17(4-5): 511-541.
Recent research suggests that visually specific memory representations for previously fixated objects are maintained during scene perception. Here we investigate the degree of visual specificity by asking whether the memory representations are image-based or object-based. To that end we measured the effects of object orientation on the time to search for a familiar object from amongst a set of 7 familiar distractors arranged in a circular array. Search times were found to depend on the relative orientations of the target object and the probe object for both familiar and novel objects. This effect was found to be partly an image-matching effect, but there was also an advantage for the object's canonical view for familiar objects. Orientation effects were maintained even when the target object was specified as having unique or similar shape properties relative to the distractors. Participants' eye movements were monitored during two of the experiments. Eye movement patterns revealed selection for object shape and object orientation during the search process. Our findings provide evidence that object representations used during search are detailed, combining image-based characteristics with higher-level characteristics drawn from object memory.

10.
Little is known about the timing of activating memory for objects and their associated perceptual properties, such as colour, and yet this is important for theories of human cognition. We investigated the time course associated with early cognitive processes related to the activation of object shape and object shape+colour representations, respectively, during memory retrieval as assessed by repetition priming in an event-related potential (ERP) study. The main findings were as follows: (1) we identified a unique early modulation of mean ERP amplitude during the N1 that was associated with the activation of object shape independently of colour; (2) we also found a subsequent early P2 modulation of mean amplitude over the same electrode clusters associated with the activation of object shape+colour representations; (3) these findings were apparent across both familiar (i.e., correctly coloured – yellow banana) and novel (i.e., incorrectly coloured – blue strawberry) objects; and (4) neither of the modulations of mean ERP amplitude was evident during the P3. Together, the findings delineate the timing of object shape and colour memory systems and support the notion that perceptual representations of object shape mediate the retrieval of temporary shape+colour representations for familiar and novel objects.

11.

Background

In the human visual system, different attributes of an object, such as shape, color, and motion, are processed separately in different areas of the brain. This raises a fundamental question: how are these attributes integrated to produce a unified perception and a specific response? This “binding problem” is computationally difficult because all attributes are assumed to be bound together to form a single object representation. However, there is no firm evidence to confirm that such representations exist for general objects.

Methodology/Principal Findings

Here we propose a paired-attribute model in which cognitive processes are based on multiple representations of paired attributes. In line with the model's prediction, we found that multiattribute stimuli can produce an illusory perception of a multiattribute object arising from erroneous integration of attribute pairs, implying that object recognition is based on parallel perception of paired attributes. Moreover, in a change-detection task, a feature change in a single attribute frequently caused an illusory perception of change in another attribute, suggesting that multiple pairs of attributes are stored in memory.

Conclusions/Significance

The paired-attribute model can account for some novel illusions and controversial findings on binocular rivalry and short-term memory. Our results suggest that many cognitive processes are performed at the level of paired attributes rather than integrated objects, which greatly simplifies the binding problem and suggests simpler solutions for it.
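The storage scheme implied by this model can be sketched in a few lines. This is my illustrative reading, not the authors' implementation: memory holds each pairwise combination of attributes as an independent record, so changing one attribute disturbs every pair containing it.

```python
# Toy sketch of paired-attribute storage (illustrative assumption, not the
# paper's model): objects are stored as independent attribute pairs.
from itertools import combinations

def encode_paired(objects):
    """Store every attribute pair of every object as an independent record.

    objects: list of dicts like {"color": "red", "shape": "circle", "location": 1}.
    """
    store = set()
    for obj in objects:
        for (k1, v1), (k2, v2) in combinations(sorted(obj.items()), 2):
            store.add(((k1, v1), (k2, v2)))
    return store

def changed_pairs(store, probe):
    """Pairs of the probe with no match in memory. A change in one attribute
    disturbs every pair containing it, which could be misread as a change in
    the paired attribute as well (the illusory change described above)."""
    return {p for p in encode_paired([probe]) if p not in store}

memory = encode_paired([{"color": "red", "shape": "circle", "location": 1}])
# Changing only the colour invalidates the colour-shape AND colour-location
# pairs, even though shape and location are unchanged.
probe = {"color": "blue", "shape": "circle", "location": 1}
print(len(changed_pairs(memory, probe)))  # → 2
```

The point of the sketch is the asymmetry: with bound whole-object records a single-attribute change would invalidate exactly one record, whereas paired-attribute storage spreads the mismatch across several pairs.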

12.
Drawing portraits upside down is a trick that allows novice artists to reproduce lower-level image features, e.g., contours, while reducing interference from higher-level face cognition. Limiting the available processing time to suffice for lower- but not higher-level operations is a more general way of reducing interference. We elucidate this interference in a novel visual-search task to find a target among distractors. The target had a unique lower-level orientation feature but was identical to distractors in its higher-level object shape. Through bottom-up processes, the unique feature attracted gaze to the target. Subsequently, viewpoint-invariant object recognition registered the attended object as having the same shape as the distractors and interfered. Consequently, gaze often abandoned the target to search elsewhere. If the search stimulus was extinguished at time T after the gaze arrived at the target, reports of target location were more accurate for shorter (T<500 ms) presentations. This object-to-feature interference, though perhaps unexpected, could underlie common phenomena such as the visual-search asymmetry that finding a familiar letter N among its mirror images is more difficult than the converse. Our results should enable additional examination of known phenomena and interactions between different levels of visual processes.

13.

Background

How do people sustain a visual representation of the environment? Currently, many researchers argue that a single visual working memory system sustains non-spatial object information such as colors and shapes. However, previous studies tested visual working memory for two-dimensional objects only. In consequence, the nature of visual working memory for three-dimensional (3D) object representation remains unknown.

Methodology/Principal Findings

Here, I show that when sustaining information about 3D objects, visual working memory clearly divides into two separate, specialized memory systems, rather than one system, as was previously thought. One memory system gradually accumulates sensory information, forming an increasingly precise view-dependent representation of the scene over the course of several seconds. A second memory system sustains view-invariant representations of 3D objects. The view-dependent memory system has a storage capacity of 3–4 representations and the view-invariant memory system has a storage capacity of 1–2 representations. These systems can operate independently from one another and do not compete for working memory storage resources.

Conclusions/Significance

These results provide evidence that visual working memory sustains object information in two separate, specialized memory systems. One memory system sustains view-dependent representations of the scene, akin to the view-specific representations that guide place recognition during navigation in humans, rodents and insects. The second memory system sustains view-invariant representations of 3D objects, akin to the object-based representations that underlie object cognition.

14.
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

15.
To solve novel problems, it is advantageous to abstract relevant information from past experience to transfer to related problems. To study whether macaque monkeys were able to transfer an abstract rule across cognitive domains, we trained two monkeys on a nonmatch-to-goal (NMTG) task. In the object version of the task (O-NMTG), the monkeys were required to choose between two object-like stimuli, which differed either only in shape or in shape and color. For each choice, they were required to switch from their previously chosen object-goal to a different one. After they reached a performance level of over 90% correct on the O-NMTG task, the monkeys were tested for rule transfer on a spatial version of the task (S-NMTG). To receive a reward, the monkeys had to switch from their previously chosen location to a different one. In both the O-NMTG and S-NMTG tasks, there were four potential choices, presented in pairs from trial to trial. We found that both monkeys successfully transferred the NMTG rule within the first testing session, showing effective transfer of the learned rule between two cognitive domains.

16.
Can nonhuman animals attend to visual stimuli as whole, coherent objects? We investigated this question by adapting for use with pigeons a task in which human participants must report whether two visual attributes belong to the same object (one-object trial) or to different objects (two-object trial). We trained pigeons to discriminate a pair of differently colored shapes that had two targets either on a single object or on two different objects. Each target equally often appeared on the one-object and two-object stimuli; therefore, a specific target location could not serve as a discriminative cue. The pigeons learned to report whether the two target dots were located on a single object or on two different objects; follow-up tests demonstrated that this ability was not entirely based on memorization of the dot patterns and locations. Additional tests disclosed predominant stimulus control by the color, but not by the shape, of the two objects. These findings suggest that human psychophysical methods are readily applicable to the study of object discrimination by nonhuman animals.

17.
Neural processing at most stages of the primate visual system is modulated by selective attention, such that behaviorally relevant information is emphasized at the expense of irrelevant, potentially distracting information. The form of attention best understood at the cellular level is when stimuli at a given location in the visual field must be selected (space-based attention). In contrast, fewer single-unit recording studies have so far explored the cellular mechanisms of attention operating on individual stimulus features, specifically when one feature (e.g., color) of an object must guide behavioral responses while a second feature (e.g., shape) of the same object is potentially interfering and therefore must be ignored. Here we show that activity of neurons in macaque area V4 can underlie the selection of elemental object features and their "translation" into a categorical format that can directly contribute to the control of the animal's behavior.

18.
In the present paper we describe five tests, three of which were designed to be similar to tasks used with rodents. Results obtained from control subjects and patients with selective thermo-coagulation lesions to the medial temporal lobe, as well as results from non-human primates and rodents, are discussed. The tests involve memory for spatial locations acquired by moving around in a room, memory for objects subjects interacted with, or memory for objects and their locations. Two of the spatial memory tasks were designed specifically as analogs of the Morris water task and the 8-arm radial-maze tasks used with rats. The Morris water task was modeled by hiding a sensor under the carpet of a room (Invisible Sensor Task). Subjects had to learn its location by using an array of visual cues available in the room. A path integration task was developed in order to study the non-visual acquisition of a cognitive representation of the spatial location of objects. In the non-visual spatial memory task, we blindfolded subjects and led them to a room where they had to find 3 objects and remember their locations. We designed an object location task by placing 4 objects in a room that subjects observed for later recall of their locations. A recognition task and a novelty detection task were given subsequent to the recall task. An 8-arm radial-maze was recreated by placing stands at equal distance from each other around the room, and asking subjects to visit each stand once, from a central point. A non-spatial working memory task was designed to be the non-spatial equivalent of the radial maze. Search paths recorded on the first trial of the Invisible Sensor Task, when subjects search for the target by trial and error, are reported.
An analysis of the search paths revealed that patients with lesions to the right or left hippocampus or parahippocampal cortex employed the same type of search strategies as normal controls did, showing similarities and differences to the search behavior recorded in rats. Interestingly, patients with lesions that included the right parahippocampal cortex were impaired relative to patients with lesions to the right hippocampus that spared the parahippocampal cortex, when recall of the sensor was tested after a 30 min delay (Bohbot et al. 1998). No differences were obtained between control subjects and patients with selective thermal lesions to the medial temporal lobe when tested on the radial-maze, the non-spatial analogue to the radial-maze, and the path integration tasks. Differences in methodological procedures, learning strategies and lesion location could account for some of the discrepant results between humans and non-human species. Patients with lesions to the right hippocampus, irrespective of whether the right parahippocampal cortex was spared or damaged, had difficulties remembering the particular configuration and identity of objects in the novelty detection of the object location task. This supports the role of the human right hippocampus in spatial memory, in this case involving memory for the location of elements in the room, a type of learning known to require the hippocampus in the rat.

19.
Visual working memory (VWM) is known as a highly capacity-limited cognitive system that can hold 3-4 items. Recent studies have demonstrated that activity in the intraparietal sulcus (IPS) and occipital cortices correlates with the number of representations held in VWM. However, differences among those regions are poorly understood, particularly when task-irrelevant items are to be ignored. The present fMRI-based study investigated whether memory load-sensitive regions such as the IPS and occipital cortices respond differently to task-relevant information. Using a change detection task in which participants are required to remember pre-specified targets, here we show that while the IPS exhibited comparable responses to both targets and distractors, the dorsal occipital cortex manifested significantly weaker responses to an array containing distractors than to an array containing only targets, even though the number of objects presented was the same for the two arrays. These results suggest that parietal and occipital cortices engage differently in distractor processing and that the dorsal occipital, rather than parietal, activity appears to reflect output of stimulus filtering and selection based on behavioral relevance.

20.
Visual saliency is a fundamental yet hard to define property of objects or locations in the visual world. In a context where objects and their representations compete to dominate our perception, saliency can be thought of as the "juice" that makes objects win the race. It is often assumed that saliency is extracted and represented in an explicit saliency map, which serves to determine the location of spatial attention at any given time. It is then by drawing attention to a salient object that it can be recognized or categorized. I argue against this classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition. A number of visual processing tasks are clearly performed too fast for such a costly strategy to be employed. Rather, visual attention could simply act by biasing a saliency-based object recognition system. Under natural conditions of stimulation, saliency can be represented implicitly throughout the ventral visual pathway, independent of any explicit saliency map. At any given level, the most activated cells of the neural population simply represent the most salient locations. The notion of saliency itself grows increasingly complex throughout the system, mostly based on luminance contrast until information reaches visual cortex, gradually incorporating information about features such as orientation or color in primary visual cortex and early extrastriate areas, and finally the identity and behavioral relevance of objects in temporal cortex and beyond. Under these conditions the object that dominates perception, i.e. the object yielding the strongest (or the first) selective neural response, is by definition the one whose features are most "salient", without the need for any external saliency map. In addition, I suggest that such an implicit representation of saliency can be best encoded in the relative times of the first spikes fired in a given neuronal population.
In accordance with our subjective experience that saliency and attention do not modify the appearance of objects, the feed-forward propagation of this first spike wave could serve to trigger saliency-based object recognition outside the realm of awareness, while conscious perceptions could be mediated by the remaining discharges of longer neuronal spike trains.
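The first-spike coding idea sketched above can be made concrete in a few lines: if more strongly activated units fire earlier, the spike *order* already ranks saliency, with no explicit saliency map. This is a hedged illustration of the general principle, not the author's model; the inverse activation-to-latency mapping is an assumption.

```python
# Illustrative first-spike latency code (assumed mapping, not the paper's):
# stronger activation -> earlier first spike, so spike order ranks saliency.

def first_spike_latencies(activations, t_max=100.0):
    """Map activation strengths to first-spike times: stronger fires earlier.

    activations: dict of unit -> activation strength (> 0).
    Returns unit -> latency; units with zero activation never spike.
    """
    return {unit: t_max / a for unit, a in activations.items() if a > 0}

def saliency_ranking(activations):
    """Units ordered by first-spike time; the earliest spike marks the
    most salient location, implicitly and without a saliency map."""
    latencies = first_spike_latencies(activations)
    return sorted(latencies, key=latencies.get)

# Hypothetical population responses to three locations in a scene.
responses = {"left": 2.0, "center": 8.0, "right": 4.0}
print(saliency_ranking(responses))  # → ['center', 'right', 'left']
```

A downstream reader of this code (a feed-forward stage) only needs the order of arrival, which is why a single wave of first spikes can carry the ranking before any slower rate code is available.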
