Similar articles
20 similar articles found (search time: 15 ms)
1.
The perception of natural scenes relies on the integration of pre-existing knowledge with the immediate results of attentional processing, and what can be remembered from a scene depends in turn on how that scene is perceived and understood. However, the literature conflicts as to whether people are more likely to remember objects that are consistent with a scene or objects that are not. Moreover, it remains unclear whether any discrepancy between the likelihood of remembering schema-consistent and schema-inconsistent objects should be attributed to schema effects on attention or on memory. To address this issue, the current study directly manipulated attention allocation by requiring participants to (i) look at schema-consistent objects, (ii) look at schema-inconsistent objects, or (iii) share attention equally between the two. Regardless of the differential allocation of attention or object fixation, schema-consistent objects were better recalled, whereas recognition was independent of schema consistency but depended on task instruction. These results suggest that attention is important both for remembering low-level object properties and for remembering information whose retrieval is not supported by the currently active schema. Specific knowledge of the scenes being viewed can support the recall of non-fixated objects, but without such knowledge attention is required to encode sufficient detail for subsequent recognition. Our results therefore demonstrate that attention is not critical for the retrieval of objects that are consistent with a scene's schematic content.

2.
The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for the scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent than with consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects, i.e., pairs that, as groups, had clear associations with particular scene categories versus pairs that did not. Although related and unrelated object pairs reduced scene recognition accuracy equally, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards the scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. More generally, the fact that unrelated objects degraded categorization accuracy as much as related objects, while being less capable of generating specific alternative judgments, indicates that the process by which objects interfere with scene recognition is separate from the one through which they inform it.

3.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre of mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.
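The tipping rule stated above is directly computable: an object tips once its gravity-projected centre of mass passes the supporting edge. A minimal sketch for a 2-D box balanced at a table edge, where the box geometry and the additive gravity-bias term are illustrative assumptions, not the study's fitted model:

```python
import math

def critical_angle_deg(half_width, com_height, gravity_tilt_deg=0.0):
    """Tilt angle (deg) at which a 2-D box resting on an edge tips over.

    The box falls when the gravity-projected centre of mass passes
    outside the pivot (the supporting edge).  `gravity_tilt_deg` models
    a perceptual bias in the direction of gravity: a tilted gravity
    estimate shifts the apparent tipping point by the same amount.
    """
    geometric = math.degrees(math.atan2(half_width, com_height))
    return geometric + gravity_tilt_deg

# Upright box, COM centred: tips at atan(0.5 / 1.0), about 26.6 deg.
print(round(critical_angle_deg(0.5, 1.0), 1))       # 26.6
# Gravity perceived as tilted 5 deg toward the fall direction shifts
# the perceived critical angle by 5 deg.
print(round(critical_angle_deg(0.5, 1.0, 5.0), 1))  # 31.6
```

The second call mirrors the study's manipulation: a multisensory gravity estimate biased toward body orientation predicts a corresponding bias in the perceived critical angle.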

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

4.
Shading is known to produce vivid perceptions of depth. However, the influence of specular highlights on perceived shape is unclear: some studies have shown that highlights improve quantitative shape perception, while others have shown no effect. Here we ask how specular highlights combine with Lambertian shading cues to determine perceived surface curvature, and to what degree this is based on a coherent model of the scene geometry. Observers viewed ambiguous convex/concave shaded surfaces, with or without highlights. We show that the presence or absence of specular highlights affects qualitative shape: their presence biases perception toward convex interpretations of ambiguous shaded objects. We also find that the alignment of a highlight with the Lambertian shading modulates its effect on perceived shape; misaligned highlights are less likely to be perceived as specularities and thus have less effect on shape perception. Increasing the depth of the surface or the slant of the illuminant also modulated the effect of the highlight, increasing the bias toward convexity. The effect of highlights on perceived shape can be understood probabilistically in terms of scene geometry: for deeper objects and/or highly slanted illuminants, highlights will occur on convex but not concave surfaces, owing to occlusion of the illuminant. Given uncertainty about the exact object depth and illuminant direction, the presence of a highlight increases the probability that the surface is convex.
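The probabilistic account in this abstract amounts to a Bayes update on convexity given a highlight. A minimal sketch with invented likelihood values (the numbers are illustrative, not the paper's fits):

```python
def posterior_convex(p_highlight_given_convex,
                     p_highlight_given_concave,
                     prior_convex=0.5):
    """P(convex | highlight present) by Bayes' rule.

    For deep objects or strongly slanted illuminants, a highlight is
    likely on a convex surface but unlikely on a concave one (the
    illuminant is occluded), so observing a highlight pushes the
    posterior toward convexity.  All probabilities are illustrative.
    """
    num = p_highlight_given_convex * prior_convex
    den = num + p_highlight_given_concave * (1.0 - prior_convex)
    return num / den

# Shallow object: highlight nearly as likely either way -> weak bias.
print(round(posterior_convex(0.6, 0.5), 3))   # 0.545
# Deep object: highlight rare on concave surfaces -> strong convex bias.
print(round(posterior_convex(0.9, 0.1), 3))   # 0.9
```

The two calls capture the reported depth/slant modulation: the more asymmetric the highlight likelihoods, the stronger the convexity bias.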

5.
Contextual information can have a huge impact on our sensory experience. The tilt illusion is a classic example of contextual influence exerted by an oriented surround on a target's perceived orientation. Traditionally, the tilt illusion has been described as the outcome of inhibition between cortical neurons with adjacent receptive fields and a similar preference for orientation. An alternative explanation is that tilted contexts produce a re-calibration of the subjective frame of reference. Although the distinction is subtle, only the latter model makes clear predictions for unoriented stimuli. In the present study, we tested one such prediction by asking four naive subjects to estimate three positions (4, 6, and 8 o'clock) on an imaginary clock face within a tilted surround. To indicate their estimates, they used either an unoriented dot or a line segment with one endpoint at fixation in the middle of the surround. The surround's tilt was randomly chosen across trials from a set of orientations (±75°, ±65°, ±55°, ±45°, ±35°, ±25°, ±15°, and ±5° with respect to vertical). Our results showed systematic biases consistent with the tilt illusion in both conditions. Biases were largest when observers attempted to estimate the 4 and 8 o'clock positions, but there was no significant difference between data gathered with the dot and data gathered with the line segment. A control experiment confirmed that the biases were better accounted for by a local coordinate shift than by torsional eye movements induced by the tilted context. This finding supports the idea that tilted contexts distort perceived positions as well as perceived orientations, and cannot be readily explained by lateral interactions between orientation-selective cells in V1.
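The coordinate-shift account makes the same prediction for an unoriented dot as for a line segment: both are displaced by the rotation of the subjective frame. A minimal sketch, where `frame_shift_deg` is a hypothetical bias parameter, not a value estimated from the experiments:

```python
import math

def reported_position(target_clock_angle_deg, frame_shift_deg, radius=1.0):
    """Predicted setting under a local coordinate-shift account.

    If the tilted surround rotates the observer's frame of reference by
    `frame_shift_deg`, a position intended at `target_clock_angle_deg`
    (clockwise from 12 o'clock) is produced rotated by the same amount,
    for a dot endpoint just as for a line, because the shift acts on
    the frame rather than on oriented features.  Illustrative only.
    """
    a = math.radians(target_clock_angle_deg + frame_shift_deg)
    return (radius * math.sin(a), radius * math.cos(a))  # (x, y), y up

# 6 o'clock with a 3-deg frame rotation: reported just off straight down.
x, y = reported_position(180.0, 3.0)
print(round(x, 3), round(y, 3))   # -0.052 -0.999
```

With zero frame shift the function reduces to the veridical clock position, which is the dot/line-equivalence baseline the study tested.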

6.
We investigated the effect of background scene on the human visual perception of the depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When the scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, the evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientations of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze line or the object.
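The assimilation effect described here can be summarized as a weighted pull of the judged azimuth toward the scene axis. A minimal sketch in which the gain is an illustrative assumption, not a value fitted to the experiments:

```python
def assimilated_azimuth(object_deg, axis_deg, gain=0.25):
    """Weighted-assimilation sketch of the scene-axis bias.

    The perceived object azimuth is pulled toward a slightly
    misaligned scene axis by a fraction `gain` of the (wrapped)
    angular offset.  `gain` is hypothetical; the studies report the
    direction of the bias, not this exact form.
    """
    # Wrap the offset into (-180, 180] so small misalignments stay small.
    diff = (axis_deg - object_deg + 180.0) % 360.0 - 180.0
    return object_deg + gain * diff

# Scene axis 6 deg off the object's facing direction: the judged
# orientation is drawn part-way toward the axis.
print(assimilated_azimuth(0.0, 6.0))   # 1.5
```

The wrapping step matters only near the 0/360 boundary; it keeps a near-aligned axis from producing a spurious large offset.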

7.
Brain regions in the intraparietal and premotor cortices selectively process visual and multisensory events near the hands (peri-hand space). Visual information from the hand itself modulates this processing, potentially because it is used to estimate the location of one's own body and the surrounding space. In humans, specific occipitotemporal areas process visual information about specific body parts such as the hands. Here we used an fMRI block design to investigate whether anterior intraparietal and ventral premotor 'peri-hand areas' exhibit selective responses to viewing images of hands and to viewing specific hand orientations. Furthermore, we investigated whether the occipitotemporal 'hand area' is sensitive to viewed hand orientation. Our findings demonstrate increased BOLD responses in the left anterior intraparietal area when participants viewed hands and feet as compared to faces and objects. Anterior intraparietal and occipitotemporal areas in the left hemisphere also exhibited response preferences for viewing right hands in orientations commonly seen for one's own hand, as compared to uncommon own-hand orientations. Our results indicate that both anterior intraparietal and occipitotemporal areas encode visual limb-specific shape and orientation information.

8.
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved. Previous evidence shows that the human visual system accounts for the distance the observer has walked and the separation of the eyes when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
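The Bayesian cue-integration model referred to above is standard reliability-weighted averaging, with cue reliability falling off with viewing distance. A minimal sketch; the cue values and sigmas are illustrative, and the distance dependence is modeled simply by changing the disparity cue's sigma:

```python
def fuse_cues(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue's weight is its inverse variance, so cues with smaller
    sigma dominate.  In the model above, disparity and motion cues
    have high efficacy (small sigma) at near viewing distances and so
    dominate scale judgements there; numbers below are illustrative.
    """
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, estimates)) / total

# Disparity says the object is 1.0 m away; a pictorial cue says 2.0 m.
# Near viewing: disparity is reliable (sigma 0.1) and dominates.
print(round(fuse_cues([1.0, 2.0], [0.1, 0.5]), 3))   # 1.038
# Far viewing: disparity is noisy (sigma 1.0); the other cue dominates.
print(round(fuse_cues([1.0, 2.0], [1.0, 0.2]), 3))   # 1.962
```

On this scheme, a scene-size change goes unnoticed when the cues signaling it carry little weight at the current viewing distance.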

9.
The orientation of cellulose microfibrils (MFs) and the arrangement of cortical microtubules (MTs) in the developing tension-wood fibres of Japanese ash (Fraxinus mandshurica Rupr. var. japonica Maxim.) trees were investigated by electron and immunofluorescence microscopy. The MFs were deposited at an angle of about 45° to the longitudinal axis of the fibre in an S-helical orientation at the initiation of secondary wall thickening. The MFs changed their orientation progressively, with clockwise rotation (viewed from the lumen side), from the S-helix until they were oriented approximately parallel to the fibre axis. This configuration can be considered as a semihelicoidal pattern. With arresting of rotation, a thick gelatinous (G-) layer was developed as a result of the repeated deposition of parallel MFs with a consistent texture. Two types of gelatinous fibre were identified on the basis of the orientation of MFs at the later stage of G-layer deposition. Microfibrils of type 1 were oriented parallel to the fibre axis; MFs of type 2 were laid down with counterclockwise rotation. The counterclockwise rotation of MFs was associated with a variation in the angle of MFs with respect to the fibre axis that ranged from 5° to 25° with a Z-helical orientation among the fibres. The MFs showed a high degree of parallelism at all stages of deposition during G-layer formation. No MFs with an S-helical orientation were observed in the G-layer. Based on these results, a model for the orientation and deposition of MFs in the secondary wall of tension-wood fibres with an S1 + G type of wall organization is proposed. The MT arrays changed progressively, with clockwise rotation (viewed from the lumen side), from an angle of about 35–40° in a Z-helical orientation to an angle of approximately 0° (parallel) to the fibre axis during G-layer formation. The parallelism between MTs and MFs was evident. 
The density of MTs in the developing tension-wood fibres during formation of the G-layer was about 17–18 per μm of wall. It appears that MTs at such high density play a significant role in regulating the orientation of nascent MFs in the secondary walls of wood fibres. It also appears that the high degree of parallelism among MFs is closely related to the parallelism of the MTs that are present at high density.

Abbreviations: FE-SEM, field emission scanning electron microscopy; G, gelatinous layer; MF, cellulose microfibril; MT, cortical microtubule; S1, outermost layer of the secondary wall; TEM, transmission electron microscopy.

We thank Dr. Y. Akibayashi, Mr. Y. Sano and Mr. T. Itoh of the Faculty of Agriculture, Hokkaido University, for their experimental and technical assistance.

10.
It has long been assumed that there is a distorted mapping between real and 'perceived' space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A …).

11.
12.
The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric that individuals use to scale the apparent sizes of objects in the environment. Testing this requires manipulating the size and/or dimensions of the perceiver's hand, which is difficult in the real world because hand dimensions cannot be altered. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully tracked virtual hands and to investigate their influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimates of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments.
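In its strongest form, the body-based scaling hypothesis implies a simple prediction: apparent size is inversely proportional to the size of the hand used as the metric. A sketch under that simplifying assumption (the studies report a directional effect, not strict proportionality):

```python
def perceived_size(physical_size, virtual_hand_size, baseline_hand_size=1.0):
    """Body-based scaling sketch: the hand as a measuring unit.

    If object extent is encoded in hand units, enlarging the virtual
    hand makes the same object span fewer hand units and thus appear
    smaller.  Strict proportionality is a simplifying assumption made
    for this sketch, not the studies' fitted relationship.
    """
    return physical_size * baseline_hand_size / virtual_hand_size

# Doubling the virtual hand halves the object's apparent size.
print(perceived_size(10.0, 2.0))   # 5.0
```

The specificity finding above corresponds to using the observer's own virtual hand, and not another avatar's hand, as `virtual_hand_size`.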

13.
The processes underlying object recognition are fundamental for the understanding of visual perception. Humans can recognize many objects rapidly even in complex scenes, a task that still presents major challenges for computer vision systems. A common experimental demonstration of this ability is the rapid animal detection protocol, in which human participants' earliest responses reporting the presence or absence of animals in natural scenes are observed at latencies of 250–270 ms. One hypothesis to account for such speed is that people do not actually recognize an animal per se, but rather base their decision on global scene statistics. These global statistics (also referred to as the spatial envelope, or gist) have been shown to be computationally easy to process and could thus serve as a proxy for coarse object recognition. Here, using a saccadic choice task, which allows us to investigate a previously inaccessible temporal window of visual processing, we showed that animal, but not vehicle, detection clearly precedes scene categorization. This asynchrony is further validated by a late contextual modulation of animal detection, starting simultaneously with the availability of the scene category. Interestingly, the advantage of animal detection over scene categorization is in opposition to the results of simulations using standard computational models. Taken together, these results challenge the idea that rapid animal detection is based on early access to global scene statistics, and instead suggest a process based on the extraction of specific local complex features that might be hardwired in the visual system.
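"Global scene statistics" of the kind at issue can be approximated very coarsely. The sketch below is a toy stand-in for gist/spatial-envelope features (mean gradient energy on a coarse spatial grid), not the actual descriptor used in the paper's simulations:

```python
import numpy as np

def gist_descriptor(image, grid=4):
    """Toy 'spatial envelope' proxy: gradient energy on a coarse grid.

    Real gist descriptors pool oriented filter energy over a coarse
    grid; here we pool plain gradient energy, which is enough to show
    why such statistics are computationally cheap: one pass over the
    image and a small pooled summary.
    """
    gy, gx = np.gradient(image.astype(float))
    energy = gx**2 + gy**2
    h, w = energy.shape
    return np.array([[energy[i * h // grid:(i + 1) * h // grid,
                             j * w // grid:(j + 1) * w // grid].mean()
                      for j in range(grid)] for i in range(grid)])

# A 32x32 image summarized as a 4x4 grid of pooled energy values.
print(gist_descriptor(np.zeros((32, 32))).shape)   # (4, 4)
```

A uniform image yields an all-zero descriptor; any texture or edge structure raises the pooled energy in the corresponding grid cells.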

14.
Human observers are especially sensitive to the actions of conspecifics that match their own actions. This sensitivity has been proposed to be critical for social interaction, providing the basis for empathy and joint action. However, the precise relation between observed and executed actions is still poorly understood. Do ongoing actions change the way observers perceive others' actions? To pursue this question, we exploited the bistability of depth-ambiguous point-light walkers, which can be perceived as facing towards or away from the viewer. We demonstrate that point-light walkers are perceived as facing the viewer more often when the observer is walking on a treadmill than when the observer is performing an action that does not match the observed behavior (e.g., cycling). These findings suggest that motor processes influence the perceived orientation of observed actions: acting observers tend to perceive similar actions by conspecifics as oriented towards themselves. We discuss these results in light of the possible mechanisms underlying action-induced modulation of perception.

15.
Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype.
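The variance-based explanation of the decoding difference can be checked with a toy simulation: a nearest-prototype decoder on synthetic patterns in which within-category variability is the only difference between "good" and "bad" exemplars. This is a stand-in for the paper's simulation, not its actual pipeline:

```python
import numpy as np

def decoding_accuracy(within_category_sd, n_samples=2000, n_features=50,
                      seed=0):
    """Accuracy of a nearest-prototype decoder for two categories.

    Each sample is a category prototype plus Gaussian noise whose SD
    models exemplar variability.  'Good' exemplars cluster tightly
    around the prototype (small SD); 'bad' exemplars are more variable
    (large SD), which hurts decoding even though the prototypes, and
    hence mean activity, are unchanged.
    """
    rng = np.random.default_rng(seed)
    protos = rng.standard_normal((2, n_features))
    labels = rng.integers(0, 2, n_samples)
    samples = protos[labels] + within_category_sd * rng.standard_normal(
        (n_samples, n_features))
    # Classify each sample by its nearest prototype.
    d = np.linalg.norm(samples[:, None, :] - protos[None, :, :], axis=2)
    return float(np.mean(d.argmin(axis=1) == labels))

good = decoding_accuracy(within_category_sd=2.0)  # tight clusters
bad = decoding_accuracy(within_category_sd=6.0)   # variable clusters
print(good > bad)   # True: higher exemplar variance -> lower accuracy
```

Mean activity is identical across the two conditions by construction, matching the abstract's point that the decoding difference need not reflect overall BOLD differences.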

16.
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.

17.
Is object search mediated by object-based or image-based representations?   Cited: 1 (self-citations: 0, citations by others: 1)
Newell FN, Brown V, Findlay JM. Spatial Vision, 2004, 17(4-5): 511-541
Recent research suggests that visually specific memory representations of previously fixated objects are maintained during scene perception. Here we investigate the degree of visual specificity by asking whether these memory representations are image-based or object-based. To that end, we measured the effects of object orientation on the time to search for a familiar object among a set of 7 familiar distractors arranged in a circular array. Search times were found to depend on the relative orientations of the target object and the probe object, for both familiar and novel objects. This effect was partly an image-matching effect, but there was also an advantage for the canonical view of familiar objects. Orientation effects were maintained even when the target object was specified as having unique or similar shape properties relative to the distractors. Participants' eye movements were monitored during two of the experiments. Eye movement patterns revealed selection for object shape and object orientation during the search process. Our findings provide evidence for object representations during search that are detailed, combining image-based characteristics with higher-level characteristics from object memory.

18.
19.
Contextual effects are ubiquitous in vision and reveal fundamental principles of sensory coding. Here, we demonstrate that an oriented surround grating can affect the perceived orientation of a central test grating even when backward masking of the surround prevents its orientation from being consciously perceived. The effect survives introduction of a gap between test and surround of over a degree even under masking, suggesting either that contextual information can effectively propagate across early visual cortex in the absence of awareness of the signaled context or that it can proceed undetected to higher processing levels at which such horizontal propagation may not be necessary. The effect under masking also shows partial interocular transfer, demonstrating processing of orientation by binocular neurons in visual cortex in the absence of conscious orientation perception. This pattern of results is consistent with the suggestion that simultaneous orientation contrast is mediated at multiple levels of the visual processing hierarchy, and it supports the view that propagation of signals to and, possibly, back from higher visual areas is necessary for conscious perception.

20.
Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes’ views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators.This article is part of the themed issue ‘Vision in our three-dimensional world’.
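The segregate-then-average model can be sketched on a 1-D disparity profile. The zero-threshold segregation rule and the `central_fraction` parameter below are simplifying assumptions, not the paper's fitted model:

```python
import numpy as np

def perceived_depth(profile, central_fraction=0.5):
    """Segregate-then-average sketch of perceived peak depth.

    Following the account above: first segregate the object (here,
    the region where disparity-defined depth is nonzero), then report
    the mean depth over a central portion of that region.  Averaging
    over ramped edges dilutes the peak, so smoothly-edged objects
    come out shallower than sharply-edged ones of equal peak depth.
    """
    x = np.asarray(profile, dtype=float)
    obj = np.flatnonzero(x > 0)            # segregated object region
    if obj.size == 0:
        return 0.0
    centre = (obj[0] + obj[-1]) / 2.0
    half = central_fraction * (obj[-1] - obj[0]) / 2.0
    core = x[int(np.ceil(centre - half)):int(np.floor(centre + half)) + 1]
    return float(core.mean())

n = np.arange(101)
sharp = np.where((n >= 30) & (n <= 70), 1.0, 0.0)        # sharp edges
smooth = np.clip(1.0 - np.abs(n - 50) / 20.0, 0.0, 1.0)  # ramped edges
# Same peak depth, but averaging over the segregated core reduces the
# estimate for the smoothly-edged object, as in the reported data.
print(perceived_depth(sharp) > perceived_depth(smooth))  # True
```

Because the averaging window is derived from the segregated region itself, the model also captures the finding that the segregated area depends on object shape.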


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号