Similar Literature
20 similar documents were found (search time: 31 ms).
1.
Color-based object selection — for instance, looking for ripe tomatoes in the market — places demands on both perceptual and memory processes: it is necessary to form a stable perceptual estimate of surface color from a variable visual signal, as well as to retain multiple perceptual estimates in memory while comparing objects. Nevertheless, perceptual and memory processes in the color domain are generally studied in separate research programs with the assumption that they are independent. Here, we demonstrate a strong failure of independence between color perception and memory: the effect of context on color appearance is substantially weakened by a short retention interval between a reference and test stimulus. This somewhat counterintuitive result is consistent with Bayesian estimation: as the precision of the representation of the reference surface and its context decays in memory, prior information gains more weight, causing the retained percepts to be drawn toward prior information about surface and context color. This interaction implies that to fully understand information processing in real-world color tasks, perception and memory need to be considered jointly.
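To make the Bayesian-estimation account above concrete, here is a minimal sketch (not the authors' actual model) of precision-weighted combination of a remembered color estimate with a prior: as the memory's variance grows over the retention interval, the retained estimate is pulled toward the prior. All variable names and numerical values are illustrative assumptions.

```python
import numpy as np

def reliability_weighted_estimate(obs, obs_var, prior_mean, prior_var):
    """Gaussian Bayesian estimate: precision-weighted mean of observation and prior."""
    w_obs = 1.0 / obs_var          # precision of the (remembered) sensory estimate
    w_prior = 1.0 / prior_var      # precision of the prior over surface/context color
    return (w_obs * obs + w_prior * prior_mean) / (w_obs + w_prior)

# Hue of a reference surface (arbitrary units), prior centered elsewhere.
reference_hue, prior_hue = 0.80, 0.50

# Immediately after viewing, the memory of the reference is precise;
# after a retention interval its variance has grown (precision decayed).
for label, memory_var in [("immediate", 0.01), ("after delay", 0.20)]:
    est = reliability_weighted_estimate(reference_hue, memory_var,
                                        prior_mean=prior_hue, prior_var=0.05)
    print(f"{label:>12}: retained estimate = {est:.3f}")
# The delayed estimate lies closer to the prior, weakening context effects.
```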

2.
Grossberg S, Hong S. Spatial Vision, 2006, 19(2-4): 263-321.
A neural model is proposed of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models clarify how the brain can compute relative contrast. The anchored Filling-In Lightness Model (aFILM) clarifies how the brain 'anchors' lightness percepts to determine an absolute lightness scale that uses the full dynamic range of neurons. The model quantitatively simulates lightness anchoring properties (Articulation, Insulation, Configuration, Area Effect) and other lightness data (discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, Craik-O'Brien-Cornsweet illusion). The model clarifies how retinal processing stages achieve light adaptation and spatial contrast adaptation, and how cortical processing stages fill in surface lightness using long-range horizontal connections that are gated by boundary signals. The new filling-in mechanism runs 1000 times faster than diffusion mechanisms of previous filling-in models.
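As an illustration of the boundary-gated filling-in principle, the sketch below implements the classical slow diffusive variant (the one aFILM is explicitly contrasted against): lightness spreads between neighboring cells except where a boundary signal blocks the flow. The 1-D profile and all parameters are invented for illustration.

```python
import numpy as np

# 1-D luminance profile: two regions with shallow internal gradients and a sharp
# step between them; the step generates a boundary signal that gates filling-in.
left = np.linspace(0.25, 0.35, 50)
right = np.linspace(0.65, 0.75, 50)
luminance = np.concatenate([left, right])
boundary = np.abs(np.diff(luminance, prepend=luminance[0])) > 0.1

# Classical diffusive filling-in, gated by boundaries: lightness spreads between
# neighbors unless a boundary blocks the flow. (aFILM replaces this slow diffusion
# with a much faster long-range mechanism, but the gating principle is the same.)
lightness = luminance.copy()
for _ in range(5000):
    flow = lightness[1:] - lightness[:-1]
    flow[boundary[1:]] = 0.0                  # boundary cells block the flow
    lightness[:-1] += 0.2 * flow
    lightness[1:] -= 0.2 * flow

step = lightness[50] - lightness[49]
print(f"within-region spread: {np.ptp(lightness[:50]):.3f} (was {np.ptp(left):.3f})")
print(f"preserved step across the boundary: {step:.3f}")
```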

3.
Adaptation is an automatic neural mechanism supporting the optimization of visual processing on the basis of previous experiences. While the short-term effects of adaptation on behaviour and physiology have been studied extensively, perceptual long-term changes associated with adaptation are still poorly understood. Here, we show that the integration of adaptation-dependent long-term shifts in neural function is facilitated by sleep. Perceptual shifts induced by adaptation to a distorted image of a famous person were larger in a group of participants who had slept (experiment 1) or merely napped for 90 min (experiment 2) during the interval between adaptation and test compared with controls who stayed awake. Participants' individual rapid eye movement sleep duration predicted the size of post-sleep behavioural adaptation effects. Our data suggest that sleep prevented decay of adaptation in a way that is qualitatively different from the effects of reduced visual interference known as ‘storage’. In the light of the well-established link between sleep and memory consolidation, our findings link the perceptual mechanisms of sensory adaptation—which are usually not considered to play a relevant role in mnemonic processes—with learning and memory, and at the same time reveal a new function of sleep in cognition.

4.
Lightness illusions are fundamental to human perception, and yet why we see them is still the focus of much research. Here we address the question by modelling not human physiology or perception directly, as is typically the case, but our natural visual world and the need for robust behaviour. Artificial neural networks were trained to predict the reflectance of surfaces in a synthetic ecology consisting of 3-D “dead-leaves” scenes under non-uniform illumination. The networks learned to solve this task accurately and robustly given only ambiguous sense data. In addition—and as a direct consequence of their experience—the networks also made systematic “errors” in their behaviour commensurate with human illusions, which include brightness contrast and assimilation—although assimilation (specifically White's illusion) only emerged when the virtual ecology included 3-D rather than 2-D scenes. Subtle variations in these illusions, also found in human perception, were observed, such as the asymmetry of brightness contrast. These data suggest that “illusions” arise in humans because (i) natural stimuli are ambiguous, and (ii) this ambiguity is resolved empirically by encoding the statistical relationship between images and scenes in past visual experience. Since resolving stimulus ambiguity is a challenge faced by all visual systems, a corollary of these findings is that human illusions must be experienced by all visual animals regardless of their particular neural machinery. The data also provide a more formal definition of illusion: the condition in which the true source of a stimulus differs from its most likely (and thus perceived) source. As such, illusions are not fundamentally different from non-illusory percepts, all being direct manifestations of the statistical relationship between images and scenes.
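A toy sketch of the general training setup described above, under stated assumptions: luminance is the product of an unknown reflectance and a shared unknown illuminant, and a small network is trained to report reflectance from the ambiguous luminance pattern alone. The architecture, scene statistics, and parameters are stand-ins, not the published dead-leaves model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the "dead-leaves under non-uniform illumination" ecology:
# luminance patches are the product of unknown reflectances and an illuminant.
def make_scene(n_patches=9):
    reflectance = rng.uniform(0.05, 0.95, n_patches)
    illumination = rng.uniform(0.2, 1.0)            # shared, unknown illuminant
    return reflectance * illumination, reflectance

# A single-hidden-layer network trained to map a patch's luminance plus its
# local context onto reflectance (illustrative, not the published architecture).
W1 = rng.normal(0, 0.3, (16, 9)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.3, (9, 16)); b2 = np.zeros(9)
lr = 0.05

for step in range(20000):
    x, target = make_scene()
    h = np.tanh(W1 @ x + b1)
    pred = W2 @ h + b2
    err = pred - target
    # Backpropagate squared error.
    dW2 = np.outer(err, h); db2 = err
    dh = (W2.T @ err) * (1 - h**2)
    dW1 = np.outer(dh, x); db1 = dh
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

x, target = make_scene()
print("true reflectance:", np.round(target, 2))
print("network estimate:", np.round(W2 @ np.tanh(W1 @ x + b1) + b2, 2))
```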

5.
Visual illusions and other perceptual phenomena can be used as tools to uncover the otherwise hidden constructive processes that give rise to perception. Although many perceptual processes are assumed to be universal, variable susceptibility to certain illusions and perceptual effects across populations suggests a role for factors that vary culturally. One striking phenomenon is seen with two-tone images—photos reduced to two tones: black and white. Deficient recognition is observed in young children under conditions that trigger automatic recognition in adults. Here we show a similar lack of cue-triggered perceptual reorganization in the Pirahã, a hunter-gatherer tribe with limited exposure to modern visual media, suggesting such recognition is experience- and culture-specific.

6.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy—though binocular cues were trusted more than haptic cues. Our results suggest both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
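A worked sketch of the size-distance disambiguation logic described above: physical size follows from visual angle and distance, so the size percept inherits whatever distance estimate is available, whether a prior alone under monocular viewing or a reliability-weighted combination of the prior with auxiliary binocular and haptic distance cues. The Gaussian cue model and all numbers are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

def combine(means, variances):
    """Precision-weighted (Gaussian) combination of independent distance cues."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(means)) / np.sum(w)), float(1.0 / np.sum(w))

visual_angle = np.deg2rad(2.0)       # retinal image size of the ball

# Monocular viewing: only a prior assumption about distance is available.
prior = (0.6, 0.09)                  # (mean, variance) -- illustrative values
# Auxiliary cues provide independent, noisy distance estimates.
binocular = (0.95, 0.01)
haptic = (1.05, 0.04)

for label, cues in [("prior only", [prior]),
                    ("prior + haptic", [prior, haptic]),
                    ("prior + binocular + haptic", [prior, binocular, haptic])]:
    d_hat, _ = combine([c[0] for c in cues], [c[1] for c in cues])
    size_hat = 2 * d_hat * np.tan(visual_angle / 2)   # inferred physical diameter
    print(f"{label:>28}: distance ~ {d_hat:.2f} m, size ~ {100*size_hat:.1f} cm")
# More reliable (lower-variance) distance cues dominate, mirroring the finding
# that binocular information was trusted more than haptic information.
```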

7.
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
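The three hypothesized components (a representational language, modality-specific forward models, and an inference algorithm that inverts them) can be sketched in miniature as follows. The "grammar" is reduced to a single discrete shape parameter and the forward models to noisy linear mappings; everything here is an invented stand-in for the paper's far richer probabilistic grammar and graphics/hand simulators.

```python
import numpy as np

rng = np.random.default_rng(4)

# (1) Representational "language": a toy modality-independent shape description,
#     here just the number of parts an object has (stand-in for a shape grammar).
hypotheses = np.arange(1, 6)                     # objects with 1..5 parts
prior = np.full(len(hypotheses), 1 / len(hypotheses))

# (2) Modality-specific forward models: map the shape description to expected
#     visual and haptic features (illustrative linear stand-ins with known noise).
def visual_forward(n_parts):  return 10.0 * n_parts          # projected area
def haptic_forward(n_parts):  return 2.0 * n_parts + 1.0     # grasp aperture
SIGMA_V, SIGMA_H = 4.0, 1.0

def gaussian_like(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# (3) Inference: invert the forward models by Bayes' rule, using whichever
#     sensory signals are available.
def infer(visual_obs=None, haptic_obs=None):
    post = prior.copy()
    if visual_obs is not None:
        post *= gaussian_like(visual_obs, visual_forward(hypotheses), SIGMA_V)
    if haptic_obs is not None:
        post *= gaussian_like(haptic_obs, haptic_forward(hypotheses), SIGMA_H)
    return post / post.sum()

true_parts = 3
v = visual_forward(true_parts) + rng.normal(0, SIGMA_V)
h = haptic_forward(true_parts) + rng.normal(0, SIGMA_H)
for name, post in [("vision", infer(visual_obs=v)), ("touch", infer(haptic_obs=h)),
                   ("both", infer(visual_obs=v, haptic_obs=h))]:
    print(f"{name:>6}: MAP = {hypotheses[np.argmax(post)]} parts, posterior = {np.round(post, 2)}")
```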

8.
When two different odorants are presented simultaneously to the two nostrils, we experience alternations in olfactory percepts, a phenomenon called binaral rivalry. Little is known about the nature of such alternations. Here we investigate this issue by subjecting unstable and stable olfactory percepts to the influences of visual perceptual or semantic cues as participants engage in simultaneous samplings of either two different odorants (binaral) or a single odorant and water (mononaral), one to each nostril. We show that alternations of olfactory percepts in the binaral setting persist in the presence of visual perceptual and semantic modulations. We also show that perceptual cues have a stronger effect than semantic cues in the binaral case, whereas their effects are comparable in the mononaral setting. Our findings provide evidence that an inherent, stimulus-driven process underlies binaral rivalry despite its general susceptibility to top-down influences.

9.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a “simple” random-dot kinematogram showing a starfield and (2) a “naturalistic” visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals. It is enhanced when audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli, especially when paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.
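The looming bias quantified above, an increase in dominance times for looming relative to receding percepts, can be summarized with a simple normalized index, sketched below on invented dominance-phase durations (the paper's own analysis and data are not reproduced here).

```python
import numpy as np

def looming_bias(dominance_looming, dominance_receding):
    """Normalized dominance bias: >0 favors looming, <0 favors receding."""
    t_loom = np.sum(dominance_looming)
    t_rec = np.sum(dominance_receding)
    return (t_loom - t_rec) / (t_loom + t_rec)

# Dominance-phase durations (s) for one observer in two sound conditions
# (illustrative numbers only).
loom_sound = looming_bias([2.4, 3.1, 2.8, 3.5], [1.2, 1.5, 1.1, 1.4])
rec_sound = looming_bias([1.6, 1.9, 1.7, 1.5], [2.2, 2.6, 2.4, 2.8])
print(f"bias with looming sound:  {loom_sound:+.2f}")
print(f"bias with receding sound: {rec_sound:+.2f}  (bias reversed)")
```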

10.
Perceived depth is conveyed by multiple cues, including binocular disparity and luminance shading. Depth perception from luminance shading information depends on the perceptual assumption for the incident light, which has been shown to default to a diffuse illumination assumption. We focus on the case of sinusoidally corrugated surfaces, asking how shading and disparity cues combine when they are defined by the joint luminance gradients and intrinsic disparity modulation that would occur when viewing the physical corrugation of a uniform surface under diffuse illumination. Such surfaces were simulated with a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0′ to 20′. The observers’ task was to adjust the binocular disparity of a comparison random-dot stereogram surface to match the perceived depth of the joint luminance/disparity-modulated corrugation target. Regardless of target spatial frequency, the perceived target depth increased with the luminance contrast and depended on luminance phase but was largely unaffected by the luminance disparity modulation. These results validate the idea that human observers can use the diffuse illumination assumption to perceive depth from luminance gradients alone without making an assumption of light direction. For depth judgments with combined cues, the observers gave much greater weighting to the luminance shading than to the disparity modulation of the targets. The results were not well fit by a Bayesian cue-combination model weighted in proportion to the reliability (inverse variance) of the measurements for each cue in isolation. Instead, they suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
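For reference, the Bayesian (maximum-likelihood) cue-combination benchmark that the observed depth matches were tested against weights each cue by its reliability, the inverse of its single-cue variance. A minimal sketch with illustrative numbers:

```python
def mle_combination(depth_shading, var_shading, depth_disparity, var_disparity):
    """Standard reliability-weighted (MLE) cue combination the study tested against."""
    w_s = 1.0 / var_shading
    w_d = 1.0 / var_disparity
    depth = (w_s * depth_shading + w_d * depth_disparity) / (w_s + w_d)
    var = 1.0 / (w_s + w_d)
    return depth, var

# Single-cue depth matches (arcmin) and variances measured in isolation
# (illustrative numbers, not the paper's data).
pred, var = mle_combination(depth_shading=12.0, var_shading=9.0,
                            depth_disparity=6.0, var_disparity=4.0)
print(f"MLE prediction: {pred:.1f} arcmin (variance {var:.1f})")
# The observed matches tracked the shading cue far more than this weighting
# predicts, which is why the authors favor disjunctive processing instead.
```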

11.
In its early stages, the visual system suffers from considerable ambiguity and noise, which severely limit the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that — in addition to commonly used geometric information — makes use of a novel multi-modal measure between local edge/line features. The grouping information is then used to: 1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and 2) correct the reconstruction error due to image pixel sampling using linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.

12.
How spiking neurons cooperate to control behavioral processes is a fundamental problem in computational neuroscience. Such cooperative dynamics are required during visual perception when spatially distributed image fragments are grouped into emergent boundary contours. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity occur in response to binary spikes with irregular timing across many interacting cells. Some models have demonstrated spiking dynamics in recurrent laminar neocortical circuits, but not how perceptual grouping occurs. Other models have analyzed the fast speed of certain percepts in terms of a single feedforward sweep of activity, but cannot explain other percepts, such as illusory contours, wherein perceptual ambiguity can take hundreds of milliseconds to resolve by integrating multiple spikes over time. The current model reconciles fast feedforward with slower feedback processing, and binary spikes with analog network-level properties, in a laminar cortical network of spiking cells whose emergent properties quantitatively simulate parametric data from neurophysiological experiments, including the formation of illusory contours; the structure of non-classical visual receptive fields; and self-synchronizing gamma oscillations. These laminar dynamics shed new light on how the brain resolves local informational ambiguities through the use of properly designed nonlinear feedback spiking networks which run as fast as they can, given the amount of uncertainty in the data that they process.

13.
In this paper, we describe a statistically based algorithm that quantifies the uniformity of illumination in an optical light microscopy imaging system and outputs a single quality factor (QF) score. The importance of homogeneous field illumination in quantitative light microscopy is well understood and often checked. However, there is currently no standard automatic quantitative measure of the uniformity of the field illumination. Images from 89 different laser-scanning confocal microscopes (LSCMs), which were collected as part of an international study on microscope quality assessment, were used as a “training” set to build the algorithm. To validate the algorithm and verify its robustness, images from 33 additional microscopes, including LSCM and wide-field (WF) microscopes, were used. The statistical paradigm used for developing the quality scoring scale was a regression approach to supervised learning. Three intensity profiles across each image—two corner-to-corner diagonals and a central horizontal line—were used to generate pixel-intensity data. All of the lines passed through the center of the image. The intensity profile data were then converted into a single field-illumination QF score in the range of 0–100, with 0 indicating extreme variation (essentially unusable) and 100 indicating no deviation, i.e., straight profiles of constant, uniform intensity. Empirically, a QF ≥ 83 was determined to be the minimum acceptable value based on manufacturer acceptance tests and reasonably achievable values. This new QF is an invaluable metric for objectively and easily ascertaining illumination uniformity, providing a traceable reference for monitoring field uniformity over time, and making direct comparisons among different microscopes. The QF can also be used as an indicator of system failure and the need for alignment or service of the instrument.
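A rough sketch of the profile-extraction step described above, paired with a deliberately simplified uniformity score. The published QF is produced by a trained regression over the profile data, so the deviation-based 0-100 score below is only an assumed stand-in for illustrating the workflow; function names and test images are invented.

```python
import numpy as np

def extract_profiles(img):
    """Two corner-to-corner diagonals and the central horizontal line."""
    h, w = img.shape
    idx = np.linspace(0, 1, min(h, w))
    diag1 = img[(idx * (h - 1)).astype(int), (idx * (w - 1)).astype(int)]
    diag2 = img[(idx * (h - 1)).astype(int), ((1 - idx) * (w - 1)).astype(int)]
    horiz = img[h // 2, :]
    return [diag1, diag2, horiz]

def uniformity_score(img):
    """Illustrative 0-100 score: 100 = perfectly flat profiles.
    The published QF instead maps profile features through a trained regression."""
    deviations = [np.std(p) / np.mean(p) for p in extract_profiles(img)]
    return float(np.clip(100.0 * (1.0 - np.mean(deviations)), 0, 100))

# Synthetic test: flat field vs. field with a bright center (vignetting-like falloff).
y, x = np.mgrid[-1:1:256j, -1:1:256j]
flat = np.full((256, 256), 1000.0)
vignetted = 1000.0 * np.exp(-(x**2 + y**2))
print(f"flat field:      QF-like score = {uniformity_score(flat):.1f}")
print(f"vignetted field: QF-like score = {uniformity_score(vignetted):.1f}")
```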

14.
Electrophysiological oscillations in different frequency bands co-occur with perceptual, motor and cognitive processes, but their function and respective contributions to these processes need further investigation. Here, we recorded MEG signals and searched for percept-related modulations of alpha-, beta- and gamma-band activity during a perceptual form/motion integration task. Participants reported their bound or unbound perception of ambiguously moving displays that could either be seen as a whole square-like shape moving along a Lissajous figure (bound percept) or as pairs of bars oscillating independently along cardinal axes (unbound percept). We found that beta (15–25 Hz), but not gamma (55–85 Hz) oscillations, index perceptual states at the individual and group level. The gamma-band activity found in the occipital lobe, although significantly higher during visual stimulation than during baseline, is similar in all perceptual states. Similarly, decreased alpha activity during visual stimulation does not differ across percepts. Trial-by-trial classification of perceptual reports based on beta-band oscillations was significant in most observers, further supporting the view that modulation of beta power reliably indexes perceptual integration of form/motion stimuli, even at the individual level.
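Trial-by-trial decoding of perceptual reports from band-limited power, as described above, is commonly done with a cross-validated linear classifier. The sketch below uses simulated beta-band power and scikit-learn; the feature construction, classifier, and effect size are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Simulated single-trial beta-band (15-25 Hz) power over a few sensors:
# "bound" trials carry slightly lower beta power than "unbound" trials
# (direction and effect size are illustrative assumptions).
n_trials, n_sensors = 200, 10
labels = rng.integers(0, 2, n_trials)                  # 0 = unbound, 1 = bound
beta_power = rng.normal(1.0, 0.3, (n_trials, n_sensors))
beta_power[labels == 1] -= 0.25

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, beta_power, labels, cv=5)
print(f"trial-by-trial decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
# Above-chance accuracy is the sense in which beta power "indexes" the percept.
```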

15.
An apparatus was constructed to record, continuously and simultaneously, changes in extinction and electrical conductance of rhodopsin solutions. With this apparatus, changes in electrical conductance on exposing rhodopsin to light were investigated. On illumination, solutions of rhodopsin revealed a conductance change so long as they preserved their photosensitivity. The conductance change begins almost immediately upon illumination and is almost proportional to the amount of rhodopsin decomposed, continuing until rhodopsin is converted to indicator yellow. Near pH 7 the conductance is apt to increase slightly, while it decreases considerably outside the range of pH 6–9, being accompanied by a pH change towards neutrality. The conductance change is regarded as an essential property of rhodopsin, because it occurs in aqueous suspension as well as in digitonin solution; it may be caused by hydrogen or hydroxyl ions and some other conductive substances. It is also noteworthy that the petroleum ether-soluble component of the rod outer segments—presumably the lipid—tends to increase the conductance change. In suspensions of rod outer segments and retinal homogenates, the conductance increases on illumination irrespective of pH: this may be due to secondary reactions following the photic reaction of rhodopsin. We shall discuss the significance of the conductance change in relation to the initiation of visual excitation.

16.
Our ability to perceive a stable visual world in the presence of continuous movements of the body, head, and eyes has puzzled researchers in the neuroscience field for a long time. We reformulated this problem in the context of hierarchical convolutional neural networks (CNNs)—whose architectures have been inspired by the hierarchical signal processing of the mammalian visual system—and examined perceptual stability as an optimization process that identifies image-defining features for accurate image classification in the presence of movements. Movement signals, multiplexed with visual inputs along overlapping convolutional layers, aided classification invariance of shifted images by making the classification faster to learn and more robust relative to input noise. Classification invariance was reflected in activity manifolds associated with image categories emerging in late CNN layers and with network units acquiring movement-associated activity modulations as observed experimentally during saccadic eye movements. Our findings provide a computational framework that unifies a multitude of biological observations on perceptual stability under optimality principles for image classification in artificial neural networks.
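The principle that a multiplexed movement signal can support shift-invariant recognition can be caricatured without a CNN at all: below, a template-matching readout either ignores or uses an efference-copy shift signal to re-center its input. This is only a toy stand-in for the paper's convolutional networks and training procedure; all stimuli and names are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two 1-D "image" templates standing in for image categories.
templates = np.stack([np.sin(np.linspace(0, 2 * np.pi, 32)),
                      np.sign(np.sin(np.linspace(0, 4 * np.pi, 32)))])

def observe(category, shift):
    """Sensory input after an eye movement of `shift` pixels, plus noise."""
    return np.roll(templates[category], shift) + rng.normal(0, 0.2, 32)

def classify(signal, shift=None):
    """Template matching; if a movement signal is available, undo the shift first."""
    if shift is not None:
        signal = np.roll(signal, -shift)      # efference-copy compensation
    return int(np.argmax(templates @ signal))

trials = [(rng.integers(0, 2), rng.integers(-8, 9)) for _ in range(500)]
with_movement = np.mean([classify(observe(c, s), s) == c for c, s in trials])
without_movement = np.mean([classify(observe(c, s)) == c for c, s in trials])
print(f"accuracy with movement signal:    {with_movement:.2f}")
print(f"accuracy without movement signal: {without_movement:.2f}")
```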

17.
Suzuki S, Grabowecky M. Neuron, 2002, 36(1): 143-157.
When a different pattern is presented to each eye, the perceived image spontaneously alternates between the two patterns (binocular rivalry); the dynamics of these bistable alternations are known to be stochastic. Examining multistable binocular rivalry (involving four dominant percepts), we demonstrated path dependence and on-line adaptation, which were equivalent whether perceived patterns were formed by single-eye dominance or by mixed-eye dominance. The spontaneous perceptual transitions tended to get trapped within a pair of related global patterns (e.g., opponent shapes and symmetric patterns), and during such trapping, the probability of returning to the repeatedly experienced patterns gradually decreased (postselection pattern adaptation). These results suggest that the structure of global shape coding and its adaptation play a critical role in directing spontaneous alternations of visual awareness in perceptual multistability.

18.
Recent research has witnessed an explosive increase in models that treat percepts as optimal probabilistic inference. The ubiquity of partial camouflage and occlusion in natural scenes, and the demonstrated capacity of the visual system to synthesize coherent contours and surfaces from fragmented image data, has inspired numerous attempts to model visual interpolation processes as rational inference. Here, we report striking new forms of visual interpolation that generate highly improbable percepts. We present motion displays depicting simple occlusion sequences that elicit vivid percepts of illusory contours (ICs) in displays for which they play no necessary explanatory role. These ICs define a second, redundant occluding surface, even though all of the image data can be fully explained by an occluding surface that is clearly visible. The formation of ICs in these images therefore entails an extraordinarily improbable co-occurrence of two occluding surfaces that arise from the same local occlusion events. The perceived strength of the ICs depends on simple low-level image properties, which suggests that they emerge as the outputs of mechanisms that automatically synthesize contours from the pattern of occlusion and disocclusion of local contour segments. These percepts challenge attempts to model visual interpolation as a form of rational inference and suggest the need to consider a broader space of computational problems and/or implementation level constraints to understand their genesis.

19.
Evanescent light—light that does not propagate but instead decays in intensity over a subwavelength distance—appears in both excitation (as in total internal reflection) and emission (as in near-field imaging) forms in fluorescence microscopy. This review describes the physical connection between these two forms as a consequence of geometrical squeezing of wavefronts, and describes newly established or speculative applications and combinations of the two. In particular, each can be used in analogous ways to produce surface-selective images, to examine the thickness and refractive index of films (such as lipid multilayers or protein layers) on solid supports, and to measure the absolute distance of a fluorophore to a surface. In combination, the two forms can further increase selectivity and reduce background scattering in surface images. The polarization properties of each lead to more sensitive and accurate measures of fluorophore orientation and membrane micromorphology. The phase properties of the evanescent excitation lead to a method of creating a submicroscopic area of total internal reflection illumination or enhanced-resolution structured illumination. Analogously, the phase properties of evanescent emission lead to a method of producing a smaller point spread function, in a technique called virtual supercritical angle fluorescence.
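The subwavelength decay mentioned above has a standard closed form for total internal reflection: the evanescent intensity falls off as exp(-z/d) with penetration depth d = lambda0 / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2)). A quick calculation for typical (assumed) TIRF parameters:

```python
import numpy as np

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e intensity decay depth of the evanescent field in total internal reflection."""
    theta = np.deg2rad(theta_deg)
    root = n1**2 * np.sin(theta)**2 - n2**2
    if root <= 0:
        raise ValueError("Angle is below the critical angle; no evanescent field.")
    return wavelength_nm / (4 * np.pi * np.sqrt(root))

# Typical TIRF configuration: glass (n1=1.52) / water (n2=1.33), 488 nm excitation.
critical = np.degrees(np.arcsin(1.33 / 1.52))
print(f"critical angle: {critical:.1f} deg")
for theta in (62, 65, 70):
    print(f"theta = {theta} deg -> penetration depth ~ {penetration_depth(488, 1.52, 1.33, theta):.0f} nm")
```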

20.
According to the World Economic Forum, the diffusion of unsubstantiated rumors on online social media is one of the main threats to our society. The disintermediated paradigm of content production and consumption on online social media might foster the formation of homogeneous communities (echo-chambers) around specific worldviews. Such a scenario has been shown to be a fertile environment for the diffusion of false claims. Viral phenomena not infrequently trigger naive (and sometimes comical) social responses—e.g., the recent case of Jade Helm 15, where a routine military exercise came to be perceived as the beginning of a civil war in the US. In this work, we address the emotional dynamics of collective debates around distinct kinds of information—i.e., science and conspiracy news—and inside and across their respective polarized communities. We find that for both kinds of content, the longer the discussion, the more negative the sentiment. We show that comments on conspiracy posts tend to be more negative than on science posts. However, the more engaged users are, the more negative their comments tend to be (on both science and conspiracy posts). Finally, zooming in on the interaction among polarized communities, we find a generally negative pattern. As the number of comments increases—i.e., as the discussion becomes longer—the sentiment of the post becomes more and more negative.
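The reported pattern (sentiment becoming more negative as discussions grow longer, with conspiracy posts more negative than science posts) is the kind of relationship one would check with a per-post aggregation and a simple linear fit. The sketch below runs on simulated data whose slopes and offsets merely mimic the reported direction of the effects; column names and magnitudes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated posts: comment counts and mean comment sentiment in [-1, 1].
# The negative slope and the conspiracy offset are illustrative assumptions
# mimicking the reported direction of the effects, not the study's data.
n_posts = 400
category = rng.integers(0, 2, n_posts)          # 0 = science, 1 = conspiracy
n_comments = rng.integers(1, 500, n_posts)
sentiment = (0.1 - 0.15 * category
             - 0.0008 * n_comments
             + rng.normal(0, 0.1, n_posts))

# Slope of mean sentiment vs. discussion length, per category.
for cat, name in [(0, "science"), (1, "conspiracy")]:
    x, y = n_comments[category == cat], sentiment[category == cat]
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{name:>10}: mean sentiment {y.mean():+.2f}, slope per comment {slope:+.5f}")
```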
