Similar Literature
20 similar records found (search time: 15 ms)
1.
Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, arising from the absence of retinal input at the optic disc, with surrounding visual attributes. During filling-in, nonlinear neural responses that correlate with perception are observed in early visual areas, but knowledge of the underlying neural mechanism remains far from complete. In this work, we present a fresh perspective on the computational mechanism of the filling-in process within the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating it with different bar stimuli across the blind spot. We found that the predictive-estimator neurons representing the blind spot in primary visual cortex exhibit an elevated nonlinear response when the bar stimulates both sides of the blind spot. Using the generative model, we also show that these responses represent filling-in completion. These results are consistent with the findings of psychophysical and physiological studies. We further demonstrate that the tolerance of filling-in to misalignment qualitatively matches experimental findings with non-aligned bars. We discuss this phenomenon in the predictive-coding paradigm and show that all our results can be explained by efficient coding of natural images together with feedback and feed-forward connections that allow priors and predictions to co-evolve toward the best prediction. These results suggest that the filling-in process may be a manifestation of a general computational principle: hierarchical predictive coding of natural images.
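The settling dynamics behind such a model can be sketched in a few lines. This is a minimal two-level toy, not the paper's three-level network: the generative matrix `U`, its scale, and the learning rate are illustrative assumptions. The latent estimate is updated by the feedback-propagated prediction error until the top-down prediction matches the input.

```python
import numpy as np

# Minimal two-level predictive-coding sketch (Rao & Ballard style).
# U, its scale, and the settling parameters are illustrative assumptions.
rng = np.random.default_rng(0)
n_input, n_latent = 16, 4
U = rng.normal(scale=0.3, size=(n_input, n_latent))  # generative weights

def settle(x, steps=400, lr=0.1):
    """Iteratively update the latent estimate r to reduce prediction error."""
    r = np.zeros(n_latent)
    for _ in range(steps):
        err = x - U @ r        # bottom-up prediction error
        r += lr * (U.T @ err)  # error-driven update of the estimate
    return r

x = U @ np.array([1.0, -0.5, 0.0, 2.0])  # input generated by a known cause
r_hat = settle(x)
reconstruction = U @ r_hat               # top-down prediction after settling
```

Feedback (the top-down prediction `U @ r`) and feed-forward error signals jointly drive the estimate until the reconstruction matches the input; the paper's filling-in account stacks several such stages into a hierarchy.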

2.
We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a “+” shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., “above” and “left”). In easier-to-name trials, both crosses were rotated 45° to form an “×” shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., “above” or “left”). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.

3.
Maus GW, Fischer J, Whitney D. PLoS ONE 2011, 6(5): e19796
Crowding is a fundamental bottleneck in object recognition. In crowding, an object in the periphery becomes unrecognizable when surrounded by clutter or distractor objects. Crowding depends on the positions of target and distractors, both their eccentricity and their relative spacing. In all previous studies, position has been expressed in terms of retinal position. However, in a number of situations retinal and perceived positions can be dissociated. Does retinal or perceived position determine the magnitude of crowding? Here observers performed an orientation judgment on a target Gabor patch surrounded by distractors that drifted toward or away from the target, causing an illusory motion-induced position shift. Distractors in identical physical positions led to worse performance when they drifted towards the target (appearing closer) versus away from the target (appearing further). This difference in crowding corresponded to the difference in perceived positions. Further, the perceptual mislocalization was necessary for the change in crowding, and both the mislocalization and crowding scaled with drift speed. The results show that crowding occurs after perceived positions have been assigned by the visual system. Crowding does not operate in a purely retinal coordinate system; perceived positions need to be taken into account.

4.
Visual stimuli can be perceived at a broad, “global” level, or at a more focused, “local” level. While research has shown that many individuals demonstrate a preference for global information, there are large individual differences in the degree of global/local bias, such that some individuals show a large global bias, some show a large local bias, and others show no bias. The main purpose of the current study was to examine whether these dispositional differences in global/local bias could be altered through various manipulations of high/low spatial frequency. Through 5 experiments, we examined various measures of dispositional global/local bias and whether performance on these measures could be altered by manipulating previous exposure to high or low spatial frequency information (with high/low spatial frequency faces, gratings, and Navon letters). Ultimately, there was little evidence of change from pre- to post-manipulation on the dispositional measures, and dispositional global/local bias was highly reliable pre- to post-manipulation. The results provide evidence that individual differences in global/local bias or preference are relatively resistant to exposure to spatial frequency information, and suggest that the processing mechanisms underlying high/low spatial frequency use and global/local bias may be more independent than previously thought.

5.
In a typical experiment on decision making, one out of two possible stimuli is displayed and observers decide which one was presented. Recently, Stanford and colleagues (2010) introduced a new variant of this classical one-stimulus presentation paradigm to investigate the speed of decision making. They found evidence for “perceptual decision making in less than 30 ms”. Here, we extended this one-stimulus compelled-response paradigm to a two-stimulus compelled-response paradigm in which a vernier was followed immediately by a second vernier with opposite offset direction. The two verniers and their offsets fuse. Only one vernier is perceived. When observers are asked to indicate the offset direction of the fused vernier, the offset of the second vernier dominates perception. Even for long vernier durations, the second vernier dominates decisions, indicating that decision making can take substantial time. In accordance with previous studies, we suggest that our results are best explained with a two-stage model of decision making where a leaky evidence integration stage precedes a race-to-threshold process.
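The proposed two-stage account can be illustrated with a toy simulation in which a leaky integrator feeds a race between two threshold units. The leak, gain, threshold, and drive values below are illustrative assumptions, not parameters fitted to the study's data, and in this toy run the second stimulus is simply shown for longer; the point is only that the decision keeps integrating evidence well past stimulus onset.

```python
import numpy as np

# Toy two-stage decision model: a leaky evidence integrator (stage 1)
# feeds a race-to-threshold between two accumulators (stage 2).
# All parameter values are illustrative assumptions.
def leaky_race(drive, leak=0.1, gain=1.0, threshold=400.0, dt=1.0):
    """drive: signed evidence per step (+ favors option A, - favors B)."""
    v = 0.0      # stage 1: leaky integrator state
    a = b = 0.0  # stage 2: race accumulators
    for t, d in enumerate(drive):
        v += dt * (gain * d - leak * v)   # leaky integration of evidence
        a += dt * max(v, 0.0)             # A accumulates positive evidence
        b += dt * max(-v, 0.0)            # B accumulates negative evidence
        if a >= threshold:
            return "A", t
        if b >= threshold:
            return "B", t
    return ("A" if a > b else "B"), len(drive)

# First vernier (favoring A) for 40 steps, then the opposite-offset
# second vernier (favoring B) until the response:
drive = np.concatenate([np.full(40, 1.0), np.full(120, -1.0)])
choice, rt = leaky_race(drive)
```

Because the first stage is leaky, later evidence overwrites earlier evidence before the race commits, so the second stimulus can dominate the decision.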

6.
The question of which strategy is employed in human decision making has been studied extensively in the context of cognitive tasks; however, this question has not been investigated systematically in the context of perceptual tasks. The goal of this study was to gain insight into the decision-making strategy used by human observers in a low-level perceptual task. Data from more than 100 individuals who participated in an auditory-visual spatial localization task were evaluated to examine which of three plausible strategies could best account for each observer’s behavior. This task is well suited for exploring this question because it involves an implicit inference about whether the auditory and visual stimuli were caused by the same object or by independent objects, and the different strategies for using this causal inference lead to distinctly different spatial estimates and response patterns. For example, employing the commonly used cost function of minimizing the mean squared error of spatial estimates would result in a weighted averaging of the estimates corresponding to the different causal structures. A strategy that minimizes the error in the inferred causal structure would result in selecting the most likely causal structure and sticking with it in the subsequent inference of location—“model selection.” A third strategy selects a causal structure in proportion to its probability, thus attempting to match the probability of the inferred causal structure. This type of probability-matching strategy has been reported to be used by participants predominantly in cognitive tasks. Comparing these three strategies, the behavior of the vast majority of observers in this perceptual task was most consistent with probability matching. While this appears to be a suboptimal strategy, and hence a surprising choice for the perceptual system to adopt, we discuss potential advantages of such a strategy for perception.
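The three candidate strategies differ only in how a posterior over causal structures is turned into a response. A sketch with illustrative numbers (the posterior probability and per-structure location estimates below are assumptions for the example, not data from the study):

```python
import numpy as np

# Three read-out strategies given a posterior over two causal structures:
# common cause (one object produced both signals) vs. independent causes.
# p_common, s_common, and s_indep are illustrative numbers.
rng = np.random.default_rng(1)

p_common = 0.7   # inferred P(common cause | auditory + visual signals)
s_common = 10.0  # best location estimate under the common-cause structure
s_indep = 4.0    # best auditory estimate under independent causes

# 1) Model averaging: minimizes the mean squared error of the estimate.
s_avg = p_common * s_common + (1 - p_common) * s_indep

# 2) Model selection: commit to the more probable structure.
s_sel = s_common if p_common >= 0.5 else s_indep

# 3) Probability matching: sample a structure on each trial in proportion
#    to its posterior probability (the strategy most observers matched).
picks = rng.random(100_000) < p_common
s_match = np.where(picks, s_common, s_indep)
```

Averaging and matching have the same mean response across trials, but matching produces a bimodal response distribution, which is what distinguishes the strategies in the data.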

7.
Background: China’s “13th 5-Year Plan” (2016–2020) for the prevention and control of sudden acute infectious diseases emphasizes that epidemic monitoring and epidemic-focus surveys in key areas are crucial for strengthening national epidemic prevention and control capacity. Establishing a model that identifies epidemic hot spot areas and predicts risk is an effective means of accurate epidemic monitoring and surveying. Objective: This study predicted hemorrhagic fever with renal syndrome (HFRS) epidemic hot spot areas based on multi-source environmental variables. We calculated the contribution weight of each environmental factor to morbidity risk, obtained the spatial probability distribution of HFRS risk areas within the study region, and detected and extracted epidemic hot spots to guide accurate epidemic monitoring, prevention, and control. Methods: We collected spatial HFRS data, as well as data on various types of natural and human social activity environments, in Hunan Province from 2010 to 2014. Using the information quantity method and logistic regression modeling, we constructed a risk-area prediction model reflecting the epidemic intensity and spatial distribution of HFRS. Results: The areas under the receiver operating characteristic curve for training and test samples were 0.840 and 0.816, respectively. Verification against HFRS case sites from 2015 to 2019 showed that more than 82% of cases occurred in predicted high-risk areas. Discussion: This method accurately predicted HFRS hot spot areas and provided an evaluation model for Hunan Province; it can accurately detect HFRS epidemic high-risk areas and effectively guide epidemic monitoring and surveillance.
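The modelling pipeline (a logistic regression on environmental covariates, scored by ROC AUC) can be sketched as follows. The synthetic covariates and effect sizes are illustrative stand-ins for the real multi-source environmental data, and the hand-rolled fit replaces whatever software the authors used:

```python
import numpy as np

# Sketch: fit a logistic risk model on synthetic "environmental" covariates
# and score it with ROC AUC. Covariates and weights are illustrative.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))             # e.g. elevation, rainfall, land use
true_w = np.array([1.5, -1.0, 0.5])     # assumed true effect sizes
p = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)   # 1 = HFRS case occurred in the cell

def fit_logistic(X, y, steps=500, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        pred = 1 / (1 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - pred) / len(y)  # gradient ascent on log-likelihood
    return w

def roc_auc(y, scores):
    """AUC as the probability a random case outranks a random non-case."""
    pos, neg = scores[y == 1], scores[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

w = fit_logistic(X, y)
auc = roc_auc(y, X @ w)
```

Thresholding the fitted risk scores then yields the high-risk ("hot spot") cells against which later case sites can be verified.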

8.
To interpret visual scenes, visual systems need to segment or integrate multiple moving features into distinct objects or surfaces. Previous studies have found that the perceived direction separation between two transparently moving random-dot stimuli is wider than the actual direction separation. This perceptual “direction repulsion” is useful for segmenting overlapping motion vectors. Here we investigate the effects of motion noise on the directional interaction between overlapping moving stimuli. Human subjects viewed two overlapping random-dot patches moving in different directions and judged the direction separation between the two motion vectors. We found that the perceived direction separation progressively changed from wide to narrow as the level of motion noise in the stimuli was increased, showing a switch from direction repulsion to attraction (i.e. smaller than the veridical direction separation). We also found that direction attraction occurred at a wider range of direction separations than direction repulsion. The normalized effects of both direction repulsion and attraction were the strongest near the direction separation of ∼25° and declined as the direction separation further increased. These results support the idea that motion noise prompts motion integration to overcome stimulus ambiguity. Our findings provide new constraints on neural models of motion transparency and segmentation.

9.
The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three different acoustic stimuli were generated in Virtual Auditory Space: two acoustically “dry” stimuli via the measurement of anechoic head-related impulse responses recorded at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded at 5° intervals in situ in order to capture reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion. Stimuli were presented at 25°/s, 50°/s and 100°/s with a random spatial offset between audition and vision. In a 2AFC task, subjects made a judgment of the leading modality (auditory or visual). No significant differences were observed in the spatial threshold based on the point of subjective equivalence (PSE) or the slope of psychometric functions (β) across all three acoustic conditions. Additionally, neither the PSE nor β differed significantly across velocities, suggesting a fixed spatial window of audio-visual separation. Findings suggest that there was no loss in spatial information accompanying the reduction in spatial cues and reverberation levels tested, and establish a perceptual measure for assessing the veracity of motion generated from discrete locations and in echoic environments.
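Extracting the PSE and slope β from 2AFC responses amounts to fitting a psychometric function. A sketch on simulated data (the logistic form, offsets, trial counts, and grid-search fit below are illustrative assumptions; the study's actual fitting procedure may differ):

```python
import numpy as np

# Fit a logistic psychometric function to simulated 2AFC data to recover
# the PSE and slope (beta). All stimulus values here are illustrative.
rng = np.random.default_rng(2)
offsets = np.linspace(-20, 20, 9)    # audio-visual spatial offsets (deg)
true_pse, true_beta = 2.0, 0.3
p_true = 1 / (1 + np.exp(-true_beta * (offsets - true_pse)))
n_trials = 200
k = rng.binomial(n_trials, p_true)   # "visual leading" counts per offset

def neg_log_lik(pse, beta):
    p = 1 / (1 + np.exp(-beta * (offsets - pse)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(k * np.log(p) + (n_trials - k) * np.log(1 - p)).sum()

# Simple grid search over (PSE, beta) instead of a numerical optimizer:
pses = np.linspace(-10, 10, 201)
betas = np.linspace(0.05, 1.0, 96)
nll = np.array([[neg_log_lik(m, b) for b in betas] for m in pses])
i, j = np.unravel_index(nll.argmin(), nll.shape)
pse_hat, beta_hat = pses[i], betas[j]
```

The PSE is the offset at which both response options are equally likely, and β measures how sharply judgments change around it; the study compared both quantities across acoustic conditions and velocities.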

10.
The fullerene molecule belongs to the class of so-called super materials. The compound is interesting for its spherical configuration, in which atoms occupy positions that form a mechanically stable structure. We first demonstrate that pollen of Hibiscus rosa-sinensis has a strong symmetry in the distribution of its spines over the spherical grain: the spines form spherical hexagons and pentagons. The distances between atoms in fullerene are explained by applying principles of plane, spherical, and spatial geometry based on Euclid’s “Elements”, as well as logic algorithms. Measurements of the pollen grain take into account that the true spine lengths, and consequently the real distances between spines, must be measured to the periphery of each grain; algorithms are developed to recover the spatial information lost in 2D photographs. There is a clear correspondence between the positions of atoms in the fullerene molecule and the positions of spines on the pollen grain. In fullerene, the equal separations indicate equal-length bonds, which implies perfectly distributed electron clouds, while in the pollen grain we suggest that the equally spaced spines carry an electrical charge originating in forces involved in the pollination process.

11.
We organize our behavior and store structured information with many procedures that require the coding of spatial and temporal order in specific neural modules. In the simplest cases, spatial and temporal relations are condensed in prepositions like “below” and “above”, “behind” and “in front of”, or “before” and “after”, etc. Neural operators lie beneath these words, sharing some similarities with logical gates that compute spatial and temporal asymmetric relations. We show how these operators can be modeled by means of neural matrix memories acting on Kronecker tensor products of vectors. The complexity of these memories is further enhanced by their ability to store episodes unfolding in space and time. How does the brain scale up from the raw plasticity of contingent episodic memories to the apparent stable connectivity of large neural networks? We clarify this transition by analyzing a model that flexibly codes episodic spatial and temporal structures into contextual markers capable of linking different memory modules.
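A matrix memory acting on Kronecker tensor products can be sketched directly with `numpy.kron`. The dimensionality and the random unit-vector codes are illustrative assumptions; the point is that storing a relation as an outer product of the target with a (order-marker ⊗ item) key lets the congruent probe retrieve the stored item.

```python
import numpy as np

# Sketch of a Kronecker-product matrix memory for an ordered relation.
# Vector codes and dimensionality are illustrative assumptions.
rng = np.random.default_rng(3)
d = 64

def randvec():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)   # random unit-norm code

before, after = randvec(), randvec()   # temporal-order markers
lunch, dinner = randvec(), randvec()   # items to be related

# Store "lunch before dinner": key = (order marker) ⊗ (first item),
# value = second item, memory = outer product of value and key.
M = np.outer(dinner, np.kron(before, lunch))

# Retrieval: probing with (before ⊗ lunch) recovers "dinner".
out = M @ np.kron(before, lunch)
sim_dinner = out @ dinner   # high: the stored associate
sim_lunch = out @ lunch     # near zero: an unrelated item
```

Because the Kronecker key binds the order marker to the item, probing with the marker in the other role (e.g. `after` ⊗ `lunch`) would return essentially nothing, which is how the memory encodes asymmetric relations.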

12.
Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar’s position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina’s population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar’s position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
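A linear position decoder of this kind can be sketched on simulated data. The Gaussian tuning curves, Poisson spiking, and diffusive trajectory below are illustrative assumptions standing in for the recorded retinal responses:

```python
import numpy as np

# Sketch: decode a diffusing bar's position from a simulated population
# with a least-squares linear readout. Tuning and noise are illustrative.
rng = np.random.default_rng(4)
n_cells, n_time = 120, 3000
centers = rng.uniform(-1, 1, n_cells)     # preferred bar positions

# Diffusive (random-walk) bar trajectory, clipped to the display range:
pos = np.clip(np.cumsum(rng.normal(scale=0.02, size=n_time)), -1, 1)

# Gaussian tuning curves with Poisson spiking per time bin:
rates = np.exp(-((pos[:, None] - centers[None, :]) ** 2) / (2 * 0.1 ** 2))
spikes = rng.poisson(5 * rates).astype(float)

# Linear decoder: least-squares weights on even bins, tested on odd bins.
w, *_ = np.linalg.lstsq(spikes[::2], pos[::2], rcond=None)
pred = spikes[1::2] @ w
err = np.sqrt(np.mean((pred - pos[1::2]) ** 2))   # test RMSE
```

Pooling a hundred-plus noisy cells lets the linear readout beat the precision of any single cell, which is the sense in which the paper's decoder reaches the hyperacuity regime.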

14.
People can perceive misfortunes as caused by previous bad deeds (immanent justice reasoning) or as resulting in ultimate compensation (ultimate justice reasoning). Across two studies, we investigated the relation between these types of justice reasoning and identified the processes (perceptions of deservingness) that underlie them for both others (Study 1) and the self (Study 2). Study 1 demonstrated that observers engaged in more ultimate (vs. immanent) justice reasoning for a “good” victim and greater immanent (vs. ultimate) justice reasoning for a “bad” victim. In Study 2, participants’ construals of their bad breaks varied as a function of their self-worth, with greater ultimate (immanent) justice reasoning for participants with higher (lower) self-esteem. Across both studies, perceived deservingness of bad breaks or of ultimate compensation mediated immanent and ultimate justice reasoning, respectively.

15.
It is well documented that people remunerate fair behaviours and penalize unfair ones. It is argued that individuals’ reactions following the receipt of a gift depend on the perceived intentions of the donors. Fair intentions should prompt positive affect, like gratitude, triggering cooperative behaviours, while intended unfairness should trigger negative affect, like anger, fostering anti-social actions. It is, however, contended that when people lack the information to infer others’ intentions they may use ‘normative’ beliefs about fairness - what a typical fair individual ‘should’ do in these circumstances - to guide their behaviour. In this experiment we examined this assertion. We had 122 participants play a one-shot, double-anonymous game, with half playing as potential helpers (P1s) and half as recipients (P2s). Whether a participant was a P1 or P2 was chance-determined, and all participants knew this. P1s decided whether to help P2s and whether to make their help unconditional (no repayment needed) or conditional (full or ‘taxed’ repayment). P2s decided whether to accept the offer and any attached conditions, but were blind to the list of helping options available to P1s. We anticipated that recipients would refer to the ‘injunctive norm’ that ‘fair people should help “for free” when it is only by chance that they are in a position to help’. Therefore, without knowing P1s’ different helping options, unconditional offers should be rated by recipients as fairer than conditional offers, and this should be linked to greater gratitude, with greater gratitude linked to greater reciprocation. Path analyses confirmed this serial mediation. The results showed that recipients of unconditional offers, compared to conditional ones, interpreted the helpers’ motives as more helpful, experienced greater gratitude, and were more eager to reciprocate. The behavioural data further revealed that, when given a later option to default, 38% of recipients of conditional offers did so.

16.
Dexterous manipulation relies on modulation of digit forces as a function of digit placement. However, little is known about the sense of position of the vertical distance between finger pads relative to each other. We quantified subjects’ ability to match the perceived vertical distance between the thumb and index finger pads (dy) of the right hand (“reference” hand) using the same or opposite hand (“test” hand) after a 10-second delay without vision of the hands. The reference hand digits were passively placed non-collinearly so that the thumb was higher or lower than the index finger (dy = 30 or –30 mm, respectively) or collinearly (dy = 0 mm). Subjects reproduced the reference hand dy using a congruent or inverse test hand posture while exerting negligible digit forces on a handle. We hypothesized that matching error (reference hand dy minus test hand dy) would be greater (a) for collinear than for non-collinear dys, (b) when reference and test hand postures were not congruent, and (c) when subjects reproduced dy using the opposite hand. Our results confirmed these hypotheses. Under-estimation errors were produced when the postures of the reference and test hands were not congruent, and when the test hand was the opposite hand. These findings indicate that perceived finger pad distance is reproduced less accurately (1) with the opposite than with the same hand and (2) when higher-level processing of the somatosensory feedback is required for non-congruent hand postures. We propose that erroneous sensing of finger pad distance, if not compensated for during contact and the onset of manipulation, might lead to performance errors, as digit forces have to be modulated to perceived digit placement.

17.
Perceived depth is conveyed by multiple cues, including binocular disparity and luminance shading. Depth perception from luminance shading depends on the perceptual assumption about the incident light, which has been shown to default to a diffuse-illumination assumption. We focus on the case of sinusoidally corrugated surfaces to ask how shading and disparity cues combine when they carry the joint luminance gradients and intrinsic disparity modulation that would arise from viewing the physical corrugation of a uniform surface under diffuse illumination. Such surfaces were simulated with a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0’-20’. The observers’ task was to adjust the binocular disparity of a comparison random-dot stereogram surface to match the perceived depth of the joint luminance/disparity-modulated corrugation target. Regardless of target spatial frequency, perceived target depth increased with luminance contrast and depended on luminance phase, but was largely unaffected by the disparity modulation. These results validate the idea that human observers can use the diffuse-illumination assumption to perceive depth from luminance gradients alone, without assuming a light direction. For depth judgments with combined cues, observers gave much greater weight to the luminance shading than to the disparity modulation of the targets. The results were not well fit by a Bayesian cue-combination model weighted in inverse proportion to the variance of the measurements for each cue in isolation. Instead, they suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
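The Bayesian benchmark the observers were tested against is standard inverse-variance cue combination, in which each cue is weighted by its single-cue reliability. A sketch with illustrative numbers (the depth estimates and variances below are assumptions, not measurements from the study):

```python
# Standard inverse-variance (reliability-weighted) cue combination.
# The estimates and variances are illustrative numbers.
def combine(est_a, var_a, est_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight of the reliable cue
    est = w_a * est_a + (1 - w_a) * est_b
    var = 1 / (1 / var_a + 1 / var_b)            # combined estimate is more reliable
    return est, var

# Shading suggests 12' of depth (low variance); disparity suggests 4' (noisy):
depth, var = combine(12.0, 1.0, 4.0, 4.0)
```

Under this rule the combined variance is always below either single-cue variance; the paper's point is that the observed weightings did not follow this prediction, favoring disjunctive processing instead.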

18.
We investigated the role of the visual eye-height (VEH) in the perception of affordance during short-term exposure to weightlessness. Sixteen participants were tested during parabolic flight (0g) and on the ground (1g). Participants looked at a laptop showing a room in which a doorway-like aperture was presented. They were asked to adjust the opening of the virtual doorway until it was perceived to be just wide enough to pass through (i.e., the critical aperture). We manipulated VEH by raising the level of the floor in the visual room by 25 cm. The results showed effects of VEH and of gravity on the perceived critical aperture. When VEH was reduced (i.e., when the floor was raised), the critical aperture diminished, suggesting that widths relative to the body were perceived to be larger. The critical aperture was also lower in 0g, for a given VEH, suggesting that participants perceived apertures to be wider or themselves to be smaller in weightlessness, as compared to normal gravity. However, weightlessness also had an effect on the subjective level of the eyes projected into the visual scene. Thus, setting the critical aperture as a fixed percentage of the subjective visual eye-height remains a viable hypothesis to explain how human observers judge visual scenes in terms of potential for action or “affordances”.

19.
This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as “differences that make a difference” within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes.

20.
Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation, and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball’s distance to disambiguate their size percept. Presenting auxiliary binocular and/or haptic distance information augments participants’ prior distance assumptions and improves their size judgment accuracy, though binocular cues were trusted more than haptic ones. Our results suggest both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
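The disambiguation logic can be sketched as probabilistic inference over distance: the retinal angle constrains only the size/distance ratio, so a distance prior, optionally sharpened by an auxiliary binocular or haptic distance cue, determines the size estimate. All numbers below are illustrative assumptions, not stimulus values from the study.

```python
import numpy as np

# Toy size-from-distance disambiguation: a retinal angle is consistent
# with many (size, distance) pairs; a distance belief picks one.
theta = 0.05                               # retinal angle (rad): size ≈ theta * distance
distances = np.linspace(0.2, 2.0, 500)     # candidate distances (m)

# Assumed prior belief about the ball's distance (mean 1.0 m):
prior = np.exp(-((distances - 1.0) ** 2) / (2 * 0.3 ** 2))
prior /= prior.sum()
size_prior_only = theta * (distances * prior).sum()    # monocular size estimate

# Auxiliary distance cue (e.g. binocular or haptic) centered at 0.6 m
# sharpens the distance belief and shifts the size estimate:
like = np.exp(-((distances - 0.6) ** 2) / (2 * 0.05 ** 2))
post = prior * like
post /= post.sum()
size_with_cue = theta * (distances * post).sum()       # pulled toward 0.6 m
```

A more reliable auxiliary cue (smaller likelihood variance) pulls the size estimate further, which is one way to capture the finding that binocular distance information was trusted more than haptic.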
