Similar literature (20 results)
1.
Perception relies on the response of populations of neurons in sensory cortex. How the response profile of a neuronal population gives rise to perception and perceptual discrimination has been conceptualized in various ways. Here we suggest that neuronal population responses represent information about our environment explicitly as Fisher information (FI), a local measure of how precisely the sensory input can be estimated. We show how this sensory information can be read out and combined to infer from the available information profile which stimulus value is perceived during a fine discrimination task. In particular, we propose that the perceived stimulus corresponds to the stimulus value that leads to the same information for each of the alternative directions, and compare the model prediction to standard models considered in the literature (population vector, maximum likelihood, maximum-a-posteriori Bayesian inference). The models are applied to human performance in a motion discrimination task that induces perceptual misjudgements of a target direction of motion by task-irrelevant motion in the spatial surround of the target stimulus (motion repulsion). When the models incorporated the neurophysiological finding that surround motion suppresses neuronal responses to the target motion in the center, all of them predicted the pattern of perceptual misjudgements. The variation of discrimination thresholds (error on the perceived value) was also explained through the changes of the total FI content with varying surround motion directions. The proposed FI decoding scheme incorporates recent neurophysiological evidence from macaque visual cortex showing that perceptual decisions do not rely on the most active neurons, but rather on the most informative neuronal responses. We statistically compare the prediction capability of the FI decoding approach and the standard decoding models. Notably, all models reproduced the variation of the perceived stimulus values for different surrounds, but with different neuronal tuning characteristics underlying perception. Compared to the FI approach, the prediction power of the standard models was based on neurons with far wider tuning width and stronger surround suppression. Our study demonstrates that perceptual misjudgements can be based on neuronal populations encoding explicitly the available sensory information, and provides testable neurophysiological predictions on neuronal tuning characteristics underlying human perceptual decisions.
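For readers who want a feel for the quantity involved, the sketch below computes a population Fisher information profile for direction-tuned neurons. It assumes von Mises tuning curves and independent Poisson spike counts, illustrative assumptions rather than the model actually fitted in the paper; under those assumptions the FI at direction θ is the sum over neurons of f_i'(θ)² / f_i(θ).

```python
import numpy as np

# Sketch: population Fisher information for direction-tuned neurons, assuming
# von Mises tuning curves and independent Poisson spike counts (illustrative
# assumptions, not the exact model of the study).

def tuning(theta, prefs, r_base=2.0, r_max=30.0, kappa=2.0):
    """Mean firing rates of neurons with preferred directions `prefs` (radians)."""
    return r_base + r_max * np.exp(kappa * (np.cos(theta - prefs) - 1.0))

def d_tuning(theta, prefs, r_max=30.0, kappa=2.0):
    """Derivative of the tuning curves with respect to stimulus direction."""
    return -r_max * kappa * np.sin(theta - prefs) * np.exp(kappa * (np.cos(theta - prefs) - 1.0))

def population_fi(theta, prefs):
    """For independent Poisson neurons: FI(theta) = sum_i f_i'(theta)^2 / f_i(theta)."""
    return np.sum(d_tuning(theta, prefs) ** 2 / tuning(theta, prefs))

prefs = np.linspace(-np.pi, np.pi, 64, endpoint=False)   # uniform preferred directions
directions = np.linspace(-np.pi, np.pi, 361)
fi_profile = np.array([population_fi(t, prefs) for t in directions])
print("FI range across directions: %.1f to %.1f" % (fi_profile.min(), fi_profile.max()))
```

Surround suppression could be mimicked in this toy setting by scaling down `r_max` for neurons tuned near the surround direction, which reshapes the FI profile and, under a read-out of the kind proposed above, shifts the perceived direction.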

2.
Perception can change nonlinearly with stimulus contrast, and perceptual threshold may depend on the direction of contrast change. Such hysteresis effects in neurometric functions provide a signature of perceptual awareness. We recorded brain activity with functional neuroimaging in observers exposed to gradual contrast changes of initially hidden visual stimuli. Lateral occipital, frontal, and parietal regions all displayed both transient activations and hysteresis that correlated with change and maintenance of a percept, respectively. Medial temporal activity did not follow perception but increased during hysteresis and showed transient deactivations during perceptual transitions. These findings identify a set of brain regions sensitive to visual awareness and suggest that medial temporal structures may provide backward signals that account for neural and, thereby, perceptual hysteresis.

3.
Humans exhibit an anisotropy in direction perception: discrimination is superior when motion is around horizontal or vertical rather than diagonal axes. In contrast to the consistent directional anisotropy in perception, we found only small idiosyncratic anisotropies in smooth pursuit eye movements, a motor action requiring accurate discrimination of visual motion direction. Both pursuit and perceptual direction discrimination rely on signals from the middle temporal visual area (MT), yet analysis of multiple measures of MT neuronal responses in the macaque failed to provide evidence of a directional anisotropy. We conclude that MT represents different motion directions uniformly, and that subsequent processing creates a directional anisotropy in pathways unique to perception. Our data support the hypothesis that, at least for visual motion, perception and action are guided by inputs from separate sensory streams. The directional anisotropy of perception appears to originate after the two streams have segregated and downstream from area MT.

4.
BACKGROUND: When simultaneous visual events appear to occur at different times, the discrepancy has generally been ascribed to time differences in neural transmission or cortical processing that lead to asynchronous awareness of the events. RESULTS: We found, however, that an apparent delay of changes in motion direction relative to synchronous color changes occurs only for rapid alternations, and this delay is not accompanied by a difference in reaction time. We also found that perceptual asynchrony depends on the temporal structure of the stimuli (transitions [first-order temporal change] versus turning points [second-order temporal change]) rather than the attribute type (color versus motion). CONCLUSIONS: We propose that the perception of the relative time of events is based on the relationship of representations of temporal pattern that we term time markers. We conclude that the perceptual asynchrony effects studied here do not reflect differential neural delays for different attributes; rather, they arise from a faulty correspondence match between color transitions and position transitions (motion), which in turn results from a difficulty in detecting turning points (direction reversals) and a preference for matching markers of the same type.

5.
Saygin AP, Cook J, Blakemore SJ. PLoS ONE. 2010;5(10):e13491

Background

Perception of biological motion is linked to the action perception system in the human brain, abnormalities within which have been suggested to underlie impairments in social domains observed in autism spectrum conditions (ASC). However, the literature on biological motion perception in ASC is heterogeneous and it is unclear whether deficits are specific to biological motion, or might generalize to form-from-motion perception.

Methodology and Principal Findings

We compared psychophysical thresholds for both biological and non-biological form-from-motion perception in adults with ASC and controls. Participants viewed point-light displays depicting a walking person (Biological Motion), a translating rectangle (Structured Object) or a translating unfamiliar shape (Unstructured Object). The figures were embedded in noise dots that moved similarly and the task was to determine direction of movement. The number of noise dots varied on each trial and perceptual thresholds were estimated adaptively. We found no evidence for an impairment in biological or non-biological object motion perception in individuals with ASC. Perceptual thresholds in the three conditions were almost identical between the ASC and control groups.

Discussion and Conclusions

Impairments in biological motion and non-biological form-from-motion perception are not found across the board in ASC, but only for some stimuli and tasks. We discuss our results in relation to other findings in the literature, the heterogeneity of which likely relates to the different tasks performed. It appears that individuals with ASC are unaffected in perceptual processing of form-from-motion, but may exhibit impairments in higher-order judgments such as emotion processing. It is important to identify more specifically which processes of motion perception are impacted in ASC before a link can be made between perceptual deficits and the higher-level features of the disorder.

6.

Background

The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality-specific processing delay and that their difference determines the perceived temporal offset.

Methodology/Principal Findings

Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.

Conclusions/Significance

This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.

7.
Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, with implications for the translation of musical training into long-term linguistic abilities.

8.
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion special drift patterns appear to move although they are static. During fixation small involuntary eye movements generate retinal image slips which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, but not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms may be revealed by utilizing static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.

9.
When studying animal perception, one can normally localize perceptual events in time via behavioural responses time-locked to the stimuli. With multistable stimuli, however, perceptual changes occur despite stationary stimulation. Here, the challenge is to infer these not directly observable perceptual states indirectly from the behavioural data. This estimation is complicated by the fact that an animal's performance is contaminated by errors. We propose a two-step approach to overcome this difficulty: First, one sets up a generative, stochastic model of the behavioural time series based on the relevant parameters, including the probability of errors. Second, one performs a model-based maximum-likelihood estimation on the data in order to extract the non-observable perceptual state transitions. We illustrate this methodology for data from experiments on perception of bistable apparent motion in pigeons. The observed behavioural time series is analysed and explained by a combination of a Markovian perceptual dynamics with a renewal process that governs the motor response. We propose a hidden Markov model in which non-observable states represent both the perceptual states and the states of the renewal process of the motor dynamics, while the observable states account for overt pecking performance. Showing that this constitutes an appropriate phenomenological model of the time series of observable pecking events, we use it subsequently to obtain an estimate of the internal (and thus covert) perceptual reversals. These may directly correspond to changes in the activity of mutually inhibitory populations of motion-selective neurones tuned to orthogonal directions.
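As a concrete illustration of the two-step approach, the sketch below sets up a deliberately simplified generative model (a two-state hidden Markov chain over percepts, with a lapse probability on the overt report) and evaluates its log-likelihood with the forward algorithm so that the parameters can be fitted by maximum likelihood. It omits the renewal-process component of the motor dynamics described in the abstract, and the names `p_switch` and `p_error` are illustrative, not the paper's.

```python
import numpy as np

# Minimal generative model: a two-state hidden Markov chain over percepts with a
# lapse (error) probability on the overt pecking report, plus a forward-algorithm
# log-likelihood that can be maximized to estimate the parameters.

def forward_log_likelihood(responses, p_switch, p_error):
    """responses: sequence of 0/1 report codes, one per response event."""
    trans = np.array([[1 - p_switch, p_switch],
                      [p_switch, 1 - p_switch]])      # percept transition matrix
    emit = np.array([[1 - p_error, p_error],
                     [p_error, 1 - p_error]])         # P(report | percept)
    alpha = np.array([0.5, 0.5]) * emit[:, responses[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for r in responses[1:]:
        alpha = (alpha @ trans) * emit[:, r]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

responses = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0])  # toy report sequence
grid = np.linspace(0.02, 0.5, 25)
logl, p_s, p_e = max((forward_log_likelihood(responses, ps, pe), ps, pe)
                     for ps in grid for pe in grid)
print("ML estimate: p_switch=%.2f, p_error=%.2f (logL=%.2f)" % (p_s, p_e, logl))
```

Once the parameters are estimated, the Viterbi algorithm (not shown) would return the most likely hidden state sequence, i.e., an estimate of the covert perceptual reversals.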

10.
Accurate motion perception of self and object speed is crucial for successful interaction in the world. The context in which we make such speed judgments has a profound effect on their accuracy. Misperceptions of motion speed caused by the context can have drastic consequences in real world situations, but they also reveal much about the underlying mechanisms of motion perception. Here we show that motion signals suppressed from awareness can warp simultaneous conscious speed perception. In Experiment 1, we measured global speed discrimination thresholds using an annulus of 8 local Gabor elements. We show that physically removing local elements from the array attenuated global speed discrimination. However, removing awareness of the local elements only had a small effect on speed discrimination. That is, unconscious local motion elements contributed to global conscious speed perception. In Experiment 2 we measured the global speed of the moving Gabor patterns when half the elements moved at different speeds. We show that global speed averaging occurred regardless of whether local elements were removed from awareness, such that the speed of invisible elements continued to be averaged together with the visible elements to determine the global speed. These data suggest that contextual motion signals outside of awareness can both contribute to and bias our experience of motion speed, and that such pooling of motion signals occurs before the conscious extraction of the surround motion speed.

11.
Kim J, Park S, Blake R. PLoS ONE. 2011;6(5):e19971

Background

Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits.

Methodology/Findings

In Experiment 1, we measured ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective to biological or scrambled motion.

Conclusion

Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes.

12.
When visual input is inconclusive, does previous experience aid the visual system in attaining an accurate perceptual interpretation? Prolonged viewing of a visually ambiguous stimulus causes perception to alternate between conflicting interpretations. When viewed intermittently, however, ambiguous stimuli tend to evoke the same percept on many consecutive presentations. This perceptual stabilization has been suggested to reflect persistence of the most recent percept throughout the blank that separates two presentations. Here we show that the memory trace that causes stabilization reflects not just the latest percept, but perception during a much longer period. That is, the choice between competing percepts at stimulus reappearance is determined by an elaborate history of prior perception. Specifically, we demonstrate a seconds-long influence of the latest percept, as well as a more persistent influence based on the relative proportion of dominance during a preceding period of at least one minute. When short-term and long-term perceptual history are opposed (because perception has recently switched after prolonged stabilization), the long-term influence recovers after the effect of the latest percept has worn off, indicating independence between time scales. We accommodate these results by adding two positive adaptation terms, one with a short time constant and one with a long time constant, to a standard model of perceptual switching.
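To make the two-time-scale idea concrete, here is a toy read-out rule for the percept chosen at stimulus reappearance, combining a fast trace of the latest percept with a slow trace of dominance proportions over the preceding period. It is a sketch of the qualitative mechanism only; the weights, time constants, and soft decision rule are illustrative assumptions, not the adaptation terms actually added to the switching model in the study.

```python
import numpy as np

# Toy read-out at stimulus reappearance: a fast trace of the latest percept plus a
# slow trace of dominance proportions over the preceding period. All parameter
# values are illustrative assumptions, not fitted quantities from the study.

def percept_at_onset(history, now, tau_short=3.0, tau_long=60.0,
                     w_short=1.0, w_long=0.6, rng=None):
    """history: list of (t_start, t_end, percept) with percept coded as -1 or +1.
    Returns the predicted percept (+1 or -1) when the stimulus reappears at `now`."""
    rng = rng or np.random.default_rng()
    _, t_last_end, last_p = history[-1]
    # fast trace: the latest percept, decaying over a few seconds
    fast = w_short * last_p * np.exp(-(now - t_last_end) / tau_short)
    # slow trace: dominance-duration-weighted average over the preceding period
    slow = sum(p * (t1 - t0) * np.exp(-(now - t1) / tau_long) for t0, t1, p in history)
    slow = w_long * slow / sum(t1 - t0 for t0, t1, _ in history)
    bias = fast + slow
    # soft decision: a stronger bias toward +1 makes +1 more likely to be perceived
    return 1 if rng.random() < 1.0 / (1.0 + np.exp(-4.0 * bias)) else -1

# Long dominance of percept +1, then a brief switch to -1 just before the blank.
history = [(0.0, 50.0, +1), (50.0, 55.0, -1)]
for blank in (1.0, 10.0, 30.0):
    choices = [percept_at_onset(history, 55.0 + blank) for _ in range(500)]
    print("blank %4.0f s -> P(percept +1) = %.2f" % (blank, np.mean([c > 0 for c in choices])))
```

With a history dominated by one percept that ends with a brief switch to the other, the predicted choice follows the latest percept for short blanks and reverts to the long-term favourite as the blank grows, mirroring the recovery of the long-term influence described above.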

13.
Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs.

14.
In the primate visual cortex, neurons signal differences in the appearance of objects with high precision. However, not all activated neurons contribute directly to perception. We defined the perceptual pool in extrastriate visual area V5/MT for a stereo-motion task, based on trial-by-trial co-variation between perceptual decisions and neuronal firing (choice probability, CP). Macaque monkeys were trained to discriminate the direction of rotation of a cylinder, using the binocular depth between the moving dots that form its front and rear surfaces. We manipulated the activity of single neurons trial-to-trial by introducing task-irrelevant stimulus changes: dot motion in cylinders was aligned with neuronal preference on only half the trials, so that neurons were strongly activated with high firing rates on some trials and considerably less activated on others. We show that single neurons maintain high neurometric sensitivity for binocular depth in the face of substantial changes in firing rate. CP was correlated with neurometric sensitivity, not level of activation. In contrast, for individual neurons, the correlation between perceptual choice and neuronal activity may be fundamentally different when responding to different stimulus versions. Therefore, neuronal pools supporting sensory discrimination must be structured flexibly and independently for each stimulus configuration to be discriminated. This article is part of the themed issue 'Vision in our three-dimensional world'.
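Choice probability is conventionally computed as the area under the ROC curve separating a neuron's firing-rate distributions on trials sorted by the animal's choice; equivalently, it is the probability that a randomly drawn rate from preferred-choice trials exceeds one from null-choice trials. The sketch below uses that rank-based (Mann-Whitney) formulation on toy Poisson spike counts; the numbers are illustrative, not data from the study.

```python
import numpy as np

# Choice probability (CP) as ROC area, computed with the equivalent rank-based
# (Mann-Whitney) formula. CP = 0.5 means firing rates carry no choice information.

def choice_probability(rates_pref_choice, rates_null_choice):
    pref = np.asarray(rates_pref_choice, dtype=float)[:, None]
    null = np.asarray(rates_null_choice, dtype=float)[None, :]
    return (pref > null).mean() + 0.5 * (pref == null).mean()

rng = np.random.default_rng(0)
pref_trials = rng.poisson(22, size=200)   # toy spike counts, choice = preferred direction
null_trials = rng.poisson(18, size=200)   # toy spike counts, choice = null direction
print("CP = %.2f" % choice_probability(pref_trials, null_trials))
```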

15.
The middle temporal area of the extrastriate visual cortex (area MT) is integral to motion perception and is thought to play a key role in the perceptual learning of motion tasks. We have previously found, however, that perceptual learning of a motion discrimination task is possible even when the training stimulus contains locally balanced, motion opponent signals that putatively suppress the response of MT. Assuming at least partial suppression of MT, possible explanations for this learning are that 1) training made MT more responsive by reducing motion opponency, 2) MT remained suppressed and alternative visual areas such as V1 enabled learning and/or 3) suppression of MT increased with training, possibly to reduce noise. Here we used fMRI to test these possibilities. We first confirmed that the motion opponent stimulus did indeed suppress the BOLD response within hMT+ compared to an almost identical stimulus without locally balanced motion signals. We then trained participants on motion opponent or non-opponent stimuli. Training with the motion opponent stimulus reduced the BOLD response within hMT+ and greater reductions in BOLD response were correlated with greater amounts of learning. The opposite relationship between BOLD and behaviour was found at V1 for the group trained on the motion-opponent stimulus and at both V1 and hMT+ for the group trained on the non-opponent motion stimulus. As the average response of many cells within MT to motion opponent stimuli is the same as their response to non-directional flickering noise, the reduced activation of hMT+ after training may reflect noise reduction.

16.
Palmer and Rock proposed that uniform connectedness (UC) occurs prior to classical Gestalt factors to define the primitive units for visual perception. Han, Humphreys and Chen, however, found that grouping by proximity can take place as quickly as that based on UC in a letter discrimination task. The present study employed a letter detection task to examine the relationship between UC and proximity grouping in 3 experiments. We showed that reaction times to targets defined by proximity or UC were equally fast when one or two global objects were presented in the visual field. However, as the number of global objects was increased, responses were faster to targets defined by UC than to targets defined by proximity. In addition, the advantage of UC over proximity was not affected by the space between global objects. The results suggest that UC was more effective than proximity in forming perceptual units under multiple object conditions. Possible reasons for this finding are discussed.

17.
When dealing with natural scenes, sensory systems have to process an often messy and ambiguous flow of information. A stable perceptual organization nevertheless has to be achieved in order to guide behavior. The neural mechanisms involved can be highlighted by intrinsically ambiguous situations. In such cases, bistable perception occurs: distinct interpretations of the unchanging stimulus alternate spontaneously in the mind of the observer. Bistable stimuli have been used extensively for more than two centuries to study visual perception. Here we demonstrate that bistable perception also occurs in the auditory modality. We compared the temporal dynamics of percept alternations observed during auditory streaming with those observed for visual plaids and the susceptibilities of both modalities to volitional control. Strong similarities indicate that auditory and visual alternations share common principles of perceptual bistability. The absence of correlation across modalities for subject-specific biases, however, suggests that these common principles are implemented at least partly independently across sensory modalities. We propose that visual and auditory perceptual organization could rely on distributed but functionally similar neural competition mechanisms aimed at resolving sensory ambiguities.

18.
Vision in Autism Spectrum Conditions (ASC) is characterized by enhanced perception of local elements but impaired formation of global percepts. Deficits in coherent motion perception seem to support this characterization, but the roots and robustness of such deficits remain unclear. We aimed to investigate the dynamics of the perceptual decision-making network known to support coherent motion perception. In a series of forced-choice coherent motion perception tests, we parametrically varied a single stimulus dimension, viewing duration, to test whether the rate at which evidence is accumulated towards a global decision is atypical in ASC. 40 adult participants (20 ASC) performed a classic motion discrimination task, manually indicating the global direction of motion in a random-dot kinematogram across a range of coherence levels (2–75%) and stimulus-viewing durations (200–1500 ms). We report a deficit in global motion perception at short viewing durations in ASC. Critically, however, we found that increasing the amount of time over which motion signals could be integrated reduced the magnitude of the deficit, such that at the longest duration there was no difference between the ASC and control groups. Further, the deficit in motion integration at the shortest duration was significantly associated with the severity of autistic symptoms in our clinical population, and was independent of measures of intelligence. These results point to atypical integration of motion signals during the construction of a global percept in ASC. Based on the neural correlates of decision-making in global motion perception, our findings suggest the global motion deficit observed in ASC could reflect a slower or more variable response from the primary motion area of the brain, or longer accumulation of evidence towards a decision bound in parietal areas.
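The evidence-accumulation interpretation can be illustrated with a minimal bounded-integration (drift-diffusion style) simulation under an interrogation protocol: momentary motion evidence with a drift proportional to coherence is summed over the viewing duration, and the sign of the total determines the response. A reduced drift gain stands in for slower or noisier accumulation; all parameter values are illustrative assumptions, not fits to the study's data.

```python
import numpy as np

# Minimal interrogation-protocol accumulator: sum noisy momentary evidence over the
# viewing duration and respond with the sign of the total. A reduced drift gain is
# used as a stand-in for slower/noisier accumulation; parameters are illustrative.

def accuracy(coherence, duration_ms, drift_gain, noise=1.0, dt_ms=10.0,
             n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(duration_ms / dt_ms)
    drift_per_step = drift_gain * coherence * dt_ms
    evidence = drift_per_step + noise * np.sqrt(dt_ms) * rng.standard_normal((n_trials, n_steps))
    return (evidence.sum(axis=1) > 0).mean()

for duration in (200, 500, 1500):
    typical = accuracy(coherence=10, duration_ms=duration, drift_gain=0.008)
    reduced = accuracy(coherence=10, duration_ms=duration, drift_gain=0.005)
    print("%5d ms: typical gain %.2f, reduced gain %.2f" % (duration, typical, reduced))
```

With these toy numbers the accuracy gap between the two gains is largest at 200 ms and nearly closes by 1500 ms, qualitatively matching the duration-dependent deficit reported above.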

19.
When the left and the right eye are simultaneously presented with incompatible images at overlapping retinal locations, an observer typically reports perceiving only one of the two images at a time. This phenomenon is called binocular rivalry. Perception during binocular rivalry is not stable; one of the images is perceptually dominant for a certain duration (typically in the order of a few seconds), after which perception switches towards the other image. This alternation between perceptual dominance and suppression will continue for as long as the images are presented. A characteristic of binocular rivalry is that a perceptual transition from one image to the other generally occurs in a gradual manner: the image that was temporarily suppressed will regain perceptual dominance at isolated locations within the perceived image, after which its visibility spreads throughout the whole image. These gradual transitions from perceptual suppression to perceptual dominance have been labeled as traveling waves of perceptual dominance. In this study we investigate whether stimulus parameters affect the location at which a traveling wave starts. We varied the contrast, spatial frequency or motion speed in one of the rivaling images, while keeping the same parameter constant in the other image. We used a flash-suppression paradigm to force one of the rival images into perceptual suppression. Observers waited until the suppressed image became perceptually dominant again, and indicated the position at which this breakthrough from suppression occurred. Our results show that the starting point of a traveling wave during binocular rivalry is highly dependent on local stimulus parameters. More specifically, a traveling wave most likely started at the location where the contrast of the suppressed image was higher than that of the dominant one, the spatial frequency of the suppressed image was lower than that of the dominant one, and the motion speed of the suppressed image was higher than that of the dominant one. We suggest that a breakthrough from suppression to dominance occurs at the location where salience (the degree to which a stimulus element stands out relative to neighboring elements) of the suppressed image is higher than that of the dominant one. Our results further show that stimulus parameters affecting the temporal dynamics during continuous viewing of rival images described in other studies also affect the spatial origin of traveling waves during binocular rivalry.

20.
1 Introduction: The visual world is composed of complex visual scenes that are projected, as two-dimensional images, onto the retina. Chunking of visual information is critical for object recognition, because it produces primitive perceptual units for subsequent analyses[1]. Integration of discrete local elements into a global configuration is one of the functions of perceptual grouping (e.g., combining local rectangles into a global letter as shown in fig. 1(b)). When multiple global object…
