Similar articles
 Found 20 similar articles (search time: 31 ms)
1.

Background

Visual perception is usually stable and accurate. However, when the two eyes are simultaneously presented with conflicting stimuli, perception falls into a sequence of spontaneous alternations, switching between one stimulus and the other every few seconds. Known as binocular rivalry, this visual illusion decouples subjective experience from physical stimulation and provides a unique opportunity to study the neural correlates of consciousness. The temporal properties of this alternating perception have been intensively investigated for decades, yet the relationship between two fundamental properties - the sequence of percepts and the duration of each percept - remains largely unexplored.

Methodology/Principal Findings

Here we examine the relationship between the percept sequence and the percept duration by quantifying their sensitivity to the strength imbalance between two monocular stimuli. We found that the percept sequence is far more susceptible to the stimulus imbalance than is the percept duration. The percept sequence always begins with the stronger stimulus, even when the stimulus imbalance is too weak to cause a significant bias in the percept duration. Therefore, introducing a small stimulus imbalance biases the percept sequence, whereas only a larger imbalance also biases the percept duration. To investigate why the percept sequence is so vulnerable to the stimulus imbalance, we further measured the interval between the stimulus onset and the first percept, during which subjects experienced the fusion of the two monocular stimuli. We found that this interval is dramatically shortened with increased stimulus imbalance.

Conclusions/Significance

Our study shows that in binocular rivalry, the strength imbalance between monocular stimuli has a much greater impact on the percept sequence than on the percept duration, and increasing this imbalance can accelerate the process responsible for the percept sequence.

2.
Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-leading stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between the two flashes, the results might instead be explained by temporal prediction based on that interval. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It thus became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction.
Our findings suggest that visual apparent motion changes temporal simultaneity perception and improves temporal discrimination in audiovisual processing.

3.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli may be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes combinations of a visual signal and auditory stimuli of different frequencies.

4.
A neural network which models multistable perception is presented. The network consists of sensor and inner neurons. The dynamics is established by a stochastic neuronal dynamics, a formal Hebb-type coupling dynamics and a resource mechanism that corresponds to saturation effects in perception. From this a system of coupled differential equations is derived and analyzed. Single stimuli are bound to exactly one percept, even in ambiguous situations where multistability occurs. The network exhibits discontinuous as well as continuous phase transitions and models various empirical findings, including the percepts of succession, alternative motion and simultaneity; the percept of oscillation is explained by oscillating percepts at a continuous phase transition. Received: 13 September 1995 / Accepted: 3 June 1996
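The alternation dynamics such a network produces can be illustrated with a much-reduced firing-rate sketch: two mutually inhibiting units, each with a slow fatigue variable standing in for the resource mechanism. This is a generic rivalry oscillator in the spirit of the model described, not the paper's network; all parameter values are illustrative.

```python
import numpy as np

def simulate_rivalry(T=10.0, dt=0.001, I=0.8, beta=1.0, g=1.0,
                     tau=0.01, tau_a=1.0):
    """Two mutually inhibiting units with slow adaptation.

    Mutual inhibition makes one unit dominate; the dominant unit's
    adaptation variable slowly grows until the suppressed unit can
    take over, yielding spontaneous alternations. Returns 1 while
    unit 0 dominates, else 0, at each time step.
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-x / 0.05))  # steep sigmoid gain
    r = np.array([0.2, 0.0])                       # firing rates (unit 0 ahead)
    a = np.zeros(2)                                # adaptation ("resource")
    dominance = []
    for _ in range(int(T / dt)):
        drive = I - beta * r[::-1] - g * a         # input minus cross-inhibition
        r = r + dt * (-r + f(drive)) / tau         # fast rate dynamics
        a = a + dt * (-a + r) / tau_a              # slow fatigue of active unit
        dominance.append(int(r[0] > r[1]))
    return dominance
```

With these illustrative parameters each dominance epoch lasts on the order of a second, so a 10-second run contains several alternations.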

5.
Kim RS, Seitz AR, Shams L. PLoS One. 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

6.
Town SM, McCabe BJ. PLoS One. 2011;6(3):e17777
Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and a novel object, as well as their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.

7.
When visual input is inconclusive, does previous experience aid the visual system in attaining an accurate perceptual interpretation? Prolonged viewing of a visually ambiguous stimulus causes perception to alternate between conflicting interpretations. When viewed intermittently, however, ambiguous stimuli tend to evoke the same percept on many consecutive presentations. This perceptual stabilization has been suggested to reflect persistence of the most recent percept throughout the blank that separates two presentations. Here we show that the memory trace that causes stabilization reflects not just the latest percept, but perception during a much longer period. That is, the choice between competing percepts at stimulus reappearance is determined by an elaborate history of prior perception. Specifically, we demonstrate a seconds-long influence of the latest percept, as well as a more persistent influence based on the relative proportion of dominance during a preceding period of at least one minute. When short-term and long-term perceptual history are opposed (because perception has recently switched after prolonged stabilization), the long-term influence recovers after the effect of the latest percept has worn off, indicating independence between time scales. We accommodate these results by adding two positive adaptation terms, one with a short time constant and one with a long time constant, to a standard model of perceptual switching.
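The two-time-constant idea can be sketched as a pair of leaky integrators of perceptual history whose weighted sum biases the percept chosen at stimulus reappearance. This is an illustrative reduction, not the authors' full switching model; the time constants and weights are assumed values.

```python
def perceptual_bias(history, dt=0.1, tau_short=1.0, tau_long=60.0,
                    w_short=1.0, w_long=0.5):
    """Two leaky integrators of perceptual history (+1 = percept A, -1 = B).

    The short-time-constant term tracks the latest percept; the
    long-time-constant term tracks the proportion of dominance over
    roughly the past minute. A positive return value biases the
    choice at reappearance toward percept A.
    """
    b_short = b_long = 0.0
    for h in history:                       # history sampled every dt seconds
        b_short += dt * (h - b_short) / tau_short
        b_long += dt * (h - b_long) / tau_long
    return w_short * b_short + w_long * b_long
```

For example, after a minute dominated by percept B followed by two seconds of percept A, the short-term integrator favors A while the long-term one still favors B; as the short-term term decays, the long-term influence recovers, matching the independence between time scales described above.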

8.
Temporal aspects of the perceptual integration of audiovisual information were investigated by utilizing the visual "streaming-bouncing" phenomenon. When two identical visual objects move towards each other, coincide, and then move away from each other, the objects can either be seen as streaming past one another or bouncing off each other. Although the streaming percept is dominant, the bouncing percept can be induced by presenting an auditory stimulus during the visual coincidence of the moving objects. Here we show that the bounce-inducing effect of the auditory stimulus is strongest when its onset and offset occur in temporal proximity to the onset and offset of the period of visual coincidence of the moving objects. When the duration of the auditory stimulus exceeded this period, visual bouncing disappeared. Implications for a temporal window of audiovisual integration and the design of effective audiovisual warning signals are discussed.

9.
Wile D, Balaban E. PLoS One. 2007;2(4):e369
Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, "missing fundamental" pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.

10.

Background

The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that visual and auditory percepts each have a modality-specific processing delay, and that the difference between these delays determines their perceived temporal offset.

Methodology/Principal Findings

Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.

Conclusions/Significance

This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.

11.
de Jong MC, Knapen T, van Ee R. PLoS One. 2012;7(1):e30595
Observers continually make unconscious inferences about the state of the world based on ambiguous sensory information. This process of perceptual decision-making may be optimized by learning from experience. We investigated the influence of previous perceptual experience on the interpretation of ambiguous visual information. Observers were pre-exposed to a perceptually stabilized sequence of an ambiguous structure-from-motion stimulus by means of intermittent presentation. At the subsequent re-appearance of the same ambiguous stimulus, perception was initially biased toward the previously stabilized perceptual interpretation. However, prolonged viewing revealed a bias toward the alternative perceptual interpretation. The prevalence of the alternative percept during ongoing viewing was largely due to increased durations of this percept, as there was no reliable decrease in the durations of the pre-exposed percept. Moreover, the duration of the alternative percept was modulated by the specific characteristics of the pre-exposure, whereas the durations of the pre-exposed percept were not. The increase in duration of the alternative percept was larger when the pre-exposure had lasted longer and was larger after ambiguous pre-exposure than after unambiguous pre-exposure. Using a binocular rivalry stimulus we found analogous perceptual biases, while pre-exposure did not affect eye-bias. We conclude that previously perceived interpretations dominate at the onset of ambiguous sensory information, whereas alternative interpretations dominate prolonged viewing. Thus, at first, ambiguous information seems to be judged using familiar percepts, while later re-evaluation allows for alternative interpretations.

12.
Jessen S, Obleser J, Kotz SA. PLoS One. 2012;7(4):e36070
Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and auditory stimuli in high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, results show that ecologically valid, complex stimuli such as joined body and vocal expressions are effectively integrated very early in processing.

13.
The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible: adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects.

14.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

15.
Visible persistence refers to the continuation of visual perception after the physical termination of a stimulus. We studied an extreme case of visible persistence by presenting two matrices of randomly distributed black and white pixels in succession. On the transition from one matrix to the second, the luminance polarity of all pixels within a disk- or annulus-shaped area reversed, physically creating a single second-order transient signal. This transient signal produces the percept of a disk or an annulus with an abrupt onset and a gradual offset. To study the nature of this fading percept we varied spatial parameters, such as the inner and the outer diameter of annuli (Experiment I) and the radius and eccentricity of disks (Experiment III), and measured the duration of visible persistence by having subjects adjust the synchrony of the onset of a reference stimulus with the onset or the offset of the fading percept. We validated this method by comparing two modalities of the reference stimuli (Experiment I) and by comparing the judgments of fading percepts with the judgments of stimuli that actually fade in luminance contrast (Experiment II). The results show that (i) irrespective of the reference modality, participants are able to precisely judge the on- and the offsets of the fading percepts, (ii) auditory reference stimuli lead to higher visible persistence durations than visual ones, (iii) visible persistence duration increases with the thickness of annuli and the diameter of disks, but decreases with the diameter of annuli, irrespective of stimulus eccentricity. These effects cannot be explained by stimulus energy, which suggests that more complex processing mechanisms are involved. Seemingly contradictory effects of disk and annulus diameter can be unified by assuming an abstract filling-in mechanism that speeds up with the strength of the edge signal and takes more time the larger the stimulus area is.

16.
Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.

17.
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands.
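Race-model violation in a reaction-time task is typically assessed with Miller's inequality: at every latency, the audiovisual cumulative RT distribution may not exceed the sum of the two unimodal distributions unless the signals were actually integrated. A minimal sketch of that test, assuming lists of RTs per condition (function names are mine, not from the paper):

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Check Miller's race-model inequality at each time point.

    Returns a boolean array, True where the empirical audiovisual CDF
    exceeds the sum of the unimodal CDFs, i.e. facilitation beyond
    what parallel independent channels ("statistical facilitation")
    could produce.
    """
    def ecdf(rts, t):
        # fraction of trials with RT <= t, for each t in the grid
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    return ecdf(rt_av, t_grid) > ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid)
```

For example, if most audiovisual RTs fall below 250 ms while neither unimodal distribution has any mass there, the inequality is violated at that latency, which is taken as evidence of integration.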

18.
Rivalry is a common tool to probe visual awareness: a constant physical stimulus evokes multiple, distinct perceptual interpretations ("percepts") that alternate over time. Percepts are typically described as mutually exclusive, suggesting that a discrete (all-or-none) process underlies changes in visual awareness. Here we follow two strategies to address whether rivalry is an all-or-none process: first, we introduce two reflexes as objective measures of rivalry, pupil dilation and optokinetic nystagmus (OKN); second, we use a continuous input device (analog joystick) to allow observers a gradual subjective report. We find that the "reflexes" reflect the percept rather than the physical stimulus. Both reflexes show a gradual dependence on the time relative to perceptual transitions. Similarly, observers' joystick deflections, which are highly correlated with the reflex measures, indicate gradual transitions. Physically simulating wave-like transitions between percepts suggests piece-meal rivalry (i.e., different regions of space belonging to distinct percepts) as one possible explanation for the gradual transitions. Furthermore, the reflexes show that dominance durations depend on whether or not the percept is actively reported. In addition, reflexes respond to transitions with shorter latencies than the subjective report and show an abundance of short dominance durations. This failure to report fast changes in dominance may result from limited access of introspection to rivalry dynamics. In sum, the reflexes reveal that rivalry is a gradual process, that its dynamics are modulated by the required action (response mode), and that rapid transitions in perceptual dominance can slip away from awareness.

19.
In order to maintain a coherent, unified percept of the external environment, the brain must continuously combine information encoded by our different sensory systems. Contemporary models suggest that multisensory integration produces a weighted average of sensory estimates, where the contribution of each system to the ultimate multisensory percept is governed by the relative reliability of the information it provides (maximum-likelihood estimation). In the present study, we investigate interactions between auditory and visual rate perception, where observers are required to make judgments in one modality while ignoring conflicting rate information presented in the other. We show a gradual transition between partial cue integration and complete cue segregation with increasing inter-modal discrepancy that is inconsistent with mandatory implementation of maximum-likelihood estimation. To explain these findings, we implement a simple Bayesian model of integration that is also able to predict observer performance with novel stimuli. The model assumes that the brain takes into account prior knowledge about the correspondence between auditory and visual rate signals, when determining the degree of integration to implement. This provides a strategy for balancing the benefits accrued by integrating sensory estimates arising from a common source, against the costs of conflating information relating to independent objects or events.
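The maximum-likelihood estimation scheme this abstract argues against as a mandatory rule combines cues weighted by their reliability (inverse variance), yielding a fused estimate more precise than either cue alone. A minimal sketch with illustrative numbers:

```python
import numpy as np

def mle_combine(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted average of auditory and visual estimates.

    Each cue's weight is its reliability (1/variance) normalized by
    the total reliability; the combined estimate has lower variance
    than either unimodal estimate.
    """
    rel_a, rel_v = 1 / sigma_a**2, 1 / sigma_v**2
    w_a = rel_a / (rel_a + rel_v)
    mu_c = w_a * mu_a + (1 - w_a) * mu_v
    sigma_c = np.sqrt(1 / (rel_a + rel_v))
    return mu_c, sigma_c

# Equally reliable 4 Hz auditory and 6 Hz visual rate estimates
mu, sigma = mle_combine(4.0, 1.0, 6.0, 1.0)
```

Under mandatory MLE this average would be computed regardless of how discrepant the cues are; the Bayesian model described above instead lets the weight on integration fall off as the inter-modal discrepancy makes a common source less likely.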

20.
Debate currently exists regarding the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. Here, for such newly associated audiovisual stimuli, optimal facilitation of motor actions was observed only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli were found to significantly increase the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was found to be associated with a latency shift of induced oscillations in the beta range (14–30 Hz) at right hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincided with stimulus selection, and the preparation and initiation of motor action.
