Similar Articles
1.

Background

When viewing complex scenes, East Asians attend more to contexts whereas Westerners attend more to objects, reflecting cultural differences in holistic and analytic visual processing styles respectively. This eye-tracking study investigated more specific mechanisms and the robustness of these cultural biases in visual processing when salient changes in the objects and backgrounds occur in complex pictures.

Methodology/Principal Findings

Chinese Singaporean (East Asian) and Caucasian US (Western) participants passively viewed pictures containing selectively changing objects and background scenes that strongly captured participants' attention in a data-driven manner. We found that although participants from both groups responded to object changes in the pictures, there was still evidence for cultural divergence in eye-movements. The number of object fixations in the US participants was more affected by object change than in the Singapore participants. Additionally, despite the picture manipulations, US participants consistently maintained longer durations for both object and background fixations, with eye-movements that generally remained within the focal objects. In contrast, Singapore participants had shorter fixation durations with eye-movements that alternated more between objects and backgrounds.

Conclusions/Significance

The results demonstrate a robust cultural bias in visual processing even when external stimuli draw attention in a manner opposite to the cultural bias. These findings also extend previous studies by revealing more specific, but consistent, effects of culture on different aspects of visual attention as measured by fixation duration, number of fixations, and saccades between objects and backgrounds.

2.
Balkenius A, Hansson B. PLoS ONE. 2012;7(4):e32133

Background

The mushroom bodies of the insect brain play an important role in olfactory processing, associative learning and memory. The mushroom bodies show odor-specific spatial patterns of activity and are also influenced by visual stimuli.

Methodology/Principal Findings

Functional imaging was used to investigate changes in the in vivo responses of the mushroom body of the hawkmoth Manduca sexta during multimodal discrimination training. A visual and an odour stimulus were presented either together or individually. Initially, the mushroom body activation patterns evoked by the odour stimulus and the multimodal stimulus were identical. After training, however, the mushroom body response to the rewarded multimodal stimulus was significantly lower than the response to the unrewarded unimodal odour stimulus, indicating that the coding of the stimuli had changed as a result of training. The opposite pattern was seen when only the unimodal odour stimulus was rewarded. In this case, the mushroom body was more strongly activated by the multimodal stimulus after training. When no stimuli were rewarded, mushroom body activity decreased for both the multimodal and unimodal odour stimuli. There was no measurable response to the unimodal visual stimulus in any of the experiments. These results can be explained by a connectionist model in which the mushroom body is assumed to be excited by olfactory stimulus components and suppressed by multimodal configurations.

Conclusions

Discrimination training with multimodal stimuli consisting of visual and odour cues leads to stimulus-specific changes in the in vivo responses of the mushroom body of the hawkmoth.

3.

Background

The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation to multisensory stimuli in the superior colliculus. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then the integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing the response properties of multisensory neurons in the superior colliculus.

Methodology/Principal Findings

The main aim of this study was to investigate the importance of spatial congruence in the flash-beep illusion. Selected combinations of one to four short flashes and zero to four short 3.5 kHz tones were presented. Observers were asked to count the number of flashes they saw. After replication of the basic illusion using centrally-presented stimuli, the auditory and visual components of the illusion stimuli were presented either both 10 degrees to the left or right of fixation (spatially congruent) or on opposite (spatially incongruent) sides, for a total separation of 20 degrees.

Conclusions/Significance

The sound-induced flash fission illusion was successfully replicated. However, when the sources of the auditory and visual stimuli were spatially separated, perception of the illusion was unaffected, suggesting that the “spatial rule” does not extend to describing behavioural responses in this illusion. We also found no evidence for the associated “fusion” illusion reportedly occurring when multiple flashes are accompanied by a single beep.

4.

Background

In visual psychophysics, precise display timing, particularly for brief stimulus presentations, is often required. The aim of this study was to systematically review the commonly applied methods for the computation of stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays.

Methodology/Principal Findings

In a first step, we systematically scanned the citation index Web of Science for studies with experiments with stimulus presentations for brief durations. Articles which appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. For those 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies which were used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both of these technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods.
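The sum-of-frames calculation reviewed here reduces to multiplying a frame count by the frame period. A minimal sketch (the frame count and refresh rate below are hypothetical examples, not values from the reviewed studies) shows why the authors consider it misleading: the formula says nothing about the display's actual luminance signal.

```python
# Sum-of-frames (SOF) duration estimate: the nominal duration is simply
# the number of video frames times the frame period. This ignores the
# true luminance profile of the display (brief phosphor flashes on a
# CRT; rise/fall transition times on an LCD).

def sof_duration_ms(n_frames: int, refresh_hz: float) -> float:
    """Nominal stimulus duration under the SOF assumption."""
    return n_frames * 1000.0 / refresh_hz

# Hypothetical example: 3 frames on a 100 Hz monitor. SOF reports
# 30 ms regardless of what the screen's luminance actually does.
nominal = sof_duration_ms(3, 100.0)
```

The point of the sketch is that two displays with identical SOF specifications can produce very different physical stimulus durations, which is why the authors argue against a single calculation method for both CRT and LCD monitors.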

Conclusions/Significance

The SOF method, which clearly dominated in the reviewed studies, leads to serious misspecifications, particularly for brief stimulus presentations. We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors.

5.

Background

Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used.

Methodology/Principal Findings

We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position.

Conclusions/Significance

These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.

6.

Background

Selective visual attention is the process by which the visual system enhances behaviorally relevant stimuli and filters out others. Visual attention is thought to operate through a cortical mechanism known as biased competition. Representations of stimuli within cortical visual areas compete such that they mutually suppress each other's neural responses. Competition increases with stimulus proximity and can be biased in favor of one stimulus (over another) as a function of stimulus significance, salience, or expectancy. Though there is considerable evidence of biased competition within the human visual system, the dynamics of the process remain unknown.

Methodology/Principal Findings

Here, we used scalp-recorded electroencephalography (EEG) to examine neural correlates of biased competition in the human visual system. In two experiments, subjects performed a task requiring them to either simultaneously identify two targets (Experiment 1) or discriminate one target while ignoring a decoy (Experiment 2). Competition was manipulated by altering the spatial separation between target(s) and/or decoy. Both experimental tasks should induce competition between stimuli. However, only the task of Experiment 2 should invoke a strong bias in favor of the target (over the decoy). The amplitude of two lateralized components of the event-related potential, the N2pc and Ptc, mirrored these predictions. N2pc amplitude increased with increasing stimulus separation in Experiments 1 and 2. However, Ptc amplitude varied only in Experiment 2, becoming more positive with decreased spatial separation.

Conclusions/Significance

These results suggest that the N2pc and Ptc components may index distinct processes of biased competition: the N2pc reflecting visual competitive interactions, and the Ptc reflecting a bias in processing necessary to individuate task-relevant stimuli.

7.

Background

Audition provides important cues with regard to stimulus motion although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings

A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with an increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance

We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, yielding the best estimates of external events in a complementary manner.

8.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

9.

Background

Photosensitive epilepsy is a type of reflexive epilepsy triggered by various visual stimuli, including colourful ones. Despite the ubiquity of colourful displays, brain responses to different colour combinations have not been properly studied.

Methodology/Principal Findings

Here, we studied the photosensitivity of the human brain to three types of chromatic flickering stimuli by recording neuromagnetic brain responses (magnetoencephalogram, MEG) from nine adult controls, an unmedicated patient, a medicated patient, and two controls age-matched with the patients. The dynamical complexity of the MEG signals was investigated with a family of wavelet entropies. Wavelet entropy is a recently proposed measure for characterizing large-scale brain responses, which quantifies the degree of order/disorder associated with a multi-frequency signal response. In particular, we found that, compared to the unmedicated patient, controls showed significantly larger wavelet entropy values. We also found that Rényi entropy is the most powerful feature for participant classification. Finally, we demonstrated the effect of combinational chromatic sensitivity on the underlying order/disorder in the MEG signals.
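The abstract does not specify the wavelet family or implementation used. As a rough sketch, assuming an orthonormal Haar decomposition and the Shannon form of wavelet entropy (the entropy of the relative wavelet energy across decomposition levels), the measure can be computed as:

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multilevel orthonormal Haar decomposition (signal length must be
    divisible by 2**levels). Returns detail coefficients per level plus
    the final approximation."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a_even, a_odd = a[0::2], a[1::2]
        details.append((a_even - a_odd) / np.sqrt(2))  # detail coefficients
        a = (a_even + a_odd) / np.sqrt(2)              # coarser approximation
    return details, a

def wavelet_entropy(signal, levels=4):
    """Shannon wavelet entropy: entropy of the relative wavelet energy
    distribution across levels. Low values indicate an ordered signal
    (energy concentrated in few bands); high values indicate disorder."""
    details, approx = haar_decompose(signal, levels)
    energies = np.array([np.sum(d**2) for d in details] + [np.sum(approx**2)])
    p = energies / energies.sum()   # relative wavelet energy per band
    p = p[p > 0]                    # skip empty bands (log(0) undefined)
    return float(-np.sum(p * np.log(p)))
```

Under this definition a constant signal yields zero entropy (all energy in the approximation band), while broadband noise spreads energy across bands and yields a high value, matching the ordered-versus-disordered contrast the authors report between the epileptic and healthy brain responses.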

Conclusions/Significance

Our results suggest that when perturbed by a potentially epilepsy-triggering stimulus, the healthy human brain maintains a non-deterministic, possibly nonlinear, state with a high degree of disorder, whereas the epileptic brain settles into a highly ordered state that makes it prone to hyper-excitation. Further, certain colour combinations were found to be more threatening than others.

10.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively normal individuals also experience some form of synesthetic association between stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower-frequency tones are associated with large objects and higher-frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched than for synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

11.
Kim RS, Seitz AR, Shams L. PLoS ONE. 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

12.

Background

Temporal visual processing is strongly deteriorated in patients with schizophrenia. For example, the interval required between a visual stimulus and a subsequent mask has to be much longer in schizophrenic patients than in healthy controls. We investigated whether this deficit in temporal resolution is accompanied by prolonged visual persistence and/or deficient temporal precision (temporal asynchrony perception).

Methodology/Principal Findings

We investigated visual persistence in three experiments. In the first, measuring temporal processing by so-called backward masking, prolonged visible persistence should decrease performance. In the second experiment, requiring temporal integration, prolonged persistence should improve performance. In the third experiment, we investigated asynchrony detection as another measure of temporal resolution. Eighteen patients with schizophrenia and 15 healthy controls participated. Asynchrony detection was intact in the patients. However, the patients' performance was inferior to that of the healthy controls in the first two experiments. Hence, temporal processing in schizophrenic patients is indeed significantly impaired, but this impairment is not caused by prolonged temporal integration.

Conclusions/Significance

Our results argue against a generally prolonged visual persistence in patients with schizophrenia. Together with the patients' preserved ability to detect temporal asynchronies in continuously presented stimuli, the results indicate a more specific deficit in the temporal processing of schizophrenic patients.

13.

Background

Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset.

Methodology

Two outside tones of the four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as “virtual pitch”) changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli diverged only in the value of the missing fundamental component.
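For a harmonic complex, the restored fundamental corresponds to the greatest common divisor of the component frequencies. A minimal sketch (the inside-tone values below are hypothetical examples within the stated 1300–2300 Hz range, not the study's actual stimuli):

```python
from functools import reduce
from math import gcd

def implied_fundamental(freqs_hz):
    """Missing fundamental implied by a harmonic complex:
    the greatest common divisor of its (integer) component frequencies."""
    return reduce(gcd, freqs_hz)

# Outside tones fixed at 1200 and 2400 Hz; inside tones are hypothetical.
assert implied_fundamental([1200, 1300, 2300, 2400]) == 100  # 100 Hz virtual pitch
assert implied_fundamental([1200, 1600, 2000, 2400]) == 400  # 400 Hz virtual pitch
```

Varying only the inside tones thus shifts the implied fundamental while leaving the outer spectral edges of the complex unchanged, which is the control the stimulus design exploits.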

Principal Findings

We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies.

Conclusions

Our findings suggest that listeners reconstruct the inferred pitch by roughly 100 ms after stimulus onset, consistent with previous electrophysiological research suggesting that inferred pitch is perceived in early auditory cortex.

14.

Background

In ecological situations, threatening stimuli often appear in peripheral vision. Such threat signals must rapidly draw attention to the periphery to allow a fast and adapted motor reaction. Several clues converge on the hypothesis that peripheral presentation of danger can trigger a fast arousal network, potentially independent of conscious awareness.

Methodology/Principal Findings

In the present MEG study, the spatio-temporal dynamics of the neural processing of danger-related stimuli were explored as a function of stimulus position in the visual field. Fearful and neutral faces were briefly presented in the central or peripheral visual field and were followed by target face stimuli. An event-related beamformer source analysis model was applied in three time windows following the first face presentation: 80 to 130 ms, 140 to 190 ms, and 210 to 260 ms. The frontal lobe and the medial part of the right temporal lobe, including the amygdala, responded to fear occurring in peripheral vision at latencies as short as 80 ms. For central presentation, fearful faces evoked the classical neuronal activity along the occipito-temporal visual pathway between 140 and 190 ms.

Conclusions

Thus, the high spatio-temporal resolution of MEG revealed a fast response of a network involving medial temporal and frontal structures in the processing of fear-related stimuli occurring unconsciously in the peripheral visual field. Whereas centrally presented stimuli are precisely processed by the ventral occipito-temporal cortex, danger-related stimuli appearing in the peripheral visual field more efficiently produce a fast automatic alert response, possibly conveyed by subcortical structures.

15.

Background

Subjective duration is strongly influenced by repetition and novelty, such that an oddball stimulus in a stream of repeated stimuli appears to last longer in duration in comparison. We hypothesize that this duration illusion, called the temporal oddball effect, is a result of the difference in expectation between the oddball and the repeated stimuli. Specifically, we conjecture that the repeated stimuli contract in duration as a result of increased predictability; these duration contractions, we suggest, result from decreased neural response amplitude with repetition, known as repetition suppression.

Methodology/Principal Findings

Participants viewed trials consisting of lines presented at a particular orientation (standard stimuli) followed by a line presented at a different orientation (oddball stimulus). We found that the size of the oddball effect correlates with the number of repetitions of the standard stimulus as well as the deviance of the oddball from the standard; both of these results are consistent with a repetition suppression hypothesis. Further, we find that the temporal oddball effect is sensitive to experimental context: the size of the oddball effect for a particular experimental trial is influenced by the range of duration distortions seen in preceding trials.

Conclusions/Significance

Our data suggest that the repetition-related duration contractions causing the oddball effect are a result of neural repetition suppression. More generally, subjective duration may reflect the prediction error associated with a stimulus and, consequently, the efficiency of encoding that stimulus. Additionally, we emphasize that experimental context effects need to be taken into consideration when designing duration-related tasks.

16.

Background

The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality specific processing delay and their difference determines perceptual temporal offset.

Methodology/Principal Findings

Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event.

Conclusions/Significance

This multistable display illustrates how flexible perceived timing can be, and at the same time offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on perceptual binding of audiovisual stimuli as a common event.

17.

Background

Reactions to sensory events sometimes require quick responses, whereas at other times they require a high degree of accuracy, usually resulting in slower responses. It is important to understand whether visual processing under different response speed requirements employs different neural mechanisms.

Methodology/Principal Findings

We asked participants to classify visual patterns with different levels of detail as real-world or non-sense objects. In one condition, participants were to respond immediately, whereas in the other they responded after a delay of 1 second. As expected, participants performed more accurately in delayed response trials. This effect was pronounced for stimuli with a high level of detail. These behavioral effects were accompanied by modulations of stimulus related EEG gamma oscillations which are an electrophysiological correlate of early visual processing. In trials requiring speeded responses, early stimulus-locked oscillations discriminated real-world and non-sense objects irrespective of the level of detail. For stimuli with a higher level of detail, oscillatory power in a later time window discriminated real-world and non-sense objects irrespective of response speed requirements.

Conclusions/Significance

Thus, it seems plausible to assume that different response speed requirements trigger different dynamics of processing.

18.

Background

Converging evidence from different species indicates that some newborn vertebrates, including humans, have visual predispositions to attend to the head region of animate creatures. It has been claimed that newborn preferences for faces are domain-relevant and similar in different species. One of the most common criticisms of the work supporting domain-relevant face biases in human newborns is that in most studies they already have several hours of visual experience when tested. This issue can be addressed by testing newly hatched face-naïve chicks (Gallus gallus) whose preferences can be assessed prior to any other visual experience with faces.

Methods

In the present study, we test for the first time the prediction that both newly hatched chicks and human newborns will demonstrate similar preferences for face stimuli over spatial-frequency-matched structured noise. Chicks and newborns were tested with identical stimuli. Chicks underwent a spontaneous preference task, in which they had to approach one of two stimuli simultaneously presented at the ends of a runway. Human newborns participated in a preferential looking task.

Results and Significance

We observed a significant preference for orienting toward the face stimulus in both species. Further, human newborns spent more time looking at the face stimulus, and chicks preferentially approached and stood near it. These results confirm the view that widely diverging vertebrates possess similar domain-relevant biases toward faces shortly after hatching or birth, and provide a behavioural basis for comparison with neuroimaging studies using similar stimuli.

19.

Background

Perceived spatial intervals between successive flashes can be distorted by varying the temporal intervals between them (the “tau effect”). A previous study showed that a tau effect for visual flashes could be induced when they were accompanied by auditory beeps with varied temporal intervals (an audiovisual tau effect).

Methodology/Principal Findings

We conducted two experiments to investigate whether the audiovisual tau effect occurs in infancy. Forty-eight infants aged 5–8 months took part in this study. In Experiment 1, infants were familiarized with audiovisual stimuli consisting of three pairs of two flashes and three beeps. The onsets of the first and third pairs of flashes were respectively matched to those of the first and third beeps. The onset of the second pair of flashes was separated from that of the second beep by 150 ms. Following the familiarization phase, infants were exposed to a test stimulus composed of two vertical arrays of three static flashes with different spatial intervals. We hypothesized that if the audiovisual tau effect occurred in infancy, infants would preferentially look at the flash array with spatial intervals that would be expected to differ from the perceived spatial intervals between the flashes they were exposed to in the familiarization phase. The results of Experiment 1 supported this hypothesis. In Experiment 2, the first and third beeps were removed from the familiarization stimuli, resulting in the disappearance of the audiovisual tau effect. This indicates that the modulation of temporal intervals among flashes by the beeps was essential for the audiovisual tau effect to occur.

Conclusions/Significance

These results suggest that the cross-modal processing that underlies the audiovisual tau effect occurs even in early infancy. In particular, the results indicate that audiovisual modulation of temporal intervals emerges by 5–8 months of age.

20.

Background

Visual perception is usually stable and accurate. However, when the two eyes are simultaneously presented with conflicting stimuli, perception falls into a sequence of spontaneous alternations, switching between one stimulus and the other every few seconds. Known as binocular rivalry, this visual illusion decouples subjective experience from physical stimulation and provides a unique opportunity to study the neural correlates of consciousness. The temporal properties of this alternating perception have been intensively investigated for decades, yet the relationship between two fundamental properties - the sequence of percepts and the duration of each percept - remains largely unexplored.

Methodology/Principal Findings

Here we examine the relationship between the percept sequence and the percept duration by quantifying their sensitivity to the strength imbalance between two monocular stimuli. We found that the percept sequence is far more susceptible to the stimulus imbalance than the percept duration is. The percept sequence always begins with the stronger stimulus, even when the stimulus imbalance is too weak to cause a significant bias in the percept duration. Therefore, introducing a small stimulus imbalance affects the percept sequence, whereas increasing the imbalance affects the percept duration, but not vice versa. To investigate why the percept sequence is so vulnerable to the stimulus imbalance, we further measured the interval between the stimulus onset and the first percept, during which subjects experienced a fusion of the two monocular stimuli. We found that this interval is dramatically shortened with increasing stimulus imbalance.

Conclusions/Significance

Our study shows that in binocular rivalry, the strength imbalance between monocular stimuli has a much greater impact on the percept sequence than on the percept duration, and that increasing this imbalance can accelerate the process responsible for the percept sequence.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号