Similar articles
20 similar articles found (search time: 468 ms)
1.

Background

Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.

Methodology/Principal Findings

Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers of illusory motion perception: when the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days, and was observed specifically at the retinal position that had previously been exposed to apparent motion with tone bursts.

Conclusions/Significance

The present results indicate that a strong association between a sound sequence and visual motion is easily formed within a short period and that, once the association is formed, sounds are able to trigger visual motion perception for a static visual object.

2.

Background

Audition provides important cues with regard to stimulus motion although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings

A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance

We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.

3.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.
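The stability rule stated above (an object falls once the gravity-projected centre of mass lies outside its area of support) fixes a critical tilt angle for the simple case of a uniform object pivoting about one edge of its base. A minimal geometric sketch, with illustrative dimensions that are not taken from the study:

```python
import math

def critical_tilt_deg(com_height, half_base_width):
    """Tilt angle (degrees) at which a uniform object tips over:
    the object falls once the gravity-projected centre of mass
    passes beyond the pivoting edge of its base of support."""
    return math.degrees(math.atan2(half_base_width, com_height))

# A tall object (high centre of mass, narrow base) tips at a smaller
# angle than a squat one, which is why it looks less stable.
tall = critical_tilt_deg(com_height=2.0, half_base_width=0.5)
squat = critical_tilt_deg(com_height=0.5, half_base_width=0.5)
```

The study's measured "perceived critical angle" is the psychophysical counterpart of this geometric quantity; the bias reported below is a shift of the perceived angle away from this physical value.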

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

4.

Background

Vision provides the most salient information with regard to stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea, together with lateral auditory motion produced by a virtual stereo noise source shifting smoothly in the horizontal plane. The flash appeared to move along with the auditory motion when the spatiotemporal position of the flash fell in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display in which localized motion signals from multiple visual stimuli were combined to produce a coherent visual motion percept.

Conclusions/Significance

These findings suggest that direct interactions exist between auditory and visual motion signals, and that there may be common neural substrates for auditory and visual motion processing.

5.
Investigating illusory contours is important for understanding the mechanisms underlying object recognition in the human visual system. Numerous studies have shown that illusory contours formed in motion and stereopsis are generated by unmatched features. Here we conducted three psychophysical experiments to test whether Kanizsa illusory contours are also caused by unmatched information. Different types of motion (horizontal translation, radial expansion, and radial contraction) were used in the experiments. The results show that, regardless of the kind of motion, illusory contours are perceived as stronger when either the figures or the background move separately, with no significant difference in perceived strength between these two conditions. However, no such enhancement of perceived strength is found when the figures and background move together. The strengthened unmatched features thus generate the enhancement of illusory contour perception in motion, suggesting that the processing of unmatched information in the visual system is a critical step in the formation of illusory contours.

6.
Object perception is one of the most important components of visual perception in humans and other mammals. A central puzzle in object perception is how we separate an object from its background and obtain a picture of the whole object. In the natural world, one object often partly occludes another. When the brightness of the occluding object is the same as, or similar to, that of the background, there is no difference between the visual stimuli, yet we can still ret…

7.

Background

A classification image (CI) technique has shown that static luminance noise near visually completed contours affects the discrimination of fat and thin Kanizsa shapes. These influential noise regions were proposed to reveal “behavioral receptive fields” of completed contours, the same regions to which early cortical cells respond in neurophysiological studies of contour completion. Here, we hypothesized that 1) influential noise regions correspond to the surfaces that distinguish fat and thin shapes (hereafter, key regions); and 2) key region noise biases a “fat” response to the extent that its contrast polarity (lighter or darker than background) matches the shape's filled-in surface color.

Results

To test our hypothesis, we had observers discriminate fat and thin noise-embedded rectangles that were defined by either illusory or luminance-defined contours (Experiment 1). Surrounding elements (“inducers”) caused the shapes to appear either lighter or darker than the background, a process sometimes referred to as lightness induction. For both illusory and luminance-defined rectangles, key region noise biased a fat response to the extent that its contrast polarity (light or dark) matched the induced surface color. When lightness induction was minimized, luminance noise had no consistent influence on shape discrimination. This pattern arose when pixels immediately adjacent to the discriminated boundaries were excluded from the analysis (Experiment 2) and also when the noise was restricted to the key regions so that the noise never overlapped with the physically visible edges (Experiment 3). The lightness effects did not occur in the absence of enclosing boundaries (Experiment 4).

Conclusions

Under noisy conditions, lightness induction alters visually completed shape. Moreover, behavioral receptive fields derived in CI studies do not correspond to contours per se but to filled-in surface regions contained by those contours. The relevance of lightness to two-dimensional shape completion supplies a new constraint for models of object perception.

8.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by conflicting visual information; however, the perceived duration of visual events was seldom distorted by auditory information, and visual events were never perceived as shorter than their actual duration.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because the distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically drive subjective time distortions.

9.

Background

Our expectations of an object's heaviness not only drive our fingertip forces, but also our perception of heaviness. This effect is highlighted by the classic size-weight illusion (SWI), in which different-sized objects of identical mass feel as though they have different weights. Here, we examined whether these expectations are sufficient to induce the SWI in a single wooden cube lifted without visual feedback, by varying the size of the object seen prior to the lift.

Methodology/Principal Findings

Participants, who believed that they were lifting the same object that they had just seen, reported that the weight of the single, standard-sized cube that they lifted on every trial varied as a function of the size of the object they had just seen. Seeing the small object before the lift made the cube feel heavier than it did after seeing the large object. These expectations also affected the fingertip forces used to lift the object when vision was not permitted. The expectation-driven errors made in early trials were not corrected with repeated lifting, and participants failed to adapt their grip and load forces from the expected weight to the object's actual mass in the way that they could when lifting with vision.

Conclusions/Significance

Vision appears to be crucial for the detection, and subsequent correction, of the ostensibly non-visual grip and load force errors that are a common feature of this type of object interaction. Expectations of heaviness are powerful enough not only to alter the perception of a single object's weight, but also to continually drive the forces we use to lift the object when vision is unavailable.

10.
Kim J, Park S, Blake R. PLoS ONE. 2011;6(5):e19971.

Background

Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits.

Methodology/Findings

In Experiment 1, we measured the ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task.

Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. In contrast, STSp activation in schizophrenia patients was not selective for biological over scrambled motion.

Conclusion

Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes.

11.

Objective

The goal of this study was to test whether central mechanisms of scratching-induced itch attenuation can be activated by scratching the limb contralateral to the itching limb when the participant is made to visually perceive the non-itching limb as the itching limb by means of mirror images.

Methods

Healthy participants were asked to assess the intensity of an experimentally induced itch at their right forearm while they observed externally guided scratch movements either at their right (itching) or left (non-itching) forearm which were either mirrored or not mirrored. In the first experiment, a mirror placed between the participant’s forearms was used to create the visual illusion that the participant’s itching (right) forearm was being scratched while in fact the non-itching (left) forearm was scratched. To control visibility of the left (non-mirrored) forearm, a second experiment was performed in which unflipped and flipped real-time video displays of the participant’s forearms were used to create experimental conditions in which the participant visually perceived scratching either on one forearm only, on both forearms, or no scratching at all.

Results

In both experiments, scratching the non-itching limb attenuated perceived itch intensity significantly and selectively in the mirror condition, i.e., when the non-itching forearm was visually perceived as the itching limb.

Discussion

These data provide evidence that the visual illusion that an itching limb is being scratched, while in fact the contralateral non-itching limb is scratched, can lead to significant itch relief. This effect might be due to a transient illusory intersensory perceptual congruency of visual, tactile and pruriceptive signals. “Mirror scratching” might provide an alternative treatment to reduce itch perception in focal skin diseases with persistent pruritus without causing additional harm to the affected skin, and might therefore have significant clinical impact.

12.

Background

People with social anxiety disorder are afraid of being scrutinized by others and often feel that they are the excessive focus of other people's attention. This study investigated whether, when compared to low socially anxious individuals, high socially anxious individuals overestimate the proportion of people in a crowd who are observing them. It was hypothesized that any potential overestimation would be modulated by self-focused attention.

Method

Forty-eight high and 48 low socially anxious participants performed a “faces in a crowd” computer task during which they briefly saw matrices of faces, which varied in terms of the proportion of people who were looking at them. Participants estimated the proportion of people who were looking at them. The task was performed once with mirrors present (to induce an enhanced self-focused state) and once without mirrors present (neutral state).

Results

Participants' subjective estimates and the objective proportion of faces looking towards them were strongly correlated in both the high and low socially anxious groups. However, high socially anxious participants estimated that more people were looking at them than did low socially anxious participants. In the first phase of the experiment, but not in the later phases, this effect was magnified in the mirror condition.

Discussion

This study provides preliminary evidence of a social-anxiety-related perceptual difference that may be amplified by self-focused attention. Clinical implications are discussed.

13.

Objective

To investigate whether specific domains of musical perception (temporal and melodic) predict the word-level reading skills of eight- to ten-year-old children (n = 235) with reading difficulties, normal intelligence quotients, and no previous exposure to music education classes.

Method

A general-specific solution of the Montreal Battery of Evaluation of Amusia (MBEA), which captures a musical perception construct through three latent factors (general, temporal, and melodic domains), was regressed on word-level reading skills (the rate of correctly read isolated words/non-words per minute).

Results

General and melodic latent domains predicted word-level reading skills.

14.
Shapiro A, Lu ZL, Huang CB, Knight E, Ennis R. PLoS ONE. 2010;5(10):e13296.

Background

The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity.

Methodology/Principal Findings

The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations.
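The study's analysis is three-dimensional (x, y, t), which is beyond a snippet, but the core idea of spatiotemporal Fourier analysis of a moving pattern can be shown in two dimensions: a drifting grating concentrates its energy at spatiotemporal frequencies whose ratio equals its velocity. A minimal (x, t) sketch with arbitrary, illustrative parameters, not the stimulus used in the study:

```python
import numpy as np

# Space-time (t, x) image of a luminance grating drifting at v pixels/frame.
n = 64
v = 2.0        # drift speed (pixels per frame)
fx0 = 4 / n    # spatial frequency (cycles per pixel), exactly on the FFT grid
t = np.arange(n)
x = np.arange(n)
T, X = np.meshgrid(t, x, indexing="ij")
stim = np.cos(2 * np.pi * fx0 * (X - v * T))

# A drifting grating's energy lies at (ft, fx) = (-v*fx0, fx0) and its
# mirror image, so the peak's coordinates recover the drift velocity.
spec = np.abs(np.fft.fftshift(np.fft.fft2(stim)))
ft_axis = np.fft.fftshift(np.fft.fftfreq(n))  # temporal frequency axis (axis 0)
fx_axis = np.fft.fftshift(np.fft.fftfreq(n))  # spatial frequency axis (axis 1)
ti, xi = np.unravel_index(np.argmax(spec), spec.shape)
recovered_v = -ft_axis[ti] / fx_axis[xi]
```

First- and second-order motion differ in where this energy sits (in the luminance spectrum versus the spectrum of a rectified signal), which is what the full 3-D analysis in the paper teases apart.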

Conclusions/Significance

The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing.

15.

Background

When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization.

Methodology/Principal Findings

We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners’ head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses.

Conclusions/Significance

Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self-motion cues, and underscore the importance of self-motion for spatial auditory perception.

16.

Background

Beyond providing cues about an agent's intention, communicative actions convey information about the presence of a second agent towards whom the action is directed (second-agent information). In two psychophysical studies we investigated whether the perceptual system makes use of this information to infer the presence of a second agent when dealing with impoverished and/or noisy sensory input.

Methodology/Principal Findings

Participants observed point-light displays of two agents (A and B) performing separate actions. In the Communicative condition, agent B's action was performed in response to a communicative gesture by agent A. In the Individual condition, agent A's communicative action was replaced with a non-communicative action. Participants performed a simultaneous masking yes-no task, in which they were asked to detect the presence of agent B. In Experiment 1, we investigated whether criterion c was lowered in the Communicative condition compared to the Individual condition, thus reflecting a variation in perceptual expectations. In Experiment 2, we manipulated the congruence between A's communicative gesture and B's response, to ascertain whether the lowering of c in the Communicative condition reflected a truly perceptual effect. Results demonstrate that information extracted from communicative gestures influences the concurrent processing of biological motion by prompting perception of a second agent (second-agent effect).
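The criterion c analyzed in Experiment 1 is the standard signal-detection bias measure, c = -(z(H) + z(F)) / 2, where H and F are the hit and false-alarm rates; a lower c means a more liberal bias toward responding "agent B present". A minimal sketch with made-up rates, not the study's data:

```python
from statistics import NormalDist

def criterion_c(hit_rate, fa_rate):
    """Signal-detection criterion c = -(z(H) + z(F)) / 2.
    Lower (more negative) c indicates a more liberal bias
    toward 'present' responses."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates illustrating the predicted pattern: more 'present'
# responses overall (hits and false alarms) in the Communicative condition.
c_comm = criterion_c(hit_rate=0.90, fa_rate=0.30)
c_indiv = criterion_c(hit_rate=0.80, fa_rate=0.15)
```

With these illustrative numbers c_comm comes out below c_indiv, the direction of the effect the experiment tests for; sensitivity (d' = z(H) - z(F)) can stay roughly constant while c shifts.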

Conclusions/Significance

We propose that this finding is best explained within a Bayesian framework, which gives a powerful rationale for the pervasive role of prior expectations in visual perception.

17.

Background

In the human visual system, different attributes of an object, such as shape, color, and motion, are processed separately in different areas of the brain. This raises the fundamental question of how these attributes are integrated to produce a unified percept and a specific response. This “binding problem” is computationally difficult because all attributes are assumed to be bound together to form a single object representation; however, there is no firm evidence that such representations exist for general objects.

Methodology/Principal Findings

Here we propose a paired-attribute model in which cognitive processes are based on multiple representations of paired attributes. In line with the model's prediction, we found that multiattribute stimuli can produce an illusory perception of a multiattribute object arising from erroneous integration of attribute pairs, implying that object recognition is based on parallel perception of paired attributes. Moreover, in a change-detection task, a feature change in a single attribute frequently caused an illusory perception of change in another attribute, suggesting that multiple pairs of attributes are stored in memory.

Conclusions/Significance

The paired-attribute model can account for some novel illusions and for controversial findings on binocular rivalry and short-term memory. Our results suggest that many cognitive processes operate at the level of paired attributes rather than integrated objects, which greatly simplifies the binding problem.

18.
Jolij J, Meurs M. PLoS ONE. 2011;6(4):e18861.

Background

Visual perception is not a passive process: to process visual input efficiently, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. Perception is not only influenced by previous knowledge, however; the perception of emotional stimuli in particular is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world, but also on how we feel. In this study, we further investigated the relation between mood and perception.

Methods and Findings

We had observers perform a difficult stimulus-detection task in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate at detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood.

Conclusions

As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.

19.

Background

The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps.

Methodology/Principal Findings

In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds.

Conclusions/Significance

The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect observers' judgements of the in-depth orientation of plws.

20.

Background

A person is less likely to be accurately remembered if they appear in a visual scene with a gun, a result that has been termed the weapon focus effect (WFE). Explanations of the WFE argue that weapons engage attention because they are unusual and/or threatening, which causes encoding deficits for the other items in the visual scene. Previous WFE research has always embedded the weapon and nonweapon objects within a larger context that provides information about an actor's intention to use the object. As such, it is currently unknown whether a gun automatically engages attention to a greater extent than other objects independent of the context in which it is presented.

Method

Reflexive responding to a gun compared to other objects was examined in two experiments. Experiment 1 employed a prosaccade gap-overlap paradigm, whereby participants looked toward a peripheral target, and Experiment 2 employed an antisaccade gap-overlap paradigm, whereby participants looked away from a peripheral target. In both experiments, the peripheral target was a gun or a nonthreatening object (i.e., a tomato or pocket watch). We also controlled how unexpected the targets were and compared saccadic reaction times across types of objects.

Results

A gun was not found to differentially engage attention compared to the unexpected object (i.e., a pocket watch). Some evidence was found (Experiment 2) that both the gun and the unexpected object engaged attention to a greater extent compared to the expected object (i.e., a tomato).

Conclusion

An image of a gun did not engage attention to a greater extent than images of other types of objects (i.e., a pocket watch or tomato). The results suggest that context may be an important determinant of the WFE: the extent to which an object is threatening may depend on the larger context in which it is presented.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)