Similar Articles
20 similar articles found (search time: 31 ms)
1.
When saccading to a silent clock, observers sometimes think that the second hand has paused momentarily. This effect has been termed chronostasis and occurs because observers overestimate the time that they have seen the object of an eye movement. They seem to extrapolate its appearance back to just prior to the onset of the saccade rather than the time that it is actually fixated on the retina. Here, we describe a similar effect following an arm movement: subjects overestimate the time that their hand has been in contact with a newly touched object. The illusion's magnitude suggests backward extrapolation of tactile perception to a moment during the preceding reach. The illusion does not occur if the arm movement triggers a change in a continuously visible visual target: the time of onset of the change is estimated correctly. We hypothesize that chronostasis-like effects occur when movement produces uncertainty about the onset of a sensory event. Under these circumstances, the time at which neurons with receptive fields that shift in the temporal vicinity of a movement change their mappings may be used as a time marker for the onset of perceptual properties that are only established later.

2.
Ono F  Kitazawa S 《PloS one》2011,6(12):e28722
Our previous research demonstrated that repetitive tone stimulation shortened the perceived duration of the preceding auditory time interval. In this study, we examined whether repetitive visual stimulation influences the perception of preceding visual time intervals. Results showed that a time interval followed by a high-frequency visual flicker was perceived as shorter than that followed by a low-frequency visual flicker. The perceived duration decreased as the frequency of the visual flicker increased. The visual flicker presented in one hemifield shortened the apparent time interval in the other hemifield. A final experiment showed that repetitive tone stimulation also shortened the perceived duration of preceding visual time intervals. We concluded that visual flicker shortened the perceived duration of preceding visual time intervals in the same way as repetitive auditory stimulation shortened the subjective duration of preceding tones.

3.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by conflicting auditory information, and visual events were never perceived as shorter than their actual durations.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

4.
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), response was faster and more accurate than when they were incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention. The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception.
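The five RT-based outcome measures described above can be derived from trial-level data roughly as follows. This is a minimal sketch: the field names and the exact contrast definitions (difference-of-means over the relevant trial cells) are our assumptions for illustration, not the published TAIL scoring procedure.

```python
from statistics import mean

def tail_indices(trials):
    """Compute illustrative TAIL-style outcome measures from trial data.

    Each trial is a dict with keys (names are assumptions):
      'task'      -- 'freq' (same/different frequency judgment) or 'ear'
      'same_freq' -- bool, the two tones had the same frequency
      'same_ear'  -- bool, the two tones arrived at the same ear
      'rt'        -- reaction time in ms
    """
    def rts(task, pred):
        return [t['rt'] for t in trials if t['task'] == task and pred(t)]

    congruent = lambda t: t['same_freq'] == t['same_ear']
    return {
        # (1) overall processing efficiency
        'baseline_rt': mean(t['rt'] for t in trials),
        # (2) involuntary orienting to location during the frequency task:
        #     same-ear trials should be faster than different-ear trials
        'orient_location': mean(rts('freq', lambda t: not t['same_ear']))
                           - mean(rts('freq', lambda t: t['same_ear'])),
        # (3) involuntary orienting to frequency during the ear task
        'orient_frequency': mean(rts('ear', lambda t: not t['same_freq']))
                            - mean(rts('ear', lambda t: t['same_freq'])),
        # (4, 5) conflict resolution: incongruent minus congruent RTs
        'conflict_freq_task': mean(rts('freq', lambda t: not congruent(t)))
                              - mean(rts('freq', congruent)),
        'conflict_ear_task': mean(rts('ear', lambda t: not congruent(t)))
                             - mean(rts('ear', congruent)),
    }
```

Positive orienting and conflict indices would indicate, respectively, involuntary capture by the task-irrelevant dimension and a cost of resolving cross-dimensional conflict.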

5.
IF Lin  M Kashino 《PloS one》2012,7(7):e41661
In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

6.
Perceptual aftereffects following adaptation to simple stimulus attributes (e.g., motion, color) have been studied for hundreds of years. A striking recent discovery was that adaptation also elicits contrastive aftereffects in visual perception of complex stimuli and faces [1-6]. Here, we show for the first time that adaptation to nonlinguistic information in voices elicits systematic auditory aftereffects. Prior adaptation to male voices causes a voice to be perceived as more female (and vice versa), and these auditory aftereffects were measurable even minutes after adaptation. By contrast, crossmodal adaptation effects were absent, both when male or female first names and when silently articulating male or female faces were used as adaptors. When sinusoidal tones (with frequencies matched to male and female voice fundamental frequencies) were used as adaptors, no aftereffects on voice perception were observed. This excludes explanations for the voice aftereffect in terms of both pitch adaptation and postperceptual adaptation to gender concepts and suggests that contrastive voice-coding mechanisms may routinely influence voice perception. The role of adaptation in calibrating properties of high-level voice representations indicates that adaptation is not confined to vision but is a ubiquitous mechanism in the perception of nonlinguistic social information from both faces and voices.

7.
Two distinct conceptualisations of processing mechanisms have been proposed in the research on the perception of temporal order: one assumes a central-timing mechanism that is involved in the detection of temporal order independent of modality and stimulus type, and the other assumes feature-specific mechanisms that depend on stimulus properties. In the present study, four different temporal-order judgement tasks were compared to test these two conceptualisations, that is, to determine whether common processes underlie temporal-order thresholds over different modalities and stimulus types or whether distinct processes are related to each task. Measurements varied regarding modality (visual and auditory) and stimulus properties (auditory modality: clicks and tones; visual modality: colour and position). Results indicate that the click and the tone paradigm, as well as the colour and position paradigm, correlate with each other. Besides these intra-modal relationships, cross-modal correlations show dependencies between the click, the colour and the position tasks. Both processing mechanisms seem to influence the detection of temporal order. While two different tones are integrated and processed by a more independent, possibly feature-specific mechanism, a more central, modality-independent timing mechanism contributes to the click, colour and position condition.

8.
Wang XD  Gu F  He K  Chen LH  Chen L 《PloS one》2012,7(1):e30027

Background

Extraction of linguistically relevant auditory features is critical for speech comprehension in complex auditory environments, in which the relationships between acoustic stimuli are often abstract and constant while the stimuli per se are varying. These relationships are referred to as the abstract auditory rule in speech and have been investigated for their underlying neural mechanisms at an attentive stage. However, the issue of whether or not there is a sensory intelligence that enables one to automatically encode abstract auditory rules in speech at a preattentive stage has not yet been thoroughly addressed.

Methodology/Principal Findings

We chose Chinese lexical tones for the current study because they help to define word meaning and hence facilitate the fabrication of an abstract auditory rule in a speech sound stream. We continuously presented native Chinese speakers with Chinese vowels differing in formant, intensity, and level of pitch to construct a complex and varying auditory stream. In this stream, most of the sounds shared flat lexical tones to form an embedded abstract auditory rule. Occasionally the rule was randomly violated by sounds with a rising or falling lexical tone. The results showed that the violation of the abstract auditory rule of lexical tones evoked a robust preattentive auditory response, as revealed by whole-head electrical recordings of the mismatch negativity (MMN), though none of the subjects acquired explicit knowledge of the rule or became aware of the violation.

Conclusions/Significance

Our results demonstrate that there is an auditory sensory intelligence in the perception of Chinese lexical tones. The existence of this intelligence suggests that humans can automatically extract abstract auditory rules in speech at a preattentive stage to ensure speech communication in complex and noisy auditory environments without drawing on conscious resources.

9.
Recently, there has been an upsurge of interest in the neural mechanisms of time perception. A central question is whether the representation of time is distributed over brain regions as a function of stimulus modality, task and length of the duration used or whether it is centralized in a single specific and supramodal network. The answers seem to be converging on the former, and many areas not primarily considered as temporal processing areas remain to be investigated in the temporal domain. Here we asked whether the superior temporal gyrus, an auditory modality specific area, is involved in processing of auditory timing. Repetitive transcranial magnetic stimulation was applied over left and right superior temporal gyri while participants performed either a temporal or a frequency discrimination task of single tones. A significant decrease in performance accuracy was observed after stimulation of the right superior temporal gyrus, in addition to an increase in response uncertainty as measured by the Just Noticeable Difference. The results are specific to auditory temporal processing and performance on the frequency task was not affected. Our results further support the idea of distributed temporal processing and speak in favor of the existence of modality specific temporal regions in the human brain.

10.
When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

11.
When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
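The accumulation account can be illustrated with a toy simulation. The parameter values and the linear evidence-to-duration mapping below are our assumptions for illustration, not the authors' fitted model; the point is only that if evidence for the opposite percept builds up over a phase, a long phase seeds a long successor, yielding the positive lag-1 correlation between successive phase durations.

```python
import random

def simulate_phase_durations(n_phases=10000, gain=0.8, noise=0.3,
                             base=1.0, seed=0):
    """Toy evidence-accumulation model of bistable streaming.

    During each phase, evidence for the opposite percept accumulates in
    proportion to the phase's duration (plus exponential noise); the next
    phase lasts longer the more supporting evidence it starts with.
    All parameter values are illustrative.
    """
    rng = random.Random(seed)
    durations = []
    evidence = base
    for _ in range(n_phases):
        # Phase duration grows with the evidence accumulated in its favor.
        d = base + gain * evidence + rng.expovariate(1.0 / noise)
        durations.append(d)
        # Evidence for the *next* (opposite) percept builds during this phase.
        evidence = d
    return durations

def lag1_correlation(xs):
    """Pearson correlation between successive values of xs."""
    n = len(xs) - 1
    a, b = xs[:-1], xs[1:]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5
```

Running `lag1_correlation(simulate_phase_durations())` gives a clearly positive value, unlike the zero correlation implied by independent phase durations.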

12.
The task of deciding how long sensory events seem to last is one that the human nervous system appears to perform rapidly and, for sub-second intervals, seemingly without conscious effort. That these estimates can be performed within and between multiple sensory and motor domains suggests that time perception forms one of the core, fundamental processes of our perception of the world around us. Given this significance, the current paucity in our understanding of how this process operates is surprising. One candidate mechanism for duration perception posits that duration may be mediated via a system of duration-selective 'channels', which are differentially activated depending on the match between afferent duration information and the channels' 'preferred' duration. However, this model awaits experimental validation. In the current study, we use the technique of sensory adaptation, and we present data that are well described by banks of duration channels that are limited in their bandwidth, sensory-specific, and appear to operate at a relatively early stage of visual and auditory sensory processing. Our results suggest that many of the computational principles the nervous system applies to coding visual spatial and auditory spectral information are common to its processing of temporal extent.
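A channel model of this kind can be sketched as a bank of units tuned to log duration, in which adaptation reduces the gain of channels near the adapting duration and thereby repels the decoded duration of a nearby test away from the adaptor. All tuning widths, channel spacings and gain values below are illustrative assumptions, not the parameters fitted in the study.

```python
import math

def decode_duration(test_s, adapt_s=None, n_channels=41,
                    lo=0.05, hi=5.0, bw=0.4, adapt_depth=0.5, adapt_bw=0.3):
    """Decode a test duration from a bank of log-duration channels.

    Channels have Gaussian tuning in log-duration space; adapting at
    `adapt_s` multiplicatively reduces the gain of nearby channels.
    Perceived duration is the response-weighted centroid of the channels'
    preferred log durations.  All parameters are illustrative.
    """
    log_lo, log_hi = math.log(lo), math.log(hi)
    prefs = [log_lo + i * (log_hi - log_lo) / (n_channels - 1)
             for i in range(n_channels)]
    x = math.log(test_s)
    num = den = 0.0
    for p in prefs:
        gain = 1.0
        if adapt_s is not None:
            a = math.log(adapt_s)
            # Adaptation: Gaussian dip in gain centred on the adaptor.
            gain -= adapt_depth * math.exp(-(p - a) ** 2 / (2 * adapt_bw ** 2))
        r = gain * math.exp(-(x - p) ** 2 / (2 * bw ** 2))
        num += r * p
        den += r
    return math.exp(num / den)
```

With no adaptor, a 0.5-s test decodes to roughly 0.5 s; after adapting at 0.3 s the decoded value is pushed above 0.5 s (and below it for a 0.8-s adaptor), i.e., a repulsive duration aftereffect.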

13.
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

14.
Whereas extensive neuroscientific and behavioral evidence has confirmed a role of auditory-visual integration in representing space [1-6], little is known about the role of auditory-visual integration in object perception. Although recent neuroimaging results suggest integrated auditory-visual object representations [7-11], substantiating behavioral evidence has been lacking. We demonstrated auditory-visual integration in the perception of face gender by using pure tones that are processed in low-level auditory brain areas and that lack the spectral components that characterize human vocalization. When androgynous faces were presented together with pure tones in the male fundamental-speaking-frequency range, faces were more likely to be judged as male, whereas when faces were presented with pure tones in the female fundamental-speaking-frequency range, they were more likely to be judged as female. Importantly, when participants were explicitly asked to attribute gender to these pure tones, their judgments were primarily based on relative pitch and were uncorrelated with the male and female fundamental-speaking-frequency ranges. This perceptual dissociation of absolute-frequency-based crossmodal-integration effects from relative-pitch-based explicit perception of the tones provides evidence for a sensory integration of auditory and visual signals in representing human gender. This integration probably develops because of concurrent neural processing of visual and auditory features of gender.

15.
A new and powerful procedure for determining frequency analysis in the auditory system, as evidenced by the critical band, is described. The onset time difference, delta T, needed to lateralize 30-msec tone bursts toward the leading ear was measured as a function of the frequency difference, delta F, between the burst in one ear and the burst in the other ear. When delta F was less than the critical band, threshold delta T was constant at 100 μsec or less, depending on center frequency; beyond the critical band, delta T increased with delta F. These dichotically measured critical bandwidths increased from 110 Hz at a center frequency of 500 Hz to 1100 Hz at a center frequency of 6000 Hz. They were unaffected by varying signal level from 25 to 80 dB or signal duration from 10 to 300 msec. The same critical-band values have been measured with monaural stimuli in loudness summation, masking, detection, phase perception, consonance, and so forth.

16.
The influence of stimulus duration on auditory evoked potentials (AEPs) was examined for tones varying randomly in duration, location, and frequency in an auditory selective attention task. Stimulus duration effects were isolated as duration difference waves by subtracting AEPs to short duration tones from AEPs to longer duration tones of identical location, frequency and rise time. This analysis revealed that AEP components generally increased in amplitude and decreased in latency with increments in signal duration, with evidence of longer temporal integration times for lower frequency tones. Different temporal integration functions were seen for different N1 subcomponents. The results suggest that different auditory cortical areas have different temporal integration times, and that these functions vary as a function of tone frequency.

17.
Human experience of time exhibits systematic, context-dependent deviations from clock time; for example, time is experienced differently at work than on holiday. Here we test the proposal that differences from clock time in subjective experience of time arise because time estimates are constructed by accumulating the same quantity that guides perception: salient events. Healthy human participants watched naturalistic, silent videos of up to 24 seconds in duration and estimated their duration while fMRI was acquired. We were able to reconstruct trial-by-trial biases in participants' duration reports, which reflect subjective experience of duration, purely from salient events in their visual cortex BOLD activity. By contrast, salient events in neither of two control regions (auditory and somatosensory cortex) were predictive of duration biases. These results held despite being able to (trivially) predict clock time from all three brain areas. Our results reveal that the information arising during perceptual processing of a dynamic environment provides a sufficient basis for reconstructing human subjective time duration.
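The accumulation idea tested here, that subjective duration is built from the number of salient perceptual changes rather than read off a clock, can be caricatured in a few lines. The frame representation, the change threshold and the events-to-seconds calibration constant are all assumptions for illustration, not the study's reconstruction method.

```python
def count_salient_events(frames, threshold):
    """Count frame-to-frame changes whose magnitude exceeds `threshold`.

    `frames` is a sequence of equal-length feature vectors (e.g. coarse
    intensity values); an 'event' is a super-threshold change between
    successive frames.
    """
    events = 0
    for prev, cur in zip(frames, frames[1:]):
        change = sum(abs(a - b) for a, b in zip(prev, cur))
        if change > threshold:
            events += 1
    return events

def estimated_duration(frames, threshold, seconds_per_event):
    """Map accumulated salient events to a duration report.

    Under this scheme a busy stimulus (many salient changes) is judged
    longer than a quiet one of identical clock duration.
    """
    return count_salient_events(frames, threshold) * seconds_per_event
```

Two equally long frame sequences, one changing and one static, thus yield different subjective-duration estimates even though their clock time is identical.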

18.
Examining real-time cortical dynamics is crucial for understanding time perception. Using magnetoencephalography we studied auditory duration discrimination of short (<.5 s) versus long tones (>.5 s) versus a pitch control. Time-frequency analysis of event-related fields showed widespread beta-band (13-30 Hz) desynchronization during all tone presentations. Synthetic aperture magnetometry indicated automatic primarily sensorimotor responses in short and pitch conditions, with activation specific to timing in bilateral inferior frontal gyrus. In the long condition, a right lateralized network was active, including lateral prefrontal cortices, inferior frontal gyrus, supramarginal gyrus and secondary auditory areas. Activation in this network peaked just after attention to tone duration was no longer necessary, suggesting a role in sustaining representation of the interval. These data expand our understanding of time perception by revealing its complex cortical spatiotemporal signature.

19.
Mismatch negativity (MMN) and N2b were elicited during a selective dichotic-listening task in 16 young (Y), 16 middle-aged (M) and 19 elderly (E) subjects to evaluate automatic and effortful memory comparison of auditory stimuli. Sequences of standard (80%) and deviant (20%) tones were dichotically presented to subjects in two runs. In each run, subjects were instructed to give a button-press response to the deviant (target) tones in the ear designated as attended and to ignore the input to the other ear. Peak latencies, peak amplitudes and mean amplitudes were calculated for MMN and N2b components in each subject. MMN latency and amplitude were quite stable regardless of age, while N2b latency was significantly longer in M and E subjects than in Y subjects. These results are interpreted as reflecting that automatic processes of comparison in auditory memory of stimuli presented at short interstimulus intervals remain quite stable from 23 to 77 years of age; however, those requiring attentional effort decline with age.

20.

Background

Current theories of interval timing assume that humans and other animals time as if using a single, absolute stopwatch that can be stopped or reset on command. Here we evaluate the alternative view that psychological time is represented by multiple clocks, and that these clocks create separate temporal contexts by which duration is judged in a relative manner. Two predictions of the multiple-clock hypothesis were tested. First, that the multiple clocks can be manipulated (stopped and/or reset) independently. Second, that an event of a given physical duration would be perceived as having different durations in different temporal contexts, i.e., would be judged differently by each clock.

Methodology/Principal Findings

Rats were trained to time three durations (e.g., 10, 30, and 90 s). When timing was interrupted by an unexpected gap in the signal, rats reset the clock used to time the “short” duration, stopped the “medium” duration clock, and continued to run the “long” duration clock. When the duration of the gap was manipulated, the rats reset these clocks in a hierarchical order, first the “short”, then the “medium”, and finally the “long” clock. Quantitative modeling assuming re-allocation of cognitive resources in proportion to the relative duration of the gap to the multiple, simultaneously timed event durations was used to account for the results.
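The hierarchical reset pattern can be captured by a simple relative-gap rule in the spirit of the resource re-allocation model used here. The ratio thresholds below are illustrative assumptions, not the fitted parameters: each clock compares the gap to its own timed duration, so the same gap is "long" for the short clock and negligible for the long one.

```python
def clock_response(gap_s, target_s, reset_ratio=0.5, stop_ratio=0.1):
    """Decide how a single interval clock reacts to a gap, based on the
    gap's duration *relative* to the interval that clock is timing.
    The two ratio thresholds are illustrative assumptions."""
    ratio = gap_s / target_s
    if ratio >= reset_ratio:
        return "reset"   # gap is long for this clock: start timing over
    if ratio >= stop_ratio:
        return "stop"    # moderate gap: pause and hold the elapsed time
    return "run"         # negligible gap: keep timing through it

def responses_to_gap(gap_s, targets=(10.0, 30.0, 90.0)):
    """Responses of the three clocks (short, medium, long) to one gap."""
    return [clock_response(gap_s, t) for t in targets]
```

A 5-s gap then yields reset/stop/run across the 10-, 30- and 90-s clocks, matching the observed pattern, and lengthening the gap resets the clocks hierarchically, short first, then medium, then long.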

Conclusions/Significance

These results indicate that the three event durations were effectively timed by separate clocks operated independently, and that the same gap duration was judged relative to these three temporal contexts. Results suggest that the brain processes the duration of an event in a manner similar to Einstein's special relativity theory: A given time interval is registered differently by independent clocks dependent upon the context.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号