Similar Literature
20 similar records found (search time: 15 ms)
1.
IF Lin, M Kashino. PLoS ONE, 2012, 7(7): e41661
In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

2.
This study examined the effects of attention on forming perceptual units by proximity grouping and by uniform connectedness (UC). In Experiment 1 a row of three global letters defined by either proximity or UC was presented at the center of the visual field. Participants were asked to identify the letter in the middle of stimulus arrays while ignoring the flankers. The stimulus onset asynchrony (SOA) between stimulus arrays and masks varied between 180 and 500 ms. We found that responses to targets defined by proximity grouping were slower than to those defined by UC at intermediate SOAs, but there were no differences at short or long SOAs. Incongruent flankers slowed responses to targets, and this flanker compatibility effect was larger for UC-defined than for proximity-defined flankers. Experiment 2 examined the effects of spatial precueing on discrimination responses to proximity- and UC-defined targets. The advantage for targets defined by UC over targets defined by proximity grouping was greater at uncued relative to cued locations. The results suggest that the advantage for UC over proximity grouping in forming perceptual units is contingent on the stimuli not being fully attended, and that paying attention to the stimuli differentially benefits proximity grouping.

3.
In order to perceive complex visual scenes, the human perceptual system has to organize discrete entities in the visual field into chunks or perceptual units for higher-level processing. Perceptual organization is governed by Gestalt principles such as proximity, similarity, and continuity[1]. Thus spatially close objects tend to be grouped together, as do elements that are similar to one another. Grouping based on the Gestalt laws (particularly proximity) is critical for the perception of…

4.

Background

Perceived spatial intervals between successive flashes can be distorted by varying the temporal intervals between them (the “tau effect”). A previous study showed that a tau effect for visual flashes could be induced when they were accompanied by auditory beeps with varied temporal intervals (an audiovisual tau effect).

Methodology/Principal Findings

We conducted two experiments to investigate whether the audiovisual tau effect occurs in infancy. Forty-eight infants aged 5–8 months took part in this study. In Experiment 1, infants were familiarized with audiovisual stimuli consisting of three pairs of two flashes and three beeps. The onsets of the first and third pairs of flashes were respectively matched to those of the first and third beeps. The onset of the second pair of flashes was separated from that of the second beep by 150 ms. Following the familiarization phase, infants were exposed to a test stimulus composed of two vertical arrays of three static flashes with different spatial intervals. We hypothesized that if the audiovisual tau effect occurred in infancy, then infants would preferentially look at the flash array whose spatial intervals would be expected to differ from the perceived spatial intervals between the flashes they were exposed to in the familiarization phase. The results of Experiment 1 supported this hypothesis. In Experiment 2, the first and third beeps were removed from the familiarization stimuli, resulting in the disappearance of the audiovisual tau effect. This indicates that the modulation of temporal intervals among flashes by beeps was essential for the audiovisual tau effect to occur.

Conclusions/Significance

These results suggest that the cross-modal processing that underlies the audiovisual tau effect occurs even in early infancy. In particular, the results indicate that audiovisual modulation of temporal intervals emerges by 5–8 months of age.

5.
The present study examined age-related differences in multisensory integration and the effect of spatial disparity on the sound-induced flash illusion, an illusion used in previous research to assess age-related differences in multisensory integration. Prior to participation in the study, both younger and older participants demonstrated their ability to detect 1–2 visual flashes and 1–2 auditory beeps presented unimodally. After passing the pre-test, participants were presented 1–2 flashes paired with 0–2 beeps that originated from one of five speakers positioned equidistantly 100 cm from the participant. One speaker was positioned directly below the screen, two speakers were positioned 50 cm to the left and right of the center of the screen, and two more speakers were positioned 100 cm to the left and right of the center of the screen. Participants were told to report the number of flashes presented and to ignore the beeps. Both age groups showed a significant effect of the beeps on the perceived number of flashes. However, neither younger nor older individuals showed any significant effect of spatial disparity on the sound-induced flash illusion. The presence of a congruent number of beeps increased accuracy for both older and younger individuals. Reaction time data were also analyzed. As expected, older individuals showed significantly longer reaction times than younger individuals. In addition, both older and younger individuals showed a significant increase in reaction time for fusion trials, in which two flashes and one beep are perceived as a single flash, as compared to congruent single-flash trials. This increase in reaction time was not found for fission trials, in which one flash and two beeps were perceived as two flashes. This suggests that processing may differ for fission as compared to fusion illusions.

6.
In temporal ventriloquism, auditory events can illusorily attract the perceived timing of a visual onset [1-3]. We investigated whether the timing of a static sound can also influence spatio-temporal processing of visual apparent motion, induced here by visual bars alternating between opposite hemifields. Perceived direction typically depends on the relative interval in timing between visual left-right and right-left flashes (e.g., rightward motion dominating when left-to-right interflash intervals are shortest [4]). In our new multisensory condition, interflash intervals were equal, but auditory beeps could slightly lag the right flash while slightly leading the left flash, or vice versa. This auditory timing strongly influenced perceived visual motion direction, despite providing no spatial auditory motion signal whatsoever. Moreover, prolonged adaptation to such auditorily driven apparent motion produced a robust visual motion aftereffect in the opposite direction when measured in subsequent silence. Control experiments argued against accounts in terms of possible auditory grouping or possible attention capture. We suggest that the motion arises because the sounds change perceived visual timing, as we separately confirmed. Our results provide a new demonstration of multisensory influences on sensory-specific perception [5], with the timing of a static sound influencing spatio-temporal processing of visual motion direction.

7.
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target–nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
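The two-stage "time window of integration" model summarized above lends itself to a small Monte Carlo sketch: integration occurs only when the two peripheral (stage-1) processes terminate close enough in time, and integration shortens the second stage. The exponential stage-1 distributions, window width, and facilitation size below are illustrative assumptions, not the authors' fitted parameters.

```python
import random
import statistics

def simulate_rt(n_trials, soa_ms, window_ms=200.0, delta_ms=80.0,
                mu_vis=120.0, mu_aud=70.0, second_stage_ms=180.0, seed=1):
    """Monte Carlo sketch of a two-stage time-window-of-integration model.

    Stage 1: independent exponential peripheral processing times for the
    visual target and the auditory nontarget (nontarget onset shifted by SOA).
    Stage 2: a fixed central stage, shortened by `delta_ms` only when the two
    peripheral processes terminate within `window_ms` of each other.
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    rts = []
    for _ in range(n_trials):
        t_vis = rng.expovariate(1.0 / mu_vis)            # target peripheral time
        t_aud = soa_ms + rng.expovariate(1.0 / mu_aud)   # nontarget, onset at SOA
        integrated = abs(t_vis - t_aud) <= window_ms
        rts.append(t_vis + second_stage_ms - (delta_ms if integrated else 0.0))
    return statistics.mean(rts)

# Facilitation relative to a unimodal (visual-only) baseline, where no
# integration is possible: E[RT] = mean stage-1 time + second stage.
baseline = 120.0 + 180.0
for soa in (-100, -50, 0, 50, 100, 400):
    print(soa, round(baseline - simulate_rt(20000, soa), 1))
```

The key qualitative prediction reproduced here is that facilitation equals the facilitation amount times the probability of integration, so it shrinks as the nontarget is pushed far after the target, while the amount itself stays fixed.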

8.
Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory processing. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but there was no interaction with SOA grouping. Finally, relative to neutral stimuli, and across the wide range of SOAs employed, congruency led to substantially more behavioral facilitation than incongruency did to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus processing patterns that are critical for successfully navigating our complex multisensory world.

9.
Recently, many auditory BCIs have used beeps as auditory stimuli, although beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The sound of dripping water (drip drop) is a natural sound that makes humans feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve the user-friendliness of the system, and this study explored whether drip drops could serve as stimuli in an auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and likability and difficulty ratings, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (both p < 0.05, Wilcoxon signed-rank test). DP also obtained higher likability scores (p < 0.05, Wilcoxon signed-rank test), with no significant difference in rated difficulty. These results show that drip drops are reliable acoustic materials for use as stimuli in an auditory BCI system.

10.
TR Huang, T Watanabe. PLoS ONE, 2012, 7(4): e35946
Attention plays a fundamental role in visual learning and memory. One highly established principle of visual attention is that the harder a central task is, the more attentional resources are used to perform it, and the less attention is allocated to peripheral processing because of limited attention capacity. Here we show that this principle holds true in a dual-task setting but not in a paradigm of task-irrelevant perceptual learning. In Experiment 1, eight participants were asked to identify either bright or dim number targets at the screen center and to remember concurrently presented scene backgrounds. Their recognition performance for scenes paired with dim/hard targets was worse than for scenes paired with bright/easy targets. In Experiment 2, eight participants were asked to identify either bright or dim letter targets at the screen center while task-irrelevant coherent motion was concurrently presented in the background. After five days of training on letter identification, participants' motion sensitivity improved for the direction paired with hard/dim targets but not for the direction paired with easy/bright targets. Taken together, these results suggest that task-irrelevant stimuli are not subject to the attentional control mechanisms that task-relevant stimuli abide by.

11.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration at late latencies (300–340 ms), with a fronto-central topography in the ERPs, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal together with auditory stimuli of different frequencies.
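ERP studies of this kind commonly quantify audiovisual integration with the additive criterion, interaction(t) = AV(t) − [A(t) + V(t)], where a nonzero residual indicates a superadditive (or subadditive) crossmodal effect. The abstract does not state the exact analysis used, so the sketch below only illustrates the general criterion on synthetic Gaussian waveforms standing in for real ERP data.

```python
# Sketch of the additive criterion for audiovisual ERP interaction:
# interaction(t) = AV(t) - [A(t) + V(t)]. All waveforms are synthetic.
import math

def erp(peak_ms, amp, n=400):
    """Toy ERP: a Gaussian deflection peaking at `peak_ms` (1 ms samples)."""
    return [amp * math.exp(-((t - peak_ms) ** 2) / (2 * 30.0 ** 2))
            for t in range(n)]

aud = erp(100, 4.0)   # auditory-only response
vis = erp(170, 6.0)   # visual-only response
# Bimodal response: sum of unimodal responses plus an extra superadditive
# component peaking at 150 ms (the quantity the criterion should recover).
av = [a + v + i for a, v, i in zip(aud, vis, erp(150, 2.0))]

interaction = [m - (a + v) for m, a, v in zip(av, aud, vis)]
peak_latency = max(range(len(interaction)), key=lambda t: interaction[t])
print(peak_latency)  # latency (ms) of the maximal audiovisual interaction
```

In real data the same subtraction is applied per time point and electrode, and the latency windows reported in the abstract would correspond to where this residual differs reliably from zero.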

12.
In the present study, we demonstrate an audiotactile effect in which amplitude modulation of auditory feedback during voiced speech induces a throbbing sensation over the lip and laryngeal regions. Control tasks coupled with the examination of speech acoustic parameters allow us to rule out the possibility that the effect may have been due to cognitive factors or motor compensatory effects. We interpret the effect as reflecting the tight interplay between auditory and tactile modalities during vocal production.

13.
Processing of tactile stimuli requires both localizing the stimuli on the body surface and combining this information with a representation of the current posture. When tactile stimuli are applied to crossed hands, the system first assumes a prototypical (e.g. uncrossed) positioning of the limbs. Remapping to include the crossed posture occurs within about 300 ms. Since fingers have been suggested to be represented in a mainly somatotopic reference frame, we were interested in how the processing of tactile stimuli applied to the fingers would be affected by an unusual posture of the fingers. We asked participants to report the direction of movement of two tactile stimuli, applied successively to the crossed or uncrossed index and middle fingers of one hand at different inter-stimulus intervals (15 to 700 ms). Participants almost consistently reported perceiving the stimulus direction as opposite to what it was in the fingers-crossed condition, even with SOAs of 700 ms, suggesting that on average they did not incorporate the unusual relative finger positions. Our results therefore agree with the idea that, by default, the processing of tactile stimuli assumes a prototypical positioning of body parts. However, in contrast to what is generally found for tactile perception with crossed hands, performance did not improve with SOAs as long as 700 ms. This suggests that the localization of stimuli in a somatotopic reference frame and the integration of this representation with postural information are two separate processes that apply differently to the hands and fingers.

14.
Spatial frequency is a fundamental visual feature coded in primary visual cortex, relevant for perceiving textures, objects, hierarchical structures, and scenes, as well as for directing attention and eye movements. Temporal amplitude-modulation (AM) rate is a fundamental auditory feature coded in primary auditory cortex, relevant for perceiving auditory objects, scenes, and speech. Spatial frequency and temporal AM rate are thus fundamental building blocks of visual and auditory perception. Recent results suggest that crossmodal interactions are commonplace across the primary sensory cortices and that some of the underlying neural associations develop through consistent multisensory experience such as audio-visually perceiving speech, gender, and objects. We demonstrate that people consistently and absolutely (rather than relatively) match specific auditory AM rates to specific visual spatial frequencies. We further demonstrate that this crossmodal mapping allows amplitude-modulated sounds to guide attention to and modulate awareness of specific visual spatial frequencies. Additional results show that the crossmodal association is approximately linear, based on physical spatial frequency, and generalizes to tactile pulses, suggesting that the association develops through multisensory experience during manual exploration of surfaces.

15.
The thermotropic phase behavior of four members of the homologous series of dl-methyl anteisobranched phosphatidylcholines was investigated by Fourier transform infrared spectroscopy. The odd-numbered phosphatidylcholines exhibit spectral changes in two distinct temperature ranges, while their even-numbered counterparts exhibit spectral changes within only a single temperature range. The high-temperature transition observed in the odd-numbered phosphatidylcholines and the single thermotropic event characteristic of the phase behavior of their even-numbered counterparts are both identified as gel/liquid-crystalline phase transitions. The low-temperature event exhibited only by the odd-numbered phospholipids is identified as a gel/gel phase transition that involves changes in the packing mode of the acyl chain methylene groups, as well as changes in the conformation of the glycerol ester interface. These infrared spectroscopic data thus suggest that at low temperatures the odd-numbered methyl anteisobranched phosphatidylcholines form a highly ordered condensed phase similar to the Lc phases of the linear saturated n-acyl-phosphatidylcholines. A comparable condensed phase was not formed by the even-numbered anteisobranched phosphatidylcholines under similar conditions. The properties of the gel states of the even-numbered anteisoacylphosphatidylcholines were generally similar to those of the high-temperature gel states of their odd-numbered counterparts. Those gel states exhibit spectral characteristics indicative of hexagonally packed but relatively mobile acyl chains. The temperature-dependent changes in the spectral characteristics of these gel states were continuous and were not resolved into the discrete but overlapping transitions observed by differential scanning calorimetry.

16.
Y Ku, S Ohara, L Wang, FA Lenz, SS Hsiao, M Bodner, B Hong, YD Zhou. PLoS ONE, 2007, 2(8): e771
Our previous studies on scalp-recorded event-related potentials (ERPs) showed that the somatosensory N140 evoked by a tactile vibration in working memory tasks was enhanced when human subjects expected a coming visual stimulus that had been paired with the tactile stimulus. The results suggested that this enhancement represented cortical activity involved in tactile-visual crossmodal association. In the present study, we further hypothesized that the enhancement represented neural activity in somatosensory and frontal cortices during the crossmodal association. By applying independent component analysis (ICA) to the ERP data, we found independent components (ICs) located in the medial prefrontal cortex (around the anterior cingulate cortex, ACC) and the primary somatosensory cortex (SI). The activity represented by the IC in SI cortex showed enhancement in expectation of the visual stimulus. This differential activity thus suggested the participation of SI cortex in the task-related crossmodal association. Further, coherence analysis and Granger causality spectral analysis of the ICs showed that SI cortex appeared to cooperate with ACC in attention to and perception of the tactile stimulus in crossmodal association. The results of our study provide new evidence for an important idea in cortical neurophysiology: higher cognitive operations develop from the modality-specific sensory cortices (in the present study, SI cortex) that are involved in sensation and perception of various stimuli.

17.
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect between observers' key presses (actions) and sensory events. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compression of action–effect intervals (intentional binding) or subjective causality ratings, is impaired when both the participant's action and its putative visual effect are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action–effect grouping in determining sense of agency is demonstrated when the additional signal is presented in a modality identical to that of the effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.

18.
E Lugo, R Doti, J Faubert. PLoS ONE, 2008, 3(8): e2860

Background

Stochastic resonance is a nonlinear phenomenon whereby the addition of noise can improve the detection of weak stimuli. An optimal amount of added noise results in the maximum enhancement, whereas further increases in noise intensity only degrade detection or information content. The phenomenon does not occur in linear systems, where the addition of noise to either the system or the stimulus only degrades the signal quality. Stochastic resonance (SR) has been extensively studied in different physical systems and has been extended to human sensory systems, where it can be classified as unimodal, central, behavioral and, recently, crossmodal. What has not been explored, however, is the extent of this crossmodal SR in humans; for instance, whether crossmodal SR persists across different sensory systems under the same auditory noise conditions.

Methodology/Principal Findings

Using physiological and psychophysical techniques, we demonstrate that the same auditory noise can enhance the sensitivity of tactile, visual and proprioceptive system responses to weak signals. Specifically, we show that effective auditory noise significantly increased tactile sensitivity of the finger, decreased luminance and contrast visual thresholds, and significantly changed EMG recordings of the leg muscles during posture maintenance.

Conclusions/Significance

We conclude that crossmodal SR is a ubiquitous phenomenon in humans that can be interpreted within an energy and frequency model of the spontaneous activity of multisensory neurons. Initially, the energy and frequency content of the multisensory neurons' activity (supplied by the weak signals) is not enough for detection, but when the auditory noise enters the brain it generates a general activation among multisensory neurons of different regions, modifying their original activity. The result is an integrated activation that promotes sensitivity transitions, and the signals are then perceived. A physiologically plausible model for crossmodal stochastic resonance is presented.
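The core mechanism of stochastic resonance, that a subthreshold signal becomes detectable only at moderate noise levels, can be illustrated with a minimal threshold-detector simulation. This is a generic textbook sketch, not the physiological model the authors present; all parameter values are illustrative.

```python
# Minimal threshold-detector illustration of stochastic resonance:
# a weak (subthreshold) signal plus Gaussian noise crosses a hard
# detection threshold most informatively at intermediate noise levels.
import random

def detection_score(sigma, n=4000, signal=0.5, threshold=1.0, seed=7):
    """Difference in threshold-crossing rate between 'signal present'
    (+signal) and 'signal absent' (-signal) samples under noise level sigma."""
    rng = random.Random(seed)
    hits = sum(signal + rng.gauss(0.0, sigma) > threshold for _ in range(n))
    false = sum(-signal + rng.gauss(0.0, sigma) > threshold for _ in range(n))
    return (hits - false) / n

scores = {s: detection_score(s) for s in (0.05, 0.5, 5.0)}
print(scores)  # expect an inverted-U: moderate noise beats both extremes
```

With almost no noise the signal never crosses the threshold, and with excessive noise crossings occur regardless of the signal, so discriminability peaks at an intermediate noise intensity, which is the defining SR signature.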

19.
RN Lewis, RN McElhaney. Biochemistry, 1985, 24(18): 4903–4911
The thermotropic phase behavior of aqueous dispersions of 10 phosphatidylcholines containing omega-cyclohexyl-substituted acyl chains was studied by differential scanning calorimetry and 31P nuclear magnetic resonance spectroscopy. The presence of the omega-cyclohexyl group has a profound effect on the thermotropic phase behavior of these compounds in a manner dependent on whether the fatty acyl chains have odd- or even-numbered linear carbon segments. The thermotropic phase behavior of the odd-numbered phosphatidylcholines is characterized by a single heating endotherm that was shown to be a superposition of at least two structural events by calorimetric cooling experiments. 31P NMR spectroscopy also showed that the single endotherm of the odd-chain compounds is the structural equivalent of a concomitant gel-gel and gel to liquid-crystalline phase transition. The calorimetric behavior of the even-numbered phosphatidylcholines is characterized by a complex array of gel-state phenomena, in addition to the chain-melting transition, in both the heating and cooling modes. The gel states of these even-numbered compounds are characterized by a relatively greater mobility of the phosphate head group as seen by 31P NMR spectroscopy. The differences between the odd-numbered and even-numbered compounds are reflected in a pronounced odd-even alternation in the characteristic transition temperatures and enthalpies and in differences in their responses to changes in the composition of the bulk aqueous phase. Moreover, both the odd-numbered and even-numbered omega-cyclohexylphosphatidylcholines exhibit significantly lower chain-melting transition temperatures and enthalpies than do linear saturated phosphatidylcholines of comparable chain length.(ABSTRACT TRUNCATED AT 250 WORDS)

20.
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (an oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.
