Similar Documents
Found 20 similar documents (search time: 15 ms)
1.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

2.

Background

The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.

Methodology/Findings

We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations.

Conclusions/Significance

These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

3.

Background

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question.

Methodology/Principal Findings

Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Prior to learning, groups were pre-tested on a range of TOJ tasks within and beyond their group's modality, so that transfer of any learning from the trained task could be measured by post-testing on the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.

Conclusions/Significance

The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
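The temporal order discrimination thresholds tracked across training sessions in studies like this one are typically estimated by fitting a cumulative Gaussian psychometric function to the proportion of "A first" judgments at each stimulus-onset asynchrony (SOA). A minimal sketch, using invented SOAs and response rates rather than the study's data:

```python
import math

def norm_cdf(x, mu, sigma):
    # Cumulative Gaussian: the standard psychometric-function shape for TOJs.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical SOAs (ms) and proportions of "stimulus A first" responses,
# generated here from a known function so the fit can be checked;
# real values would come from the TOJ trials.
soas = [-240, -120, -60, -30, 0, 30, 60, 120, 240]
true_mu, true_sigma = 0.0, 60.0
p_first = [norm_cdf(s, true_mu, true_sigma) for s in soas]

def fit_cum_gauss(x, p):
    """Grid-search least-squares fit; returns (PSS, threshold)."""
    best = None
    for mu in range(-50, 51, 5):          # point of subjective simultaneity
        for sigma in range(20, 125, 5):   # discrimination threshold (JND)
            err = sum((norm_cdf(xi, mu, sigma) - pi) ** 2
                      for xi, pi in zip(x, p))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]

mu_hat, sigma_hat = fit_cum_gauss(soas, p_first)
print(mu_hat, sigma_hat)  # recovers 0 and 60 for this noiseless example
```

Learning in the study corresponds to the fitted threshold shrinking across sessions; in practice a finer grid or a maximum-likelihood fit would be used.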

4.
Kin recognition in Bufo scaber tadpoles: ontogenetic changes and mechanism
Ontogenetic changes in kin-recognition behavior, the effect of social environment on kin-recognition ability, and the use of visual and chemical cues in kin recognition were studied in tadpoles of Bufo scaber after rearing them with kin, in mixed groups, or in isolation from Gosner stage 12 (gastrula). Using a rectangular choice tank, the tadpoles were tested for their ability to choose between (a) familiar siblings and unfamiliar non-siblings, (b) unfamiliar siblings and familiar non-siblings, and (c) unfamiliar siblings and unfamiliar non-siblings. When tested without any stimulus groups in the end compartments of the tank, the tadpoles were randomly distributed, indicating no bias from the apparatus or procedure. In the presence of kin and non-kin in the end compartments, significantly more tadpoles spent most of their time near kin (familiar or unfamiliar) rather than near non-kin during early larval stages, up to stage 37. After stage 37 (characterized by the differentiation of toes), test tadpoles showed no preference to associate with kin, suggesting an ontogenetic shift in kin-recognition ability in B. scaber. In experiments involving selective blockade of visual or chemical cues, the test tadpoles preferentially associated near their kin on the basis of chemical rather than visual cues. These findings suggest that familiarity with siblings is not necessary for kin recognition and that kin-recognition ability is not modified by exposure to non-kin through mixed rearing. The findings for B. scaber indicate a self-referent phenotype-matching mechanism of kin recognition that is aided predominantly by chemical rather than visual cues.

5.
The visual pigments and oil droplets in the retina of the diurnal gecko Gonatodes albogularis were examined microspectrophotometrically, and the spectral sensitivity under various adapting conditions was recorded using electrophysiological responses. Three classes of visual pigments were identified, with λmax at about 542, 475, and 362 nm. Spectral sensitivity functions revealed a broad range of sensitivity, with a peak at approximately 530–540 nm. The cornea and oil droplets were found to be transparent across a range from 350–700 nm, but the lens absorbed short-wavelength light below 450 nm. Despite the filtering effect of the lens, a secondary peak in spectral sensitivity to ultraviolet wavelengths was found. These results suggest that G. albogularis does possess the visual mechanisms for discrimination of the color pattern of conspecifics based on either hue or brightness. These findings are discussed in terms of the variation in coloration and social behavior of Gonatodes. Abbreviations: ERG, electroretinogram; MSP, microspectrophotometry; UV, ultraviolet; λmax, wavelength of maximum absorbance.

6.

Background

A crucial question for understanding sentence comprehension is the openness of syntactic and semantic processes to other sources of information. Using event-related potentials in a dual task paradigm, we had previously found that sentence processing takes into consideration task-relevant sentence-external semantic, but not syntactic, information. In that study, internal and external information both varied within the same linguistic domain—either semantic or syntactic. Here we investigated whether across-domain sentence-external information would impact within-sentence processing.

Methodology

In one condition, adjectives within visually presented sentences of the structure [Det]-[Noun]-[Adjective]-[Verb] were semantically correct or incorrect. Simultaneously with the noun, auditory adjectives were presented that morphosyntactically matched or mismatched the visual adjectives with respect to gender.

Findings

As expected, semantic violations within the sentence elicited N400 and P600 components in the ERP. However, these components were not modulated by syntactic matching of the sentence-external auditory adjective. In a second condition, syntactic within-sentence correctness-variations were combined with semantic matching variations between the auditory and the visual adjective. Here, syntactic within-sentence violations elicited a LAN and a P600 that did not interact with semantic matching of the auditory adjective. However, semantic mismatching of the latter elicited a frontocentral positivity, presumably related to an increase in discourse level complexity.

Conclusion

The current findings underscore the open versus algorithmic nature of semantic and syntactic processing, respectively, during sentence comprehension.

7.
Franklin DW, So U, Burdet E, Kawato M. PLoS ONE 2007; 2(12): e1336

Background

When learning to perform a novel sensorimotor task, humans integrate multi-modal sensory feedback such as vision and proprioception in order to make the appropriate adjustments to successfully complete the task. Sensory feedback is used both during movement to control and correct the current movement, and to update the feed-forward motor command for subsequent movements. Previous work has shown that adaptation to stable dynamics is possible without visual feedback. However, it is not clear to what degree visual information during movement contributes to this learning or whether it is essential to the development of an internal model or impedance controller.

Methodology/Principal Findings

We examined the effects of the removal of visual feedback during movement on the learning of both stable and unstable dynamics in comparison with the case when both vision and proprioception are available. Subjects were able to learn to make smooth movements in both types of novel dynamics after learning with or without visual feedback. By examining the endpoint stiffness and force after learning, it could be shown that subjects adapted to both types of dynamics in the same way whether they were provided with visual feedback of their trajectory or not. The main effects of visual feedback were to increase the success rate of movements, slightly straighten the path, and significantly reduce variability near the end of the movement.

Conclusions/Significance

These findings suggest that visual feedback of the hand during movement is not necessary for the adaptation to either stable or unstable novel dynamics. Instead vision appears to be used to fine-tune corrections of hand trajectory at the end of reaching movements.
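Endpoint stiffness of the kind examined in this study is typically estimated by applying small position perturbations to the hand and regressing the measured restoring force on displacement. A one-dimensional sketch with invented numbers (the actual experiments would use 2-D perturbations and estimate a stiffness matrix):

```python
# Hypothetical perturbation data: hand displacements (m) and measured
# restoring forces (N), generated here from an assumed stiffness of 300 N/m.
displacements = [-0.008, -0.004, 0.0, 0.004, 0.008]
forces = [-300.0 * d for d in displacements]

def estimate_stiffness(x, f):
    """Least-squares slope of force vs. displacement; stiffness = -slope."""
    n = len(x)
    mx, mf = sum(x) / n, sum(f) / n
    slope = (sum((xi - mx) * (fi - mf) for xi, fi in zip(x, f))
             / sum((xi - mx) ** 2 for xi in x))
    return -slope

k_hat = estimate_stiffness(displacements, forces)
print(round(k_hat, 6))  # ~300 N/m for this noiseless example
```

Comparing such estimates across feedback conditions is what allows the conclusion that adaptation proceeded the same way with or without vision.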

8.

Background

Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning.

Methodology/Principal Findings

Participants first conducted attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the role of tracking targets and nontargets. We found that compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed together. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards.

Conclusions/Significance

These findings suggest that a demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although learning is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers had learned the motion path of each trajectory independently of the exact temporal order.

9.
A tenet of auditory scene analysis is that we can fully process only one stream of auditory information at a time. We tested this assumption in a gleaning bat, the pallid bat (Antrozous pallidus), because this bat uses echolocation for general orientation and relies heavily on prey-generated sounds to detect and locate its prey. It may therefore encounter situations in which the echolocation and passive-listening streams temporally overlap. Pallid bats were trained on a dual task in which they had to negotiate a wire array, using echolocation, and land on one of 15 speakers emitting a brief noise burst in order to obtain a food reward. They were forced to process both streams within a narrow 300 to 500 ms time window by having the noise burst triggered by the bat's initial echolocation pulses as it approached the wire array. Relative to single-task controls, echolocation and passive sound localization performance was slightly, but significantly, degraded. The bats also increased echolocation interpulse intervals during the dual task, as though attempting to reduce temporal overlap between the signals. These results suggest that the bats, like humans, have difficulty processing more than one stream of information at a time.

10.

Background

Early deafness leads to enhanced attention in the visual periphery. Yet, whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that in a complex attentional task, deaf individuals exhibit a performance advantage.

Methodology/Principal Findings

We employed the Useful Field of View (UFOV) task, which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, the comparison of deaf and hearing adults with or without sign language skills establishes that deafness, and not sign language use, drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age.

Conclusions/Significance

This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery are slowly augmented, eventually resulting in a clear behavioral advantage by pre-adolescence on a selective visual attention task.

11.

Background

In contrast to traditional views that consider smooth pursuit as a relatively automatic process, evidence has been reported for the importance of attention for accurate pursuit performance. However, the exact role that attention might play in the maintenance of pursuit remains unclear.

Methodology/Principal Findings

We analysed the neuronal activity associated with healthy subjects executing smooth pursuit eye movements (SPEM) during concurrent attentive tracking of a moving sound source that was either in phase or in antiphase with the executed eye movements. Assuming that attentional resources must be allocated to the moving sound source, the simultaneous execution of SPEM and auditory tracking in diverging directions should result in increased load on common attentional resources. By using an auditory rather than a visual stimulus as the distractor, we guaranteed that cortical activity could not be caused by conflicts between two simultaneous visual motion stimuli. Our results revealed that the smooth pursuit task with divided attention led to significantly higher activations bilaterally in the posterior parietal cortex and in lateral and medial frontal cortex, presumably containing the parietal, frontal and supplementary eye fields, respectively.

Conclusions

The additional cortical activation in these areas is apparently due to the process of dividing attention between the execution of SPEM and the covert tracking of the auditory target. On the other hand, even though attention had to be divided, attentional resources did not seem to be exhausted, since the identification of the direction of the auditory target and the quality of SPEM were unaffected by the congruence between visual and auditory motion stimuli. Finally, we found that this form of task-related attention modulated not only the cortical pursuit network in general but also modality-specific and supramodal attention regions.

12.

Background

Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

Methodology/Principal Findings

In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

Conclusions

The results indicate that 9- to 11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

13.
An A, Sun M, Wang Y, Wang F, Ding Y, Song Y. PLoS ONE 2012; 7(4): e34826

Background

Practice improves human performance in many psychophysical paradigms. This kind of improvement is thought to be the evidence of human brain plasticity. However, the changes that occur in the brain are not fully understood.

Methodology/Principal Findings

The N2pc component has previously been associated with visuo-spatial attention. In this study, we used event-related potentials (ERPs) to investigate whether the N2pc component changed during long-term visual perceptual learning. Thirteen subjects completed several days of training in an orientation discrimination task, and were given a final test 30 days later. The results showed that behavioral thresholds significantly decreased across training sessions, and this decrement was also present in the untrained visual field. ERPs showed that training significantly increased the N2pc amplitude, and this effect was maintained for up to 30 days. However, the increase in N2pc was specific to the trained visual field.

Conclusion/Significance

Training caused spatial attention to be increasingly focused on the target positions. However, this process was not transferable from the trained to the untrained visual field, which suggests that the increase in N2pc may be unnecessary for behavioral improvements in the untrained visual field.
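The N2pc measured in this entry is conventionally computed as the contralateral-minus-ipsilateral difference of posterior ERPs, averaged over roughly 200–300 ms post-stimulus. A sketch with toy waveforms standing in for recorded averages (the sampling rate, window, and amplitudes below are illustrative assumptions, not the study's parameters):

```python
import math

fs = 500                                    # sampling rate in Hz (assumed)
t = [i / fs for i in range(int(0.5 * fs))]  # 0-500 ms epoch

def toy_erp(amplitude):
    # Gaussian bump centred at 250 ms standing in for an averaged ERP.
    return [amplitude * math.exp(-((ti - 0.25) ** 2) / (2 * 0.03 ** 2))
            for ti in t]

contra = toy_erp(-2.0)  # e.g. PO7/PO8 contralateral to the target, in uV
ipsi = toy_erp(-0.5)    # same sites ipsilateral to the target

def n2pc_amplitude(contra, ipsi, t, lo=0.2, hi=0.3):
    """Mean contra-minus-ipsi difference within the lo-hi window (s)."""
    diff = [c - i for c, i in zip(contra, ipsi)]
    window = [d for d, ti in zip(diff, t) if lo <= ti <= hi]
    return sum(window) / len(window)

print(n2pc_amplitude(contra, ipsi, t))  # negative; magnitude grows with training
```

An increase in N2pc amplitude across sessions, as reported above, corresponds to this difference becoming more negative for targets in the trained visual field.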

14.
Reproductive success in male primates can be influenced by testosterone (T) and cortisol (C). We examined both hormones in wild Saguinus mystax via fecal hormone analysis. First, we aimed to characterize male hormonal status over the course of the year. Further, we tested the influence of the reproductive status of the breeding female, social instability, and intergroup encounter rates on T levels, comparing the results with predictions of the challenge hypothesis (Wingfield et al., 1990). We also tested for interindividual differences in hormonal levels, possibly related to social or breeding status. We collected data during a 12-mo study on 2 groups of moustached tamarins at the Estación Biológica Quebrada Blanco in northeastern Peru. We found fairly similar T and C levels over the course of the year for all males. Yet an elevation of T shortly after the birth of infants, during the phase of ovarian inactivity of the group's breeding female, was evident. Hormonal levels were not significantly elevated during a phase of social instability, did not correlate with intergroup encounter rates, and did not differ between breeding and nonbreeding males. Our results confirm the challenge hypothesis (Wingfield et al., 1990). The data suggest that reproductive competition in moustached tamarins is based not on endocrinological but on behavioral mechanisms, possibly combined with sperm competition.

15.
The ability to learn is universal among animals; we investigate associative learning between odors and tastants in larval Drosophila melanogaster. Because biologically important gustatory stimuli such as sugars, salts, and bitter substances have many behavioral functions, we investigate not only their reinforcing function but also their response-modulating and response-releasing functions. Concerning the response-releasing function, larvae are attracted by fructose and repelled by sodium chloride and quinine; also, fructose increases, but salt and quinine suppress, feeding. However, none of these stimuli has a nonassociative, modulatory effect on olfactory choice behavior. Finally, only fructose, but neither salt nor quinine, has a reinforcing effect in associative olfactory learning. This implies that the response-releasing, response-modulating and reinforcing functions of these tastants are dissociated at the behavioral level. These results open the door to analyzing how this dissociation is brought about at the cellular and molecular level; this should be facilitated by the cellular simplicity and genetic accessibility of the Drosophila larva.

16.
Korenyuk II. Neurophysiology 2000; 32(6): 376–382
In acute experiments on cats, we studied the impulse activity of 262 neurons of the parietal associative zone (PAZ, field 5). Among them, 129 cells [100 silent units and 29 units generating background activity (BA)] were identified as output neurons, while 133 cells with the BA were interneurons of the intrinsic cortical neuronal circuits. Electrical stimulation of the primary visual, auditory, or somatosensory cortices evoked no impulse responses in silent output PAZ neurons, while output neurons with the BA and interneurons (more than 65 and 80% of the cell units, respectively) generated clear responses (more frequently, phasic). Stimulation of the auditory and visual cortices exerted mostly inhibitory effects, while stimulation of the somatosensory cortex provided mostly excitatory influences. The ratios of neurons generating primary excitatory and inhibitory responses to stimulation of the visual, auditory, and somatic cortices were 0.3:1, 0.6:1, and 3.2:1, respectively. More than 95% of the field-5 neurons were influenced from the primary sensory zones via di- and/or polysynaptic pathways. Monosynaptic excitatory inputs from the visual cortex were identified for 3.8% of interneurons and 6.9% of output PAZ neurons; for the auditory cortical inputs, the respective figures were 1.7 and 3.5%. Monosynaptic connections with the somatic cortex were found only for 4% of the interneurons under study. It has been concluded that interaction of heteromodal signals coming to the PAZ via the corticopetal and associative inputs occurs on neurons of all the cortical layers.

17.

Objective

Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball.

Methods

Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude.

Results

Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy.

Conclusions

Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict aptitude in a visual P300 BCI. The predictor will allow for faster paradigm selection.

Significance

Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population.
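The predictor described in this entry amounts to correlating an oddball ERP feature with spelling accuracy across participants. A sketch with invented per-participant values (not the study's data), where a more negative N2 goes with higher accuracy:

```python
import math

# Hypothetical per-participant auditory-oddball N2 amplitudes (uV; more
# negative = larger component) and P300-BCI spelling accuracies (%).
n2_amplitude = [-6.0, -5.0, -4.0, -3.0, -2.0, -1.0]
accuracy = [95.0, 90.0, 85.0, 70.0, 65.0, 55.0]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

r = pearson_r(n2_amplitude, accuracy)
print(round(r, 3))  # strongly negative: larger N2 predicts higher accuracy
```

With signed amplitudes, a larger (more negative) N2 predicting higher accuracy yields a strong negative correlation, which is the pattern the study reports for the N2 feature.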

18.
Olfactory deterrents have been proposed as tree protectants against attack by bark beetles, but their development has been hindered by a lack of knowledge of host selection behavior. Among the primary tree-killing (aggressive) Dendroctonus, vision appears to be an integral part of the host selection process. We evaluated the importance of vision in host finding by D. brevicomis LeConte, and our ability to affect it by modifying the visual stimulus provided by attractant-baited multiple-funnel traps. White-painted traps caught 42% fewer D. brevicomis than black traps in California, USA (P < 0.05). Visual treatments were less effective (P < 0.0001) than olfactory disruptants (verbenone with ipsdienol), which reduced catch by about 78%. When combined, olfactory and visual disruptants resulted in 89% fewer D. brevicomis being caught, but this combination was not more effective than olfactory disruptants alone (P > 0.05). Our results demonstrate that the visual component of D. brevicomis host finding behavior can be manipulated, but that D. brevicomis may be more affected by olfactory than visual disruptants. In contrast, visual disruption is more pronounced in the southern pine beetle, Dendroctonus frontalis Zimmermann, suggesting that non-insecticidal tree protection strategies for these related species should differ.

19.
The topographical organization of the prothoracic ganglion of the cricket, Gryllus campestris L., is described from horizontal, transverse, and sagittal sections of preparations specially treated to elucidate longitudinal tracts, commissures, and areas of neuropil. These structures were compared to those reported from other insect thoracic ganglia, resulting in still further evidence for a common basic morphological pattern among insect central nervous systems. Six types of auditory interneurons, all existing as mirror-image pairs, were identified through intracellular application of the dye Lucifer yellow, and then related to several morphological patterns. Two intrasegmental neurons (ON1, ON2) are similar in location of cell bodies and course of neurites and axons; three intersegmental neurons (AN1, AN2, TN1) are likewise similar to one another. The axons of the two intrasegmental neurons cross the midline of the ganglion in the newly described omega commissure. Axons of the other four types all course within the median portion of the ventral intermediate tract and project intersegmentally. All six neuron types arborize within the ventral portion of the ring tract, the same neuropilar region in which auditory sensory neurons terminate. The ring tract is therefore considered the most important region for auditory information processing within the cricket prothoracic ganglion.

20.
Kim RS, Seitz AR, Shams L. PLoS ONE 2008; 3(1): e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号