Similar Documents
1.
Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This structure engages intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by instead creating endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralization of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need to isolate intentional from unintentional processes in order to better understand the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure, such as motion and speech.

2.
Studies of brain-behaviour interactions in the field of working memory (WM) have associated WM success with activation of a fronto-parietal network during the maintenance stage, mainly for visuo-spatial WM. Using an inter-individual differences approach, we demonstrate here the equal importance of neural dynamics during the encoding stage, in the context of verbal WM tasks, which are characterized by encoding phases of long duration and sustained attentional demands. Participants encoded and maintained 5-word lists, half of them containing an unexpected word intended to disturb WM encoding and the associated task-related attention processes. We observed that inter-individual differences in WM performance for lists containing disturbing stimuli were related to activation levels in a region previously associated with task-related attentional processing, the left intraparietal sulcus (IPS), during stimulus encoding but not maintenance; functional connectivity strength between the left IPS and lateral prefrontal cortex (PFC) further predicted WM performance. This study highlights the critical role, during WM encoding, of neural substrates involved in task-related attentional processes for predicting inter-individual differences in verbal WM performance and, more generally, provides support for attention-based models of WM.

3.
Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time-varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain, where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
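The two core measurements described above, correlation between the mouth-area trace and the acoustic envelope, and the dominant modulation frequency of each signal, can be sketched in a few lines. The signals below are synthetic stand-ins (a 4 Hz "mouth area" and a noisy, scaled "envelope"), not the study's data; the helper names are illustrative.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length signals."""
    return np.corrcoef(x, y)[0, 1]

def dominant_modulation_hz(signal, fs):
    """Frequency (Hz) of the largest spectral component after mean removal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Toy stand-ins for real measurements: a 4 Hz mouth-area trace and a
# noisy, scaled acoustic envelope that follows it.
fs = 100                                   # samples per second
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
mouth_area = 1.0 + np.sin(2 * np.pi * 4 * t)          # 4 Hz, inside the 2-7 Hz band
envelope = 0.8 * mouth_area + 0.05 * rng.standard_normal(t.size)

r = pearson_r(mouth_area, envelope)        # robust positive correlation
f = dominant_modulation_hz(mouth_area, fs) # falls in the 2-7 Hz range
```

On real recordings one would first extract the envelope (e.g., by rectifying and low-pass filtering the audio) and track the mouth area frame by frame before applying the same two measures.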

4.
Pell MD, Kotz SA. PLoS ONE 2011, 6(11): e27256
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, the data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.

5.
Research into speech perception by nonhuman animals can be crucially informative in assessing whether specific perceptual phenomena in humans have evolved to decode speech or reflect more general traits. Birds share with humans not only the capacity to use complex vocalizations for communication but also many characteristics of their underlying developmental and mechanistic processes; thus, birds are a particularly interesting group for comparative study. This review first discusses commonalities between birds and humans in the perception of speech sounds. Several psychoacoustic studies have shown striking parallels in seemingly speech-specific perceptual phenomena, such as categorical perception of voice-onset-time variation, categorization of consonants that lack phonetic invariance, and compensation for coarticulation. Such findings are often regarded as evidence for the idea that the objects of human speech perception are auditory or acoustic events rather than articulations. Next, I highlight recent research on the production side of avian communication that has revealed the existence of vocal tract filtering and articulation in birds' species-specific vocalizations, traditionally considered a hallmark of human speech production. Together, findings in birds show that many of the characteristics of human speech perception are not uniquely human, but also that a comparative approach to the question of what the objects of perception are (articulatory or auditory events) requires careful consideration of species-specific vocal production mechanisms.

6.
The musician's brain is considered a good model of brain plasticity, as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on the probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. Using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se, but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves, that is, the ongoing brain dynamics recorded as the learning takes place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of Event-Related Potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of training from music to sound stream segmentation in general.

7.
Quantifying coupled spatio-temporal dynamics of phenology and hydrology and understanding underlying processes is a fundamental challenge in ecohydrology. While variation in phenology and factors influencing it have attracted the attention of ecologists for a long time, the influence of biodiversity on coupled dynamics of phenology and hydrology across a landscape is largely untested. We measured leaf area index (L) and volumetric soil water content (θ) on a co-located spatial grid to characterize forest phenology and hydrology across a forested catchment in central Pennsylvania during 2010. We used hierarchical Bayesian modeling to quantify spatio-temporal patterns of L and θ. Our results suggest that the spatial distribution of tree species across the landscape created unique spatio-temporal patterns of L, which created patterns of water demand reflected in variable soil moisture across space and time. We found a lag of about 11 days between increase in L and decline in θ. Vegetation and soil moisture become increasingly homogenized and coupled from leaf-onset to maturity but heterogeneous and uncoupled from leaf maturity to senescence. Our results provide insight into spatio-temporal coupling between biodiversity and soil hydrology that is useful to enhance ecohydrological modeling in humid temperate forests.
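A lag like the roughly 11 days between the rise in L and the decline in θ can be estimated with a simple lagged-correlation scan. The sketch below is not the study's hierarchical Bayesian analysis; it uses synthetic daily series (a sigmoidal leaf-out and a soil-moisture decline shifted by 11 days) and a hypothetical `best_lag` helper to illustrate the idea.

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Lag (in samples) at which y correlates most strongly with x.

    Scans non-negative lags and picks the one maximizing the absolute
    Pearson correlation of the overlapping segments.
    """
    best, best_r = 0, -np.inf
    for lag in range(0, max_lag + 1):
        r = abs(np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1])
        if r > best_r:
            best, best_r = lag, r
    return best

# Toy daily series: leaf area rises sigmoidally; soil moisture declines
# in response, 11 days later.
days = np.arange(120)
leaf_area = 1.0 / (1.0 + np.exp(-(days - 40) / 5.0))   # sigmoidal leaf-out
soil_moisture = 0.4 - 0.2 * np.roll(leaf_area, 11)     # decline lags by 11 days
soil_moisture[:11] = 0.4 - 0.2 * leaf_area[0]          # pad the rolled-in head

lag_days = best_lag(leaf_area, soil_moisture, max_lag=30)  # recovers 11
```

With real field data one would detrend both series first, since shared seasonal trends can inflate correlations at every lag.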

8.
Epilepsy, affecting about 1% of the population, comprises a group of neurological disorders characterized by the periodic occurrence of seizures, which disrupt normal brain function. Despite treatment with currently available antiepileptic drugs targeting neuronal functions, one third of patients with epilepsy are pharmacoresistant. In this condition, surgical resection of the brain area generating seizures remains the only alternative treatment. Studying human epileptic tissues has contributed to the understanding of new epileptogenic mechanisms over the last 10 years. Indeed, these tissues generate spontaneous interictal epileptic discharges as well as pharmacologically induced ictal events, which can be recorded with classical electrophysiology techniques. Remarkably, multi-electrode arrays (MEAs), which are microfabricated devices embedding an array of spatially arranged microelectrodes, provide the unique opportunity to simultaneously stimulate and record field potentials, as well as action potentials of multiple neurons, from different areas of the tissue. Thus, MEA recordings offer an excellent approach to study the spatio-temporal patterns of spontaneous interictal and evoked seizure-like events and the mechanisms underlying seizure onset and propagation. Here we describe how to prepare human cortical slices from surgically resected tissue and record interictal and ictal-like events ex vivo with MEAs.

9.
The study of the production of co-speech gestures (CSGs), i.e., meaningful hand movements that often accompany speech during everyday discourse, provides an important opportunity to investigate the integration of language, action, and memory because of the semantic overlap between gesture movements and speech content. Behavioral studies of CSGs and speech suggest that they have a common base in memory and predict that overt production of both speech and CSGs would be preceded by neural activity related to memory processes. However, to date the neural correlates and timing of CSG production are still largely unknown. In the current study, we addressed these questions with magnetoencephalography and a semantic association paradigm in which participants overtly produced speech or gesture responses that were either meaningfully related to a stimulus or not. Using spectral and beamforming analyses to investigate the neural activity preceding the responses, we found a desynchronization in the beta band (15–25 Hz), which originated 900 ms prior to the onset of speech and was localized to motor and somatosensory regions in the cortex and cerebellum, as well as right inferior frontal gyrus. Beta desynchronization is often seen as an indicator of motor processing and thus reflects motor activity related to the hand movements that gestures add to speech. Furthermore, our results show oscillations in the high gamma band (50–90 Hz), which originated 400 ms prior to speech onset and were localized to the left medial temporal lobe. High gamma oscillations have previously been found to be involved in memory processes and we thus interpret them to be related to contextual association of semantic information in memory. The results of our study show that high gamma oscillations in medial temporal cortex play an important role in the binding of information in human memory during speech and CSG production.

10.
11.
Oh J, Han M, Peterson BS, Jeong J. PLoS ONE 2012, 7(4): e34871
The timing and frequency of spontaneous eyeblinking are thought to be influenced by ongoing internal cognitive or neurophysiological processes, but how precisely these processes influence the dynamics of eyeblinking is still unclear. This study aimed to better understand the functional role of eyeblinking during cognitive processes by investigating the temporal pattern of eyeblinks during the performance of attentional tasks. The timing of spontaneous eyeblinks was recorded from 28 healthy subjects during the performance of both visual and auditory versions of the Stroop task, and the temporal distributions of eyeblinks were estimated in relation to the timing of stimulus presentation and vocal response during the tasks. We found that the spontaneous eyeblink rate increased during Stroop task performance compared with the resting rate. Importantly, most subjects (17/28 during the visual Stroop, 20/28 during the auditory Stroop) were more likely to blink before a vocal response in both tasks (150-250 msec), and the remaining subjects were more likely to blink soon after the vocal response (200-300 msec), regardless of the stimulus type (congruent or incongruent) or task difficulty. These findings show that spontaneous eyeblinks are closely associated with responses during the performance of the Stroop task on a short time scale and suggest that spontaneous eyeblinks likely signal a shift in the internal cognitive or attentional state of the subjects.

12.
The ionic mechanism underlying optimal stimulus shapes that induce a neuron to fire an action potential, or spike, is relevant to understanding optimal information transmission and therapeutic stimulation in the nervous system. Here we analyze for the first time the ionic basis for stimulus optimality in the Hodgkin and Huxley model and for eliciting a spike in squid giant axons, the preparation for which the model was devised. The experimentally determined stimulus is a smoothly varying biphasic current waveform having a relatively long and shallow hyperpolarizing phase followed by a depolarizing phase of briefer duration. The hyperpolarizing phase removes a small degree of the resting level of Na+ channel inactivation. This, together with the subsequent depolarizing phase, provides a signal that is energetically more efficient for eliciting spikes than rectangular current pulses. Sodium channel inactivation is the only variable, other than the membrane potential V, that changes during the stimulus waveform. The activation variables for Na+ and K+ channels are unchanged throughout the stimulus. This result demonstrates how an optimal stimulus waveform relates to ionic dynamics and may have implications for the energy efficiency of neural excitation in many systems, including the mammalian brain.

13.
Primary progressive aphasia (PPA) is a neurodegenerative syndrome characterized by an insidious onset and gradual progression of deficits that can involve any aspect of language, including word finding, object naming, fluency, syntax, phonology and word comprehension. The initial symptoms occur in the absence of major deficits in other cognitive domains, including episodic memory, visuospatial abilities and visuoconstruction. According to recent diagnostic guidelines, PPA is typically divided into three variants: nonfluent variant PPA (also termed progressive nonfluent aphasia), semantic variant PPA (also termed semantic dementia) and logopenic/phonological variant PPA (also termed logopenic progressive aphasia). The paper describes a 79-year-old man who presented with normal motor speech and production rate, impaired single-word retrieval, and phonemic errors in spontaneous speech and confrontation naming. Confrontation naming was strongly affected by lexical frequency. He was impaired on repetition of sentences and phrases. Reading was intact for regularly spelled words but not for irregular words (surface dyslexia). Comprehension was spared at the single-word level but impaired for complex sentences. He performed within the normal range on the Dutch equivalent of the Pyramids and Palm Trees (PPT) Pictures Test, indicating that semantic processing was preserved. There was, however, a slight deficiency on the PPT Words Test, which appeals to semantic knowledge of verbal associations. His core deficit was interpreted as an inability to retrieve stored lexical-phonological information for spoken word production in spontaneous speech, confrontation naming, repetition and reading aloud.

14.

Background

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues.

Methodology/Principal Findings

Participants completed an object detection task in which they made an object-presence or -absence decision for briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect correlated positively with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery.

Conclusions/Significance

Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
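The sensitivity index d′ used in this study is a standard signal-detection measure: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, with made-up rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates from a present/absent letter-detection task:
# 84% hits, 16% false alarms gives d' of about 2.0.
dp = d_prime(0.84, 0.16)
```

In practice, rates of exactly 0 or 1 must be adjusted (e.g., with a log-linear correction) before the z-transform, which is undefined at those extremes.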

15.
Toxoplasma gondii is a eukaryotic parasite that forms latent cysts in the brain of immunocompetent individuals. The latent infection of the immune-privileged central nervous system is linked to most complications. With no drug currently available to eliminate the latent cysts in the brain of infected hosts, the consequences of neurons' long-term infection are unknown. It has long been known that T. gondii specifically differentiates into a latent form (bradyzoite) in neurons, but how the infected neuron responds to the infection remains to be elucidated. We have established a new in vitro model resulting in the production of mature bradyzoite cysts in brain cells. Using dual host and parasite RNA-seq, we characterized the dynamics of differentiation of the parasite, revealing the involvement of key pathways in this process. Moreover, we identified how the infected brain cells responded to the parasite infection, revealing the drastic changes that take place. We showed that neuronal-specific pathways are strongly affected, with synapse signalling, and glutamatergic synapse signalling in particular, being especially altered. The establishment of this new in vitro model allows the investigation of both the dynamics of parasite differentiation and the specific response of neurons to long-term infection by this parasite.

16.
The brain activity of a fully awake chimpanzee being presented with her name was investigated. Event-related potentials (ERPs) were measured for each of the following auditory stimuli: the vocal sound of the subject's own name (SON), the vocal sound of a familiar name of another group member, the vocal sound of an unfamiliar name, and a non-vocal sound. Some differences in ERP waveforms were detected between stimulus types at latencies at which P3 and Nc components are typically observed in humans. Following stimulus onset, an Nc-like negative shift at approximately 500 ms latency was observed, particularly in response to SON. Such specific ERP patterns suggest that the chimpanzee processes her name differently from other sounds.

17.

Background

Birdsong and human vocal communication are both complex behaviours that show striking similarities, mainly in their development and learning. Recent studies, however, suggest that there are also parallels in vocal production mechanisms. While it has long been thought that vocal tract filtering, as it occurs in human speech, plays only a minor role in birdsong, an increasing number of studies indicate the presence of sound filtering mechanisms in bird vocalizations as well.

Methodology/Principal Findings

By correlating high-speed X-ray cinematographic imaging of singing zebra finches (Taeniopygia guttata) with song structures, we identified beak gape and the expansion of the oropharyngeal-esophageal cavity (OEC) as potential articulators. We subsequently manipulated both structures in an experiment in which we played sound through the vocal tract of dead birds. Comparing acoustic input with acoustic output showed that OEC expansion causes an energy shift towards lower frequencies and an amplitude increase, whereas a wide beak gape emphasizes frequencies around 5 kHz and above.

Conclusion

These findings confirm that birds can modulate their song by using vocal tract filtering and demonstrate how OEC and beak gape contribute to this modulation.

18.
Language is a uniquely human trait, and questions of how and why it evolved have intrigued scientists for years. Nonhuman primates (hereafter, primates) are our closest living relatives, and their behavior can be used to estimate the capacities of our extinct ancestors. As humans and many primate species rely on vocalizations as their primary mode of communication, the vocal behavior of primates has been an obvious target for studies investigating the evolutionary roots of human speech and language. By studying the similarities and differences between human and primate vocalizations, comparative research has the potential to clarify the evolutionary processes that shaped human speech and language. This review examines some of the seminal and recent studies that contribute to our knowledge of the link between primate calls and human language and speech. We focus on three main aspects of primate vocal behavior: functional reference, call combinations, and vocal learning. Studies in these areas indicate that, despite important differences, primate vocal communication exhibits some key features characterizing human language. They also indicate, however, that some critical aspects of speech, such as vocal plasticity, are not shared with our primate cousins. We conclude that comparative research on primate vocal behavior is a very promising tool for deepening our understanding of the evolution of human speech and language, but much remains to be done, as many aspects of monkey and ape vocalizations are still largely unexplored.

19.
Traditionally, the information content of a neural response is quantified using statistics of the responses relative to stimulus onset time, with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must itself be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by archer fish retinal ganglion cells. We found that stimulus identity, “what”, can be estimated from the responses of the best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimate of stimulus onset is essential. We show that stimulus onset time, “when”, can be estimated using a linear-nonlinear readout mechanism that requires the responses of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout; however, large nerve cell populations are required to achieve sufficient accuracy.

Author Summary

In our interaction with the environment we are flooded with a stream of numerous objects and events. Our brain needs to understand the nature of these complex and rich stimuli in order to react. Research has shown ways in which ‘what’ stimulus was presented can be encoded by neural responses. However, to understand ‘what was the nature of the stimulus’, the brain needs to know ‘when’ the stimulus was presented. Here, we investigated how the onset of a visual stimulus can be signalled by the retina to higher brain regions. We used the archer fish as a framework to test the notion that the answer to the question of ‘when’ something was presented lies within the larger cell population, whereas the answer to the question of ‘what’ was presented may be found at the single-neuron level. The utility of the archer fish as a model animal stems from its remarkable ability to shoot down insects settling on the foliage above the water level, and its ability to distinguish between artificial targets. Thus, the archer fish can provide the fish equivalent of a monkey or a human that can report psychophysical decisions.

20.
Theory predicts that optimality of life-long investment in reproduction is, among other factors, driven by the variability and predictability of the resources. Similarly, during the breeding season, single resource pulses characterized by short periods and high amplitudes enable strong numerical responses in their consumers. However, it is less well established how spatio-temporal dynamics in resource supplies influence the spatio-temporal variation of consumer reproduction. We used the common vole (Microtus arvalis)—white stork (Ciconia ciconia) resource—consumer model system to test the effect of increased temporal variation and periodicity of vole population dynamics on the strength of the local numerical response of storks. We estimated variability, cycle amplitude, and periodicity (by means of direct and delayed density dependence) in 13 Czech and Polish vole populations. Cross-correlation between annual stork productivity and vole abundance, characterizing the strength of the local numerical response of storks, increased when the vole population fluctuated more and population cycles were shorter. We further show that the onset of incubation of storks was delayed during the years of higher vole abundance. We demonstrate that high reproductive flexibility of a generalist consumer in tracking the temporal dynamics of its resource is driven by the properties of the local resource dynamics and we discuss possible mechanisms behind these patterns.
