Similar Literature
20 similar articles found
1.
Low spatial frequency (SF) processing has been shown to be impaired in people with schizophrenia, but it is not clear how this varies with clinical state or illness chronicity. We compared schizophrenia patients (SCZ, n = 34), first episode psychosis patients (FEP, n = 22), and healthy controls (CON, n = 35) on a gender/facial discrimination task. Images were either unaltered (broadband spatial frequency, BSF), or had high or low SF information removed (LSF and HSF conditions, respectively). The task was performed at hospital admission and discharge for patients, and at corresponding time points for controls. Groups were matched on visual acuity. At admission, compared to their BSF performance, each group was significantly worse with low SF stimuli, and most impaired with high SF stimuli. The level of impairment at each SF did not depend on group. At discharge, the SCZ group performed more poorly in the LSF condition than the other groups, and showed the greatest degree of performance decline collapsed over HSF and LSF conditions, although the latter finding was not significant when controlling for visual acuity. Performance did not change significantly over time for any group. HSF processing was strongly related to visual acuity at both time points for all groups. We conclude the following: 1) SF processing abilities in schizophrenia are relatively stable across clinical state; 2) face processing abnormalities in SCZ are not secondary to problems processing specific SFs, but are due to other known difficulties constructing visual representations from degraded information; and 3) the relationship between HSF processing and visual acuity, along with known SCZ- and medication-related acuity reductions, and the elimination of a SCZ-related impairment after controlling for visual acuity in this study, all raise the possibility that some prior findings of impaired perception in SCZ may be secondary to acuity reductions.
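(The LSF and HSF conditions in studies of this kind are typically constructed by low- or high-pass filtering the images in the Fourier domain. The sketch below illustrates one common construction; the Gaussian transfer function and the cutoff values are illustrative assumptions, not the exact filter used by these authors.)

```python
import numpy as np

def sf_filter(image, cutoff_cpi, mode="low"):
    """Keep only low (mode='low') or high (mode='high') spatial frequencies
    of a grayscale image, via a Gaussian transfer function in the Fourier
    domain. cutoff_cpi is the cutoff in cycles per image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h          # vertical frequency axis, cycles/image
    fx = np.fft.fftfreq(w)[None, :] * w          # horizontal frequency axis, cycles/image
    lowpass = np.exp(-(fx**2 + fy**2) / (2 * cutoff_cpi**2))  # Gaussian low-pass
    transfer = lowpass if mode == "low" else 1.0 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

# Illustrative use with a stand-in image; cutoff values are hypothetical.
face = np.random.rand(256, 256)                  # placeholder for a face photograph
lsf_face = sf_filter(face, cutoff_cpi=8, mode="low")    # keep coarse structure
hsf_face = sf_filter(face, cutoff_cpi=24, mode="high")  # keep fine edges
```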

2.
Converging evidence indicates that face categorization and recognition are localized to specific brain regions. Behavioral experiments have further shown that high and low spatial frequency content contributes differently to different dimensions of face categorization: identity is conveyed mainly by low-frequency signals, gender by low and high frequencies jointly, and expression mainly by high frequencies. However, little research has addressed the representation and neural mechanisms of these spatial frequency contributions to face categorization. Taking advantage of the clinical monitoring period of epilepsy patients implanted with intracranial electrodes, we presented different types of face images while recording intracranial EEG. Using the event-related potential method, we examined changes in the waveform peaking at a latency of about 170 ms (the N170), widely regarded as a face-specific component, and used electrode response significance analysis to assess the contribution of spatial frequency to different categorization features. The results showed that the N170 latency was significantly delayed for high spatial frequency (HSF) images. When only low spatial frequency (LSF) images were presented, the N170 latency was delayed for ordinary faces but not for familiar celebrity faces. For female faces, the N170 latency was markedly later in the HSF condition than in the LSF condition, whereas waveforms evoked by male faces showed no such difference. Expression produced no differences in the N170. Electrode-based significance analysis, however, showed that more frontal electrodes participated in expression processing; for identity processing, more electrodes showed differences at low spatial frequencies, whereas gender processing was balanced across low and high frequencies. Unlike previous behavioral results, expression processing also drew a larger low-frequency contribution, and expression differences could arise as early as 114 ms. This is consistent with a cognitive model in which expression information undergoes rapid basic processing in temporo-occipital regions before being relayed to other brain areas. Thus, the contributions of low and high spatial frequency information to identity and gender likely arise in classical face-processing regions and are expressed in the N170, whereas expression information is not expressed by the N170 but is rapidly processed over a broader temporo-occipital range and then relayed to other regions such as the frontal lobe. This is the first study to use intracranial EEG to investigate the neural mechanisms of the contribution of spatial frequency to face categorization, offering a new entry point for understanding the dynamics of how the brain processes the various attributes of faces.

3.
Visual analysis of real-life scenes starts with the parallel extraction of different visual elementary features at different spatial frequencies. The global shape of the scene is mainly contained in low spatial frequencies (LSF), and the edges and borders of objects are mainly contained in high spatial frequencies (HSF). The present fMRI study investigates the effect of age on spatial frequency processing in scenes. Young and elderly participants performed a categorization task (indoor vs. outdoor) on LSF and HSF scenes. Behavioral results revealed performance degradation for elderly participants only when categorizing HSF scenes. At the cortical level, young participants exhibited retinotopic organization of spatial frequency processing, characterized by medial activation in the anterior part of the occipital lobe for LSF scenes (compared to HSF), and lateral activation in the posterior part of the occipital lobe for HSF scenes (compared to LSF). Elderly participants showed activation only in the anterior part of the occipital lobe for LSF scenes (compared to HSF), but no significant activation for HSF (compared to LSF). Furthermore, a ROI analysis revealed that the parahippocampal place area, a scene-selective region, was less activated for HSF than for LSF in elderly participants only. Comparison between groups revealed greater activation of the right inferior occipital gyrus in young than in elderly participants for HSF. Activation of temporo-parietal regions was greater in elderly participants irrespective of spatial frequency. The present findings indicate a specific deficit for low-contrast HSF scenes in normal elderly people, in association with an occipito-temporal cortex dysfunction and a functional reorganization of the categorization of filtered scenes.

4.
SD Kelly, BC Hansen, DT Clark. PLoS ONE, 2012, 7(8): e42620
Co-speech hand gestures influence language comprehension. The present experiment explored what part of the visual processing system is optimized for processing these gestures. Participants viewed short video clips of speech and gestures (e.g., a person saying "chop" or "twist" while making a chopping gesture) and had to determine whether the two modalities were congruent or incongruent. Gesture videos were designed to stimulate the parvocellular or magnocellular visual pathways by filtering out low or high spatial frequencies (HSF versus LSF) at two levels of degradation severity (moderate and severe). Participants were less accurate and slower at processing gesture and speech at severe versus moderate levels of degradation. In addition, they were slower for LSF versus HSF stimuli, and this difference was most pronounced in the severely degraded condition. However, exploratory item analyses showed that the HSF advantage was modulated by the range of motion and amount of motion energy in each video. The results suggest that hand gestures exploit a wide range of spatial frequencies, and depending on what frequencies carry the most motion energy, parvocellular or magnocellular visual pathways are maximized to quickly and optimally extract meaning.

5.
Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by or influence signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1 respectively, but no effects of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma band responses. Our results demonstrate early differences in processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors.

6.
Emotive faces elicit neural responses even when they are not consciously perceived. We used faces hybridized from spatial frequency-filtered individual stimuli to study processing of facial emotion. Employing event-related functional magnetic resonance imaging (fMRI), we show enhanced fusiform cortex responses to hybrid faces containing fearful expressions when such emotional cues are present in the low-spatial frequency (LSF) range. Critically, this effect is independent of whether subjects use LSF or high-spatial frequency (HSF) information to make gender judgments on the hybridized faces. The magnitude of this fusiform enhancement predicts behavioral slowing in response times when participants report HSF information of the hybrid stimulus in the presence of fear in the unreported LSF components. Thus, emotional modulation of a face-responsive region of fusiform is driven by the low-frequency components of the stimulus, an effect independent of subjects' reported perception but evident in an incidental measure of behavioral performance.
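(Hybrid faces of this kind are built by summing the low-frequency content of one face with the high-frequency content of another. A minimal sketch of the construction, again assuming a Gaussian filter and an illustrative cutoff rather than the study's actual parameters:)

```python
import numpy as np

def gaussian_lowpass(image, cutoff_cpi):
    """Gaussian low-pass in the Fourier domain (cutoff in cycles/image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h
    fx = np.fft.fftfreq(w)[None, :] * w
    transfer = np.exp(-(fx**2 + fy**2) / (2 * cutoff_cpi**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

# Stand-in arrays; in the study these would be fearful/neutral face photos.
fearful = np.random.rand(256, 256)
neutral = np.random.rand(256, 256)
cutoff = 8.0                                             # illustrative split point
low_part = gaussian_lowpass(fearful, cutoff)             # LSF range carries the fear cue
high_part = neutral - gaussian_lowpass(neutral, cutoff)  # HSF range from the neutral face
hybrid = low_part + high_part
```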

7.
Decoding human speech requires both perception and integration of brief, successive auditory stimuli entering the central nervous system, and the allocation of attention to language-relevant signals. This study assesses the role of attention in processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard: 100–100 Hz; deviant: 100–300 Hz) separated by a 300, 70 or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6–11 years old). In adults, an attention-related enhancement was found for all rate conditions, and laterality effects (L>R) were observed. In children, two auditory discrimination-related peaks were identified from the difference wave (deviant minus standard): an early peak (eMMN) at about 100–300 ms indexing sensory processing, and a later peak (LDN) at about 400–600 ms, thought to reflect reorientation to the deviant stimuli or “second-look” processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: the eMMN had a more frontal topography than in adults, and attention played a significantly greater role in children's rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms for further processing of complex stimuli. The differences between the eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a fuller understanding of children's responses to rapid successive auditory stimulation requires examining both peaks.
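(The MMN-family components described here are measured from the deviant-minus-standard difference wave. A minimal sketch of that computation on synthetic epochs; the sampling rate, trial counts, and effect shape are illustrative, while the 100–300 ms and 400–600 ms windows follow the abstract:)

```python
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.8, 1 / fs)           # epoch: -100 ms to 800 ms

# Synthetic single-trial epochs (trials x samples); real data would be
# segmented, artifact-rejected EEG from a fronto-central electrode.
rng = np.random.default_rng(0)
standard = rng.normal(0, 1, (200, t.size))
deviant = rng.normal(0, 1, (50, t.size))
deviant += -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.05**2))  # fake negativity at ~200 ms

diff = deviant.mean(axis=0) - standard.mean(axis=0)  # the difference wave

# Quantify the two peaks in the windows the abstract describes:
emmn_win = (t >= 0.10) & (t <= 0.30)    # eMMN: ~100-300 ms
ldn_win = (t >= 0.40) & (t <= 0.60)     # LDN:  ~400-600 ms
print("eMMN amplitude:", diff[emmn_win].min())
print("LDN amplitude:", diff[ldn_win].min())
```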

8.
Children often make letter reversal errors when first learning to read and write, even for letters whose reversed forms do not appear in normal print. However, the brain basis of such letter reversal in children learning to read is unknown. The present study compared the neuroanatomical correlates (via functional magnetic resonance imaging) and the electrophysiological correlates (via event-related potentials or ERPs) of this phenomenon in children, ages 5–12, relative to young adults. When viewing reversed letters relative to typically oriented letters, adults exhibited widespread occipital, parietal, and temporal lobe activations, including activation in the functionally localized visual word form area (VWFA) in left occipito-temporal cortex. Adults exhibited significantly greater activation than children in all of these regions; children only exhibited such activation in a limited frontal region. Similarly, on the P1 and N170 ERP components, adults exhibited significantly greater differences between typical and reversed letters than children, who failed to exhibit significant differences between typical and reversed letters. These findings indicate that adults distinguish typical and reversed letters in the early stages of specialized brain processing of print, but that children do not recognize this distinction during the early stages of processing. Specialized brain processes responsible for early stages of letter perception that distinguish between typical and reversed letters may develop slowly and remain immature even in older children who no longer produce letter reversals in their writing.

9.
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.

10.
The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information (‘what’) across different channels, temporal coincidence (‘when’) along with spatial coincidence (‘where’) provides a critical clue. However, it remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2–3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4–9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed ‘what’ and ‘when’ properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in ‘when’ processing, the binding performance reflects the central temporal limit of integrating ‘when’ and ‘what’ properties.

11.
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to depict walking). This article reviews what we know about the processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: modulation of the electrophysiological N400 component, which is sensitive to the ease of semantic integration of a word with its previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information from the two channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are specific to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.

12.
The perception of visual motion information spans processes from local motion detection to the perception of global pattern motion. Taking the neural circuitry for figure-ground discrimination by relative motion in the fly visual system as the basic framework, and using a hexagonal array of elementary motion detectors as the input layer, we constructed a simplified brain model for perceiving visual motion information and simulated the processing of motion information at each level of this neural computational model. The model correctly predicted the results of discrimination behavior experiments. The neural mechanism of spatial physiological integration is also discussed.
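(Elementary motion detectors of the kind used as this model's input layer are commonly formalized as Reichardt correlators. Below is a minimal one-dimensional sketch; the model itself uses a hexagonal two-dimensional array, and all parameters here are illustrative:)

```python
import numpy as np

def reichardt_output(signal_a, signal_b, tau, dt):
    """Minimal Reichardt correlator: each input is passed through a
    first-order low-pass filter (time constant tau, acting as a delay)
    and multiplied with the other, undelayed input; the opponent
    subtraction makes the output direction-selective."""
    alpha = dt / tau
    delayed_a = np.zeros_like(signal_a)
    delayed_b = np.zeros_like(signal_b)
    for i in range(1, len(signal_a)):
        delayed_a[i] = delayed_a[i-1] + alpha * (signal_a[i-1] - delayed_a[i-1])
        delayed_b[i] = delayed_b[i-1] + alpha * (signal_b[i-1] - delayed_b[i-1])
    return delayed_a * signal_b - delayed_b * signal_a   # opponent stage

# A moving sine grating sampled at two neighboring photoreceptors:
dt, tau = 1e-3, 20e-3
t = np.arange(0, 1, dt)
phase_lag = 0.5                              # spatial phase offset between inputs
a = np.sin(2 * np.pi * 2 * t)                # receptor 1
b = np.sin(2 * np.pi * 2 * t - phase_lag)    # receptor 2 sees the grating later
print("mean EMD response:", reichardt_output(a, b, tau, dt).mean())  # > 0 for this direction
```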

13.
A presently unresolved question within the face perception literature is whether attending to the location of a face modulates face processing (i.e. spatial attention). Opinions on this matter diverge along methodological lines – whereas neuroimaging studies have observed that the allocation of spatial attention enhances the neural response to a face, findings from behavioural paradigms suggest face processing is carried out independently of spatial attention. In the present study, we reconcile this divide by using a continuous behavioural response measure that indexes face processing at a temporal resolution not available to discrete behavioural measures (e.g. a button press). Using reaching trajectories as our response measure, we observed that although participants were able to process faces both when attended and unattended (as others have found), face processing was not impervious to attentional modulation. Attending to the face conferred clear benefits on sex-classification processes at less than 350 ms of stimulus processing time. These findings constitute the first reliable demonstration of the modulatory effects of both spatial and temporal attention on face processing within a behavioural paradigm.

14.
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources, including the reader's or hearer's knowledge about the world and even his or her current mood. However, little is known to date about how language processing in the brain is affected by the hearer's knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality, and an unidentifiable control speaker. False versus true statements engendered an N400–late-positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was observable only for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he or she has the power to bring about the state of affairs described.

15.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect depends strongly on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the participant's task was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were of low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area from 190–210 ms for 0.5 kHz auditory stimuli, from 170–200 ms for 1 kHz stimuli, from 140–200 ms for 2.5 kHz stimuli, and from 100–200 ms for 5 kHz stimuli. These findings suggest that a higher-frequency sound paired with visual stimuli may be processed or integrated earlier, even though the auditory stimuli are task-irrelevant. Furthermore, audiovisual integration at late latencies (300–340 ms), with a fronto-central topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

16.
The simultaneity of signals from different senses—such as vision and audition—is a useful cue for determining whether those signals arose from one environmental source or from more than one. To understand better the sensory mechanisms for assessing simultaneity, we measured the discrimination thresholds for time intervals marked by auditory, visual or auditory–visual stimuli, as a function of the base interval. For all conditions, both unimodal and cross-modal, the thresholds followed a characteristic ‘dipper function’ in which the lowest thresholds occurred when discriminating against a non-zero interval. The base interval yielding the lowest threshold was roughly equal to the threshold for discriminating asynchronous from synchronous presentations. Those lowest thresholds occurred at approximately 5, 15 and 75 ms for auditory, visual and auditory–visual stimuli, respectively. Thus, the mechanisms mediating performance with cross-modal stimuli are considerably slower than the mechanisms mediating performance within a particular sense. We developed a simple model with temporal filters of different time constants and showed that the model produces discrimination functions similar to the ones we observed in humans. Both for processing within a single sense, and for processing across senses, temporal perception is affected by the properties of temporal filters, the outputs of which are used to estimate time offsets, correlations between signals, and more.
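(The abstract does not specify the model's equations; the sketch below only illustrates the core idea under assumed parameters: a first-order temporal filter whose time constant limits how finely two-marker intervals can be resolved, with the fast and slow time constants loosely matched to the auditory (≈5 ms) and auditory–visual (≈75 ms) limits reported above:)

```python
import numpy as np

def filtered_response(interval_s, tau_s, dt=1e-4, dur=0.5):
    """Response of a first-order exponential temporal filter to two brief
    markers separated by interval_s. Slower filters (larger tau) blur the
    pair together, limiting how finely intervals can be resolved."""
    t = np.arange(0, dur, dt)
    stim = np.zeros_like(t)
    stim[0] = stim[int(interval_s / dt)] = 1.0       # two impulse markers
    kernel = np.exp(-t / tau_s) / tau_s              # exponential filter
    return np.convolve(stim, kernel)[: t.size] * dt

for tau in (0.005, 0.075):                           # fast vs. slow filter
    r_short = filtered_response(0.050, tau)          # 50 ms base interval
    r_long = filtered_response(0.060, tau)           # 60 ms comparison
    # Crude discriminability proxy: how different the two responses are.
    d = np.abs(r_long - r_short).max() / r_short.max()
    print(f"tau = {tau * 1000:.0f} ms -> response difference = {d:.3f}")
```

Running this shows a larger response difference for the fast filter, consistent with finer interval discrimination when the mediating filter is fast.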

17.
Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs.

18.
There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time periods of 2–3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that the phenomenon should be apparent in naturalistic visual experiences. We tested this using movie clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2–3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty of following the stimuli at the hypothesized 2–3 second scrambling condition. Moreover, this difference was the only one that could not be accounted for by low-level visual properties. This provides the first evidence that the 2–3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.
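(The temporal-scrambling manipulation amounts to shuffling fixed-length segments of a clip. A minimal sketch follows; the frame rate, the window lengths, and the integer array standing in for video frames are all assumptions for illustration:)

```python
import numpy as np

def scramble(frames, window_s, fps=25, seed=0):
    """Temporally scramble a frame sequence by shuffling fixed-length
    segments of window_s seconds each (fps is an assumed frame rate)."""
    win = max(1, int(round(window_s * fps)))
    n_seg = len(frames) // win
    order = np.random.default_rng(seed).permutation(n_seg)
    segments = [frames[i * win:(i + 1) * win] for i in order]
    segments.append(frames[n_seg * win:])     # leftover frames kept at the end
    return np.concatenate(segments)

clip = np.arange(250)               # stand-in for 10 s of video at 25 fps
within = scramble(clip, 1.0)        # scrambling within the ~2-3 s window
beyond = scramble(clip, 4.0)        # scrambling beyond it (harder to follow)
```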

19.

Background

Visual cross-modal re-organization is a neurophysiological process that occurs in deafness. The intact sensory modality of vision recruits cortical areas from the deprived sensory modality of audition. Such compensatory plasticity is documented in deaf adults and animals, and is related to deficits in speech perception performance in cochlear-implanted adults. However, it is unclear whether visual cross-modal re-organization takes place in cochlear-implanted children and whether it may be a source of variability contributing to speech and language outcomes. Thus, the aim of this study was to determine if visual cross-modal re-organization occurs in cochlear-implanted children, and whether it is related to deficits in speech perception performance.

Methods

Visual evoked potentials (VEPs) were recorded via high-density EEG in 41 normal-hearing children and 14 cochlear-implanted children, aged 5–15 years, in response to apparent motion and form change. VEP amplitude and latency, as well as source localization results, were compared between the groups to assess evidence of visual cross-modal re-organization. Finally, speech perception in background noise was correlated with the visual response in the implanted children.

Results

Distinct VEP morphological patterns were observed in both the normal-hearing and cochlear-implanted children. However, the cochlear-implanted children demonstrated larger VEP amplitudes and earlier latencies, concurrent with activation of right temporal cortex including auditory regions, suggestive of visual cross-modal re-organization. VEP N1 latency was negatively related to speech perception in background noise for children with cochlear implants.

Conclusion

Our results are among the first to describe cross-modal re-organization of auditory cortex by the visual modality in deaf children fitted with cochlear implants. Our findings suggest that, as a group, children with cochlear implants show evidence of visual cross-modal recruitment, which may be a contributing source of variability in speech perception outcomes with their implant.

20.
Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities.
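(The shunting mechanism rests on a simple biophysical fact: synaptic conductance adds to the leak, so the effective membrane time constant is tau_eff = C / (g_leak + g_syn), which divisively rescales the neuron's integration window. A minimal leaky-integrator sketch with illustrative parameters, not the paper's full spiking network or learning rule:)

```python
import numpy as np

C = 1.0          # membrane capacitance (arbitrary units)
g_leak = 0.05    # leak conductance
dt = 0.1         # Euler integration step (ms)

def membrane_trace(g_syn, current, steps=2000):
    """Euler integration of C dV/dt = -(g_leak + g_syn) V + current.
    The shunting conductance g_syn enters the decay term, so larger
    g_syn means a shorter effective integration time."""
    v = 0.0
    trace = np.empty(steps)
    for i in range(steps):
        v += dt * (-(g_leak + g_syn) * v + current) / C
        trace[i] = v
    return trace

for g_syn in (0.0, 0.05, 0.2):
    tau_eff = C / (g_leak + g_syn)           # effective time constant
    v = membrane_trace(g_syn, current=1.0)
    print(f"g_syn = {g_syn:>4}: tau_eff = {tau_eff:6.1f} ms, "
          f"steady-state V = {v[-1]:.2f}")
```

Increasing g_syn shortens tau_eff and lowers the steady-state voltage for the same input, which is the divisive rescaling the paper exploits for time-warp invariance.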
