Similar Documents
20 similar documents found
1.

Background and Purpose

Amnestic mild cognitive impairment (aMCI) is a putative prodromal stage of Alzheimer's disease (AD) characterized by deficits in episodic verbal memory. Our goal in the present study was to determine whether executive dysfunction may also be detectable in individuals diagnosed with aMCI.

Methods

This study used a hidden maze learning test to characterize component processes of visuospatial executive function and learning in a sample of 62 individuals with aMCI compared with 94 healthy controls.

Results

Relative to controls, individuals with aMCI made more exploratory/learning errors (Cohen's d = .41). Comparison of learning curves revealed that the slope between the first two of five learning trials was four times as steep for controls as for individuals with aMCI (Cohen's d = .64). Individuals with aMCI also made a significantly greater number of rule-break/error-monitoring errors across learning trials (Cohen's d = .21).
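The group differences above are reported as Cohen's d. As a minimal sketch of the standard pooled-standard-deviation formulation (the error counts below are invented for illustration, not the study's data):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical maze-learning error counts (illustration only)
amci_errors = [12, 15, 11, 14, 13]
control_errors = [9, 10, 8, 11, 9]
d = cohens_d(amci_errors, control_errors)
```

By the usual convention, d around .2 is a small effect, .5 medium, and .8 large, which puts the reported values of .21, .41, and .64 in the small-to-medium range.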

Conclusions

These results suggest that performance on a task of complex visuospatial executive function is compromised in individuals with aMCI, likely owing to reductions in initial strategy formulation during early visual learning and in the “on-line” maintenance of task rules.

2.
Ho C, Cheung SH. PLoS ONE 2011;6(12):e28814

Background

Human object recognition degrades sharply as the target object moves from central vision into peripheral vision. In particular, one's ability to recognize a peripheral target is severely impaired by the presence of flanking objects, a phenomenon known as visual crowding. Recent studies on how visual awareness of flanker existence influences crowding have shown mixed results. More importantly, it is not known whether conscious awareness of the existence of both the target and the flankers is necessary for crowding to occur.

Methodology/Principal Findings

Here we show that crowding persists even when people are completely unaware of the flankers, which are rendered invisible through the continuous flash suppression technique. Contrast threshold for identifying the orientation of a grating pattern was elevated in the flanked condition, even when the subjects reported that they were unaware of the perceptually suppressed flankers. Moreover, we find that orientation-specific adaptation is attenuated by flankers even when both the target and flankers are invisible.

Conclusions

These findings complement the suggested correlation between crowding and visual awareness. Moreover, our results demonstrate that conscious awareness and attention are not prerequisites for crowding.

3.

Background

It is well documented that East Asians differ from Westerners in conscious perception and attention. However, few studies have explored cultural differences in unconscious processes such as implicit learning.

Methodology/Principal Findings

Global-local Navon letters were adopted in a serial reaction time (SRT) task, in which Chinese and British participants were instructed to respond to global or local letters, to investigate whether culture influences what people acquire in implicit sequence learning. Our results showed that from the beginning, the British participants expressed a greater local bias in perception than the Chinese, confirming a cultural difference in perception. Further, over extended exposure, the Chinese learned the target regularity better than the British when the targets were global, indicating a global advantage for the Chinese in implicit learning. Moreover, Chinese participants acquired greater unconscious knowledge of an irrelevant regularity than British participants, indicating that the Chinese were more sensitive to contextual regularities than the British.

Conclusions/Significance

The results suggest that cultural biases can profoundly influence both what people consciously perceive and unconsciously learn.

4.

Background

In predictive spatial cueing studies, reaction times (RT) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects.

Methodology/Principal Findings

Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect.

Conclusions/Significance

Results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN was previously associated with conscious report, these results further show that entry into consciousness is delayed following invalid cues.

5.

Background

Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used.

Methodology/Principal Findings

We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position.

Conclusions/Significance

These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.

6.

Background

It is well documented that positive rather than negative moods encourage integrative processing of conscious information. However, the extent to which implicit or unconscious learning can be influenced by affective states remains unclear.

Methodology/Principal Findings

A Serial Reaction Time (SRT) task with sequence structures requiring integration over past trials was adopted to examine the effect of affective states on implicit learning. Music was used to induce and maintain positive and negative affective states. The present study showed that participants in negative rather than positive states learned less of the regularity. Moreover, the knowledge was shown by a Bayesian analysis to be largely unconscious, as participants were poor at recognizing the regularity.
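A common way to quantify implicit learning in SRT tasks like the ones described in these entries is the reaction-time difference between trials that follow the trained regularity and trials that violate it. A minimal sketch (the function name and data are hypothetical, for illustration only):

```python
from statistics import fmean

def srt_learning_score(rts_ms, follows_regularity):
    """Implicit-learning index for an SRT task: mean RT on irregular
    (rule-violating) trials minus mean RT on regular trials.
    Larger positive values indicate more sequence learning."""
    regular = [rt for rt, reg in zip(rts_ms, follows_regularity) if reg]
    irregular = [rt for rt, reg in zip(rts_ms, follows_regularity) if not reg]
    return fmean(irregular) - fmean(regular)

# Hypothetical trial data: RTs in ms, with flags marking regular trials
rts = [400, 500, 420, 510]
flags = [True, False, True, False]
score = srt_learning_score(rts, flags)  # 505 - 410 = 95.0 ms
```

A score near zero would suggest no learning of the regularity, which is the pattern the abstract reports for participants in negative affective states relative to positive ones.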

Conclusions/Significance

The results demonstrated that negative rather than positive affect inhibited implicit learning of complex structures. Our findings help explain the effects of affective states on unconscious or implicit processing.

7.

Background

Several psychophysical experiments found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed at investigating the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space by taking target eccentricity and performing hand into account.

Methodology/Principal Findings

We examined several performance measures while subjects reached, in complete darkness, towards memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently of its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, that is, a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.

Conclusions

While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.

8.
9.
Kim RS, Seitz AR, Shams L. PLoS ONE 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

10.

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (i.e., between auditory pitch and visual size, where lower frequency tones are associated with large objects and higher frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

11.

Background

Research on multisensory integration during natural tasks such as reach-to-grasp is still in its infancy. Crossmodal links between vision, proprioception and audition have been identified, but how olfaction contributes to plan and control reach-to-grasp movements has not been decisively shown. We used kinematics to explicitly test the influence of olfactory stimuli on reach-to-grasp movements.

Methodology/Principal Findings

Subjects were requested to reach towards and grasp a small or a large visual target: a small target requiring a precision grip (opposition of index finger and thumb) and a large target requiring a power grip (flexion of all digits around the object). Targets were grasped in the absence or in the presence of an odour evoking either a small or a large object, which if grasped would require a precision grip or a whole-hand grasp, respectively. When the type of grasp evoked by the odour did not coincide with that for the visual target, interference effects were evident on the kinematics of hand shaping and the level of synergies amongst fingers decreased. When the visual target and the object evoked by the odour required the same type of grasp, facilitation emerged and the intrinsic relations amongst individual fingers were maintained.

Conclusions/Significance

This study demonstrates that olfactory information is sufficiently detailed to elicit the planning of a reach-to-grasp movement suited to interacting with the evoked object. The findings offer a substantial contribution to the current debate about the multisensory nature of the sensorimotor transformations underlying grasping.

12.

Background

During sentence processing we decode the sequential combination of words, phrases or sentences according to previously learned rules. The computational mechanisms and neural correlates of these rules are still much debated. Another key issue is whether sentence processing relies solely on language-specific mechanisms or is also governed by domain-general principles.

Methodology/Principal Findings

In the present study, we investigated the relationship between sentence processing and implicit sequence learning in a dual-task paradigm in which the primary task was a non-linguistic task (the Alternating Serial Reaction Time Task, measuring probabilistic implicit sequence learning), while the secondary task was a sentence comprehension task relying on syntactic processing. We used two control conditions: a non-linguistic one (math condition) and a linguistic task (word-processing task). Here we show that sentence processing interfered with the probabilistic implicit sequence learning task, while the other two tasks did not produce a similar effect.

Conclusions/Significance

Our findings suggest that operations during sentence processing draw on resources underlying non-domain-specific probabilistic procedural learning. Furthermore, they provide a bridge between two competing frameworks of language processing: it appears that procedural and statistical models of language are not mutually exclusive, particularly for sentence processing. These results show that the implicit procedural system is engaged in sentence processing, but at the mechanistic level, language might still be based on statistical computations.

13.
Alzheimer’s disease (AD) is the most frequent cause of dementia. The clinical symptoms of AD begin with impairment of memory and executive function followed by the gradual involvement of other functions, such as language, semantic knowledge, abstract thinking, attention, and visuospatial abilities. Visuospatial function involves the identification of a stimulus and its location and can be impaired at the beginning of AD. The Visual Object and Space Perception (VOSP) battery evaluates visuospatial function, while minimizing the interference of other cognitive functions.

Objectives

To evaluate visuospatial function in early AD patients using the VOSP and determine cutoff scores to differentiate between cognitively healthy individuals and AD patients.

Methods

Thirty-one patients with mild AD and forty-four healthy elderly were evaluated using a neuropsychological battery and the VOSP.

Results

In the VOSP, the AD patients performed more poorly in all subtests examining object perception and in two subtests examining space perception (Number Location and Cube Analysis). The VOSP showed good accuracy and good correlation with tests measuring visuospatial function.

Conclusion

Visuospatial function is impaired in the early stages of AD. The VOSP is a sensitive battery for visuospatial deficits, with minimal interference from other cognitive functions.

14.

Background

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question.

Methodology/Principal Findings

Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and beyond their group's modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.
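The "reduced temporal order discrimination thresholds" above refer to the stimulus onset asynchrony (SOA) at which observers reach a criterion accuracy on the TOJ task. One simple way to estimate such a threshold from accuracy-per-SOA data is linear interpolation between the two points bracketing the criterion; a minimal sketch (function name, criterion, and data are hypothetical, and real studies typically fit a psychometric function instead):

```python
def toj_threshold(soas_ms, p_correct, criterion=0.75):
    """Estimate the SOA at which accuracy first crosses `criterion`,
    by linear interpolation between the bracketing measurements.
    Assumes p_correct is (roughly) monotonically increasing with SOA.
    Returns None if the criterion is never crossed."""
    points = list(zip(soas_ms, p_correct))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    return None

# Hypothetical accuracy data at four SOAs (ms)
threshold = toj_threshold([0, 20, 40, 80], [0.50, 0.60, 0.80, 0.95])
```

Under this measure, "learning" in the abstract corresponds to the estimated threshold shrinking across sessions, i.e., observers resolving smaller and smaller temporal offsets.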

Conclusions/Significance

The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

15.

Background

When two targets are presented in close temporal proximity amongst a rapid serial visual stream of distractors, a period of disrupted attention and attenuated awareness lasting 200–500 ms follows identification of the first target (T1). This phenomenon is known as the “attentional blink” (AB) and is generally attributed to a failure to consolidate information in visual short-term memory due to depleted or disrupted attentional resources. Previous research has shown that items presented during the AB that fail to reach conscious awareness are still processed to relatively high levels, including the level of meaning. For example, missed word stimuli have been shown to prime later targets that are closely associated words. Although these findings have been interpreted as evidence for semantic processing during the AB, closely associated words (e.g., day-night) may also rely on specific, well-worn, lexical associative links which enhance attention to the relevant target.

Methodology/Principal Findings

We used a measure of semantic distance to create prime-target pairs that are conceptually close, but have low word associations (e.g., wagon and van) and investigated priming from a distractor stimulus presented during the AB to a subsequent target (T2). The stimuli were words (concrete nouns) in Experiment 1 and the corresponding pictures of objects in Experiment 2. In both experiments, report of T2 was facilitated when this item was preceded by a semantically-related distractor.

Conclusions/Significance

This study is the first to show conclusively that conceptual information is extracted from distractor stimuli presented during a period of attenuated awareness and that this information spreads to neighbouring concepts within a semantic network.

16.

Background

Visual neglect is an attentional deficit typically resulting from parietal cortex lesion and sometimes frontal lesion. Patients fail to attend to objects and events in the visual hemifield contralateral to their lesion during visual search.

Methodology/Principal Findings

The aim of this work was to examine the effects of parietal and frontal lesion in an existing computational model of visual attention and search and simulate visual search behaviour under lesion conditions. We find that unilateral parietal lesion in this model leads to symptoms of visual neglect in simulated search scan paths, including an inhibition of return (IOR) deficit, while frontal lesion leads to milder neglect and to more severe deficits in IOR and perseveration in the scan path. During simulations of search under unilateral parietal lesion, the model's extrastriate ventral stream area exhibits lower activity for stimuli in the neglected hemifield compared to that for stimuli in the normally perceived hemifield. This could represent a computational correlate of differences observed in neuroimaging for unconscious versus conscious perception following parietal lesion.

Conclusions/Significance

Our results lead to the prediction, supported by effective connectivity evidence, that connections between the dorsal and ventral visual streams may be an important factor in the explanation of perceptual deficits in parietal lesion patients and of conscious perception in general.

17.

Background

The capacity of visual working memory (WM) is substantially limited and only a fraction of what we see is maintained as a temporary trace. The process of binding visual features has been proposed as an adaptive means of minimising information demands on WM. However the neural mechanisms underlying this process, and its modulation by task and load effects, are not well understood.

Objective

To investigate the neural correlates of feature binding and its modulation by WM load during the sequential phases of encoding, maintenance and retrieval.

Methods and Findings

18 young healthy participants performed a visuospatial WM task with independent factors of load and feature conjunction (object identity and position) in an event-related functional MRI study. During stimulus encoding, load-invariant conjunction-related activity was observed in left prefrontal cortex and left hippocampus. During maintenance, greater activity for task demands of feature conjunction versus single features, and for increased load was observed in left-sided regions of the superior occipital cortex, precuneus and superior frontal cortex. Where these effects were expressed in overlapping cortical regions, their combined effect was additive. During retrieval, however, an interaction of load and feature conjunction was observed. This modulation of feature conjunction activity under increased load was expressed through greater deactivation in medial structures identified as part of the default mode network.

Conclusions and Significance

The relationship between memory load and feature binding qualitatively differed through each phase of the WM task. Of particular interest was the interaction of these factors observed within regions of the default mode network during retrieval, which we interpret as suggesting that at low loads binding processes may be ‘automatic’, but at higher loads binding becomes a resource-intensive process leading to disengagement of activity in this network. These findings provide new insights into how feature binding operates within the capacity-limited WM system.

18.

Background

Previous research has shown that visuospatial processing requiring working memory is particularly important for balance control during standing and stepping, and that limited spatial encoding contributes to increased interference in postural control dual tasks. However, visuospatial involvement during locomotion has not been directly determined. This study examined the effects of a visuospatial cognitive task versus a nonspatial cognitive task on gait speed, smoothness and variability in older people, while controlling for task difficulty.

Methods

Thirty-six people aged ≥75 years performed three walking trials along a 20 m walkway under the following conditions: (i) an easy nonspatial task; (ii) a difficult nonspatial task; (iii) an easy visuospatial task; and (iv) a difficult visuospatial task. Gait parameters were computed from a tri-axial accelerometer attached to the sacrum. The cognitive task response times and percentage of correct answers during walking and seated trials were also computed.

Results

No significant differences in either cognitive task type error rates or response times were evident in the seated conditions, indicating equivalent task difficulty. In the walking trials, participants responded faster to the visuospatial tasks than the nonspatial tasks but at the cost of making significantly more cognitive task errors. Participants also walked slower, took shorter steps, had greater step time variability and less smooth pelvis accelerations when concurrently performing the visuospatial tasks compared with the nonspatial tasks and when performing the difficult compared with the easy cognitive tasks.

Conclusions

Compared with nonspatial cognitive tasks, visuospatial cognitive tasks led to a slower, more variable and less smooth gait pattern. These findings suggest that visuospatial processing might share common networks with locomotor control, further supporting the hypothesis that gait changes during dual task paradigms are not simply due to limited attentional resources but to competition for common networks for spatial information encoding.

19.

Background

Several task-based functional MRI (fMRI) studies have highlighted abnormal activation in specific regions involving the low-level perceptual (auditory, visual, and somato-motor) network in posttraumatic stress disorder (PTSD) patients. However, little is known about whether the functional connectivity of the low-level perceptual and higher-order cognitive (attention, central-execution, and default-mode) networks change in medication-naïve PTSD patients during the resting state.

Methods

We investigated the resting state networks (RSNs) using independent component analysis (ICA) in 18 chronic Wenchuan earthquake-related PTSD patients versus 20 healthy survivors (HSs).

Results

Compared to the HSs, PTSD patients displayed both increased and decreased functional connectivity within the salience network (SN), central executive network (CEN), default mode network (DMN), somato-motor network (SMN), auditory network (AN), and visual network (VN). Furthermore, strengthened connectivity involving the inferior temporal gyrus (ITG) and supplementary motor area (SMA) was negatively correlated with clinical severity in PTSD patients.

Limitations

Given the absence of a healthy control group that never experienced the earthquake, our results cannot be used to compare alterations between PTSD patients, physically healthy trauma survivors, and healthy controls. In addition, breathing and heart rates were not monitored, and the sample size was small. In future studies, specific task paradigms should be used to reveal perceptual impairments.

Conclusions

These findings suggest that PTSD patients have widespread deficits in both the low-level perceptual and higher-order cognitive networks. Decreased connectivity within the low-level perceptual networks was related to clinical symptoms, which may be associated with traumatic reminders causing attentional bias to negative emotion in response to threatening stimuli and resulting in emotional dysregulation.

20.

Background

Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning.

Methodology/Principal Findings

Participants first conducted attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed up. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards.

Conclusions/Significance

These findings suggest that a demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although learning is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers had learned the motion path of each trajectory independently of the exact temporal order.

