1.

Background

In contrast to traditional views that consider smooth pursuit as a relatively automatic process, evidence has been reported for the importance of attention for accurate pursuit performance. However, the exact role that attention might play in the maintenance of pursuit remains unclear.

Methodology/Principal Findings

We analysed the neuronal activity associated with healthy subjects executing smooth pursuit eye movements (SPEM) during concurrent attentive tracking of a moving sound source, which moved either in phase or in antiphase with the executed eye movements. Assuming that attentional resources must be allocated to the moving sound source, the simultaneous execution of SPEM and auditory tracking in diverging directions should increase the load on common attentional resources. By using an auditory rather than a visual stimulus as the distractor, we ensured that cortical activity could not be caused by conflicts between two simultaneous visual motion stimuli. Our results revealed that the smooth pursuit task with divided attention led to significantly higher activations bilaterally in the posterior parietal cortex and the lateral and medial frontal cortex, presumably containing the parietal, frontal and supplementary eye fields respectively.

Conclusions

The additional cortical activation in these areas is apparently due to the process of dividing attention between the execution of SPEM and the covert tracking of the auditory target. On the other hand, even though attention had to be divided, the attentional resources did not seem to be exhausted, since the identification of the direction of the auditory target and the quality of SPEM were unaffected by the congruence between visual and auditory motion stimuli. Finally, we found that this form of task-related attention modulated not only the cortical pursuit network in general but also modality-specific and supramodal attention regions.

2.

Background

Humans are able to track multiple simultaneously moving objects. A number of factors have been identified that can influence the ease with which objects can be attended and tracked. Here, we explored the possibility that object tracking abilities may be specialized for tracking biological targets such as people.

Methodology/Principal Findings

We used the Multiple Object Tracking (MOT) paradigm to explore whether the high-level biological status of the targets affects the efficiency of attentional selection and tracking. In Experiment 1, we assessed the tracking of point-light biological motion figures. As controls, we used either the same stimuli or point-light letters, presented in upright, inverted or scrambled configurations. While scrambling significantly affected performance for both letters and point-light figures, there was an effect of inversion restricted to biological motion, inverted figures being harder to track. In Experiment 2, we found that tracking performance was equivalent for natural point-light walkers and ‘moon-walkers’, whose implied direction was incongruent with their actual direction of motion. In Experiment 3, we found higher tracking accuracy for inverted faces compared with upright faces. Thus, there was a double dissociation between inversion effects for biological motion and faces, with no inversion effect for our non-biological stimuli (letters, houses).

Conclusions/Significance

MOT is sensitive to some, but not all naturalistic aspects of biological stimuli. There does not appear to be a highly specialized role for tracking people. However, MOT appears constrained by principles of object segmentation and grouping, where effectively grouped, coherent objects, but not necessarily biological objects, are tracked most successfully.

3.

Background

Observers respond more accurately to targets in visual search tasks that share properties with previously presented items, and transient attention can learn featural consistencies on a precue, irrespective of its absolute location.

Methodology/Principal Findings

We investigated whether such attentional benefits also apply to temporal consistencies. Would performance on a precued Vernier acuity discrimination task, followed by a mask, improve if the cue-lead times (CLTs; 50, 100, 150 or 200 ms) remained constant between trials compared to when they changed? The results showed that if CLTs remained constant for a few trials in a row, Vernier acuity performance gradually improved while changes in CLT from one trial to the next led to worse than average discrimination performance. The results show that transient attention can quickly adjust to temporal regularities, similarly to spatial and featural regularities. Further experiments show that this form of learning is not under voluntary control.

Conclusions/Significance

The results add to a growing literature showing how consistency in visual presentation improves visual performance, in this case temporal consistency.

4.
Kim RS, Seitz AR, Shams L. PLoS ONE. 2008;3(1):e1532

Background

Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings

Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with visual stimuli alone.

Conclusions/Significance

This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

5.

Background

The simultaneous tracking and identification of multiple moving objects encountered in everyday life requires one to correctly bind identities to objects. In the present study, we investigated the role of spatial configuration made by multiple targets when observers are asked to track multiple moving objects with distinct identities.

Methodology/Principal Findings

The overall spatial configuration made by the targets was manipulated: In the constant condition, the configuration remained as a virtual convex polygon throughout the tracking, and in the collapsed condition, one of the moving targets (critical target) crossed over an edge of the virtual polygon during tracking, destroying it. Identification performance was higher when the configuration remained intact than when it collapsed (Experiments 1a, 1b, and 2). Moreover, destroying the configuration affected the allocation of dynamic attention: the critical target captured more attention than did the other targets. However, observers were worse at identifying the critical target and were more likely to confuse it with the targets that formed the virtual crossed edge (Experiments 3–5). Experiment 6 further showed that the visual system constructs an overall configuration only by using the targets (and not the distractors); identification performance was not affected by whether the distractor violated the spatial configuration.
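The "collapsed" manipulation can be made concrete with a standard convexity test: walk the ordered target positions and check that all consecutive edge cross products share a sign. The sketch below is illustrative only; the function name and the use of 2-D screen coordinates are our assumptions, not details from the study.

```python
def is_convex(points):
    """Return True if the ordered points form a convex polygon.

    points: list of (x, y) vertices in order around the polygon.
    A target crossing over an edge of the virtual polygon flips the
    sign of one cross product, so the configuration is detected as
    no longer convex (the 'collapsed' condition).
    """
    n = len(points)
    if n < 4:
        return True  # triangles are always convex
    sign = 0
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        # z-component of the cross product of consecutive edges
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True
```

Running such a test on each frame of the target positions would label every moment of tracking as "constant" or "collapsed".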

Conclusions/Significance

In sum, these results suggest that the visual system may integrate targets (but not distractors) into a spatial configuration during multiple identity tracking, which affects the distribution of dynamic attention and the updating of identity-location binding.

6.

Background

Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it.

Methodology/Principal Findings

From a technical point of view, detecting motion boundaries for segmentation based on optic flow is a difficult task, because the flow estimated along such boundaries is generally unreliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for the detection of motion discontinuities and of occlusion regions, based on how neurons respond to spatial and temporal contrast respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions considerably improve kinetic boundary detection.
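The first mechanism, detecting motion discontinuities as spatial contrast in the flow field, can be illustrated by thresholding the local gradient magnitude of a dense optic flow field. This is a minimal sketch of the idea, not the authors' implementation; the array layout and the threshold value are assumptions.

```python
import numpy as np

def motion_discontinuities(u, v, threshold=1.0):
    """Flag pixels whose optic flow differs sharply from their neighbours.

    u, v: 2-D arrays holding the horizontal and vertical flow components.
    Returns a boolean map that is True along candidate kinetic boundaries,
    i.e. where the spatial contrast of the flow field is large.
    """
    # spatial derivatives of each flow component
    du_y, du_x = np.gradient(u)
    dv_y, dv_x = np.gradient(v)
    # magnitude of local flow change across both components
    contrast = np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)
    return contrast > threshold
```

A flow field that is uniform inside an object but different on the background yields a contrast ridge along the object contour, which is where the model's occlusion mechanism would then look for temporal evidence.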

Conclusions/Significance

A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that combining these results for motion discontinuities and object occlusion can improve object segmentation within the model; this idea could also be applied in other models of object segmentation. In addition, we discuss how the model relates to neurophysiological findings. The model was successfully tested with both artificial and real sequences, including self and object motion.

7.
Ganel T, Freud E, Chajut E, Algom D. PLoS ONE. 2012;7(4):e36253

Background

Human resolution for object size is typically determined by psychophysical methods that are based on conscious perception. In contrast, grasping of the same objects might be less conscious. It is suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and the perceptual system.

Methodology/Principal Findings

In Experiment 1, participants discriminated the size of pairs of objects once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical.

Conclusions/Significance

We conclude that human resolution is not fully tapped by perceptually determined thresholds. Grasping likely exhibits greater resolving power than people usually realize.

8.

Background

Vision provides the most salient information with regard to stimulus motion, but audition can also provide important cues that affect visual motion perception. Here, we show that sounds containing no motion or positional cues can induce illusory visual motion perception for static visual objects.

Methodology/Principal Findings

Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers of illusory motion perception. When the flash onset was synchronized to tones of alternating frequencies, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion. Furthermore, the effect lasted for at least a few days. The effect was clearly observed at the retinal position that had previously been exposed to apparent motion with tone bursts.

Conclusions/Significance

The present results indicate that a strong association between a sound sequence and visual motion is easily formed within a short period and that, after the association is formed, sounds are able to trigger visual motion perception for a static visual object.

9.

Background

Schizophrenia is associated with impairments of object perception, but it is not clear how this affects higher cognitive functions, whether the impairment is already present after recent onset of psychosis, and whether it is specific to schizophrenia-related psychosis. We therefore tested the hypothesis that, because schizophrenia is associated with impaired object perception, schizophrenia patients should differ from healthy controls in shifting attention between objects. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher-order cognitive functions, standard neuropsychological tests were also administered.

Method

Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects.

Results

Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group.

Conclusions

Deficits of object perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients differed only in having abnormally slow right-to-left visual field reorienting. Deficits of object perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed or verbal memory.

10.
Heath M, Maraj A, Godbolt B, Binsted G. PLoS ONE. 2008;3(10):e3539

Background

Previous work by our group has shown that the scaling of reach trajectories to target size is independent of obligatory awareness of that target property and that “action without awareness” can persist for up to 2000 ms of visual delay. In the present investigation we sought to determine if the ability to scale reaching trajectories to target size following a delay is related to the pre-computing of movement parameters during initial stimulus presentation or the maintenance of a sensory (i.e., visual) representation for on-demand response parameterization.

Methodology/Principal Findings

Participants completed immediate or delayed (i.e., 2000 ms) perceptual reports and reaching responses to different sized targets under non-masked and masked target conditions. For the reaching task, the limb associated with a trial (i.e., left or right) was not specified until the time of response cuing: a manipulation that prevented participants from pre-computing the effector-related parameters of their response. In terms of the immediate and delayed perceptual tasks, target size was accurately reported during non-masked trials; however, for masked trials only a chance level of accuracy was observed. For the immediate and delayed reaching tasks, movement time as well as other temporal kinematic measures (e.g., times to peak acceleration, velocity and deceleration) increased in relation to decreasing target size across non-masked and masked trials.

Conclusions/Significance

Our results demonstrate that speed-accuracy relations were observed regardless of whether participants were aware (i.e., non-masked trials) or unaware (i.e., masked trials) of target size. Moreover, the equivalent scaling of immediate and delayed reaches during masked trials indicates that a persistent sensory-based representation supports the unconscious and metrical scaling of memory-guided reaching.

11.

Background

Reactions to sensory events sometimes require quick responses, whereas at other times they require a high degree of accuracy, usually resulting in slower responses. It is important to understand whether visual processing under different response speed requirements employs different neural mechanisms.

Methodology/Principal Findings

We asked participants to classify visual patterns with different levels of detail as real-world or non-sense objects. In one condition, participants were to respond immediately, whereas in the other they responded after a delay of 1 second. As expected, participants performed more accurately in delayed response trials. This effect was pronounced for stimuli with a high level of detail. These behavioral effects were accompanied by modulations of stimulus related EEG gamma oscillations which are an electrophysiological correlate of early visual processing. In trials requiring speeded responses, early stimulus-locked oscillations discriminated real-world and non-sense objects irrespective of the level of detail. For stimuli with a higher level of detail, oscillatory power in a later time window discriminated real-world and non-sense objects irrespective of response speed requirements.

Conclusions/Significance

Thus, it seems plausible to assume that different response speed requirements trigger different dynamics of processing.

12.

Objectives

To assess positioning accuracy in otosurgery and to test the impact of the two-handed instrument holding technique and the instrument support technique on surgical precision. To test an otologic training model with optical tracking.

Study Design

In total, 14 ENT surgeons in the same department with different levels of surgical experience performed static and dynamic tasks with otologic microinstruments under simulated otosurgical conditions.

Methods

Tip motion of the microinstrument was registered in three dimensions by optical tracking during 10 different tasks simulating surgical steps such as prosthesis crimping and dissection of the middle ear using formalin-fixed temporal bone. Instrument marker trajectories were compared within groups of experienced and less experienced surgeons performing uncompensated or compensated exercises.

Results

Experienced surgeons have significantly better positioning accuracy than novice ear surgeons in terms of mean displacement values of marker trajectories. The instrument support and the two-handed instrument holding techniques significantly reduce surgeons’ tremor. The laboratory set-up presented in this study provides precise feedback for otosurgeons about their surgical skills and proved to be a useful device for otosurgical training.

Conclusions

Simple tremor compensation techniques may offer trainees the potential to improve their positioning accuracy to the level of more experienced surgeons. Training in an experimental otologic environment with optical tracking may aid acquisition of technical skills in middle ear surgery and potentially shorten the learning curve. Thus, simulated exercises of surgical steps should be integrated into the training of otosurgeons.

13.

Background

Experience can alter how objects are represented in the visual cortex. But experience can take different forms. It is unknown whether the kind of visual experience systematically alters the nature of visual cortical object representations.

Methodology/Principal Findings

We take advantage of different training regimens found to produce qualitatively different types of perceptual expertise behaviorally in order to contrast the neural changes that follow different kinds of visual experience with the same objects. Two groups of participants went through training regimens that required either subordinate-level individuation or basic-level categorization of a set of novel, artificial objects, called “Ziggerins”. fMRI activity of a region in the right fusiform gyrus increased after individuation training and was correlated with the magnitude of configural processing of the Ziggerins observed behaviorally. In contrast, categorization training caused distributed changes, with increased activity in the medial portion of the ventral occipito-temporal cortex relative to more lateral areas.

Conclusions/Significance

Our results demonstrate that the kind of experience with a category of objects can systematically influence how those objects are represented in visual cortex. The demands of prior learning experience therefore appear to be one factor determining the organization of activity patterns in visual cortex.

14.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

15.

Background

When viewing complex scenes, East Asians attend more to contexts whereas Westerners attend more to objects, reflecting cultural differences in holistic and analytic visual processing styles respectively. This eye-tracking study investigated more specific mechanisms and the robustness of these cultural biases in visual processing when salient changes in the objects and backgrounds occur in complex pictures.

Methodology/Principal Findings

Chinese Singaporean (East Asian) and Caucasian US (Western) participants passively viewed pictures containing selectively changing objects and background scenes that strongly captured participants' attention in a data-driven manner. We found that although participants from both groups responded to object changes in the pictures, there was still evidence for cultural divergence in eye-movements. The number of object fixations in the US participants was more affected by object change than in the Singapore participants. Additionally, despite the picture manipulations, US participants consistently maintained longer durations for both object and background fixations, with eye-movements that generally remained within the focal objects. In contrast, Singapore participants had shorter fixation durations with eye-movements that alternated more between objects and backgrounds.

Conclusions/Significance

The results demonstrate a robust cultural bias in visual processing even when external stimuli draw attention in an opposite manner to the cultural bias. These findings also extend previous studies by revealing more specific, but consistent, effects of culture on the different aspects of visual attention as measured by fixation duration, number of fixations, and saccades between objects and backgrounds.

16.

Background

Recently, activation-dependent structural brain plasticity in humans has been demonstrated in adults after three months of training a visuomotor skill. Learning three-ball cascade juggling was associated with a transient and highly selective increase in brain gray matter in the occipito-temporal cortex, comprising the motion-sensitive area hMT/V5 bilaterally. However, the exact time-scale on which usage-dependent structural changes occur is still unknown. A better understanding of the temporal parameters may help to elucidate to what extent this type of cortical plasticity contributes to fast-adapting cortical processes that may be relevant to learning.

Principal Findings

Using a 3 Tesla scanner and monitoring whole brain structure, we repeated and extended our original study in 20 healthy adult volunteers, focussing on the temporal aspects of the structural changes and investigating whether these changes are performance or exercise dependent. The data confirmed our earlier observation using a mean effects analysis and, in addition, showed that learning to juggle can alter gray matter in the occipito-temporal cortex as early as after 7 days of training. Neither performance nor exercise alone could explain these changes.

Conclusion

We suggest that the qualitative change (i.e. learning of a new task) is more critical for the brain to change its structure than continued training of an already-learned task.

17.

Background

Pharmacological studies suggest that cholinergic neurotransmission mediates increases in attentional effort in response to high processing load during attention demanding tasks [1].

Methodology/Principal Findings

In the present study we tested whether individual variation in CHRNA4, a gene coding for a subcomponent in α4β2 nicotinic receptors in the human brain, interacted with processing load in multiple-object tracking (MOT) and visual search (VS). We hypothesized that the impact of genotype would increase with greater processing load in the MOT task. Similarly, we predicted that genotype would influence performance under high but not low load in the VS task. Two hundred and two healthy persons (age range = 39–77, Mean = 57.5, SD = 9.4) performed the MOT task in which twelve identical circular objects moved about the display in an independent and unpredictable manner. Two to six objects were designated as targets and the remaining objects were distracters. The same observers also performed a visual search for a target letter (i.e. X or Z) presented together with five non-targets while ignoring centrally presented distracters (i.e. X, Z, or L). Targets differed from non-targets by a unique feature in the low load condition, whereas they shared features in the high load condition. CHRNA4 genotype interacted with processing load in both tasks. Homozygotes for the T allele (N = 62) had better tracking capacity in the MOT task and identified targets faster in the high load trials of the VS task.

Conclusion

The results support the hypothesis that the cholinergic system modulates attentional effort, and that common genetic variation can be used to study the molecular biology of cognition.

18.

Background

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question.

Methodology/Principal Findings

Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and beyond their group's modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.

Conclusions/Significance

The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

19.
Xue G, Mei L, Chen C, Lu ZL, Poldrack RA, Dong Q. PLoS ONE. 2010;5(10):e13204

Background

The left midfusiform and adjacent regions have been implicated in processing and memorizing familiar words, yet their role in memorizing novel characters has not been well understood.

Methodology/Principal Findings

Using functional MRI, the present study examined the hypothesis that the left midfusiform is also involved in memorizing novel characters and that spaced learning could enhance memory by enhancing left midfusiform activity during learning. Nineteen native Chinese readers were scanned while memorizing the visual form of 120 Korean characters that were novel to the subjects. Each character was repeated four times during learning. Repetition suppression was manipulated by using two different repetition schedules, massed learning and spaced learning, pseudo-randomly mixed within the same scanning session. Under the massed learning condition, the four repetitions were consecutive (with a jittered inter-repetition interval to improve the design efficiency). Under the spaced learning condition, the four repetitions were interleaved with a minimal inter-repetition lag of 6 stimuli. Spaced learning significantly improved participants' performance on the recognition memory test administered one hour after the scan. Stronger left midfusiform and inferior temporal gyrus activity during learning (summed across four repetitions) was associated with better memory of the characters, based on both within- and cross-subject analyses. Compared to massed learning, spaced learning significantly reduced neural repetition suppression and increased the overall activity in these regions, which was associated with better memory for novel characters.
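The contrast between the two repetition schedules can be sketched as follows. This is a simplified illustration that ignores the pseudo-random mixing of conditions and the jittered inter-repetition intervals used in the actual design; the function names are ours.

```python
def massed_schedule(items, repetitions=4):
    """Massed condition: each item's repetitions are consecutive."""
    return [item for item in items for _ in range(repetitions)]

def spaced_schedule(items, repetitions=4):
    """Spaced condition: a full pass through the list before any item
    repeats, so the inter-repetition lag is len(items) - 1 intervening
    stimuli, well above a minimal lag of 6 for a 120-item set."""
    return [item for _ in range(repetitions) for item in items]
```

For example, with items A and B and two repetitions, the massed order is A A B B while the spaced order is A B A B; the spaced order is what reduces repetition suppression across successive presentations of the same character.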

Conclusions/Significance

These results demonstrated a strong link between cortical activity in the left midfusiform and memory for novel characters, and thus challenge the visual word form area (VWFA) hypothesis. Our results also shed light on the neural mechanisms of the spacing effect in memorizing novel characters.

20.

Background

Vision provides the most salient information with regard to the stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally by alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flash appeared to move by means of the auditory motion when the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display where different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception.

Conclusions/Significance

These findings suggest there exist direct interactions between auditory and visual motion signals, and that there might be common neural substrates for auditory and visual motion processing.
