Similar articles (20 results)
1.
Cortical neurons are frequently tuned to several stimulus dimensions, and many cortical areas contain intercalated maps of multiple variables. Relatively little is known about how information is “read out” of these multidimensional maps. For example, how does an organism extract information relevant to the task at hand from neurons that are also tuned to other, irrelevant stimulus dimensions? We addressed this question by employing microstimulation techniques to examine the contribution of disparity-tuned neurons in the middle temporal (MT) visual area to performance on a direction discrimination task. Most MT neurons are tuned to both binocular disparity and the direction of stimulus motion, and MT contains topographic maps of both parameters. We assessed the effect of microstimulation on direction judgments after first characterizing the disparity tuning of each stimulation site. Although the disparity of the stimulus was irrelevant to the required task, we found that microstimulation effects were strongly modulated by the disparity tuning of the stimulated neurons. For two of three monkeys, microstimulation of nondisparity-selective sites produced large biases in direction judgments, whereas stimulation of disparity-selective sites had little or no effect. The binocular disparity was optimized for each stimulation site, and our result could not be explained by variations in direction tuning, response strength, or any other tuning property that we examined. When microstimulation of a disparity-tuned site did affect direction judgments, the effects tended to be stronger at the preferred disparity of a stimulation site than at the nonpreferred disparity, indicating that monkeys can selectively monitor direction columns that are best tuned to an appropriate conjunction of parameters. 
We conclude that the contribution of neurons to behavior can depend strongly upon tuning to stimulus dimensions that appear to be irrelevant to the current task, and we suggest that these findings are best explained in terms of the strategy used by animals to perform the task.

3.
E. E. Birkett, J. B. Talcott. PLoS ONE, 2012, 7(8): e42820
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.

4.
Perceptual decision making has been widely studied using tasks in which subjects are asked to discriminate a visual stimulus and instructed to report their decision with a movement. In these studies, performance is measured by assessing the accuracy of the participants’ choices as a function of the ambiguity of the visual stimulus. Typically, the reporting movement is considered a mere means of reporting the decision, with no influence on the decision-making process. However, recent studies have shown that even subtle differences in biomechanical cost between movements may influence how we select between them. Here we investigated whether this purely motor cost could also influence decisions in a perceptual discrimination task, to the detriment of accuracy. In other words, are perceptual decisions dependent only on the visual stimulus and entirely orthogonal to motor costs? We report the results of a psychophysical experiment in which human subjects were presented with a random dot motion discrimination task and asked to report the perceived motion direction using movements of different biomechanical cost. We found that the pattern of decisions exhibited a significant bias towards the movement of lower cost, even when this bias reduced performance accuracy. This strongly suggests that motor costs influence decision making in visual discrimination tasks even when their contribution is neither instructed nor beneficial.

5.
In social insects, workers perform a multitude of tasks, such as foraging, nest construction, and brood rearing, without central control of how work is allocated among individuals. It has been suggested that workers choose a task by responding to stimuli gathered from the environment. Response-threshold models assume that individuals in a colony vary in the stimulus intensity (response threshold) at which they begin to perform the corresponding task. Here we highlight the limitations of these models with respect to colony performance in task allocation. First, we show with analysis and quantitative simulations that the deterministic response-threshold model constrains the workers' behavioral flexibility under some stimulus conditions. Next, we show that the probabilistic response-threshold model fails to explain precise colony responses to varying stimuli. Both of these limitations would be detrimental to colony performance when dynamic and precise task allocation is needed. To address these problems, we propose extensions of the response-threshold model by adding variables that weigh stimuli. We test the extended response-threshold model in a foraging scenario and show in simulations that it results in an efficient task allocation. Finally, we show that response-threshold models can be formulated as artificial neural networks, which consequently provide a comprehensive framework for modeling task allocation in social insects.
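The probabilistic response-threshold rule that this abstract builds on can be sketched in a few lines. This is a minimal illustration of the standard sigmoid rule, not the authors' extended model; the function name and the exponent value are assumptions:

```python
def response_probability(stimulus: float, threshold: float, steepness: float = 2.0) -> float:
    """Probabilistic response-threshold rule: the probability that a worker
    engages in a task rises sigmoidally with stimulus intensity s,
    P(s) = s^n / (s^n + theta^n), where theta is that worker's threshold."""
    sn = stimulus ** steepness
    return sn / (sn + threshold ** steepness)

# Workers differing only in threshold respond very differently to the same
# stimulus, which is how the model distributes work without central control.
eager = response_probability(stimulus=2.0, threshold=1.0)      # low threshold -> 0.8
reluctant = response_probability(stimulus=2.0, threshold=4.0)  # high threshold -> 0.2
```

When the stimulus exactly equals a worker's threshold, the rule gives P = 0.5; the extensions proposed in the paper would add stimulus-weighting variables on top of this basic rule.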

6.
Aging reduces center-surround antagonism in visual motion processing
L. R. Betts, C. P. Taylor, A. B. Sekuler, P. J. Bennett. Neuron, 2005, 45(3): 361-366
Discriminating the direction of motion of a low-contrast pattern becomes easier with increasing stimulus area. However, increasing the size of a high-contrast pattern makes it more difficult for observers to discriminate motion. This surprising result, termed spatial suppression, is thought to be mediated by a form of center-surround suppression found throughout the visual pathway. Here, we examine the counterintuitive hypothesis that aging alters such center-surround interactions in ways that improve performance in some tasks. We found that older observers required briefer stimulus durations than did younger observers to extract information about stimulus direction in conditions using large, high-contrast patterns. We suggest that this age-related improvement in motion discrimination may be linked to reduced GABAergic functioning in the senescent brain, which reduces center-surround suppression in motion-selective neurons.

7.
Our understanding of how the visual system processes motion transparency, the phenomenon by which multiple directions of motion are perceived to coexist in the same spatial region, has grown considerably in the past decade. There is compelling evidence that the process is driven by global-motion mechanisms. Consequently, although transparently moving surfaces are readily segmented over an extended space, the visual system cannot separate two motion signals that coexist in the same local region. A related issue is whether the visual system can detect transparently moving surfaces simultaneously or whether the component signals encounter a serial 'bottleneck' during their processing. Our initial results show that, at sufficiently short stimulus durations, observers cannot accurately detect two superimposed directions; yet they have no difficulty in detecting one pattern direction in noise, supporting the serial-bottleneck scenario. However, in a second experiment, the difference in performance between the two tasks disappears when the component patterns are segregated. This discrepancy between the processing of transparent and non-overlapping patterns may be a consequence of suppressed activity of global-motion mechanisms when the transparent surfaces are presented in the same depth plane. To test this explanation, we repeated our initial experiment while separating the motion components in depth. The marked improvement in performance leads us to conclude that transparent motion signals are represented simultaneously.

8.
Attention to a visual stimulus typically increases the responses of cortical neurons to that stimulus. Because many studies have shown a close relationship between the performance of individual neurons and behavioural performance of animal subjects, it is important to consider how attention affects this relationship. Measurements of behavioural and neuronal performance taken from rhesus monkeys while they performed a motion detection task with two attentional states show that attention alters the relationship between behaviour and neuronal response. Notably, attention affects the relationship differently in different cortical visual areas. This indicates that a close relationship between neuronal and behavioural performance on a given task persists over changes in attentional state only within limited regions of visual cortex.

9.
Subliminal perception studies have shown that one can objectively discriminate a stimulus without subjectively perceiving it. We show how a minimalist framework based on Signal Detection Theory and Bayesian inference can account for this dissociation, by describing subjective and objective tasks with similar decision-theoretic mechanisms. Each of these tasks relies on distinct response classes, and therefore distinct priors and decision boundaries. As a result, they may reach different conclusions. By formalizing, within the same framework, forced-choice discrimination responses, subjective visibility reports and confidence ratings, we show that this decision model suffices to account for several classical characteristics of conscious and unconscious perception. Furthermore, the model provides a set of original predictions on the nonlinear profiles of discrimination performance obtained at various levels of visibility. We successfully test one such prediction in a novel experiment: when the degree of perceptual ambiguity between two visual symbols presented at perceptual threshold is varied continuously, identification performance varies quasi-linearly when the stimulus is unseen, but in an ‘all-or-none’ manner when it is seen. The present model highlights how conscious and non-conscious decisions may correspond to distinct categorizations of the same stimulus encoded by a high-dimensional neuronal population vector.
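The signal-detection backbone of such a model is easy to make concrete. The sketch below is standard equal-variance SDT, not the authors' full Bayesian model, and the function names are mine: objective sensitivity (d′) can be positive even while a conservative criterion on the subjective-report task yields "unseen" responses.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard normal CDF

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Objective sensitivity under equal-variance SDT: d' = z(H) - z(FA).
    d' > 0 means the stimulus is objectively discriminable, even if the
    observer subjectively reports seeing nothing."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Decision criterion c = -(z(H) + z(FA)) / 2; c > 0 is conservative.
    A conservative criterion for the subjective visibility report, paired
    with a neutral one for the forced choice, reproduces the classic
    dissociation: above-chance discrimination without reported visibility."""
    return -(_z(hit_rate) + _z(fa_rate)) / 2
```

For example, hit and false-alarm rates of 0.69 and 0.31 give d′ of about 1 with a neutral criterion.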

10.
By learning to discriminate among visual stimuli, human observers can become experts at specific visual tasks. The same is true for Rhesus monkeys, the major animal model of human visual perception. Here, we systematically compare how humans and monkeys solve a simple visual task. We trained humans and monkeys to discriminate between the members of small natural-image sets. We employed the "Bubbles" procedure to determine the stimulus features used by the observers. On average, monkeys used image features drawn from a diagnostic region covering about 7% ± 2% of the images. Humans were able to use image features drawn from a much larger diagnostic region covering on average 51% ± 4% of the images. For both species, however, about 2% of the image needed to be visible within the diagnostic region on any individual trial for correct performance. We characterize the low-level image properties of the diagnostic regions and discuss individual differences among the monkeys. Our results reveal that monkeys base their behavior on confined image patches and essentially ignore a large fraction of the visual input, whereas humans are able to gather visual information with greater flexibility from large image regions.

11.

Background

Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects: a common task that allows us either to intercept moving objects or to avoid them if they pose a threat.

Methodology/Principal Findings

Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds.

Conclusions/Significance

Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.

12.
While it is known that some individuals can effectively perform two tasks simultaneously, other individuals cannot. How the brain deals with performing simultaneous tasks remains unclear. In the present study, we aimed to assess which brain areas corresponded to various phenomena in task performance. Nineteen subjects were requested to sequentially perform three blocks of tasks, including two unimodal tasks and one bimodal task. The unimodal tasks measured either visual feature binding or auditory pitch comparison, while the bimodal task required performance of the two tasks simultaneously. The functional magnetic resonance imaging (fMRI) results are compatible with previous studies showing that distinct brain areas, such as the visual cortices, frontal eye field (FEF), lateral parietal lobe (BA7), and medial and inferior frontal lobe, are involved in processing of visual unimodal tasks. In addition, the temporal lobes and Brodmann area 43 (BA43) were involved in processing of auditory unimodal tasks. These results lend support to concepts of modality-specific attention. Compared to the unimodal tasks, the bimodal task required activation of additional brain areas. Furthermore, while deactivated brain areas were related to good performance in the bimodal task, these areas were not deactivated when the subject performed well in only one of the two simultaneous tasks. These results indicate that efficient information processing does not require some brain areas to be overly active; rather, these areas need to be relatively deactivated for the subject to remain alert and perform well on two tasks simultaneously. These findings may also offer a neural basis for biofeedback in training courses, such as courses in how to perform multiple tasks simultaneously.

13.
A. R. Seitz, R. Kim, L. Shams. Current Biology, 2006, 16(14): 1422-1427
Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements of visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.

14.
Reaction time (RT) and error rate were measured as a function of stimulus duration in a luminance-discrimination reaction time task. Two patches of light with different luminance were presented to participants for a ‘short’ (150 ms) or ‘long’ (1 s) period on each trial. When the stimulus duration was ‘short’, participants responded more rapidly but with poorer discrimination performance than at the longer duration. The results suggest that different sensory responses in the visual cortices were responsible for the dependence of response speed and accuracy on stimulus duration during the luminance-discrimination reaction time task. A simple winner-take-all-type neural network model receiving transient and sustained stimulus information from the primary visual cortex successfully reproduced the RT distributions for correct responses and the error rates. Moreover, temporal spike sequences obtained from the model network closely resembled the neural activity in the monkey prefrontal or parietal area during other visual decision tasks, such as motion discrimination and oddball detection tasks.
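A toy winner-take-all race makes the speed-accuracy logic concrete. This is an illustrative sketch under assumed parameters, not the network of the paper: two accumulators integrate noisy evidence while inhibiting each other, and the first to reach threshold determines both the choice and the RT. Lowering the threshold plays the role of the short-duration condition, trading accuracy for speed.

```python
import random

def race_trial(drift_a=0.20, drift_b=0.05, noise=0.3, inhibition=0.02,
               threshold=10.0, max_steps=5000, rng=None):
    """One winner-take-all trial. Accumulator A receives the stronger
    (correct) evidence. Returns (a_won, reaction_time_in_steps)."""
    rng = rng or random.Random()
    a = b = 0.0
    for t in range(1, max_steps + 1):
        # Each accumulator integrates its drift plus Gaussian noise and is
        # suppressed in proportion to its rival's activity (floored at 0).
        a = max(0.0, a + drift_a - inhibition * b + noise * rng.gauss(0.0, 1.0))
        b = max(0.0, b + drift_b - inhibition * a + noise * rng.gauss(0.0, 1.0))
        if a >= threshold or b >= threshold:
            return a >= threshold, t
    return a > b, max_steps  # no winner: fall back to the leader
```

Simulating many trials at a high versus a low threshold reproduces the qualitative pattern in the abstract: the low-threshold regime is faster but admits more errors.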

15.
When humans detect and discriminate visual motion, some neural mechanism extracts the motion information that is embedded in the noisy spatio-temporal stimulus. We show that an ideal mechanism in a motion discrimination experiment cross-correlates the received waveform with the signals to be discriminated. If the human visual system uses such a cross-correlator mechanism, discrimination performance should depend on the cross-correlation between the two signals. Manipulations of the signals' cross-correlation using differences in the speed and phase of moving gratings produced the predicted changes in the performance of human observers. The cross-correlator's motion performance improves linearly as contrast increases and human performance is similar. The ideal cross-correlator can be implemented by passing the stimulus through linear spatio-temporal filters matched to the signals. We propose that directionally selective simple cells in the striate cortex serve as matched filters during motion detection and discrimination.
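The ideal cross-correlator described here reduces to a template-matching rule: correlate the received waveform with each candidate signal and choose the larger. A minimal sketch, with 1-D sinusoids standing in for space-time gratings and all names being illustrative:

```python
import math

def cross_correlation(x, y):
    """Zero-lag cross-correlation (inner product) of two equal-length signals."""
    return sum(a * b for a, b in zip(x, y))

def ideal_discriminate(received, template_a, template_b):
    """Ideal decision rule: pick whichever template correlates best with
    the received waveform."""
    corr_a = cross_correlation(received, template_a)
    corr_b = cross_correlation(received, template_b)
    return "A" if corr_a >= corr_b else "B"

# Two 'gratings' a quarter cycle apart in phase, sampled over four full
# periods (so the templates are orthogonal), plus a low-contrast copy of A.
n, period = 64, 16
grating_a = [math.sin(2 * math.pi * k / period) for k in range(n)]
grating_b = [math.sin(2 * math.pi * k / period + math.pi / 2) for k in range(n)]
low_contrast_a = [0.25 * s for s in grating_a]
```

Because the correlation is linear in the stimulus, doubling contrast doubles the decision variable, which is consistent with the roughly linear contrast dependence the abstract reports for both the ideal observer and humans.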

17.
Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between the two flashes, the results might instead be explained by temporal prediction based on that fixed interval. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It thus became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction.
Our findings suggest that visual apparent motion changes temporal simultaneity perception and improves temporal discrimination in audiovisual processing.
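PSS and JND can be read off an empirical psychometric function without curve fitting. The sketch below is illustrative, not the authors' analysis: it takes the SOA at 50% "visual first" responses as the PSS and half the 25-75% interquartile range as the JND, interpolating linearly between measured points.

```python
def pss_and_jnd(soas, p_visual_first):
    """Estimate PSS and JND from a temporal-order-judgment psychometric
    function by linear interpolation. `soas` are stimulus-onset asynchronies
    in ms (negative = sound leads); `p_visual_first` must be non-decreasing."""
    points = list(zip(soas, p_visual_first))

    def soa_at(target):
        # Find the segment bracketing the target proportion and interpolate.
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if y0 <= target <= y1:
                return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
        raise ValueError("target proportion outside the measured range")

    pss = soa_at(0.5)                        # point of subjective simultaneity
    jnd = (soa_at(0.75) - soa_at(0.25)) / 2  # just noticeable difference
    return pss, jnd

# Symmetric example data: PSS at 0 ms, JND of 50 ms. A sound-lead shift, as
# in Experiment 1, would show up here as a negative PSS.
pss, jnd = pss_and_jnd([-100, -50, 0, 50, 100], [0.05, 0.25, 0.5, 0.75, 0.95])
```

A reduced JND, as found under apparent motion, corresponds to a steeper psychometric function around the PSS.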

18.
In human visual perception, there is evidence that different visual attributes, such as colour, form and motion, have different neural-processing latencies. Specifically, recent studies have suggested that colour changes are processed faster than motion changes. We propose that the processing latencies should not be considered as fixed quantities for different attributes, but instead depend upon attribute salience and the observer's task. We asked observers to respond to high- and low-salience colour and motion changes in three different tasks. The tasks varied from having a strong motor component to having a strong perceptual component. Increasing salience led to shorter processing times in all three tasks. We also found an interaction between task and attribute: motion was processed more quickly in reaction-time tasks, whereas colour was processed more quickly in more perceptual tasks. Our results caution against making direct comparisons between latencies for processing different visual attributes without equating salience or considering task effects. More-salient attributes are processed faster than less-salient ones, and attributes that are critical for the task are also processed more quickly.

19.
S. Taya, D. Windridge, M. Osman. PLoS ONE, 2012, 7(6): e39060
Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers' beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g. court lines) if they performed the goal-specific task. However, we found no effect of goal specificity on the major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers' beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior.

20.
This paper reports a comparison between two tasks of visual search. Two observers carried out, in separate blocks, a saccade-to-target task and a manual-target-detection task. The displays, which were identical for the two tasks, consisted of a ring of eight equally spaced Gabor patches. The target could be defined by a difference from the distractors along four possible dimensions: orientation, spatial frequency, contrast or size. These four dimensions were used as variables in separate experiments. In each experiment, performance was measured over an extensive range of values of the particular dimension. Thresholds were thus obtained for the saccade and the manual response tasks. The nature of the response was found to modify the relative visual sensitivity. For orientation differences, manual response performance was better than saccade-to-target performance. The reverse was true for spatial frequency and contrast differences, where saccade-to-target performance was better than manual response performance. We conclude that saccade selection in a search task draws on different visual information from that used for manual responding in the equivalent task. The two tasks thus differ in more than the different response systems used: the results suggest the action of different underlying neural visual mechanisms as well as different neural motor mechanisms.

