Similar Literature
20 similar documents retrieved.
1.
Blindsight refers to the rare ability of V1-damaged patients to perform visual tasks such as forced-choice discrimination, even though these patients claim not to consciously see the relevant stimuli. This striking phenomenon can be described in the formal terms of signal detection theory. (i) Blindsight patients use an unusually conservative criterion to detect targets. (ii) In discrimination tasks, their confidence ratings are low and (iii) such confidence ratings poorly predict task accuracy on a trial-by-trial basis. (iv) Their detection capacity (d') is lower than expected based on their performance in forced-choice tasks. We propose a unifying explanation that accounts for these features: that blindsight is due to a failure to represent and update the statistical information regarding the internal visual neural response, i.e. a failure in metacognition. We provide computational simulation data to demonstrate that this model can qualitatively account for the detection-theoretic features of blindsight. Because such metacognitive mechanisms are likely to depend on the prefrontal cortex, this suggests that although blindsight is typically due to damage to the primary visual cortex, the distal influence of such damage on the prefrontal cortex may be critical. Recent brain imaging evidence supports this view.
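The detection-theoretic profile listed above lends itself to a toy simulation. The sketch below (with an arbitrary d' of 1.5 and made-up criteria; it is not the authors' model) illustrates how a very conservative yes/no criterion suppresses "seen" reports while leaving criterion-free forced-choice accuracy intact.

```python
# Toy signal-detection simulation of the blindsight pattern described above:
# a conservative yes/no criterion yields very few "seen" reports, while the
# same internal responses still support above-chance forced-choice accuracy.
# Parameter values (d' = 1.5, criteria) are arbitrary illustrations, not the
# authors' fitted model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
d_prime = 1.5                          # separation of signal and noise distributions

noise = rng.normal(0.0, 1.0, n)        # internal response on target-absent trials
signal = rng.normal(d_prime, 1.0, n)   # internal response on target-present trials

for label, criterion in [("neutral observer", d_prime / 2),
                         ("conservative (blindsight-like)", 3.0)]:
    hit_rate = np.mean(signal > criterion)
    fa_rate = np.mean(noise > criterion)
    print(f"{label}: hits={hit_rate:.3f}, false alarms={fa_rate:.3f}")

# Two-interval forced choice: choose the interval with the larger response.
# No criterion is involved, so accuracy is unaffected by conservative reporting.
pc_2afc = np.mean(signal > noise)
print(f"2AFC proportion correct (criterion-free): {pc_2afc:.3f}")
```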

2.
In order to evaluate the suitability of signal detection theory methods for assessing the discriminability of foods and beverages, the discriminability of two dairy milk products that differed in fat content was measured with two detection-theoretic methods: the single-interval rating method and the same-different method. The nominal fat contents of the milk products were 0.1 and 1.6%. Measures of discriminability for three observers were derived by fitting receiver operating characteristics (ROCs) based on equal-variance normal models to the ratings of each observer, using a procedure that combined jackknifing and maximum-likelihood estimation. The fitted ROCs described the data well, indicating that the equal-variance models were appropriate for these tasks. The best-fitting estimates of d' obtained for each task were not significantly different, demonstrating that d' is a measure of sensitivity that is largely independent of the task from which it is determined. However, estimates of proportion correct obtained for each task were shown to be significantly different.
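For readers unfamiliar with the measure used here, the sketch below shows the textbook equal-variance Gaussian estimate of d' from hit and false-alarm rates; the study itself fitted full ROCs by jackknifed maximum likelihood, and the response counts in the example are invented.

```python
# Minimal equal-variance Gaussian estimate of d' from hit and false-alarm
# rates: d' = z(H) - z(F). This is the textbook single-point estimator, not
# the jackknifed maximum-likelihood ROC fitting used in the study; the counts
# below are made up for illustration.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction guards against rates of exactly 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical counts from a single-interval "higher-fat milk?" rating task.
print(d_prime(hits=70, misses=30, false_alarms=25, correct_rejections=75))
```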

3.
Pitch changes that occur in speech and melodies can be described in terms of contour patterns of rises and falls in pitch and the actual pitches at each point in time. This study investigates whether training can improve the perception of these different features. One group of ten adults trained on a pitch-contour discrimination task, a second group trained on an actual-pitch discrimination task, and a third group trained on a contour comparison task between pitch sequences and their visual analogs. A fourth group did not undergo training. It was found that training on pitch sequence comparison tasks gave rise to improvements in pitch-contour perception. This occurred irrespective of whether the training task required the discrimination of contour patterns or the actual pitch details. In contrast, none of the training tasks were found to improve the perception of the actual pitches in a sequence. The results support psychological models of pitch processing where contour processing is an initial step before actual pitch details are analyzed. Further studies are required to determine whether pitch-contour training is effective in improving speech and melody perception.

4.
The acute behavioral effects of atropine sulfate were assessed using a battery of complex food-reinforced operant tasks that included: temporal response differentiation (TRD, n = 7), delayed matching-to-sample (DMTS, n = 6), progressive ratio (PR, n = 8), incremental repeated acquisition (IRA, n = 8), and conditioned position responding (CPR, n = 8). Performance in these tasks is thought to depend primarily upon specific brain functions such as time perception, short-term memory and attention, motivation, learning, and color and position discrimination, respectively. Atropine sulfate (0.01-0.56 mg/kg iv), given 15 min before testing, produced significant dose-dependent decreases in the number of reinforcers obtained in all tasks. Response rates decreased significantly at ≥0.03 mg/kg for the learning and discrimination tasks, at ≥0.10 mg/kg for the motivation and short-term memory and attention tasks, and at ≥0.30 mg/kg for the time perception task. Response accuracies were significantly decreased at doses ≥0.10 mg/kg for the learning, discrimination, and short-term memory and attention tasks, and at ≥0.30 mg/kg for the time perception task. Thus, the order of task sensitivity to disruption by atropine is learning = color and position discrimination > time perception = short-term memory and attention = motivation (IRA = CPR > TRD = DMTS = PR). In monkeys, then, the rates of responding in the operant tasks designed to model learning and color and position discrimination were the most sensitive measures of atropine's behavioral effects. Accuracy in these same tasks was also disrupted, but at higher doses. These data support the hypothesis that cholinergic systems play a greater role in the speed (but not accuracy) of performance of our learning and discrimination tasks compared to all other tasks. Accuracy of responding in these and the short-term memory task, all of which involve the use of lights as visual stimuli, was more sensitive to disruption by atropine than in tasks that did not utilize such strong visual stimuli.

5.
The literature on the interaction between visual imagery and visual perception provides conflicting outcomes. Some studies show imagery interferes with perception whereas others show facilitation on perceptual tasks. The effects of visual imagery on a detection task were examined in six experiments. When either a bar image (Experiment 1) or an image of the letter 'l' (Experiment 3) overlapped with the targets, interference was found; however, images not overlapping the target did not affect detection (Experiments 2 and 4). Increasing the number of target locations caused the interfering effects of the image to disappear; however, there was no evidence of facilitation (Experiment 5). Physical stimuli interfered with detection whether there was overlap or not (Experiment 6). The results indicate that imagery-induced interference may be lessened with more complex visual displays.

6.
Face perception: domain specific, not process specific
Yovel G, Kanwisher N. Neuron, 2004, 44(5): 889-898
Evidence that face perception is mediated by special cognitive and neural mechanisms comes from fMRI studies of the fusiform face area (FFA) and behavioral studies of the face inversion effect. Here, we used these two methods to ask whether face perception mechanisms are stimulus specific, process specific, or both. Subjects discriminated pairs of upright or inverted face or house stimuli that differed in either the spatial distance among parts (configuration) or the shape of the parts. The FFA showed a much higher response to faces than to houses, but no preference for the configuration task over the part task. Similarly, the behavioral inversion effect was as large in the part task as in the configuration task for faces, but absent in both the part and configuration tasks for houses. These findings indicate that face perception mechanisms are not process specific for parts or configuration but are domain specific for face stimuli per se.

7.
Predicting the sensory consequences of saccadic eye movements likely plays a crucial role in planning sequences of saccades and in maintaining visual stability despite saccade-caused retinal displacements. Deficits in predictive activity, such as that afforded by a corollary discharge signal, have been reported in patients with schizophrenia, and may lead to the emergence of positive symptoms, in particular delusions of control and auditory hallucinations. We examined whether a measure of delusional thinking in the general, non-clinical population correlated with measures of predictive activity in two oculomotor tasks. The double-step task measured predictive activity in motor control, and the in-flight displacement task measured predictive activity in trans-saccadic visual perception. Forty-one healthy adults performed both tasks and completed a questionnaire to assess delusional thinking. The quantitative measure of predictive activity we obtained correlated with the tendency towards delusional ideation, but only for the motor task, not the perceptual task: individuals with higher levels of delusional thinking made less use of self-movement information in the motor task. That the use of self-generated movement information varies with the prevalence of delusional ideation in the normal population strongly supports the idea that the corollary discharge deficits measured in schizophrenic patients in previous research are not due to neuroleptic medication. We also propose that this difference in results between the perceptual and the motor tasks may point to a dissociation between corollary discharge for perception and corollary discharge for action.

8.
This paper tests the hypothesis that social presence influences size perception by increasing context sensitivity. Consistent with Allport's prediction, we expected to find greater context sensitivity in participants who perform a visual task in the presence of other people (i.e., in co-action) than in participants who perform the task in isolation. Supporting this hypothesis, participants performing an Ebbinghaus illusion-based task in co-action showed greater size illusions than those performing the task in isolation. Specifically, participants in a social context had greater difficulty perceiving the correct size of a target circle and ignoring its surroundings. Analyses of delta plot functions suggest a mechanism of interference monitoring, in that when individuals take longer to respond, they are better able to ignore the surrounding circles. However, this interference monitoring was not moderated by social presence. We discuss how this lack of moderation might be the reason why the impact of social presence on context sensitivity can be detected in tasks such as the Ebbinghaus illusion.
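As a point of reference, a delta plot expresses the interference effect as a function of overall response speed by comparing RT quantiles across conditions. The sketch below is a generic version of that computation with simulated reaction times, not the authors' exact analysis.

```python
# A generic delta-plot computation (not the authors' exact analysis): RTs from
# congruent and incongruent (illusion) trials are split into quantiles, and the
# interference effect (incongruent minus congruent) is expressed as a function
# of overall response speed. The simulated RTs below are placeholders.
import numpy as np

rng = np.random.default_rng(1)
rt_congruent = rng.gamma(shape=8, scale=60, size=400) + 250     # ms, fake data
rt_incongruent = rng.gamma(shape=8, scale=60, size=400) + 290   # ms, fake data

quantiles = np.arange(0.1, 1.0, 0.2)          # quintile probabilities
q_con = np.quantile(rt_congruent, quantiles)
q_inc = np.quantile(rt_incongruent, quantiles)

delta = q_inc - q_con                         # interference per quantile
mean_rt = (q_inc + q_con) / 2                 # x-axis of the delta plot

for m, d in zip(mean_rt, delta):
    print(f"mean RT {m:6.1f} ms -> interference {d:5.1f} ms")
# A delta function that flattens or turns negative at slow RTs is the usual
# signature of increasingly effective suppression of the irrelevant context.
```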

9.
In human visual perception, there is evidence that different visual attributes, such as colour, form and motion, have different neural-processing latencies. Specifically, recent studies have suggested that colour changes are processed faster than motion changes. We propose that the processing latencies should not be considered as fixed quantities for different attributes, but instead depend upon attribute salience and the observer's task. We asked observers to respond to high- and low-salience colour and motion changes in three different tasks. The tasks varied from having a strong motor component to having a strong perceptual component. Increasing salience led to shorter processing times in all three tasks. We also found an interaction between task and attribute: motion was processed more quickly in reaction-time tasks, whereas colour was processed more quickly in more perceptual tasks. Our results caution against making direct comparisons between latencies for processing different visual attributes without equating salience or considering task effects. More-salient attributes are processed faster than less-salient ones, and attributes that are critical for the task are also processed more quickly.

10.
The perception of pictorial gaze cues was examined in long-tailed macaques (Macaca fascicularis). A computerised object-location task was used to explore whether the monkeys would show faster response times to locate a target when its appearance was preceded by congruent as opposed to incongruent gaze cues. Despite existing evidence that macaques preferentially attend to the eyes in facial images and also visually orient to depicted gaze cues, the monkeys did not show faster response times on congruent trials in response to either schematic or photographic stimuli. These findings coincide with those reported for baboons tested with a similar paradigm in which gaze cues preceded a target identification task [Fagot, J., Deruelle, C., 2002. Perception of pictorial gaze by baboons (Papio papio). J. Exp. Psychol. 28, 298-308]. When tested with either pictorial stimuli or interactants, nonhuman primates readily follow gaze but do not seem to use this mechanism to identify a target object; there seems to be some mismatch in performance between attentional changes and manual responses to gaze cues on ostensibly similar tasks.

11.
The perception of the orientation of random-dot patterns was studied using four different matching tasks. Homogeneous, elongated patterns and patterns containing Moiré effects were used. One of the tasks implied linear extrapolation and two others implied linear interpolation of the matching line. The fourth task was identical to those used in the authors' previous studies on this topic. Systematic deviations from the axes of orientation of the patterns were observed for the latter task when compared with the former ones. When a short matching line implying linear extrapolation was used, subjects' performance tended to be less accurate than in the other matching tasks. The linear interpolation tasks, in which the matching line was determined by either two collinear distant short lines or by two distant dots, yielded more accurate and stable performance than the other two tasks. The results are discussed from the point of view of global orientation perception derived from an image function of the stimuli.

12.
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is also characterized by hemispheric asymmetry: while the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for a target identification task, and in the left visual field for a target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation (RSVP). When two targets are embedded in an RSVP stream, detection or discrimination performance for the second target is generally lower than for the first; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved localization performance for the second target when visual stimuli were presented in the left visual field. An auditory facilitation effect was thus observed in perceptual processing, depending on hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

13.
In this paper, we investigate a new paradigm for studying the development of the colour 'signal' by having observers discriminate and categorise the same set of controlled and calibrated cardinal coloured stimuli. Notably, in both tasks, each observer was free to decide whether the two colours of a pair were the same or belonged to the same category. The use of the same stimulus set for both tasks provides, we argue, an incremental behavioural measure of colour processing from detection through discrimination to categorisation. The measured data spaces are different for the two tasks, and furthermore the categorisation data are unique to each observer. In addition, we develop a model which assumes that the principal difference between the tasks is the degree of similarity between the stimuli, which is constrained differently in the categorisation task than in the discrimination task. This approach not only makes sense of the current (and associated) data but links the processes of discrimination and categorisation in a novel way and, by implication, expands upon previous research linking categorisation to other tasks beyond colour perception.

14.
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one, and participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.
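For context, the contrast drawn in this abstract can be written out explicitly: in the continuous-dimension model, weights track inverse sensory variance, whereas in the categorical extension the within-category (environmental) variance adds to each cue's sensory variance. The notation below (σ_A, σ_V for auditory and visual sensory noise, σ_{C,A}, σ_{C,V} for within-category variability) is generic and not taken from the paper.

```latex
% Continuous-dimension (reliability-weighted) cue combination:
\hat{s} = w_A s_A + w_V s_V, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad w_V = 1 - w_A .

% Categorical extension sketched above: within-category (environmental)
% variance adds to each cue's sensory variance, so the effective weights become
w_A \propto \frac{1}{\sigma_A^2 + \sigma_{C,A}^2}, \qquad
w_V \propto \frac{1}{\sigma_V^2 + \sigma_{C,V}^2}.
```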

15.
Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between two flashes, the results may be accounted for by specific prediction. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction. Our findings suggest that visual apparent motion changes temporal simultaneity perception and improves temporal discrimination in audiovisual processing.
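PSS and JND are conventionally obtained by fitting a cumulative Gaussian to the proportion of "flash first" responses as a function of audiovisual onset asynchrony: the PSS is the 50% point and the JND is half the 25%-75% interval. The sketch below illustrates this with invented data, not the study's measurements.

```python
# Sketch of how PSS and JND are typically extracted from TOJ data: fit a
# cumulative Gaussian to the proportion of "flash first" responses as a
# function of audiovisual onset asynchrony (SOA). The SOAs and response
# proportions below are placeholders, not the study's data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240])   # ms, sound leads at negative SOAs
p_flash_first = np.array([0.05, 0.12, 0.28, 0.40, 0.55, 0.68, 0.80, 0.93, 0.97])

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, soa, p_flash_first, p0=(0.0, 80.0))

pss = mu                         # 50% point: point of subjective simultaneity
jnd = sigma * norm.ppf(0.75)     # half the 25%-75% interval
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```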

16.
A popular model of visual perception states that coarse information (carried by low spatial frequencies) is rapidly transmitted along the dorsal stream to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain the detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial-frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to better perceptual discrimination and faster response times for identifying targets embedded in the scenes. However, high and low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model in which LSFs activate contextual memories, which in turn bias attention and facilitate perception.
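As background, LSF and HSF versions of a scene are commonly produced by Gaussian low-pass filtering and by taking the high-pass residual, respectively. The sketch below shows one such construction; the cutoff value is an arbitrary placeholder rather than the filter parameters used in this study.

```python
# One common way to construct LSF/HSF scene versions like those referred to
# above: Gaussian low-pass filtering for the LSF image, and the residual
# (original minus low-pass) for the HSF image. The sigma below is an arbitrary
# placeholder; the study's actual filter parameters are not given here.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image: np.ndarray, sigma_pixels: float = 8.0):
    """Return (low_sf, high_sf) versions of a grayscale image."""
    low_sf = gaussian_filter(image.astype(float), sigma=sigma_pixels)
    high_sf = image.astype(float) - low_sf      # high-pass residual
    return low_sf, high_sf

# Example with a random array standing in for a real scene photograph.
scene = np.random.default_rng(2).random((256, 256))
lsf, hsf = split_spatial_frequencies(scene)
print(lsf.shape, hsf.shape)
```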

17.
In this study, we examined event-related potentials (ERPs) in rats performing a timing task. The ERPs were recorded during a timing task and a control task from five regions (frontal cortex, striatum, hippocampus, thalamus, and cerebellum) that are related to time perception. In the timing task, the rats were required to judge the interval between two tones. This interval could be either 500 or 2000 ms. In the control task, only the 500 ms interval between tones was presented and only one lever was available for responses. Any difference in ERPs between the two tasks was considered to reflect the processes that are related to temporal discrimination. The frontal cortex, striatum, and thalamus yielded concurrent differences in ERPs between the two tasks. The results suggest that these regions might play an important role in temporal discrimination.

18.
Current models of attention typically claim that vision and audition are limited by a common attentional resource, which means that visual performance should be adversely affected by a concurrent auditory task and vice versa. Here, we test this implication by measuring auditory (pitch) and visual (contrast) thresholds in conjunction with cross-modal secondary tasks and find that no such interference occurs. Visual contrast discrimination thresholds were unaffected by a concurrent chord or pitch discrimination, and pitch-discrimination thresholds were virtually unaffected by a concurrent visual search or contrast discrimination task. However, if the dual tasks were presented within the same modality, thresholds were raised by a factor of between two (for visual discrimination) and four (for auditory discrimination). These results suggest that, at least for low-level tasks such as discriminations of pitch and contrast, each sensory modality is under separate attentional control, rather than being limited by a supramodal attentional resource. This has implications for current theories of attention as well as for the use of multi-sensory media for efficient information transmission.

19.
Microsaccades are the largest and fastest eye movements made during visual fixation; they counteract the visual fading that arises from neural adaptation and play an important role in visual information processing. Building on the relationship between microsaccades and visual perception, we designed experiments to examine how microsaccades during fixation differ while macaques perform overt-attention tasks, covert-attention tasks, and overt-attention tasks of varying difficulty. Comparing microsaccade parameters across overt-attention tasks of different difficulty showed that microsaccade amplitude, peak velocity, and rate were all suppressed as task difficulty increased. Comparing the two types of visual perception task (overt versus covert attention) under similar experimental paradigms, covert attention clearly suppressed microsaccade rate, whereas the results for amplitude and velocity were not consistent, suggesting that different types of visual attention task may lead the monkeys to adopt different task strategies. This work lays a foundation for further research on the neural mechanisms that generate microsaccades and on the role of eye movements in visual attention.

20.
In this paper we show that response facilitation in choice reaction tasks achieved by priming the (previously perceived) effect is based on stimulus-response associations rather than on response-effect associations. The reduced key-press response time is not accounted for by previously established couplings between the key-press movement and its subsequent effect, but instead results from couplings between this effect and the contingent key-release movement. This key-release movement is an intrinsic part of the entire response action performed in each trial of a reaction-time task, and always spontaneously follows the key-press movement. Eliminating the key-release movement from the task leads to the disappearance of the response facilitation, which raises the question of whether response-effect associations actually play a role in studies that use the effect-priming paradigm. Together, the three experiments presented in the paper cast serious doubt on the claim that action-effect couplings are acquired and utilized by the cognitive system in the service of action selection, and that the priming paradigm by itself can provide convincing evidence for this claim. As a corollary, we question whether the related two-step model of the ideomotor principle offers a satisfying explanation of how anticipation of future states guides action planning. The results presented here may have profound implications for priming studies in other disciplines of psychology as well.
