Similar Literature
20 similar articles found (search time: 31 ms)
1.

Background

Theories of categorization make different predictions about the underlying processes used to represent categories. Episodic theories suggest that categories are represented by storing previously encountered exemplars in memory. Prototype theories suggest that categories are represented in the form of a prototype, independently of memory. A number of studies showing dissociations between categorization and recognition are often cited as evidence for the prototype account. However, these dissociations have compared recognition judgements made to one set of items with categorization judgements made to a different set of items, making clear interpretation difficult. Instead of using different stimuli for different tests, this experiment compares the processes by which participants make decisions about category membership in a prototype-distortion task with recognition decisions about the same set of stimuli, by examining the Event-Related Potentials (ERPs) associated with them.

Method

Sixty-three participants were asked to make categorization or recognition decisions about stimuli that either formed an artificial category or that were category non-members. We examined the ERP components associated with both kinds of decision for pre-exposed and control participants.

Conclusion

In contrast to studies using different items, we observed no behavioural differences between the two kinds of decision; participants were equally able to distinguish category members from non-members, regardless of whether they were making a recognition or categorization judgement. Interestingly, this did not interact with prior exposure. However, the ERP data demonstrated that the early visual evoked response that discriminated category members from non-members was modulated by which judgement participants performed and by whether they had been pre-exposed to category members. We conclude from this that any differences between categorization and recognition reflect differences in the information that participants focus on in the stimuli to make the judgements at test, rather than any differences in encoding or process.

2.
A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This event-related potential study, conducted in two experiments, found a two-stage categorization process in brand extension evaluation, reflected by the P2 and N400 components. In experiment 1, a prime–probe paradigm was presented in pairs consisting of a brand name and a product name in three conditions: in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime–probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was significantly more negative than that elicited by major-category products, with no salient difference in P2. We speculate that P2 reflects the early, low-level, similarity-based processing of the first stage, whereas N400 reflects the late, analytic, category-based processing of the second stage.

3.
A two-stage simulation hypothesis of negation processing has been proposed, holding that the first stage of negation processing simulates the negated state, and the second stage integrates the negation to simulate the actual state. To test this hypothesis, the present study used ERP to examine the time course of brain activity while 20 participants performed a category-verification task. The task had four conditions: affirmative category pairs (e.g., vegetable–cabbage), affirmative unrelated pairs (e.g., vegetable–bee), negative category pairs (e.g., vegetable–bee), and negative unrelated pairs (e.g., vegetable–cabbage). A horizontal line drawn over the category word indicated "not a member of this category"; for example, a crossed-out "vegetable" means "not a vegetable". Results showed that negative category pairs elicited a more negative N400 than both negative unrelated pairs and affirmative category pairs, indicating that a representation of the negated state is constructed during negation processing, supporting the two-stage simulation hypothesis. In addition, affirmative category pairs elicited a more positive LPC than negative category pairs, revealing the neural mechanism of the hypothesized second stage, in which negation is integrated to simulate the actual state. This may be because processing affirmative category relations involves recognition and retrieval of information from memory, whereas processing negative category relations, as the two-stage simulation hypothesis predicts, comprises two stages and involves inference; hence the former elicits a more positive LPC than the latter.

4.
Barca L, Pezzulo G. PLoS ONE 2012, 7(4): e35932
Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called "lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as "lexical" or "non-lexical": high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms.

5.

Background

Since the pioneering study by Rosch and colleagues in the 1970s, it is commonly agreed that basic level perceptual categories (dog, chair…) are accessed faster than superordinate ones (animal, furniture…). Nevertheless, the speed at which objects presented in natural images can be processed in a rapid go/no-go visual superordinate categorization task has challenged this "basic level advantage".

Principal Findings

Using the same task, we compared human processing speed when categorizing natural scenes as containing either an animal (superordinate level) or a specific animal (bird or dog; basic level). Human subjects require an additional 40–65 ms to decide whether an animal is a bird or a dog, and most errors are induced by non-target animals. Indeed, processing time is tightly linked with the type of non-target objects. Without any exemplar of the same superordinate category to ignore, the basic level category is accessed as fast as the superordinate category, whereas the presence of animal non-targets induces both an increase in reaction time and a decrease in accuracy.

Conclusions and Significance

These results support the parallel distributed processing (PDP) theory and may help reconcile recently published controversial studies. The visual system can quickly access a coarse/abstract visual representation that allows fast decisions for superordinate categorization of objects, but additional time-consuming visual analysis would be necessary for a decision at the basic level based on more detailed representations.

6.
The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes.

7.

Background

Emotion can either facilitate or impair memory, depending on what, when and how memory is tested and whether the paradigm at hand is administered as a working memory (WM) or a long-term memory (LTM) task. Whereas emotionally arousing single stimuli are more likely to be remembered, memory for the relationship between two or more component parts (i.e., relational memory) appears to be worse in the presence of emotional stimuli, at least in some relational memory tasks. The current study investigated the effects of both valence (neutral vs. positive vs. negative) and arousal (low vs. high) in an inter-item WM binding and LTM task.

Methodology/Principal Findings

A five-pair delayed-match-to-sample (WM) task was administered. In each trial, study pairs consisted of one neutral picture and a second picture of which the emotional qualities (valence and arousal levels) were manipulated. These pairs had to be remembered across a delay interval of 10 seconds. This was followed by a probe phase in which five pairs were tested. After completion of this task, an unexpected single item LTM task as well as an LTM task for the pairs was assessed. As expected, emotional arousal impaired WM processing. This was reflected in lower accuracy for pairs consisting of high-arousal pictures compared to pairs with low-arousal pictures. A similar effect was found for the associative LTM task. However, the arousal effect was modulated by affective valence for the WM but not the LTM task; pairs with low-arousal negative pictures were not processed as well in the WM task. No significant differences were found for the single-item LTM task.

Conclusions/Significance

The present study provides additional evidence that processes during initial perception/encoding and post-encoding processes, the time interval between study and test, and the interaction between valence and arousal might modulate the effects of "emotion" on associative memory.

8.
The present study uses the N400 component of event-related potentials (ERPs) as a processing marker of single spoken words presented during sleep. Thirteen healthy volunteers participated in the study. The auditory ERPs were registered in response to a semantic priming paradigm made up of pairs of words (50% related, 50% unrelated) presented in the waking state and during sleep stages II, III–IV and REM. The amplitude, latency and scalp distribution parameters of the negativity observed during stage II and the REM stage were contrasted with the results obtained in the waking state. The "N400-like" effect elicited in these stages of sleep showed a significantly greater mean amplitude for unrelated than for related word pairs, together with an increase in latency. These results suggest that a semantic priming effect is actively maintained during these sleep stages, although the lexical processing time increases.

9.
Pronunciation variation is ubiquitous in the speech signal. Different models of lexical representation have been put forward to deal with speech variability, which differ in the level as well as the nature of mental representation. We present the first mismatch negativity (MMN) study investigating the effect of allophonic variation on the mental representation and neural processing of lexical tones. Native speakers of Standard Chinese (SC) participated in an oddball electroencephalography (EEG) experiment. All stimuli have the same segments (ma) but different lexical tones: level [T1], rising [T2], and dipping [T3]. In connected speech with a T3T3 sequence, the first T3 may undergo allophonic change and is produced with a rising pitch contour (T3V), similar to the lexical T2 pitch contour. Four oddball conditions were constructed (T1/T3, T3/T1, T2/T3, T3/T2; standard/deviant). All four conditions elicited MMN effects: the T1–T3 pair elicited comparable MMNs, whereas the T2–T3 pair elicited asymmetrical MMN effects. MMN effects were significantly greater and earlier in the T2/T3 condition than in the reversed T3/T2 condition. Furthermore, the T3/T2 condition showed more rightward MMN effects than the T2/T3 condition and the T1–T3 pair. Such asymmetries suggest co-activation of long-term memory representations of both T3 and T3V when T3 serves as the standard. The acoustic similarity between the activated T3V (by the standard T3) and the incoming deviant stimulus T2 induces acoustic processing of the tonal contrast in the T3/T2 condition, similar to that of within-category lexical tone processing, in contrast to the processing of between-category lexical tones observed in the T2/T3, T1/T3, and T3/T1 conditions.

10.
11.
The influence of attention on memorizing related items and on available long-term memory (ALTM) was explored; the N400 for no-memory items was more negative than that for the memory item. The results of the category comparison task indicated that attention-driven information processing in WM determined the availability of related long-term memory: specific content that had formerly been attended or ignored yielded different indirect semantic priming effects. These findings indicate that the orientation of conceptual attention leads the related representations in LTM to diverse activation patterns, supporting the activation-based model.

12.
The underlying specificity of visual object categorization and discrimination can be elucidated by studying different types of repetition priming. Here we focused on this issue in face processing. We investigated category priming (i.e. the prime and target stimuli represent different exemplars of the same object category) and item priming (i.e. the prime and target stimuli are exactly the same image), using an immediate repetition paradigm. Twenty-three subjects were asked to respond as fast and accurately as possible to categorize whether the target stimulus was a face or a building image, but to ignore the prime stimulus. We recorded event-related potentials (ERPs) and reaction times (RTs) simultaneously. The RT data showed significant effects of category priming in both face trials and building trials, as well as a significant effect of item priming in face trials. With respect to the ERPs, in face trials, no priming effect was observed at the P100 stage, whereas a category priming effect emerged at the N170 stage, and an item priming effect at the P200 stage. In contrast, in building trials, priming effects occurred already at the P100 stage. Our results indicated that distinct neural mechanisms underlie separable kinds of immediate repetition priming in face processing.

13.
This article presents the NeoHelp visual stimulus set created to facilitate investigation of need-of-help recognition with clinical and normative populations of different ages, including children. Need-of-help recognition is one aspect of socioemotional development and a necessary precondition for active helping. The NeoHelp consists of picture pairs showing everyday situations: the first item in a pair depicts a child needing help to achieve a goal; the second one shows the child achieving the goal. Pictures of birds in analogue situations are also included. These control stimuli enable implementation of a human-animal categorization task which serves to separate behavioral correlates specific to need-of-help recognition from general differentiation processes. It is a concern in experimental research to ensure that results do not relate to systematic perceptual differences when comparing responses to categories of different content. Therefore, we not only derived the NeoHelp pictures within a pair from one another by altering as little as possible, but also assessed their perceptual similarity empirically. We show that NeoHelp picture pairs are very similar regarding low-level perceptual properties across content categories. We obtained data from 60 children in a broad age range (4 to 13 years) for three different paradigms, in order to assess whether the intended categorization and differentiation could be observed reliably in a normative population. Our results demonstrate that children can differentiate the pictures' content regarding both need-of-help category and species as intended, in spite of the high perceptual similarities. We provide standard response characteristics (hit rates and response times) that are useful for future selection of stimuli and comparison of results across studies. We show that task requirements coherently determine which aspects of the pictures influence response characteristics.
Thus, we present NeoHelp, the first open-access standardized visual stimulus set for investigation of need-of-help recognition, and invite researchers to use and extend it.

14.
To characterize the functional role of the left-ventral occipito-temporal cortex (lvOT) during reading in a quantitatively explicit and testable manner, we propose the lexical categorization model (LCM). The LCM assumes that lvOT optimizes linguistic processing by allowing fast meaning access when words are familiar and filtering out orthographic strings without meaning. The LCM successfully simulates benchmark results from functional brain imaging described in the literature. In a second evaluation, we empirically demonstrate that quantitative LCM simulations predict lvOT activation better than alternative models across three functional magnetic resonance imaging studies. We found that word-likeness, assumed as input into a lexical categorization process, is represented posteriorly to lvOT, whereas a dichotomous word/non-word output of the LCM could be localized to the downstream frontal brain regions. Finally, training the process of lexical categorization resulted in more efficient reading. In sum, we propose that word recognition in the ventral visual stream involves word-likeness extraction followed by lexical categorization before one can access word meaning.

15.

Background

Classic work on visual short-term memory (VSTM) suggests that people store a limited amount of items for subsequent report. However, when human observers are cued to shift attention to one item in VSTM during retention, it seems as if there is a much larger representation, which keeps additional items in a more fragile VSTM store. Thus far, it is not clear whether the capacity of this fragile VSTM store indeed exceeds the traditional capacity limits of VSTM. The current experiments address this issue and explore the capacity, stability, and duration of fragile VSTM representations.

Methodology/Principal Findings

We presented cues in a change-detection task either just after offset of the memory array (iconic cue), 1,000 ms after offset of the memory array (retro-cue), or after onset of the probe array (post-cue). We observed three stages in visual information processing: 1) iconic memory with unlimited capacity; 2) a fragile VSTM store lasting four seconds, with a capacity at least a factor of two higher than 3) the robust and capacity-limited form of VSTM. Iconic memory seemed to depend on the strength of the positive after-image resulting from the memory display and was virtually absent under conditions of isoluminance or when intervening light masks were presented. This suggests that iconic memory is driven by prolonged retinal activation beyond stimulus duration. Fragile VSTM representations were not affected by light masks, but were completely overwritten by irrelevant pattern masks that spatially overlapped the memory array.

Conclusions/Significance

We find that immediately after a stimulus has disappeared from view, subjects can still access information from iconic memory because they can see an after-image of the display. After that period, human observers can still access a substantial, but somewhat more limited amount of information from a high-capacity, but fragile VSTM that is overwritten when new items are presented to the eyes. What is left after that is the traditional VSTM store, with a limit of about four objects. We conclude that human observers store more sustained representations than is evident from standard change detection tasks and that these representations can be accessed at will.

16.
Marois R, Yi DJ, Chun MM. Neuron 2004, 41(3): 465–472
Cognitive models of attention propose that visual perception is a product of two stages of visual processing: early operations permit rapid initial categorization of the visual world, while later attention-demanding, capacity-limited stages are necessary for conscious report of the stimuli. Here we used the attentional blink paradigm and fMRI to neurally distinguish these two stages of vision. Subjects detected a face target and a scene target presented rapidly among distractors at fixation. Although the second (scene) target frequently went undetected by the subjects, it nonetheless activated regions of the medial temporal cortex involved in high-level scene representations, the parahippocampal place area (PPA). This PPA activation was amplified when the stimulus was consciously perceived. By contrast, the frontal cortex was activated only when scenes were successfully reported. These results suggest that medial temporal cortex permits rapid categorization of the visual input, while the frontal cortex is part of a capacity-limited attentional bottleneck to conscious report.

17.
The ultrastructure of whole X-Y pairs has been reconstructed by serial sectioning and model building. Seven X-Y pairs were completely reconstructed and the lengths of the cores of the sex chromosomes were measured. These X-Y pairs corresponded to zygonema and early, middle and late pachynema. Special regions of the X-Y pair were reconstructed from thinner sections. It has been shown that two cores exist in the sex pair during the cited stages, and that their lengths and morphology are rather constant in specific stages. The long core averages 8.9 μm in length and the short core is 3.5 μm long. Both cores have a common end region in which a synaptonemal complex is formed from zygonema up to mid-pachynema. This synaptonemal complex shortens progressively up to mid-pachynema and at late pachynema becomes obliterated. Each core has a free end touching the nuclear membrane. During mid-pachynema an anomalous synaptonemal complex develops along most of the length of the long core. This complex is asymmetric and disappears at late pachynema. The significance of the cores and the complexes is discussed, and the existence of a homologous region in the X-Y pair of the mouse is taken to be demonstrated.

18.
Categorization is an important cognitive process. However, the correct categorization of a stimulus is often challenging because categories can have overlapping boundaries. Whereas perceptual categorization has been extensively studied in vision, the analogous phenomenon in audition has yet to be systematically explored. Here, we test whether and how human subjects learn to use category distributions and prior probabilities, as well as whether subjects employ an optimal decision strategy when making auditory-category decisions. We asked subjects to classify the frequency of a tone burst into one of two overlapping, uniform categories according to the perceived tone frequency. We systematically varied the prior probability of presenting a tone burst with a frequency originating from one versus the other category. Most subjects learned these changes in prior probabilities early in testing and used this information to influence categorization. We also measured each subject's frequency-discrimination thresholds (i.e., their sensory uncertainty levels). We tested each subject's average behavior against variations of a Bayesian model that led either to optimal or to sub-optimal decision behavior (i.e. probability matching). In both predicting and fitting each subject's average behavior, we found that probability matching provided a better account of human decision behavior. The model fits confirmed that subjects were able to learn category prior probabilities and approximate forms of the category distributions. Finally, we systematically explored the potential ways that additional noise sources could influence categorization behavior. We found that an optimal decision strategy can produce probability-matching behavior if it utilized non-stationary category distributions and prior probabilities formed over a short stimulus history.
Our work extends previous findings into the auditory domain and reformulates the issue of categorization in a manner that can help to interpret the results of previous research within a generative framework.
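The contrast drawn in this abstract between an optimal (maximum a posteriori) decision rule and probability matching can be sketched in a toy simulation. The sketch below assumes two overlapping uniform categories over frequency, Gaussian sensory noise, and an unequal prior; all numeric values are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Illustrative task parameters (assumptions, not the study's values).
LO_A, HI_A = 0.0, 6.0    # category A support (arbitrary frequency units)
LO_B, HI_B = 4.0, 10.0   # category B support; overlaps A on [4, 6]
SIGMA = 0.5              # sensory (frequency-discrimination) noise
PRIOR_A = 0.7            # prior probability of a category-A trial

def norm_cdf(t):
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def posterior_a(x):
    """P(A | percept x): uniform likelihoods convolved with Gaussian noise."""
    def lik(lo, hi):
        # P(x | category) for a uniform category under Gaussian sensory noise
        return (norm_cdf((hi - x) / SIGMA) - norm_cdf((lo - x) / SIGMA)) / (hi - lo)
    la, lb = lik(LO_A, HI_A), lik(LO_B, HI_B)
    return PRIOR_A * la / (PRIOR_A * la + (1.0 - PRIOR_A) * lb + 1e-12)

def decide(x, strategy):
    p = posterior_a(x)
    if strategy == "optimal":                  # MAP: deterministic choice
        return "A" if p >= 0.5 else "B"
    return "A" if rng.random() < p else "B"    # matching: choose A with prob. p

def accuracy(strategy, n=20000):
    correct = 0
    for _ in range(n):
        true = "A" if rng.random() < PRIOR_A else "B"
        f = rng.uniform(LO_A, HI_A) if true == "A" else rng.uniform(LO_B, HI_B)
        x = f + rng.normal(0.0, SIGMA)         # noisy percept of the tone frequency
        correct += decide(x, strategy) == true
    return correct / n

print(f"optimal:  {accuracy('optimal'):.3f}")
print(f"matching: {accuracy('matching'):.3f}")
```

The simulation makes the behavioral signature concrete: in the overlap region a MAP observer always chooses the a priori likelier category, whereas a probability-matching observer distributes choices in proportion to the posterior, lowering average accuracy, which is the kind of difference the study's model comparison exploits.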

19.

Background

Decoding of frequency-modulated (FM) sounds is essential for phoneme identification. This study investigates selectivity to FM direction in the human auditory system.

Methodology/Principal Findings

Magnetoencephalography was recorded in 10 adults during a two-tone adaptation paradigm with a 200-ms interstimulus-interval. Stimuli were pairs of either same or different frequency modulation direction. To control that FM repetition effects cannot be accounted for by their on- and offset properties, we additionally assessed responses to pairs of unmodulated tones with either same or different frequency composition. For the FM sweeps, N1m event-related magnetic field components were found at 103 and 130 ms after onset of the first (S1) and second stimulus (S2), respectively. This was followed by a sustained component starting at about 200 ms after S2. The sustained response was significantly stronger for stimulation with the same compared to different FM direction. This effect was not observed for the non-modulated control stimuli.

Conclusions/Significance

Low-level processing of FM sounds was characterized by repetition enhancement for stimulus pairs with the same versus different FM direction. This effect was FM-specific; it did not occur for unmodulated tones. The present findings may reflect specific interactions between frequency separation and temporal distance in the processing of consecutive FM sweeps.

20.
When and how do infants develop a semantic system of words that are related to each other? We investigated word–word associations in early lexical development using an adaptation of the inter-modal preferential looking task where word pairs (as opposed to single target words) were used to direct infants' attention towards a target picture. Two words (prime and target) were presented in quick succession, after which infants were presented with a picture pair (target and distracter). Prime–target word pairs were either semantically and associatively related or unrelated; the targets were either named or unnamed. Experiment 1 demonstrated a lexical–semantic priming effect for 21-month olds but not for 18-month olds: unrelated prime words interfered with linguistic target identification for 21-month olds. Follow-up experiments confirmed the interfering effects of unrelated prime words and identified the existence of repetition priming effects as young as 18 months of age. The results of these experiments indicate that infants have begun to develop semantic–associative links between lexical items as early as 21 months of age.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号