Similar literature
20 similar documents retrieved (search time: 421 ms)
1.
Human subjects are proficient at tracking the mean and variance of rewards and updating these via prediction errors. Here, we addressed whether humans can also learn about higher-order relationships between distinct environmental outcomes, a defining ecological feature of contexts where multiple sources of rewards are available. By manipulating the degree to which distinct outcomes are correlated, we show that subjects implemented an explicit model-based strategy to learn the associated outcome correlations and were adept in using that information to dynamically adjust their choices in a task that required a minimization of outcome variance. Importantly, the experimentally generated outcome correlations were explicitly represented neuronally in right midinsula with a learning prediction error signal expressed in rostral anterior cingulate cortex. Thus, our data show that the human brain represents higher-order correlation structures between rewards, a core adaptive ability whose immediate benefit is optimized sampling.  相似文献   
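As a rough illustration of the computation described above, the Python sketch below updates a correlation estimate with a delta-rule prediction error and then derives the variance-minimising weighting of the two outcome sources. All parameter values are assumed, the outcome standard deviations are treated as known for simplicity, and this is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy environment: two reward sources with negatively correlated outcomes.
sd1, sd2, rho_true = 1.0, 2.0, -0.6
cov = [[sd1**2, rho_true * sd1 * sd2],
       [rho_true * sd1 * sd2, sd2**2]]

rho_hat, alpha = 0.0, 0.1      # correlation estimate and learning rate (assumed)

for _ in range(2000):
    r1, r2 = rng.multivariate_normal([0.0, 0.0], cov)
    # Correlation prediction error: observed co-fluctuation of the standardised
    # outcomes minus the current correlation estimate.
    delta = (r1 / sd1) * (r2 / sd2) - rho_hat
    rho_hat += alpha * delta

# Weight on source 1 that minimises the variance of the combined outcome
# w*r1 + (1-w)*r2, given the learned correlation:
w = (sd2**2 - rho_hat * sd1 * sd2) / (sd1**2 + sd2**2 - 2 * rho_hat * sd1 * sd2)
print(f"learned correlation: {rho_hat:.2f}, variance-minimising weight: {w:.2f}")
```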

2.
A fundamental challenge in social cognition is how humans learn another person's values to predict their decision-making behavior. This form of learning is often assumed to require simulation of the other by direct recruitment of one's own valuation process to model the other's process. However, the cognitive and neural mechanism of simulation learning is not known. Using behavior, modeling, and fMRI, we show that simulation involves two learning signals in a hierarchical arrangement. A simulated-other's reward prediction error processed in ventromedial prefrontal cortex mediated simulation by direct recruitment, being identical for valuation of the self and simulated-other. However, direct recruitment was insufficient for learning, and also required observation of the other's choices to generate a simulated-other's action prediction error encoded in dorsomedial/dorsolateral prefrontal cortex. These findings show that simulation uses a core prefrontal circuit for modeling the other's valuation to generate prediction and an adjunct circuit for tracking behavioral variation to refine prediction.
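A loose Python sketch of the two-signal idea (a simulated-other's reward prediction error plus a simulated-other's action prediction error). The toy choice process, learning rates, and the combined update rule are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: the other person repeatedly chooses between two options with
# the hidden reward probabilities below; the observer sees only the other's
# choices and outcomes.
p_reward = np.array([0.7, 0.3])
v = np.zeros(2)                      # simulated values of the options "for the other"
alpha_r, alpha_a, beta = 0.2, 0.3, 4.0

for _ in range(300):
    p_pred = np.exp(beta * v) / np.exp(beta * v).sum()   # predicted choice probabilities
    choice = rng.choice(2, p=[0.8, 0.2])                  # other's observed choice (toy policy)
    reward = float(rng.random() < p_reward[choice])       # other's observed outcome

    srpe = reward - v[choice]          # simulated-other's reward prediction error
    sape = 1.0 - p_pred[choice]        # simulated-other's action prediction error
    v[choice] += alpha_r * srpe + alpha_a * sape
```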

3.
Previous studies showed that the understanding of others' basic emotional experiences is based on a "resonant" mechanism, i.e., on the reactivation, in the observer's brain, of the cerebral areas associated with those experiences. The present study aimed to investigate whether the same neural mechanism is activated both when experiencing and when attending to complex, cognitively generated emotions. A gambling task and functional magnetic resonance imaging (fMRI) were used to test this hypothesis using regret, the negative cognitively-based emotion resulting from an unfavorable counterfactual comparison between the outcomes of chosen and discarded options. Do the same brain structures that mediate the experience of regret become active in the observation of situations eliciting regret in another individual? Here we show that observing the regretful outcomes of someone else's choices activates the same regions that are activated during a first-person experience of regret, i.e., the ventromedial prefrontal cortex, anterior cingulate cortex and hippocampus. These results extend the possible role of a mirror-like mechanism beyond basic emotions.
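One standard way to formalise the counterfactual comparison the task relies on is given below. This is an assumed textbook-style definition, not a quotation from the study.

```latex
% Assumed formalisation: regret is the shortfall of the chosen gamble's outcome
% relative to the outcome of the discarded option.
\[
\mathrm{regret}_t \;=\; \max\bigl(x_t^{\text{unchosen}} - x_t^{\text{chosen}},\; 0\bigr)
\]
```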

4.
Reward prediction errors (RPEs) and risk preferences have two things in common: both can shape decision making behavior, and both are commonly associated with dopamine. RPEs drive value learning and are thought to be represented in the phasic release of striatal dopamine. Risk preferences bias choices towards or away from uncertainty; they can be manipulated with drugs that target the dopaminergic system. Based on the common neural substrate, we hypothesize that RPEs and risk preferences are linked on the level of behavior as well. Here, we develop this hypothesis theoretically and test it empirically. First, we apply a recent theory of learning in the basal ganglia to predict how RPEs influence risk preferences. We find that positive RPEs should cause increased risk-seeking, while negative RPEs should cause risk-aversion. We then test our behavioral predictions using a novel bandit task in which value and risk vary independently across options. Critically, conditions are included where options vary in risk but are matched for value. We find that our prediction was correct: participants become more risk-seeking if choices are preceded by positive RPEs, and more risk-averse if choices are preceded by negative RPEs. These findings cannot be explained by other known effects, such as nonlinear utility curves or dynamic learning rates.  相似文献   
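A minimal Python sketch of the behavioural hypothesis stated above: two options matched for mean value but differing in outcome variance, with a risk-preference weight that is nudged up after positive reward prediction errors and down after negative ones. The parameter values and the specific update rule are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed bandit: equal means, unequal variances (value-matched but risk-differing).
means = np.array([0.5, 0.5])
variances = np.array([0.05, 0.40])
q = means.copy()                   # learned option values
phi = 0.0                          # risk-preference weight
alpha, eta, beta = 0.10, 0.05, 5.0

for _ in range(500):
    util = q + phi * np.sqrt(variances)                  # risk-sensitive utilities
    p = np.exp(beta * util) / np.exp(beta * util).sum()
    c = rng.choice(2, p=p)
    r = rng.normal(means[c], np.sqrt(variances[c]))
    rpe = r - q[c]
    q[c] += alpha * rpe                                   # value learning
    phi = float(np.clip(phi + eta * np.sign(rpe), -1.0, 1.0))  # RPE shifts risk attitude
```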

5.
The acknowledged importance of uncertainty in economic decision making has stimulated the search for neural signals that could influence learning and inform decision mechanisms. Current views distinguish two forms of uncertainty, namely risk and ambiguity, depending on whether the probability distributions of outcomes are known or unknown. Behavioural neurophysiological studies on dopamine neurons revealed a risk signal, which covaried with the standard deviation or variance of the magnitude of juice rewards and occurred separately from reward value coding. Human imaging studies identified similarly distinct risk signals for monetary rewards in the striatum and orbitofrontal cortex (OFC), thus fulfilling a requirement for the mean variance approach of economic decision theory. The orbitofrontal risk signal covaried with individual risk attitudes, possibly explaining individual differences in risk perception and risky decision making. Ambiguous gambles with incomplete probabilistic information induced stronger brain signals than risky gambles in OFC and amygdala, suggesting that the brain's reward system signals the partial lack of information. The brain can use the uncertainty signals to assess the uncertainty of rewards, influence learning, modulate the value of uncertain rewards and make appropriate behavioural choices between only partly known options.  相似文献   
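The mean-variance approach referred to above is usually written as the following standard utility form, where lambda indexes an individual's risk attitude (risk-averse for positive lambda, risk-seeking for negative lambda); this is the generic formulation, not a value taken from the cited studies.

```latex
% Standard mean-variance utility (generic formulation):
\[
U(\text{option}) \;=\; \mathbb{E}[r] \;-\; \lambda\,\mathrm{Var}(r)
\]
```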

6.
Recent decisions about actions and goals can have effects on future choices. Several studies have shown an effect of the previous trial history on neural activity in a subsequent trial. Often, but not always, these effects originate from task requirements that make it necessary to maintain access to previous trial information to make future decisions. Maintaining the information about recent decisions and their outcomes can play an important role in both adapting to new contingencies and learning. Previous goal decisions must be distinguished from goals that are currently being planned to avoid perseveration or more general errors. Output monitoring is probably based on this separation of accomplished past goals from pending future goals that are being pursued. Behaviourally, it has been shown that the history context can influence the location, error rate and latency of successive responses. We will review the neurophysiological studies in the literature, including data from our laboratory, which support a role for the frontal lobe in tracking previous goal selections and outputs when new goals need to be accomplished.  相似文献   

7.
Auditory experience is critical for the acquisition and maintenance of learned vocalizations in both humans and songbirds. Despite the central role of auditory feedback in vocal learning and maintenance, where and how auditory feedback affects neural circuits important to vocal control remain poorly understood. Recent studies of singing birds have uncovered neural mechanisms by which feedback perturbations affect vocal plasticity and also have identified feedback-sensitive neurons at or near sites of auditory and vocal motor interaction. Additionally, recent studies in marmosets have underscored that even in the absence of vocal learning, vocalization remains flexible in the face of changing acoustical environments, pointing to rapid interactions between auditory and vocal motor systems. Finally, recent studies show that a juvenile songbird's initial auditory experience of a song model has long-lasting effects on sensorimotor neurons important to vocalization, shedding light on how auditory memories and feedback interact to guide vocal learning.  相似文献   

8.
Human decision-making is driven by subjective values assigned to alternative choice options. These valuations are based on reward cues. It is unknown, however, whether complex reward cues, such as brand logos, may bias the neural encoding of subjective value in unrelated decisions. In this functional magnetic resonance imaging (fMRI) study, we subliminally presented brand logos preceding intertemporal choices. We demonstrated that priming biased participants' preferences towards more immediate rewards in the subsequent temporal discounting task. This was associated with modulations of the neural encoding of subjective values of choice options in a network of brain regions, including but not restricted to medial prefrontal cortex. Our findings demonstrate the general susceptibility of the human decision making system to apparently incidental contextual information. We conclude that the brain incorporates seemingly unrelated value information that modifies decision making outside the decision-maker's awareness.  相似文献   
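Such intertemporal choices are commonly modelled with hyperbolic discounting. The short Python sketch below uses made-up amounts, delays and a k parameter purely for illustration; a priming-induced shift toward immediate rewards would correspond to a larger fitted k. It is not the model fitted in the study.

```python
# Hyperbolic discounting: V = A / (1 + k * D), with k a per-participant parameter.
def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Return the subjective (discounted) value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

v_now = discounted_value(20.0, 0.0, k=0.02)     # immediate 20
v_later = discounted_value(30.0, 30.0, k=0.02)  # 30 after 30 days -> 18.75
print(v_now, v_later)  # at k = 0.02 the immediate reward wins; at k = 0.01 the delayed one does
```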

9.
Monitoring of selections of visual stimuli and the primate frontal cortex.
This investigation shows that lesions confined to the middle sector of the dorsolateral frontal cortex, i.e. cytoarchitectonic areas 46 and 9, cause a striking impairment in the ability of non-human primates to recall which one from a set of stimuli they chose, without in any way affecting their ability to recognize that they had previously seen those stimuli. By contrast, lesions placed within the adjacent posterior dorsolateral frontal cortex affect neither recognition of visual stimuli nor recall of prior choices. These findings delineate the mid-dorsolateral frontal cortex as a critical component of a neural system mediating the monitoring of self-generated responses.  相似文献   

10.
Autism is a neurodevelopmental disorder characterized by impairments in social interaction, verbal communication and repetitive behaviors. The BTBR mouse is currently used as a model for understanding mechanisms that may be responsible for the pathogenesis of autism. Growing evidence suggests that Ras/Raf/ERK1/2 signaling plays death-promoting apoptotic roles in neural cells, and recent studies showed a possible association between neural cell death and autism. In addition, two studies reported that a deletion of a locus on chromosome 16, which includes the MAPK3 gene that encodes ERK1, is associated with autism. We thus hypothesized that Ras/Raf/ERK1/2 signaling could be abnormally regulated in the brain of BTBR mice that model autism. In this study, we show that expression of Ras protein was significantly elevated in the frontal cortex and cerebellum of BTBR mice as compared with B6 mice. Phosphorylation of A-Raf, B-Raf and C-Raf was significantly increased in the frontal cortex of BTBR mice, whereas only C-Raf phosphorylation was increased in the cerebellum. In addition, the activities of both MEK1/2 and ERK1/2, the downstream kinases of Ras/Raf signaling, were significantly enhanced in the frontal cortex. We also found that ERK1/2 was significantly over-expressed in the frontal cortex of autistic subjects. Our results indicate that Ras/Raf/ERK1/2 signaling is upregulated in the frontal cortex of BTBR mice that model autism. These findings, together with the enhanced ERK1/2 expression in autistic frontal cortex, imply that Ras/Raf/ERK1/2 signaling activity could be increased in the autistic brain and involved in the pathogenesis of autism.

11.
A number of recent functional magnetic resonance imaging (fMRI) studies on intertemporal choice behavior have demonstrated that so-called emotion- and reward-related brain areas are preferentially activated by decisions involving immediately available (but smaller) rewards as compared to (larger) delayed rewards. This pattern of activation was not seen, however, when intertemporal choices were made for another (unknown) individual, suggesting that the activation was triggered by self-relatedness. In the present fMRI study, we investigated the brain correlates of individuals who passively observed intertemporal choices being made either for themselves or for an unknown person. We found higher activation within the ventral striatum, medial prefrontal and orbitofrontal cortex, pregenual anterior cingulate cortex, and posterior cingulate cortex when an immediate reward was possible for the observer herself, which is in line with findings from studies in which individuals actively chose immediately available rewards. Additionally, activation in the dorsal anterior cingulate cortex, posterior cingulate cortex, and precuneus was higher for choices that included immediate options than for choices that offered only delayed options, irrespective of who was to be the beneficiary. These results indicate that (1) the activations found in active intertemporal decision making are also present when the same decisions are merely observed, supporting the assumption that a robust brain network is engaged by immediate gratification; and (2) with immediate rewards, certain brain areas are activated irrespective of whether the observer or another person is the beneficiary of a decision, suggesting that immediacy plays a more general role in neural activation. An explorative analysis of participants' brain activation corresponding to chosen rewards further indicates that activation in the aforementioned brain areas depends on the mere presence, availability, or actual reception of immediate rewards.

12.
Learning by following explicit advice is fundamental for human cultural evolution, yet the neurobiology of adaptive social learning is largely unknown. Here, we used simulations to analyze the adaptive value of social learning mechanisms, computational modeling of behavioral data to describe cognitive mechanisms involved in social learning, and model-based functional magnetic resonance imaging (fMRI) to identify the neurobiological basis of following advice. One-time advice received before learning had a sustained influence on people's learning processes. This was best explained by social learning mechanisms implementing a more positive evaluation of the outcomes from recommended options. Computer simulations showed that this "outcome-bonus" accumulates more rewards than an alternative mechanism implementing higher initial reward expectation for recommended options. fMRI results revealed a neural outcome-bonus signal in the septal area and the left caudate. This neural signal coded rewards in the absence of advice, and crucially, it signaled greater positive rewards for positive and negative feedback after recommended rather than after non-recommended choices. Hence, our results indicate that following advice is intrinsically rewarding. A positive correlation between the model's outcome-bonus parameter and amygdala activity after positive feedback directly relates the computational model to brain activity. These results advance the understanding of social learning by providing a neurobiological account for adaptive learning from advice.  相似文献   
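A minimal Python paraphrase of the outcome-bonus mechanism described above, with assumed values (the published computational model is more detailed): feedback obtained from the recommended option is evaluated with an added constant bonus before the value update, for positive and negative outcomes alike.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed two-option task; the one-time advice may or may not point to the better option.
p_reward = np.array([0.4, 0.6])   # true reward probabilities
advised = 0                        # option recommended by the one-time advice
q = np.zeros(2)
alpha, beta, bonus = 0.2, 4.0, 0.3

for _ in range(200):
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    c = rng.choice(2, p=p)
    outcome = float(rng.random() < p_reward[c])
    evaluated = outcome + (bonus if c == advised else 0.0)   # outcome bonus for advised option
    q[c] += alpha * (evaluated - q[c])
```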

13.
Braver TS, Brown JW. Neuron, 2003, 38(2): 150-152.
Accumulating evidence from nonhuman primates suggests that midbrain dopamine cells code reward prediction errors and that this signal subserves reward learning in dopamine-receiving brain structures. In this issue of Neuron, McClure et al. and O'Doherty et al. use event-related fMRI to provide some of the strongest evidence to date that the reward prediction error model of dopamine system activity applies equally well to human reward learning.  相似文献   
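The reward prediction error referred to here is usually formalised as the temporal-difference error; this is the standard formulation, not something specific to the two cited fMRI studies.

```latex
% Standard temporal-difference reward prediction error and value update:
\[
\delta_t \;=\; r_t + \gamma\,V(s_{t+1}) - V(s_t),
\qquad
V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t
\]
```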

14.
Perceptual decision making in monkeys relies on decision neurons, which accumulate evidence and maintain choices until a response is given. In humans, several brain regions have been proposed to accumulate evidence, but it is unknown whether these regions also maintain choices. To test whether accumulator regions in humans also maintain decisions, we compared delayed and self-paced responses during a face/house discrimination decision-making task. Computational modeling and fMRI results revealed dissociated processes of evidence accumulation and decision maintenance, with potential accumulator activations found in the dorsomedial prefrontal cortex, right inferior frontal gyrus and bilateral insula. Potential maintenance activation spanned the frontal pole, temporal gyri, precuneus and the lateral occipital and frontal orbital cortices. A quantitative reverse-inference meta-analysis performed to differentiate the functions associated with the identified regions did not narrow down the potential accumulation regions, but suggested that response maintenance might rely on verbalization of the response.
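For concreteness, a minimal drift-diffusion style accumulator of the kind often used for such two-choice discrimination data; all parameter values and option labels below are illustrative assumptions, and the study's own computational model may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def ddm_trial(drift: float, threshold: float = 1.0, noise: float = 1.0,
              dt: float = 0.001, max_t: float = 3.0):
    """Accumulate noisy evidence until one of the two bounds is crossed."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    if x >= threshold:
        return "face", t
    if x <= -threshold:
        return "house", t
    return "no response", t

print(ddm_trial(drift=0.8))  # stronger evidence (larger drift) ends trials faster on average
```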

15.
The high density of steroid hormone receptors in the temporal lobe structures involved in learning and memory, such as the hippocampus, perirhinal cortex, entorhinal cortex and amygdaloid complex, suggests a direct relationship between gonadal hormones and the organizational effects of steroid hormones in those structures during development of the nervous system. The present study investigated the effect of testosterone administration during the third week of gestation on spatial memory formation in the offspring and on the level of soluble proteins in the temporal and frontal lobes of the brain, as evidence of important organizational effects of androgens on brain sexual dimorphism during prenatal development. Animals received testosterone undecanoate on days 14, 15, 16 and 19, 20, 21 of gestation. Learning and memory tests were started 100 days after the testosterone treatment. At the end of the experiments, the temporal and frontal lobes of the brain were removed for assessing the level of soluble proteins. Testosterone treatment significantly improved the percentage of spontaneous alternations of male offspring in the Y-maze task compared with female offspring, and improved reference memory in the radial 8-arm maze task (a decrease in the number of reference memory errors in both male and female offspring groups), suggesting effects on both short- and long-term memory. Testosterone also significantly increased the brain soluble protein level of female rats treated on prenatal days 14-16 compared with the control group, as well as the brain soluble protein level of treated male rats. These results suggest that steroid hormones play an important role in spatial learning and memory formation by means of protein synthesis in different lobes of the brain.

16.
An extensive neuroimaging literature has helped characterize the brain regions involved in navigating a spatial environment. Far less is known, however, about the brain networks involved when learning a spatial layout from a cartographic map. To compare the two means of acquiring a spatial representation, participants learned spatial environments either by directly navigating them or learning them from an aerial-view map. While undergoing functional magnetic resonance imaging (fMRI), participants then performed two different tasks to assess knowledge of the spatial environment: a scene and orientation dependent perceptual (SOP) pointing task and a judgment of relative direction (JRD) of landmarks pointing task. We found three brain regions showing significant effects of route vs. map learning during the two tasks. Parahippocampal and retrosplenial cortex showed greater activation following route compared to map learning during the JRD but not SOP task while inferior frontal gyrus showed greater activation following map compared to route learning during the SOP but not JRD task. We interpret our results to suggest that parahippocampal and retrosplenial cortex were involved in translating scene and orientation dependent coordinate information acquired during route learning to a landmark-referenced representation while inferior frontal gyrus played a role in converting primarily landmark-referenced coordinates acquired during map learning to a scene and orientation dependent coordinate system. Together, our results provide novel insight into the different brain networks underlying spatial representations formed during navigation vs. cartographic map learning and provide additional constraints on theoretical models of the neural basis of human spatial representation.  相似文献   

17.
In the field of the neurobiology of learning, significant emphasis has been placed on understanding neural plasticity within a single structure (or synapse type) as it relates to a particular type of learning mediated by a particular brain area. To appreciate fully the breadth of the plasticity responsible for complex learning phenomena, it is imperative that we also examine the neural mechanisms of the behavioral instantiation of learned information, how motivational systems interact, and how past memories affect the learning process. To address this issue, we describe a model of complex learning (rodent adaptive navigation) that could be used to study dynamically interactive neural systems. Adaptive navigation depends on the efficient integration of external and internal sensory information with motivational systems to arrive at the most effective cognitive and/or behavioral strategies. We present evidence consistent with the view that during navigation: 1) the limbic thalamus and limbic cortex is primarily responsible for the integration of current and expected sensory information, 2) the hippocampal-septal-hypothalamic system provides a mechanism whereby motivational perspectives bias sensory processing, and 3) the amygdala-prefrontal-striatal circuit allows animals to evaluate the expected reinforcement consequences of context-dependent behavioral responses. Although much remains to be determined regarding the nature of the interactions among neural systems, new insights have emerged regarding the mechanisms that underlie flexible and adaptive behavioral responses.  相似文献   

18.
A neural network model of how dopamine and prefrontal cortex activity guides short- and long-term information processing within the cortico-striatal circuits during reward-related learning of approach behavior is proposed. The model predicts two types of reward-related neuronal responses generated during learning: (1) cell activity signaling errors in the prediction of the expected time of reward delivery and (2) neural activations coding for errors in the prediction of the amount and type of reward or stimulus expectancies. The former type of signal is consistent with the responses of dopaminergic neurons, while the latter signal is consistent with reward expectancy responses reported in the prefrontal cortex. It is shown that a neural network architecture that satisfies the design principles of the adaptive resonance theory of Carpenter and Grossberg (1987) can account for the dopamine responses to novelty, generalization, and discrimination of appetitive and aversive stimuli. These hypotheses are scrutinized via simulations of the model in relation to the delivery of free food outside a task, the timed contingent delivery of appetitive and aversive stimuli, and an asymmetric, instructed delay response task.  相似文献   

19.
20.
Reward-guided decision-making and learning depends on distributed neural circuits with many components. Here we focus on recent evidence that suggests four frontal lobe regions make distinct contributions to reward-guided learning and decision-making: the lateral orbitofrontal cortex, the ventromedial prefrontal cortex and adjacent medial orbitofrontal cortex, anterior cingulate cortex, and the anterior lateral prefrontal cortex. We attempt to identify common themes in experiments with human participants and with animal models, which suggest roles that the areas play in learning about reward associations, selecting reward goals, choosing actions to obtain reward, and monitoring the potential value of switching to alternative courses of action.  相似文献   
