Similar articles
20 similar articles found (search time: 296 ms)
1.
Although animals typically prefer to exert less effort rather than more to obtain food, the present research shows that requiring greater effort to obtain food at a particular location appears to increase the value of that location. In Experiment 1, pigeons' initial preference for one feeder was significantly reduced by requiring 1 peck to obtain food from that feeder and 30 pecks to obtain food from the other feeder. In Experiment 2, a similar decrease in preference was not found when pigeons received reinforcement from both feeders independently of the amount of effort required. These results are consistent with a within-trial contrast effect, in which the relative hedonic value of a reward depends on the state of the animal immediately prior to the reward: the greater the improvement from that prior state, the greater the value of the reinforcer.

2.
There is evidence that pigeons prefer conditioned reinforcers that are preceded by greater effort over those preceded by less effort (an effect that has been attributed to within-trial contrast). In past research the probability of reinforcement for correct choice of the conditioned reinforcer has been 100%; however, the high level of reinforcement for both alternatives in training may result in a performance ceiling when choice between those alternatives is provided on test trials. In the present study we tested this hypothesis by including a group for which the probability of reinforcement in training was only 50%. Pigeons were trained on two simultaneous discriminations, one preceded by a 30-peck requirement, the other by a single-peck requirement. On test trials, we found a significant preference for the S+ that required the greater effort in training for pigeons trained with 100% reinforcement and a small but nonsignificant effect for pigeons trained with 50% reinforcement. Although the hypothesis that the within-trial contrast effect was constrained by a performance ceiling was not confirmed, we did find a reliable within-trial contrast effect with 100% reinforcement.

3.
A common feature of reinforcer devaluation studies is that new learning induces the devaluation. The present study used extinction to induce new learning about the conditioned reinforcer in a heterogeneous chain schedule. Rats pressed a lever in a heterogeneous chain schedule to produce a conditioned reinforcer (light) associated with the opportunity to obtain an unconditioned reinforcer (food) by pulling a chain. The density of food reinforcement correlated with the conditioned reinforcer was varied in a comparison of continuous and variable-ratio reinforcement schedules of chain pulling; this had no noticeable effect on conditioned reinforcer devaluation produced by extinction of chain pulling. In contrast, how rats were deprived appeared to matter very much. Restricting meal duration to 1 h daily produced more lever pressing during baseline training and a greater reductive effect of devaluation on lever pressing than restricting body weight to 80% of a control rat's weight, which eliminated the devaluation effect. Further analysis suggested that meal-duration restriction may have produced devaluation effects because it was more effective than weight restriction in reducing rats' body weights. Our results exposed an important limitation on the devaluation of conditioned reinforcers: slight differences in food restriction, using two commonly employed food-restriction procedures, can produce completely different interpretations of reinforcer devaluation while leaving reinforcer-based learning intact.

4.
Nicotine has been found to produce dose-dependent increases in impulsive choice (preference for smaller, sooner reinforcers relative to larger, later reinforcers) in rats. Such increases could be produced by either of two behavioral mechanisms: (1) an increase in delay discounting (i.e., exacerbating the impact of differences in reinforcer delays), which would increase the value of a sooner reinforcer relative to a later one, or (2) a decrease in magnitude sensitivity (i.e., diminishing the impact of differences in reinforcer magnitudes), which would increase the value of a smaller reinforcer relative to a larger one. To isolate which of these two behavioral mechanisms was likely responsible for nicotine's effect on impulsive choice, we manipulated reinforcer delay and magnitude using a concurrent variable-interval (VI 30 s, VI 30 s) schedule of reinforcement with 2 groups of Long-Evans rats (n = 6 per group). For one group, choices were made between a 1-s delay and a 9-s delay to 2 food pellets. For a second group, choices were made between 1 pellet and 3 pellets. Nicotine (vehicle, 0.03, 0.1, 0.3, 0.56 and 0.74 mg/kg) produced dose-dependent decreases in preference for large versus small magnitude reinforcers and had no consistent effect on preference for short versus long delays. This suggests that nicotine decreases sensitivity to reinforcer magnitude.
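The delay-versus-magnitude distinction drawn in this abstract can be illustrated with the standard hyperbolic discounting model, V = A / (1 + kD). The abstract does not state this equation, so the model form and the parameter value below are illustrative assumptions, not values from the study:

```python
# Hyperbolic discounting sketch: V = A / (1 + k * D).
# k and the amounts/delays below are illustrative, not fitted to the study.

def discounted_value(amount, delay, k=0.1):
    """Subjective value of `amount` pellets delivered after `delay` seconds."""
    return amount / (1 + k * delay)

# Delay group: 2 pellets at a 1-s vs. a 9-s delay.
sooner = discounted_value(2, 1)  # larger subjective value
later = discounted_value(2, 9)   # smaller subjective value

# A drug that steepened discounting would raise k and widen this gap;
# a drug that reduced magnitude sensitivity would instead compress the
# subjective difference between 1 and 3 pellets at a fixed delay.
```

On this sketch, nicotine's observed effect (shifting magnitude-based but not delay-based preference) corresponds to compressing the amount term rather than changing k.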

5.
In previous research on resistance to change, differential disruption of operant behavior by satiation has been used to assess the relative strength of responding maintained by different rates or magnitudes of the same reinforcer in different stimulus contexts. The present experiment examined resistance to disruption by satiation of one reinforcer type when qualitatively different reinforcers were arranged in different contexts. Rats earned either food pellets or a 15% sucrose solution on variable-interval 60-s schedules of reinforcement in the two components of a multiple schedule. Resistance to satiation was assessed by providing free access either to food pellets or the sucrose solution prior to or during sessions. Responding systematically decreased more relative to baseline in the component associated with the satiated reinforcer. These findings suggest that when qualitatively different reinforcers maintain responding, relative resistance to change depends upon the relations between reinforcers and disrupter types.

6.
The idea that dopamine mediates the reinforcing effects of stimuli persists in neuroscience. The present study shows that haloperidol, a dopamine antagonist, does not eliminate the reinforcing value of food reinforcers. The ratio of reinforcers changed seven times across two levers within sessions, modeling a dynamic environment. The magnitude of the reinforcer was manipulated independently of the reinforcer ratio. Four doses of intraperitoneal haloperidol were assessed over periods of 12 daily sessions. Haloperidol did not impair the discrimination that the rats established between rich and lean levers; the response distributions favored the lever associated with the higher probability of reinforcement and the larger pellets. The parameters of the generalized matching law (bias and sensitivity) were used to estimate effects of haloperidol upon the motor system and upon the rats' motivation for food reinforcers.
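The generalized matching law referenced here has a standard log-ratio form, log(B1/B2) = a·log(R1/R2) + log b, where a is sensitivity and b is bias. A minimal sketch (parameter values illustrative, not from this study):

```python
import math

# Generalized matching law, log form:
#   log10(B1/B2) = a * log10(R1/R2) + log10(b)
# a = sensitivity to the reinforcer ratio, b = bias toward one alternative.
# The parameter values below are illustrative, not fitted to the haloperidol data.

def log_response_ratio(r1, r2, sensitivity=0.8, log_bias=0.0):
    """Predicted log10 response ratio for reinforcer rates r1 and r2."""
    return sensitivity * math.log10(r1 / r2) + log_bias

# A 9:1 rich:lean reinforcer ratio with sensitivity 0.8 and no bias:
pred = log_response_ratio(9, 1)  # 0.8 * log10(9), i.e. undermatching
```

In analyses like the one described, a drug-induced drop in the fitted sensitivity parameter would indicate weakened control by the reinforcer ratio, while a shift in bias would suggest motor or side effects specific to one lever.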

7.
Four pigeons responded under a 7-component mixed schedule in which each component arranged a different left:right reinforcer ratio (27:1, 9:1, 3:1, 1:1, 1:3, 1:9, 1:27). Components were unsignaled, and the order within each session was randomly determined. After extensive exposure to these contingencies, effects of a range of doses of d-amphetamine (0.3-5.6 mg/kg) on estimates of sensitivity to reinforcement at several levels of analysis were assessed. Under non-drug conditions, the structure of choice was similar to that previously reported under this procedure. That is, responding adjusted within components to the reinforcer ratio in effect (i.e., sensitivity estimates were higher in the 2nd than in the 1st half of components), and individual reinforcers produced "preference pulses" (i.e., each food presentation produced an immediate, local shift in preference toward the response that just produced food). Although there was a general tendency for d-amphetamine to reduce overall sensitivity to reinforcement, the size of this effect and its reliability varied across pigeons. Further analysis, however, revealed that intermediate d-amphetamine doses consistently reduced sensitivity immediately following reinforcer presentations; that is, these doses consistently attenuated preference pulses.

8.
Pausing within multiple fixed-ratio schedules differing in reinforcer magnitude is jointly controlled by both past and upcoming conditions of reinforcement. Abrupt shifts from a just-received large reinforcer to a signaled upcoming small reinforcer (i.e., a negative incentive shift) produce marked disruptions in responding, as indexed by extended pausing. The purpose of this experiment was to determine whether reducing the level of food deprivation via prefeeding enhanced these disruptive effects. Five Long-Evans rats lever-pressed according to a fixed-ratio schedule. Half of the components ended in a relatively large reinforcer (three 45-mg food pellets) and half ended in a relatively small reinforcer (one pellet). Components alternated irregularly, yielding four transitions between reinforcers: small-small, small-large, large-small (the negative incentive shift), and large-large. During five 1-session prefeeding probes, rats were given 12 g of food in their home cages 1 h prior to the start of the session. Under steady-state conditions, negative incentive shifts engendered the longest pausing. Prefeeding produced large absolute and relative increases in pausing during negative incentive shifts, and small increases in pausing in the other transitions. The results are interpreted within a resistance-to-change framework.

9.
Davison and Baum [Davison, M., Baum, W. M., 2000. Choice in a variable environment: every reinforcer counts. Journal of the Experimental Analysis of Behavior 74, 1-24.] developed a concurrent-schedule procedure in which, within each session, different reinforcer ratios were arranged across components separated by brief blackouts. Behaviour adapted quickly to the reinforcer ratios, and reinforcers also had local effects on responding. This procedure has been used with pigeons and rats. In the present experiment, we adapted the Davison and Baum procedure to study the effects of reinforcement on human choice behaviour. Eighteen participants were presented with four different reinforcer ratios within a single 50-minute session. Mean sensitivity to the reinforcer ratios increased within components, and preference was greater for the just-reinforced response alternative immediately following reinforcer delivery, similar to the results from non-human experiments. Although there were limitations to the current procedure, the local time-scale analyses are a novel way of examining human operant behaviour.

10.
Obese Zucker rats (fa/fa) eat more food than lean controls in free-feeding conditions, which strongly influences their phenotypic expression. Few studies, however, characterize their food consumption in environments that are more representative of foraging conditions, e.g., where effort plays a role in food procurement. This study examined the reinforcing efficacy of sucrose in obese Zucker rats by varying the responses required to obtain single sucrose pellets. Male Zucker rats (15 lean, 14 obese) lever-pressed under eight fixed-ratio (FR) schedules of sucrose reinforcement, in which the number of lever-presses required to gain access to a single sucrose pellet varied from 1 to 300. Linear and exponential demand equations, which characterize the value of a reinforcer by its sensitivity to price (FR), were fit to the number of food reinforcers and responses made. Free food consumption was also examined. Obese Zucker rats, compared to lean controls, consumed more food under free-feeding conditions. Moreover, they had higher levels of consumption and response output, but only at low FR values. Both groups were equally sensitive to price increases at higher FR values. This suggests that environmental conditions may interact with genes in the expression of food reinforcer efficacy.
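The exponential demand equation mentioned here is conventionally written (following Hursh and Silberberg's formulation) as log10 Q = log10 Q0 + k(e^(−α·Q0·C) − 1). The abstract does not give the equation or its fitted values, so the form and parameters below are assumptions for illustration:

```python
import math

# Exponential demand sketch:
#   log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1)
# Q  = consumption at price C (here, the FR requirement),
# Q0 = consumption at zero price, alpha = sensitivity to price,
# k  = range of consumption in log10 units.
# All parameter values are illustrative, not fitted to the Zucker-rat data.

def demand(price, q0=100.0, alpha=0.0005, k=2.0):
    """Predicted consumption at a given FR price."""
    return 10 ** (math.log10(q0) + k * (math.exp(-alpha * q0 * price) - 1))

low_price = demand(1)     # near q0: demand is relatively inelastic
high_price = demand(300)  # consumption collapses at high FR values
```

In this framework, the reported group difference maps onto a higher Q0 for obese rats (elevated consumption at low prices) with similar alpha (equal sensitivity to price increases at higher FRs).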

11.
In the metaphor of behavioral momentum, reinforcement is assumed to strengthen discriminated operant behavior in the sense of increasing its resistance to disruption, and extinction is viewed as disruption by contingency termination and reinforcer omission. In multiple schedules of intermittent reinforcement, resistance to extinction is an increasing function of reinforcer rate, consistent with a model based on the momentum metaphor. The partial-reinforcement extinction effect, which opposes the effects of reinforcer rate, can be explained by the large disruptive effect of terminating continuous reinforcement despite its strengthening effect during training. Inclusion of a term for the context of reinforcement during training allows the model to account for a wide range of multiple-schedule extinction data and makes contact with other formulations. The relation between resistance to extinction and reinforcer rate on single schedules of intermittent reinforcement is exactly opposite to that for multiple schedules over the same range of reinforcer rates; however, the momentum model can give an account of resistance to extinction in single as well as multiple schedules. An alternative analysis based on the number of reinforcers omitted to an extinction criterion supports the conclusion that response strength is an increasing function of reinforcer rate during training.
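The momentum model of extinction described here is usually written as log(Bt/Bo) = −t(c + d·r)/r^b, where extinction combines disruption from suspending the contingency (c) with generalization decrement from omitting reinforcers (d·r). The abstract names but does not state the equation, so the formulation and parameter values below are assumptions:

```python
# Behavioral-momentum extinction sketch (assumed form, not quoted from the
# abstract):  log(B_t / B_o) = -t * (c + d * r) / r**b
# t = sessions of extinction, r = training reinforcer rate,
# c = disruption from terminating the contingency,
# d = scaling of reinforcer-omission disruption,
# b = sensitivity exponent (often near 0.5). Values are illustrative.

def log_prop_baseline(t, r, c=1.0, d=0.001, b=0.5):
    """Predicted log proportion of baseline responding after t sessions."""
    return -t * (c + d * r) / r ** b

# Rich vs. lean multiple-schedule components (reinforcers/hour):
rich = log_prop_baseline(t=5, r=240)  # less negative: more resistant
lean = log_prop_baseline(t=5, r=30)   # more negative: extinguishes faster
```

The r^b denominator produces the multiple-schedule result (richer components are more resistant), while the d·r term in the numerator captures the opposing partial-reinforcement extinction effect: terminating a dense schedule is itself more disruptive.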

12.
Six pigeons responded on a four-key concurrent variable-interval schedule in which a 27:9:3:1 distribution of reinforcers between the keys changed every 10 reinforcers. Their behaviour quickly came under the control of this changing four-way reinforcer ratio. However, preference between a pair of keys depended not only on the relative reinforcer rates on those keys, but also on the absolute levels of those rates. This contradicts the constant-ratio rule that underpins the matching approach to choice, but is predicted by a contingency-discriminability model that assumes that organisms may occasionally misattribute reinforcers to a response that did not produce them. Reinforcers produced strong preference pulses, or transient increases in responding on the just-reinforced key. Despite accurate tracking of the reinforcer ratio, reinforcers obtained late in components and from leaner keys still produced strong pulses, suggesting both extended and local control of behaviour. Patterns of switching between keys were graded and similarly controlled by the reinforcer rates on each key. Whether considered in terms of switching, local preference pulses, or extended preference, behaviour was controlled by a rapidly changing four-way reinforcer ratio in a graduated, continuous manner that is unlikely to be explained by a simple heuristic such as fix-and-sample.

13.
Key pressing of rats was maintained under multiple and discrete-trial choice schedules with reinforcer units of 45-mg food pellets or 3.5-s dips of sucrose solution. Both smaller and larger fixed-ratio (FR) schedules were associated with the same unit price, such that, for example, each of eight iterations of FR 120 was associated with delivery of a single reinforcer unit and one instance of FR 960 was associated with eight reinforcer units. The FR requirement varied between 20 and 1560 per aggregate reinforcer, and unit price varied between 20 and 240 per reinforcer unit. During multiple schedules with food reinforcers, rates and patterns of responding were comparable over nearly a 50-fold range of FR requirements (20-1380) when unit price was 20, and over nearly a six-fold range of FR requirements (120-720) when unit price was 120; responding was only marginally maintained when unit price was 240. Demand for food pellets was comparatively inelastic at FRs between 20 and 120, during which subjects did not receive supplemental feeding outside experimental sessions, but was elastic at FRs greater than 240, when subjects sometimes did receive supplemental feeding. In a discrete-trial choice procedure with a constant unit price of 120 for sucrose solution, subjects were indifferent between smaller FRs and alternative FRs as large as 480, but began switching away from larger FRs of 600 or greater. Because responding had been comparably maintained under both FR 120 and FRs as large as 960 in the multiple schedule, results from the choice procedure indicated that choice performance was influenced by variables other than FR requirement and unit price. Because aggregate reinforcers were the same for smaller and larger FRs, the most likely reason for preferring smaller FRs was the nearness in time to some reinforcer.
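The unit-price arithmetic in this procedure is simply responses divided by reinforcer units, which is what equates the small- and large-FR arrangements:

```python
# Unit price = responses required per reinforcer unit. In the procedure
# described, eight iterations of FR 120 (one unit each) and a single
# FR 960 delivering eight units share the same unit price of 120.

def unit_price(fr_requirement, reinforcer_units):
    """Responses per reinforcer unit for one aggregate reinforcer."""
    return fr_requirement / reinforcer_units

price_small_fr = unit_price(120, 1)  # 120.0
price_large_fr = unit_price(960, 8)  # 120.0, equated with FR 120
```

The study's point is that although these two arrangements are equivalent in unit price, choice performance still diverged at large FRs, implicating temporal proximity to the reinforcer as an additional variable.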

14.
The present experiment provided a replication in humans of an experimental procedure that has been used frequently with nonhumans to investigate choice behaviour in a changing environment. Six volunteers played a computer game, which required tracking of a moving balloon on two simultaneously available response panels for monetary reinforcers. Each of the 15 sessions randomly arranged the following concurrent variable-interval reinforcement schedules, which were in effect until six reinforcers had been obtained: 27:1, 9:1, 3:1, 1:1, 1:3, 1:9, and 1:27. Although many aspects of human performance appeared to be qualitatively similar to that of nonhumans on this procedure, such as the rapid preference shifts towards the within-session reinforcer ratios and the presence of local effects of reinforcers, values of sensitivity to reinforcement were very variable in the present study, as commonly reported in human choice studies. Future variations and refinements of the experimental methods are needed to explore how this variability may be reduced.

15.
An adjusting-delay procedure was used to study rats' choices with probabilistic and delayed reinforcers, and to compare them with previous results from pigeons. A left lever press led to a 5-s delay signaled by a light and a tone, followed by a food pellet on 50% of the trials. A right lever press led to an adjusting delay signaled by a light, followed by a food pellet on 100% of the trials. In some conditions, the light and tone for the probabilistic reinforcer were present only on trials that delivered food. In other conditions, the light and tone were present on all trials on which the left lever was chosen. Similar studies with pigeons [Mazur, J.E., 1989. Theories of probabilistic reinforcement. J. Exp. Anal. Behav. 51, 87-99; Mazur, J.E., 1991. Conditioned reinforcement and choice with delayed and uncertain primary reinforcers. J. Exp. Anal. Behav. 63, 139-150] found that choice of the probabilistic reinforcer increased dramatically when the delay-interval stimuli were omitted on no-food trials, but this study found no such effect with the rats. In other conditions, the probability of food was varied, and comparisons to previous studies with pigeons indicated that rats showed greater sensitivity to decreasing reinforcer probabilities. The results support the hypothesis that rats' choices in these situations depend on the total time between a choice response and a reinforcer, whereas pigeons' choices are strongly influenced by the presence of delay-interval stimuli.

16.
Little is known about the effect that procedural variables have on risk-sensitive preference. This study assessed the effect of procedural variables on pigeons' choice between a fixed and variable amount of reinforcement (amount risk) and, in a separate condition, between a fixed and variable delay until reinforcement (delay risk). Experiment 1 investigated the impact of water reinforcement and risk dimension when pigeons were on a restrictive budget, where access to water was less than that necessary to maintain current body weight, and in a condition where the pigeons had ample access to water. Pigeons exhibited a greater tendency to prefer the variable alternative for delay risk than for amount risk in both restrictive and ample budgets. Varying the water budget had no effect on risk preference. Experiment 2 investigated the influence of water reinforcer location under a restrictive budget, in which reinforcers were delivered to a single location, two distinct locations, or a randomly selected location. With amount risk, pigeons were risk averse when reinforcers were delivered in separate or random locations and were indifferent to risk when reinforcers were delivered to a single location. With delay risk, pigeons were generally risk prone, with no effect of reinforcement location. The finding that pigeons were risk averse when reinforcers were delivered to separate locations and indifferent to risk when delivered to a single location offers a methodological explanation for the inconsistent findings in the amount-risk literature.

17.
The term "sensory reinforcer" has been used to refer to sensory stimuli (e.g., light onset) that are primary reinforcers, in order to differentiate them from other, more biologically important primary reinforcers (e.g., food and water). Acquisition of snout-poke responding for a visual stimulus (5-s light onset) with fixed-ratio 1 (FR 1), variable-interval 1-min (VI 1 min), or variable-interval 6-min (VI 6 min) schedules of reinforcement was tested in three groups of rats (n = 8/group). The VI 6 min schedule of reinforcement produced a higher response rate than the FR 1 or VI 1 min schedules of visual stimulus reinforcement. One explanation for greater responding on the VI 6 min schedule relative to the FR 1 and VI 1 min schedules is that the reinforcing effectiveness of light onset habituated more rapidly in the FR 1 and VI 1 min groups than in the VI 6 min group. The inverse relationship between response rate and the rate of visual stimulus reinforcement is opposite to results from studies with biologically important reinforcers, which indicate a positive relationship between response and reinforcement rate. Rapid habituation of reinforcing effectiveness may be a fundamental characteristic of sensory reinforcers that differentiates them from biologically important reinforcers, which are required to maintain homeostatic balance.

18.
Four pigeons and three ringneck doves responded on an operant simulation of natural foraging. After satisfying a schedule of reinforcement associated with search time, subjects could "accept" or "reject" another schedule of reinforcement associated with handling time. Two schedules of reinforcement were available: a variable interval, and a fixed interval with the same mean value. Food available in the session (a variable related to the energy budget) was manipulated across conditions either by increasing the value of the search-state schedule of reinforcement or by increasing the mean value of the handling-state schedules. The results indicate that the amount of food available in the session did not affect the preference for variable schedules of reinforcement, as would be predicted by an influential theory of risk-sensitive foraging. Instead, the preference for variability depended on the relationship between the time spent in the search and handling states, as is predicted by a family of models of choice that are based on temporal proximity to the reinforcer.

19.
Behavioral momentum theory suggests that the relation between a response and a reinforcer (i.e., the response-reinforcer relation) governs response rates and that the relation between a stimulus and a reinforcer (i.e., the stimulus-reinforcer relation) governs resistance to change. The present experiments compared the effects of degrading response-reinforcer relations with response-independent or delayed reinforcers on resistance to change in conditions with equal stimulus-reinforcer relations. In Experiment 1, pigeons responded on equal variable-interval schedules of immediate reinforcement in three components of a multiple schedule. Additional response-independent reinforcers were available in one component and additional delayed reinforcers were available in another component. The results showed that resistance to disruption was greater in the components with added reinforcers than without them (i.e., better stimulus-reinforcer relations), but did not differ between the components with added response-independent and delayed reinforcement. In Experiment 2, a component presenting immediate reinforcement alternated with either a component that arranged equal rates of reinforcement with a proportion of those reinforcers being response independent, or a component with a proportion of the reinforcers being delayed. Results showed that resistance to disruption tended to be either similar across components or slightly lower when response-reinforcer relations were degraded with either response-independent or delayed reinforcers. These findings suggest that degrading response-reinforcer relations can impact resistance to change, but that the impact does not depend on the specific method and is small relative to the effects of the stimulus-reinforcer relation.

20.
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

