Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
The development of a secondary reinforcer as a result of associating a neutral stimulus (buzzer) with intravenous (IV) doses of morphine was studied in rats. Secondary reinforcement developed in the absence of physical dependence and followed the association of the stimulus with either response-contingent or non-contingent injections of morphine. Strength of the conditioned reinforcer, measured in terms of responding on a lever for the stimulus plus infusion of saline solution, was proportional to the unit dosage of morphine employed in pairings of buzzer and drug. When extinction of the lever-press response for IV morphine was conducted (by substituting saline for morphine solution) in the absence of the conditioned reinforcing stimulus, the stimulus was later shown still to elicit lever responses until it, too, had been presented for a sufficient interval of non-reinforced responding. Similarly, extinction of the response for morphine by blocking its action with naloxone in the absence of the stimulus did not eliminate the conditioned reinforcement. Another study showed that a passive, subcutaneous (SC) dose of morphine served to maintain lever pressing on a contingency of buzzer plus saline infusion. Furthermore, the stimuli resulting from the presence of morphine (after an SC injection) were able to reinstate lever responding with only the buzzer-saline contingency when such responses had previously been extinguished. Moreover, it was shown that d-amphetamine could restore responding under the same conditions, and that morphine could also do so for rats in which the primary reinforcer had been d-amphetamine. It is suggested that animal data such as these show that procedures designed for the elimination of human drug-taking behavior must take into account secondary reinforcers as well as the primary reinforcer(s).

2.
A common feature of reinforcer devaluation studies is that new learning induces the devaluation. The present study used extinction to induce new learning about the conditioned reinforcer in a heterogeneous chain schedule. Rats pressed a lever in a heterogeneous chain schedule to produce a conditioned reinforcer (light) associated with the opportunity to obtain an unconditioned reinforcer (food) by pulling a chain. The density of food reinforcement correlated with the conditioned reinforcer was varied in a comparison of continuous and variable-ratio reinforcement schedules of chain pulling; this had no noticeable effect on conditioned reinforcer devaluation produced by extinction of chain pulling. In contrast, how rats were food-deprived appeared to matter very much. Restricting meal duration to 1 h daily produced more lever pressing during baseline training and a greater reductive effect of devaluation on lever pressing than restricting body weight to 80% of a control rat's weight, which eliminated the devaluation effect. Further analysis suggested that meal-duration restriction may have produced devaluation effects because it was more effective than weight restriction in reducing rats' body weights. Our results exposed an important limitation on the devaluation of conditioned reinforcers: slight differences in food restriction, using two commonly employed food-restriction procedures, can produce completely different interpretations of reinforcer devaluation while leaving reinforcer-based learning intact.

3.
This study investigated generalization decrement during an extinction resistance-to-change test for pigeon key pecking using a two-component multiple schedule with equal variable-interval 3-min schedules and different reinforcer amounts (one component presented 2-s access to reinforcement and the other 8-s access). After establishing baseline responding, subjects were assigned to one of two extinction conditions: hopper stimuli (hopper and hopper light were activated but no food was available) or Control (inactive hopper and hopper light). Responding in the 8-s component was more resistant to extinction than responding in the 2-s component, the hopper-stimuli group was more resistant to extinction than the Control group, and an interaction among amount of reinforcement, extinction condition, and session block was present. This finding supports generalization decrement as a factor that influences resistance to extinction. Hopper-time data (the amount of time subjects spent with their heads in the hopper) were compared to resistance-to-change data in an investigation of the role of conditioned reinforcement in resistance to change.

4.
This experiment replicated previous demonstrations that interposing a brief stimulus between reinforced responses and the presentation of the reinforcer reduces responding maintained by intermittent reinforcement schedules. Furthermore, we could find no significant difference in the relative size of the reduction between training on ratio and interval schedules when the predictive significance of the response and stimulus was controlled by a yoking procedure.

5.
The effect of stimulus contiguity and response contingency on responding in chain schedules was examined in two experiments. In Experiment 1, four pigeons were trained on two simple three-link chain schedules that alternated within sessions. Initial links were correlated with a variable-interval 30-s schedule, and middle and terminal links were correlated with interdependent variable-interval 30-s variable-interval 30-s schedules; the combined duration of the interdependent schedules summed to 60 s. The two chains differed with respect to signaling of the schedule components: a two-stimulus chain had one stimulus paired with the initial link and one stimulus paired with both the middle and the terminal link, while a three-stimulus chain had a different stimulus paired with each of the three links. The results showed that the two-stimulus chain maintained lower initial-link responding than the three-stimulus chain. In Experiment 2, four pigeons were exposed to three separate conditions: the two- and three-stimulus chains of Experiment 1 and a three-stimulus chain with a 3-s delay to terminal-link entry from the middle-link response that produced it. The two-stimulus chain maintained lower initial-link responding than the three-stimulus chain, as in Experiment 1, and similar initial-link responding was maintained by the two-stimulus chain and the three-stimulus chain with the delay contingency. The results demonstrate that a stimulus noncontiguous with food can sometimes maintain greater responding than a stimulus contiguous with food, depending on the response contingency for terminal-link entry. The results are contrary to the pairing hypothesis of conditioned reinforcement.

6.
Four experiments were conducted to examine appetitive backward conditioning in a conditioned reinforcement preparation. In all experiments, off-line classical conditioning was conducted following lever-press training on two levers. Presentations of a sucrose solution by a liquid dipper served as the unconditioned stimulus (US) and two auditory stimuli served as conditioned stimuli (CSs); one was paired with the US in either a forward (Experiment 1a) or a backward (Experiments 1b, 2, and 3) relationship, and the other served as a control CS that was not paired with the US. In testing, each lever press produced a presentation of one of the CSs instead of an appetitive reinforcer. Responding on a lever was facilitated, relative to responding on the other lever, when it produced the backward CS as well as when it produced the forward CS; that is, the backward CS served as an excitatory conditioned reinforcer.

7.
The term "sensory reinforcer" has been used to refer to sensory stimuli (e.g., light onset) that are primary reinforcers, in order to differentiate them from other, more biologically important primary reinforcers (e.g., food and water). Acquisition of snout-poke responding for a visual stimulus (5-s light onset) on fixed-ratio 1 (FR 1), variable-interval 1-min (VI 1-min), or variable-interval 6-min (VI 6-min) schedules of reinforcement was tested in three groups of rats (n = 8/group). The VI 6-min schedule produced a higher response rate than the FR 1 or VI 1-min schedules of visual-stimulus reinforcement. One explanation for greater responding on the VI 6-min schedule relative to the FR 1 and VI 1-min schedules is that the reinforcing effectiveness of light onset habituated more rapidly in the FR 1 and VI 1-min groups than in the VI 6-min group. The inverse relationship between response rate and the rate of visual-stimulus reinforcement is opposite to results from studies with biologically important reinforcers, which indicate a positive relationship between response and reinforcement rate. Rapid habituation of reinforcing effectiveness may be a fundamental characteristic of sensory reinforcers that differentiates them from biologically important reinforcers, which are required to maintain homeostatic balance.

8.
In discrete trials, pigeons were presented with two alternatives: to wait for a larger reinforcer, or to respond and obtain a smaller reinforcer immediately. The choice of the former was defined as self-control, and the choice of the latter as impulsiveness. The stimulus that set the opportunity for an impulsive choice was presented after a set interval from the onset of the stimulus that signaled the waiting period. That interval increased or decreased from session to session, so that the opportunity for an impulsive choice became available either more removed from or closer in time to the presentation of the larger reinforcer. In three separate conditions, the larger reinforcer was delivered according to either a fixed-interval (FI) schedule, a fixed-time (FT) schedule, or a differential-reinforcement-of-other-behavior (DRO) schedule. The results showed that impulsive choices increased as the opportunity for such a choice was more distant in time from presentation of the larger reinforcer. Although the schedule of the larger reinforcer affected the rate of response in the waiting period, the responses themselves had no effect on choice unless they postponed presentation of the larger reinforcer.

9.
The effect of signals on resistance to change was evaluated using pigeons responding on a three-component multiple schedule. Each component contained a variable-interval initial link followed by a fixed-time terminal link. One component was an unsignaled-delay schedule, and two were equivalent signaled-delay schedules. After baseline training, resistance to change was assessed through (a) extinction and (b) adding free food to the intercomponent interval. During these tests, the signal stimulus from one of the signaled-delay components (SIG-T) was replaced with the initial-link stimulus from that component, converting it to an unsignaled-delay schedule. That signal stimulus was added to the delay period of the unsignaled-delay component (UNS), converting it to a signaled-delay schedule. The remaining signaled component remained unchanged (SIG-C). Resistance-to-change tests showed that removing the signal had a minimal effect on resistance to change in the SIG-T component compared to the unchanged SIG-C component, except for one block during free-food testing. Adding the signal to the UNS component significantly increased response rates, suggesting that this component had low response strength. Interestingly, the effect was in the opposite direction from what is typically observed. The results are consistent with the conclusion that the signal functioned as a conditioned reinforcer and inconsistent with a generalization-decrement explanation.

10.
In Skinner's Reflex Reserve theory, reinforced responses added to a reserve depleted by responding. It could not handle the finding that partial reinforcement generated more responding than continuous reinforcement, but it would have worked if its growth had depended not just on the last response but also on earlier responses preceding a reinforcer, each weighted by delay. In that case, partial reinforcement generates steady states in which reserve decrements produced by responding balance increments produced when reinforcers follow responding. A computer simulation arranged schedules for responses produced with probabilities proportional to reserve size. Each response subtracted a fixed amount from the reserve and added an amount weighted by the reciprocal of the time to the next reinforcer. Simulated cumulative records and quantitative data for extinction, random-ratio, random-interval, and other schedules were consistent with those of real performances, including some effects of history. The model also simulated rapid performance transitions with changed contingencies that did not depend on molar variables or on differential reinforcement of inter-response times. The simulation can be extended to inhomogeneous contingencies by way of continua of reserves arrayed along response and time dimensions, and to concurrent performances and stimulus control by way of different reserves created for different response classes.
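The reserve dynamics described in this abstract can be sketched in a few lines. This is not the article's actual program; the parameter values, the random-ratio probability, and the function name are illustrative assumptions chosen only to show the update rule (respond with probability proportional to the reserve, subtract a fixed amount per response, and, at reinforcement, add back increments weighted by the reciprocal of each response's delay to the reinforcer):

```python
import random

def simulate_reserve(steps=5000, ratio=10, decrement=0.001,
                     scale=0.05, max_reserve=1.0):
    """Sketch of a delay-weighted reserve model on a random-ratio schedule.

    All parameter values are illustrative, not taken from the article.
    """
    reserve = 0.5
    pending = []            # time steps of responses since the last reinforcer
    responses = reinforcers = 0
    for t in range(steps):
        if random.random() < reserve:          # respond with p proportional to reserve
            responses += 1
            reserve = max(0.0, reserve - decrement)   # each response depletes
            pending.append(t)
            if random.random() < 1.0 / ratio:  # random-ratio reinforcement
                reinforcers += 1
                for rt in pending:             # increment weighted by 1/(delay+1)
                    reserve += scale / (t - rt + 1)
                reserve = min(reserve, max_reserve)
                pending = []
    return responses, reinforcers
```

Because increments are spread over all responses since the last reinforcer, intermittent reinforcement settles into the steady state the abstract describes, where per-response decrements balance delay-weighted increments; setting `ratio` very high approximates extinction and the reserve simply drains.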

11.
Research on Herrnstein's single-schedule equation contains conflicting findings; some laboratories report variation in the k parameter with reinforcer value, and others report constancy. The reported variation in k typically occurs across very low reinforcer values, whereas constancy applies across higher values. Here, simulations were conducted assuming a wide range of reinforcer values, and the parameters of Herrnstein's equation were estimated for the simulated responding. In the simulations, responses controlled by current reinforcement contingencies were added to other responses ('noise') controlled by the experimental environment and by contingencies in effect at other times. Expected reinforcer rates were calculated by entering simulated responding into a reinforcement feedback function. These were then fitted using Herrnstein's hyperbola, and the sampling distributions of the two fitted parameters were studied. Both k and Re were underestimated by curve fitting when conditions of low deprivation or low reinforcer quality were simulated. Further simulations showed that k and Re were increasingly underestimated as the assumed noise level was increased, particularly when low deprivation or low reinforcer quality was assumed. It is concluded that reported variations in k from single schedules should not be taken to indicate that the asymptotic rate of responding depends on reinforcement parameters.
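Herrnstein's single-schedule hyperbola is B = kr/(r + Re), where B is response rate, r is obtained reinforcer rate, k is the asymptotic response rate, and Re is the rate of extraneous reinforcement. A minimal curve-fitting sketch follows; the grid-search ranges and the reinforcer rates are illustrative assumptions (a real analysis would use a nonlinear optimizer rather than a grid):

```python
def herrnstein(r, k, Re):
    """Herrnstein's hyperbola: predicted response rate at reinforcer rate r."""
    return k * r / (r + Re)

def fit_hyperbola(rates, B):
    """Crude grid-search least squares over illustrative parameter ranges."""
    best = None
    for k in range(50, 151):          # candidate asymptotic rates
        for Re in range(1, 101):      # candidate extraneous-reinforcement rates
            sse = sum((b - herrnstein(r, float(k), float(Re))) ** 2
                      for r, b in zip(rates, B))
            if best is None or sse < best[0]:
                best = (sse, float(k), float(Re))
    return best[1], best[2]
```

Fitting noiseless data generated from known parameters recovers them exactly; adding 'noise' responses to B before fitting, as in the simulations described above, is a simple way to explore how the fitted k and Re shift away from their generating values.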

12.
The present experiment examined whether habituation contributes to within-session decreases in operant responding for water reinforcers. The experiment asked whether this responding could be dishabituated, dishabituation being a fundamental property of habituated behavior. During baseline, rats' lever pressing was reinforced by water on a variable-interval 15-s schedule. During experimental conditions, rats responded on the same schedule and a new stimulus was introduced for 5 min at 15, 30, or 45 min into the 60-min session. In different conditions, the new stimulus was extinction, continuous reinforcement, or flashing lights. Rate of responding primarily decreased within the session during baseline. Introducing a new stimulus sometimes suppressed (extinction, continuous reinforcement) and sometimes increased (flashing lights) responding while it was in effect. The new stimulus increased responding after it ended and before it was presented in the session. The results are incompatible with the idea that non-habituation satiety factors (e.g., cellular hydration and blood volume) contributed to within-session changes in responding. Such satiety factors should increase with increases in consumption, decrease with decreases in consumption, and remain constant with constant consumption of water. Nevertheless, all stimulus changes increased operant responding for water. These results support the idea that habituation contributes to within-session decreases in responding for water reinforcers.

13.
Behavioral momentum theory suggests that the relation between a response and a reinforcer (i.e., the response-reinforcer relation) governs response rates and that the relation between a stimulus and a reinforcer (i.e., the stimulus-reinforcer relation) governs resistance to change. The present experiments compared the effects of degrading response-reinforcer relations with response-independent or delayed reinforcers on resistance to change in conditions with equal stimulus-reinforcer relations. In Experiment 1, pigeons responded on equal variable-interval schedules of immediate reinforcement in three components of a multiple schedule. Additional response-independent reinforcers were available in one component, and additional delayed reinforcers were available in another component. The results showed that resistance to disruption was greater in the components with added reinforcers than without them (i.e., with better stimulus-reinforcer relations), but did not differ between the components with added response-independent and added delayed reinforcement. In Experiment 2, a component presenting immediate reinforcement alternated with either a component that arranged equal rates of reinforcement with a proportion of those reinforcers being response-independent or a component with a proportion of the reinforcers being delayed. Results showed that resistance to disruption tended to be either similar across components or slightly lower when response-reinforcer relations were degraded with either response-independent or delayed reinforcers. These findings suggest that degrading response-reinforcer relations can affect resistance to change, but that the impact does not depend on the specific method and is small relative to the effects of the stimulus-reinforcer relation.

14.
Three experiments were conducted using a conditioned taste aversion procedure with rats to examine the effect of nonreinforced presentations of a conditioned stimulus (CS) on its ability to compete with a target stimulus for manifest conditioned responding. Two CSs (A and B) were presented in a serial compound and then paired with the unconditioned stimulus (US). CS A was first paired with the US and then presented without the US (i.e., extinction) prior to reinforced presentation of the AB compound. Experiment 1 showed that A was poor at competing with B for conditioned responding when given conditioning and extinction prior to reinforcement of AB, relative to a group that received both A and B for the first time during compound conditioning. That is, an extinguished A stimulus allowed greater manifest acquisition to B. Experiment 2 found that extinction treatment produced a poor conditioned response (CR) to the pretrained and extinguished CS itself following compound conditioning. Experiment 3 found that interposing a retention interval after extinction of A and prior to compound conditioning enhanced A's ability to compete with B. The results of these experiments are discussed with regard to different theories of extinction and associative competition.

15.
Four pigeons responded under a 7-component mixed schedule in which each component arranged a different left:right reinforcer ratio (27:1, 9:1, 3:1, 1:1, 1:3, 1:9, 1:27). Components were unsignaled, and their order within each session was randomly determined. After extensive exposure to these contingencies, the effects of a range of doses of d-amphetamine (0.3-5.6 mg/kg) on estimates of sensitivity to reinforcement were assessed at several levels of analysis. Under non-drug conditions, the structure of choice was similar to that previously reported under this procedure. That is, responding adjusted within components to the reinforcer ratio in effect (i.e., sensitivity estimates were higher in the second than in the first half of components), and individual reinforcers produced "preference pulses" (i.e., each food presentation produced an immediate, local shift in preference toward the response that had just produced food). Although there was a general tendency for d-amphetamine to reduce overall sensitivity to reinforcement, the size of this effect and its reliability varied across pigeons. Further analysis, however, revealed that intermediate d-amphetamine doses consistently reduced sensitivity immediately following reinforcer presentations; that is, these doses consistently attenuated preference pulses.
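Sensitivity estimates of this kind typically come from the generalized matching law, log(B_L/B_R) = a·log(R_L/R_R) + log c, where the slope a is sensitivity to reinforcement. A minimal sketch of the slope estimate follows; the seven reinforcer ratios match the procedure above, but the behavior ratios in the usage note are invented for illustration:

```python
import math

def sensitivity(behavior_ratios, reinforcer_ratios):
    """Least-squares slope of log behavior ratio on log reinforcer ratio.

    Ratios must be strictly positive (real data with zero responses on one
    alternative need special handling before taking logs).
    """
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

With behavior ratios equal to the reinforcer ratios the slope is 1.0 (strict matching); behavior ratios that undershoot the reinforcer ratios (e.g., raised to the 0.8 power) give a slope below 1, the undermatching that drug doses reduced further here.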

16.
Two experiments were conducted to investigate punishment via response-contingent removal of conditioned token reinforcers (response cost) with pigeons. In Experiment 1, key pecking was maintained on a two-component multiple second-order schedule of token delivery, with light-emitting diodes (LEDs) serving as token reinforcers. In both components, responding produced tokens according to a random-interval 20-s schedule and exchange periods according to a variable-ratio schedule. During exchange periods, each token was exchangeable for 2.5-s access to grain. In one component, responses were conjointly punished according to fixed-ratio schedules of token removal. Response rates in this punishment component decreased to low levels while response rates in the alternate (no-punishment) component were unaffected. Responding was eliminated when it produced neither tokens nor exchange periods (Extinction), but was maintained at moderate levels when it produced tokens in the signaled absence of food reinforcement, suggesting that tokens served as effective conditioned reinforcers. In Experiment 2, the effect of the response-cost punishment contingency was separated from changes in the density of food reinforcement. This was accomplished by yoking either the number of food deliveries per component (Yoked Food) or the temporal placement of all stimulus events (tokens, exchanges, food deliveries) (Yoked Complete), from the punishment to the no-punishment component. Response rates decreased in both components, but decreased more rapidly and were generally maintained at lower levels in the punishment component than in the yoked component. In showing that the response-cost contingency had a suppressive effect on responding in addition to that produced by reductions in reinforcement density, the present results suggest that response-cost punishment shares important features with other forms of punishment.

17.
The present study investigated whether the sucrose-reinforced lever pressing of rats in the first half of a 50-min session would be sensitive to upcoming food-pellet reinforcement in the second half. In Experiment 1, the type of reinforcer in the first half of the session was always liquid sucrose, and the type of reinforcer in the second half (liquid sucrose or food pellets) varied across conditions. Sucrose concentration varied across groups (1, 5, or 25%). Results showed that rates and patterns of responding for 1%, and sometimes for 5%, sucrose reinforcers in the first half of the session were higher and steeper, respectively, when food-pellet, rather than sucrose, reinforcement occurred in the second half. Responding for 25% sucrose was not similarly affected. Experiment 2 replicated the results of Experiment 1 using a within-subjects design. Although the present results represent induction (i.e., the opposite of contrast), they are consistent with some results on consummatory contrast. They also further demonstrate that responding on interval schedules of reinforcement can be altered prospectively. In doing so, however, they pose potential problems for current theories of why operant response rates change within the session.

18.
The rat's ability to vary its whisking "strategies" to meet the functional demands of a discriminative task suggests that whisking may be characterized as a "voluntary" behavior, an operant, and, like other operants, should be modifiable by appropriate manipulations of response-reinforcer contingencies. To test this hypothesis we used high-resolution, optoelectronic "real-time" recording procedures to monitor the movements of individual whiskers and reinforce specific movement parameters (amplitude, frequency). In one operant paradigm (N = 9), whisks with protractions above a specified amplitude were reinforced on a variable-interval 30-s schedule in the presence of a tone, but extinguished (EXT) in its absence. In a second paradigm (N = 3), rats were reinforced on two different VI schedules (VI 20-s/VI 120-s) signaled, respectively, by the presence or absence of the tone. Selective reinforcement of whisking movements maintained the behavior over many weeks of testing and brought it under stimulus and schedule control. Subjects in the first paradigm learned to increase responding in the presence of the tone and to inhibit responding in its absence. In the second paradigm, subjects whisked at significantly different rates in the two stimulus conditions. Bilateral deafferentation of the whisker pad did not impair conditioned whisking or disrupt discrimination behavior. Our results confirm the hypothesis that rodent whisking has many of the properties of an operant response. The ability to bring whisking movement parameters under operant control should facilitate electrophysiological and lesion/behavioral studies of this widely used "model" sensorimotor system.

19.
Partial reinforcement often leads to asymptotically higher rates of responding and number of trials with a response than does continuous reinforcement in pigeon autoshaping. However, comparisons typically involve a partial reinforcement schedule that differs from the continuous reinforcement schedule in both time between reinforced trials and probability of reinforcement. Two experiments examined the relative contributions of these two manipulations to asymptotic response rate. Results suggest that the greater responding previously seen with partial reinforcement is primarily due to differential probability of reinforcement and not differential time between reinforced trials. Further, once established, differences in responding are resistant to a change in stimulus and contingency. Secondary response theories of autoshaped responding (theories that posit additional response-augmenting or response-attenuating mechanisms specific to partial or continuous reinforcement) cannot fully accommodate the current body of data. It is suggested that researchers who study pigeon autoshaping train animals on a common task prior to training them under different conditions.

20.
Rats increase their rate of operant responding for 1% sucrose reinforcement in the first half of an experimental session if a high-valued reinforcer will be available in the second half. Previous research suggests that this induction effect occurs because the reinforcing value of the low-valued substance has increased. The present study investigated whether this increase may occur because of where the substances are delivered. Rats pressed a lever to earn 1% liquid-sucrose reinforcers in the first half of the session. In control conditions, they also pressed for 1% sucrose in the second half. In treatment conditions, they pressed for food-pellet (Experiment 1) or 32% sucrose (Experiment 2) reinforcers in the second half, with these reinforcers either being delivered to the same location as the 1% sucrose or to a different location. Upcoming food-pellet or 32% sucrose reinforcement increased rates of lever pressing for 1% sucrose in the first half of the session, with the largest increase observed when the high-valued reinforcer was delivered to the same location as the 1% sucrose. Qualitatively similar results were found with rates of consumption of 1% sucrose reinforcers in the first half of the session, which were measured in Experiment 2. The location to which reinforcers are delivered appears to be one of the factors that contributes to this induction effect. The present results may therefore identify one of the factors that determine whether differential conditions of reinforcement will lead to contrast or induction.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号