Similar Articles
20 similar articles found (search time: 31 ms)
1.
Research on Herrnstein's single-schedule equation contains conflicting findings; some laboratories report variations in the k parameter with reinforcer value, and others report constancy. The reported variation in k typically occurs across very low reinforcer values, and constancy applies across higher values. Here, simulations were conducted assuming a wide range of reinforcer values, and the parameters of Herrnstein's equation were estimated for simulated responding. In the simulations, responses controlled by current reinforcement contingencies were added to other responses ('noise') controlled by the experimental environment and by contingencies in effect at other times. Expected reinforcer rates were calculated by entering simulated responding into a reinforcement feedback function. These were then fitted using Herrnstein's hyperbola, and the sampling distributions of the two fitted parameters were studied. Both k and Re were underestimated by curve fitting when low-deprivation or low reinforcer-quality conditions were simulated. Further simulations showed that k and Re were increasingly underestimated as the assumed noise level was increased, particularly when low deprivation or low reinforcer quality was assumed. It is concluded that reported variations in k from single schedules should not be taken to indicate that the asymptotic rate of responding depends on reinforcement parameters.
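The quantity being fitted in this abstract is Herrnstein's single-schedule hyperbola, B = kR/(R + Re), where B is response rate, R is obtained reinforcer rate, k is the asymptotic response rate, and Re is the rate of extraneous reinforcement. A minimal sketch of this kind of curve fitting (the parameter values and grid-search fitting routine are illustrative assumptions, not the paper's method):

```python
# Herrnstein's single-schedule hyperbola: B = k*R / (R + Re).
def herrnstein(R, k, Re):
    return k * R / (R + Re)

# Generate synthetic response rates from known parameters
# (values are hypothetical, chosen for illustration).
true_k, true_Re = 80.0, 20.0
rates = [5, 10, 20, 40, 80, 160, 320]
data = [(R, herrnstein(R, true_k, true_Re)) for R in rates]

# Recover the parameters by least-squares grid search, a simple
# stand-in for the curve fitting used in the simulations.
def sse(k, Re):
    return sum((B - herrnstein(R, k, Re)) ** 2 for R, B in data)

best = min(((k, Re) for k in range(40, 121) for Re in range(5, 41)),
           key=lambda p: sse(*p))
print(best)  # recovers (80, 20) on noiseless data
```

In the paper's simulations, additive "noise" responding is mixed into the data before fitting; the abstract's point is that such noise biases both fitted parameters downward.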

2.
This experiment replicated previous demonstrations that interposing a brief stimulus between reinforced responses and the presentation of the reinforcer reduces responding maintained by intermittent reinforcement schedules. Furthermore, we could find no significant difference between the relative size of the reduction during training on ratio and interval schedules when the predictive significance of the response and stimulus was controlled by a yoking procedure.

3.
The experiments tested the idea that changes in habituation to the reinforcer contribute to behavioral interactions during multiple schedules. This idea predicts that changing an aspect of the reinforcer should disrupt habituation and produce an interaction. Pigeons and rats responded on multiple variable-interval variable-interval schedules. Introducing variability into the duration of reinforcers in one component increased response rates in both components when the schedules provided high, but not low, rates of reinforcement. The increases in constant-component response rates grew larger as the session progressed. Within-session decreases in responding were smaller when the other component provided variable-, rather than fixed-, duration reinforcers. These results are consistent with the idea that changes in habituation to the reinforcer contribute to behavioral interactions. They help to explain why interactions do not occur for some subjects under conditions that produce them for others. Finally, the results question the assumption that induction and behavioral contrast are always produced by different theoretical mechanisms.

4.
Although Killeen's mathematical principles of reinforcement (MPR) apply to the asymptotic rate of a free operant after extended exposure to a single schedule of reinforcement, they can be extended to resistance to change in multiple schedules via alterations in the parameter representing the activating effects of reinforcers. MPR's predictions of resistance to change in relation to reinforcer rate on variable-interval (VI) schedules are empirically correct and agree with behavioral momentum theory (BMT). However, both MPR and BMT encounter problems in accounting for the effects of delayed reinforcement on resistance to change, relative to immediate reinforcement at the same rate. Further problems are raised by differences in resistance to change between variable-ratio (VR) and variable-interval performances maintained by the same reinforcer rate. With both delayed versus immediate reinforcement and variable-ratio versus variable-interval reinforcement, differential resistance to change is negatively correlated with the log ratios of baseline response rates when reinforcer rates are equated. Cases where resistance to change varies despite equated reinforcer rates, and the correlations among behavioral measures, provide challenges and opportunities for both MPR and BMT.

5.
Some of the most frequently used methods in the study of conditioned reinforcement seem to be insufficient to demonstrate the effect. The clearest way to assess this phenomenon is the training of a new response. In the present study, rats were exposed to a situation in which a primary reinforcer and an arbitrary stimulus were paired, and subsequently the effect of this arbitrary event was assessed by presenting it following a new response. Subjects under these conditions emitted more responses compared to their own responding before the pairing and to their responding on a similar, concurrently available operandum that had no programmed consequences. Response rates also were higher compared to responding by subjects in similar conditions in which there was no contingency (a) between the arbitrary stimulus and the reinforcer, (b) between the response and the arbitrary stimulus, or (c) both. Results are discussed in terms of necessary and sufficient conditions to study conditioned reinforcement.

6.
Adult human subjects chose between schedules containing stimuli (indicator lights) that the subjects were instructed to consider pleasurable. The schedules differed in amount of reinforcement (period of illumination) or delay (interval between a choice response and light onset). Although subjects preferred large to small amounts of reinforcement, they were essentially indifferent between immediate and delayed reinforcement. In contrast, previous data on video game reinforcement demonstrated preferences for both immediate and large amounts of reinforcement. The instructed reinforcer was thus partially effective in controlling choice but was not equivalent to a reinforcer that presumably had intrinsic value.

7.
The term "sensory reinforcer" has been used to refer to sensory stimuli (e.g. light onset) that are primary reinforcers in order to differentiate them from other more biologically important primary reinforcers (e.g. food and water). Acquisition of snout-poke responding for a visual stimulus (5-s light onset) with fixed-ratio 1 (FR 1), variable-interval 1-min (VI 1-min), or variable-interval 6-min (VI 6-min) schedules of reinforcement was tested in three groups of rats (n=8/group). The VI 6-min schedule of reinforcement produced a higher response rate than the FR 1 or VI 1-min schedules of visual stimulus reinforcement. One explanation for greater responding on the VI 6-min schedule relative to the FR 1 and VI 1-min schedules is that the reinforcing effectiveness of light onset habituated more rapidly in the FR 1 and VI 1-min groups as compared to the VI 6-min group. The inverse relationship between response rate and the rate of visual stimulus reinforcement is opposite to results from studies with biologically important reinforcers, which indicate a positive relationship between response and reinforcement rate. Rapid habituation of reinforcing effectiveness may be a fundamental characteristic of sensory reinforcers that differentiates them from biologically important reinforcers, which are required to maintain homeostatic balance.

8.
Four pigeons were trained on concurrent variable-interval 30-s schedules. Relative reinforcer amounts arranged across the two alternatives were varied across sessions according to a pseudorandom binary sequence [cf. Hunter, I., Davison, M., 1985. Determination of a behavioral transfer function: white-noise analysis of session-to-session response-ratio dynamics on concurrent VI schedules. J. Exp. Anal. Behav. 43, 43-59]; the ratios (left/right) were either 1/7 or 7/1. Reinforcer amount was manipulated by varying the number of 1.2-s hopper presentations. Sessions ended after 30 reinforcers (15 for each alternative). After approximately 30 sessions, response ratios for all pigeons began to track the changes in amount ratio (i.e., subjects' responding showed a moderate increase in sensitivity of responding to reinforcer amount). Characteristics of responding were similar to procedures in which reinforcer rate and immediacy have been manipulated, although sensitivity estimates for amount were lower than those previously obtained with rate and immediacy. This procedure may serve as a useful method for studying the effects of certain environmental manipulations (e.g., drug administration) on sensitivity to reinforcer amount.

9.
In the metaphor of behavioral momentum, reinforcement is assumed to strengthen discriminated operant behavior in the sense of increasing its resistance to disruption, and extinction is viewed as disruption by contingency termination and reinforcer omission. In multiple schedules of intermittent reinforcement, resistance to extinction is an increasing function of reinforcer rate, consistent with a model based on the momentum metaphor. The partial-reinforcement extinction effect, which opposes the effects of reinforcer rate, can be explained by the large disruptive effect of terminating continuous reinforcement despite its strengthening effect during training. Inclusion of a term for the context of reinforcement during training allows the model to account for a wide range of multiple-schedule extinction data and makes contact with other formulations. The relation between resistance to extinction and reinforcer rate on single schedules of intermittent reinforcement is exactly opposite to that for multiple schedules over the same range of reinforcer rates; however, the momentum model can give an account of resistance to extinction in single as well as multiple schedules. An alternative analysis based on the number of reinforcers omitted to an extinction criterion supports the conclusion that response strength is an increasing function of reinforcer rate during training.
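The momentum-based extinction model the abstract alludes to is often written as log(Bt/B0) = -t(c + dr)/r^b, where Bt/B0 is responding as a proportion of baseline in extinction session t, r is the baseline reinforcer rate, c is disruption from terminating the response-reinforcer contingency, d scales disruption from the omitted reinforcers, and b is sensitivity to reinforcer rate. A sketch under that assumed form, with illustrative (not fitted) parameter values:

```python
# Behavioral-momentum extinction model (one commonly cited form):
# log10(Bt/B0) = -t * (c + d*r) / r**b.
# c: disruption from contingency termination; d: disruption per
# omitted reinforcer; b: sensitivity. Values below are illustrative.
def proportion_of_baseline(t, r, c=1.0, d=0.001, b=0.5):
    return 10 ** (-t * (c + d * r) / r ** b)

# Richer baseline reinforcer rates yield greater resistance to
# extinction in multiple schedules, as the abstract describes.
rich, lean = 240.0, 30.0  # hypothetical reinforcers per hour
assert proportion_of_baseline(5, rich) > proportion_of_baseline(5, lean)
```

The d·r term is what lets the model also capture the partial-reinforcement extinction effect: very rich (e.g. continuous) reinforcement makes the omission-based disruption in the numerator large, opposing the strengthening effect of r in the denominator.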

10.
Four pigeons responded under a 7-component mixed schedule in which each component arranged a different left:right reinforcer ratio (27:1, 9:1, 3:1, 1:1, 1:3, 1:9, 1:27). Components were unsignaled, and the order within each session was randomly determined. After extensive exposure to these contingencies, effects of a range of doses of d-amphetamine (0.3-5.6 mg/kg) on estimates of sensitivity to reinforcement at several levels of analysis were assessed. Under non-drug conditions, the structure of choice was similar to that previously reported under this procedure. That is, responding adjusted within components to the reinforcer ratio in effect (i.e., sensitivity estimates were higher in the 2nd than in the 1st half of components), and individual reinforcers produced “preference pulses” (i.e., each food presentation produced an immediate, local, shift in preference toward the response that just produced food). Although there was a general tendency for d-amphetamine to reduce overall sensitivity to reinforcement, the size of this effect and its reliability varied across pigeons. Further analysis, however, revealed that intermediate d-amphetamine doses consistently reduced sensitivity immediately following reinforcer presentations; that is, these doses consistently attenuated preference pulses.
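The "sensitivity to reinforcement" estimated here comes from the generalized matching law, log(BL/BR) = a·log(RL/RR) + log c, where a (the slope) is sensitivity and log c is bias. A minimal sketch of how sensitivity is estimated from the seven reinforcer ratios used in this procedure; the response ratios below are hypothetical, generated with assumed undermatching (a = 0.8) rather than taken from the study:

```python
import math

# Generalized matching: log10(BL/BR) = a*log10(RL/RR) + log10(c).
reinforcer_ratios = [27, 9, 3, 1, 1/3, 1/9, 1/27]  # left:right components

# Hypothetical behavior ratios with undermatching (a = 0.8), no bias.
a_true, log_c = 0.8, 0.0
x = [math.log10(r) for r in reinforcer_ratios]
y = [a_true * xi + log_c for xi in x]

# Ordinary least-squares slope = the sensitivity estimate.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
a_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
print(round(a_hat, 3))  # 0.8
```

In the study, this slope is computed separately for, e.g., the first versus second half of components or for responses just after a food delivery, which is how drug effects on "preference pulses" can be localized.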

11.
Partial reinforcement (PR) effects on animal locomotor behavior were studied in the golden hamster, using food-hoarding activity as a reinforcer. The first experiment demonstrated that hoarding reinforces a running response towards the goal section of a straight-alley runway, and that no such learning occurs when sated hamsters were not allowed to hoard food. However, a second experiment using various partial reinforcement schedules and a continuous reinforcement schedule did not give any evidence for the existence of a partial reinforcement acquisition effect (PRAE). The third experiment confirmed these results with an extended training procedure and showed a slight partial reinforcement extinction effect (PREE) mainly in the first sessions of the extinction phase.

12.
In most response sequences, auxiliary responses stop occurring as training increases. Auxiliary responses are precurrent responses that increase the likelihood of reinforcement for subsequent responding, are not required by the programmed contingencies, and occur in situations in which transfer of stimulus control is not prevented. For example, when someone is learning to solve arithmetic problems, some steps, such as writing down intermediate calculations, are skipped as training increases. A paired-associates task was used to investigate the decrease of auxiliary responses, in which participants had to learn the second member (arbitrary characters) of each pair upon being presented with the first member (different shapes), and could look up an auxiliary screen (auxiliary response) in order to do so. Task complexity was varied by changing the average programmed frequency of reinforcement for individual responses (Experiment 1) and response sequences (Experiment 3), the programmed probability of reinforcement for responses given a position (PPRPos) with a fixed (Experiment 2) or variable number of associated pairs (Experiment 4), and the programmed probability of reinforcement for responses given a shape with a fixed (Experiment 5) or variable (Experiment 6) number of characters per shape. Increases in these variables produced systematic decreases in the duration of auxiliary behavior necessary to learn the task. These results suggest that some aspects of task complexity can be measured based upon the quantification of the programmed contingencies of reinforcement.

13.
The choice responses of four pigeons were examined in 20 periods of transition in a concurrent-chain procedure with variable-interval schedules as initial links and fixed delays to reinforcement as terminal links. In some conditions, the delays to reinforcement were different for the two terminal links, and changes in preference were recorded after the delays for the two response keys were switched. In other conditions, the reinforcer delays were equal for the two keys, but which key delivered 80% of the reinforcers was periodically switched. Choice proportions changed more quickly after a switch in reinforcement percentages than after a switch in the delays, thereby contradicting the hypothesis that faster changes would occur when the switch in conditions was easier to discriminate. Analyses of response sequences showed that the effects of individual reinforcers were larger and lasted longer in conditions with changing reinforcement percentages than in conditions with changing terminal-link delays. Rates of change in choice behavior do not appear to be limited by the unpredictability of variable reinforcement schedules, because the changes in behavior were slow and gradual even when there was a large and sudden change in reinforcer delays.

14.
Four pigeons and three ringneck doves responded on an operant simulation of natural foraging. After satisfying a schedule of reinforcement associated with search time, subjects could "accept" or "reject" another schedule of reinforcement associated with handling time. Two schedules of reinforcement were available: a variable interval and a fixed interval with the same mean value. Food available in the session (a variable related to the energy budget) was manipulated in the different conditions either by increases in the value of the search-state schedule of reinforcement or by increases in the mean value of the handling-state schedules. The results indicate that the amount of food available in the session did not affect the preference for variable schedules of reinforcement, as would be predicted by an influential theory of risk-sensitive foraging. Instead, the preference for variability depended on the relationship between the time spent in the search and handling states, as is predicted by a family of models of choice that are based on temporal proximity to the reinforcer.

15.
This study investigated generalization decrement during an extinction resistance-to-change test for pigeon key pecking using a two-component multiple schedule with equal variable-interval 3-min schedules and different reinforcer amounts (one component presented 2-s access to reinforcement and the other 8-s access). After establishing baseline responding, subjects were assigned to one of two extinction conditions: hopper stimuli (hopper and hopper light were activated but no food was available) or control (inactive hopper and hopper light). Responding in the 8-s component was more resistant to extinction than responding in the 2-s component, the hopper-stimuli group was more resistant to extinction compared to the control group, and an interaction between amount of reinforcement, extinction condition, and session block was present. This finding supports generalization decrement as a factor that influences resistance to extinction. Hopper-time data (the amount of time subjects spent with their heads in the hopper) were compared to resistance-to-change data in an investigation of the role of conditioned reinforcement in resistance to change.

16.
In previous research on resistance to change, differential disruption of operant behavior by satiation has been used to assess the relative strength of responding maintained by different rates or magnitudes of the same reinforcer in different stimulus contexts. The present experiment examined resistance to disruption by satiation of one reinforcer type when qualitatively different reinforcers were arranged in different contexts. Rats earned either food pellets or a 15% sucrose solution on variable-interval 60-s schedules of reinforcement in the two components of a multiple schedule. Resistance to satiation was assessed by providing free access either to food pellets or the sucrose solution prior to or during sessions. Responding systematically decreased more relative to baseline in the component associated with the satiated reinforcer. These findings suggest that when qualitatively different reinforcers maintain responding, relative resistance to change depends upon the relations between reinforcers and disrupter types.

17.
The development of a secondary reinforcer as a result of associating a neutral stimulus (buzzer) with intravenous (IV) doses of morphine was studied in rats. Secondary reinforcement developed in the absence of physical dependence and followed the association of the stimulus with either response-contingent or non-contingent injections of morphine. Strength of the conditioned reinforcer, measured in terms of responding on a lever for the stimulus plus infusion of saline solution, was proportional to the unit dosage of morphine employed in pairings of buzzer and drug. When extinction of the lever-press response for IV morphine was conducted (by substituting saline for morphine solution) in the absence of the conditioned reinforcing stimulus, it was seen later that the stimulus could still elicit lever responses, until it too had been present for a sufficient interval of non-reinforced responding. Similarly, extinction of the response for morphine by blocking its action with naloxone in the absence of the stimulus did not eliminate the conditioned reinforcement. Another study showed that a passive, subcutaneous (SC) dose of morphine served to maintain lever-pressing on a contingency of buzzer plus saline infusion. Furthermore, the stimuli resulting from the presence of morphine (after an SC injection) were able to reinstate the lever-responding with only the buzzer-saline contingency when such responses had previously been extinguished. Moreover, it was shown that d-amphetamine could restore responding under the same conditions, and that morphine could also do so for rats in which the primary reinforcer had been d-amphetamine. It is suggested that animal data such as these show that procedures designed for the elimination of human drug-taking behavior must take into account secondary reinforcers as well as the primary reinforcer(s).

18.
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

19.
The present study investigated whether the sucrose-reinforced lever pressing of rats in the first half of a 50-min session would be sensitive to upcoming food-pellet reinforcement in the second half. In Experiment 1, the type of reinforcer in the first half of the session was always liquid sucrose and the type of reinforcer in the second half (liquid sucrose or food pellets) varied across conditions. Sucrose concentration varied across groups (1, 5, or 25%). Results showed that rates and patterns of responding for 1%, and sometimes for 5%, sucrose reinforcers in the first half of the session were higher and steeper, respectively, when food-pellet, rather than sucrose, reinforcement occurred in the second half. Responding for 25% sucrose was not similarly affected. Experiment 2 replicated the results of Experiment 1 using a within-subjects design. Although the present results represent induction (i.e. the opposite of contrast), they are consistent with some results on consummatory contrast. They also further demonstrate that responding on interval schedules of reinforcement can be altered prospectively. By doing so, however, they pose potential problems for current theories of why operant response rates change within the session.

20.
Interval timing is a key element of foraging theory, models of predator avoidance, and competitive interactions. Although interval timing is well documented in vertebrate species, it is virtually unstudied in invertebrates. In the present experiment, we used free-flying honey bees (Apis mellifera ligustica) as a model for timing behaviors. Subjects were trained to enter a hole in an automated artificial flower to receive a nectar reinforcer (i.e. reward). Responses were continuously reinforced prior to exposure to either a fixed-interval (FI) 15-sec, FI 30-sec, FI 60-sec, or FI 120-sec reinforcement schedule. We measured response rate and post-reinforcement pause within each fixed-interval trial between reinforcers. Honey bees responded at higher frequencies earlier in the fixed interval, suggesting subjects' responding did not come under traditional forms of temporal control. Response rates were lower during FI conditions compared to performance on continuous reinforcement schedules, and responding was more resistant to extinction when previously reinforced on FI schedules. However, no “scalloped” or “break-and-run” patterns of group or individual responses reinforced on FI schedules were observed; no traditional evidence of temporal control was found. Finally, longer FI schedules eventually caused all subjects to cease returning to the operant chamber, indicating subjects did not tolerate the longer FI schedules.
