Similar Literature
20 similar documents retrieved
1.
Biofeedback was used to increase forearm-muscle tension. Feedback was delivered under continuous reinforcement (CRF), variable interval (VI), fixed interval (FI), variable ratio (VR), and fixed ratio (FR) schedules of reinforcement when college students increased their muscle tension (electromyographic [EMG] activity) above a high threshold. There were three daily sessions of feedback, and Session 3 was immediately followed by a session without feedback (extinction). The CRF schedule resulted in the highest EMG, closely followed by the FR and VR schedules, and the lowest EMG scores were produced by the FI and VI schedules. Similarly, the CRF schedule resulted in the greatest amount of time above threshold, and the VI and FI schedules produced the lowest. The highest response rates were generated by the FR schedule, followed by the VR schedule. The CRF schedule produced relatively low response rates, comparable to the rates under the VI and FI schedules. Some of the data are consistent with the partial-reinforcement extinction effect. The present data suggest that different schedules of feedback should be considered in muscle-strengthening contexts, such as the rehabilitation of muscles following brain damage or peripheral nervous-system injury.
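For readers unfamiliar with the five feedback schedules named above, the following Python sketch shows the decision rule each schedule applies to an above-threshold response. It illustrates the standard textbook schedule definitions only; it is not the study's software, and the function names, time step, and one-shot VI simplification are assumptions.

import random

def make_schedule(kind, value):
    """Return a predicate: should this above-threshold response earn feedback?
    t = seconds elapsed since the last reinforcer; n = responses since the
    last reinforcer. Simplified one-shot logic for illustration."""
    if kind == "CRF":                  # continuous reinforcement: every response
        return lambda t, n: True
    if kind == "FR":                   # fixed ratio: every value-th response
        return lambda t, n: n % value == 0
    if kind == "VR":                   # variable ratio: on average every value-th response
        return lambda t, n: random.random() < 1.0 / value
    if kind == "FI":                   # fixed interval: first response after value seconds
        return lambda t, n: t >= value
    if kind == "VI":                   # variable interval: first response after a random
        interval = random.expovariate(1.0 / value)   # interval (redrawn after each reinforcer
        return lambda t, n: t >= interval             # in a real VI; a single draw here)
    raise ValueError(f"unknown schedule type: {kind}")

# Example: a VR 5 schedule reinforces roughly one response in five.
vr5 = make_schedule("VR", 5)
print(sum(vr5(0.0, n) for n in range(1, 1001)))  # approximately 200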

2.
Rates of responding by rats were usually higher during the variable interval (VI) 30-s component of a multiple VI 30-s fixed interval (FI) 30-s schedule than during the same component of a multiple VI 30-s VI 30-s schedule (Experiment 1). Response rates were also usually higher during the FI 30-s component of a multiple VI 30-s FI 30-s schedule than during the same component of a multiple FI 30-s FI 30-s schedule (Experiment 2). The differences in response rates were not observed when the components provided VI or FI 120-s schedules. These results were predicted by the idea that differences in habituation to the reinforcer between multiple schedules contribute to behavioral interactions, such as behavioral contrast. However, differences in habituation were not apparent in the within-session patterns of responding. Finding differences in response rates in both experiments violates widely held assumptions about behavioral interactions, including that behavioral contrast does not occur for rats and that improving the conditions of reinforcement decreases, rather than increases, response rate in the alternative component.

3.
Three experiments examined behavior in extinction following periodic reinforcement. During the first phase of Experiment 1, four groups of pigeons were exposed to fixed interval (FI 16 s or FI 48 s) or variable interval (VI 16 s or VI 48 s) reinforcement schedules. Next, during the second phase, each session started with reinforcement trials and ended with an extinction segment. Experiment 2 was similar except that the extinction segment was considerably longer. Experiment 3 replaced the FI schedules with a peak procedure, with FI trials interspersed with non-food peak interval (PI) trials that were four times longer. One group of pigeons was exposed to FI 20 s PI 80 s trials, and another to FI 40 s PI 160 s trials. Results showed that, during the extinction segment, most pigeons trained with FI schedules, but not with VI schedules, displayed pause-peck oscillations with a period close to, but slightly greater than, the FI parameter. These oscillations did not start immediately after the onset of extinction. Comparing the oscillations from Experiments 1 and 2 suggested that the alternation of reconditioning and re-extinction increases the reliability of the oscillations and produces an earlier onset. In Experiment 3 the pigeons exhibited well-defined pause-peck cycles from the onset of extinction. These cycles had periods close to twice the value of the FI and lasted for long intervals of time. We discuss some hypotheses concerning the processes underlying behavioral oscillations following periodic reinforcement.

4.
Four pigeons and three ringneck doves responded on an operant simulation of natural foraging. After satisfying a schedule of reinforcement associated with search time, subjects could "accept" or "reject" another schedule of reinforcement associated with handling time. Two schedules of reinforcement were available: a variable interval and a fixed interval with the same mean value. Food available in the session (a variable related to the energy budget) was manipulated across conditions either by increasing the value of the search-state schedule of reinforcement or by increasing the mean value of the handling-state schedules. The results indicate that the amount of food available in the session did not affect the preference for variable schedules of reinforcement, as would be predicted by an influential theory of risk-sensitive foraging. Instead, the preference for variability depended on the relationship between the time spent in the search and handling states, as is predicted by a family of models of choice based on temporal proximity to the reinforcer.

5.
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

6.
Psychological distance to reward, or the segmentation effect, refers to the preference for a terminal link of a concurrent-chains schedule consisting of a simple reinforcement schedule (e.g. fixed interval [FI] 30 s) relative to its chained-schedule counterpart (e.g. chained FI 15 s FI 15 s). This experiment was conducted to examine whether the segmentation effect is due to the number of terminal-link stimulus and response segments per se. Three pigeons pecked under a concurrent-chains schedule in which identical variable-interval (VI) schedules operated in the initial links. In each session, half the terminal-link entries followed one initial-link key and the other half followed the other initial-link key. The initial-link keys correlated with the different terminal links were manipulated across conditions. In the first three conditions, each terminal link contained a chained fixed-time (FT) FT schedule, and in the final three conditions, each terminal link contained a chained FI FI schedule. In each condition, in one terminal link (alternating), the order of two key colors correlated with the different schedule segments alternated across terminal-link entries, whereas in the other terminal link (constant), the order of two other key colors was identical for each entry. With the chained FT FT schedule terminal links, there was indifference between the alternating and constant terminal links within and across pigeons, as indexed by initial-link choice proportions. In addition, terminal-link response rates were relatively low. With the chained FI FI schedule terminal links, for each pigeon, there was relatively more preference for the alternating terminal link, and terminal-link response rates increased relative to conditions with the chained FT FT schedule terminal links. These data suggest that the segmentation effect is not due simply to the number of terminal-link stimulus or response segments per se, but rather to a required period of responding during a stimulus segment that is never paired with reinforcement.

7.
Interval timing is a key element of foraging theory, models of predator avoidance, and competitive interactions. Although interval timing is well documented in vertebrate species, it is virtually unstudied in invertebrates. In the present experiment, we used free-flying honey bees (Apis mellifera ligustica) as a model for timing behaviors. Subjects were trained to enter a hole in an automated artificial flower to receive a nectar reinforcer (i.e. reward). Responses were continuously reinforced prior to exposure to one of four reinforcement schedules: fixed interval (FI) 15-sec, FI 30-sec, FI 60-sec, or FI 120-sec. We measured response rate and post-reinforcement pause within each fixed-interval trial between reinforcers. Honey bees responded at higher frequencies earlier in the fixed interval, suggesting that responding did not come under traditional forms of temporal control. Response rates were lower during FI conditions compared to performance on continuous reinforcement schedules, and responding was more resistant to extinction when previously reinforced on FI schedules. However, no “scalloped” or “break-and-run” patterns of group or individual responses reinforced on FI schedules were observed; no traditional evidence of temporal control was found. Finally, longer FI schedules eventually caused all subjects to cease returning to the operant chamber, indicating that subjects did not tolerate the longer FI schedules.

8.
A behavioral-history procedure was used to study the function of terminal-link stimuli as conditioned reinforcers in multiple concurrent-chain schedules of reinforcement. First, three pigeons were exposed to multiple concurrent-chain schedules in which the two multiple-schedule components were correlated with a blue and a white stimulus, respectively. In each component the initial links were equal independent variable-interval (VI) 15-s schedules. A fixed-interval (FI) 10-s schedule operated on the red key in one terminal link, while extinction operated on the green key in the alternative terminal link. When large preferences for the red stimulus had been established, two tests were conducted. In the terminal-link test, under new initial-link stimuli (purple and brown), an FI 10-s schedule operated for both the red and green terminal-link stimuli. In the subsequent initial-link test, the blue and white initial-link stimuli were reintroduced and, as in the terminal-link test, an FI 10-s schedule operated for both the red and the green terminal-link stimuli. In the terminal-link test, the three pigeons showed no preference for the terminal links with the red stimulus, but showed clear and consistent preferences for the red stimulus when the blue and white stimuli were reintroduced as initial-link stimuli in the initial-link test. This suggests that there are multiple sources of control over initial-link response allocation in concurrent chains, including control by both terminal- and initial-link stimuli.

9.
Human choice behavior was assessed in a concurrent-chain schedule, in which two equal initial links (IL) each led to a distinct terminal link (TL). One TL was associated with a fixed-ratio schedule of reinforcement, while the other was associated with a bi-valued mixed-ratio schedule of reinforcement whose arithmetic mean equaled the value of the fixed TL schedule. The fixed component (FR 50, FR 25, or FR 5) was thus arranged to equal the mean of the alternative mixed component in each condition (FR 1/99, FR 1/49, or FR 1/9), and choice behavior was measured by the proportion of responses to each IL. In addition, the IL duration varied across conditions (VI 30 s, VI 15 s, or FI 1 s). Preference for the mixed option was observed with longer durations (e.g., when IL = VI 30 s and TL = FR 1/99). Participants were relatively indifferent in other conditions, though the results suggested a monotonic increase in preference as either durations or programmed efforts increased. It is concluded that both choice and the conditioned reinforcement value of the mixed option are contextually based, so that the value of a stimulus correlated with an immediate reward (i.e., FR 1) is enhanced the longer the temporal context in which the FR 1 is embedded.
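The contextual-enhancement claim above can be illustrated with a simple hyperbolic-value calculation. The Python sketch below is an illustration only, under assumed parameters (a nominal discount rate k and a nominal time per ratio response); it is not the authors' analysis.

# Illustration only: hyperbolic value of the fixed vs. mixed terminal links,
# assuming each ratio response takes ~0.5 s and a discount rate k of 0.2 per second.
k = 0.2             # assumed hyperbolic discount rate (per second)
sec_per_resp = 0.5  # assumed time to emit one ratio response

def value(fr):
    """Hyperbolic value of a reward delayed by the time to complete FR fr."""
    delay = sec_per_resp * fr
    return 1.0 / (1.0 + k * delay)

def mixed_value(fr_small, fr_large):
    """Value of an equiprobable mixture of two FR outcomes."""
    return 0.5 * (value(fr_small) + value(fr_large))

for fixed, (small, large) in [(5, (1, 9)), (25, (1, 49)), (50, (1, 99))]:
    print(f"FR {fixed}: fixed={value(fixed):.3f}  mixed={mixed_value(small, large):.3f}")

# The mixed option's advantage grows with the ratio sizes because the FR 1
# component keeps most of its value while the fixed option's value collapses,
# consistent with greater preference for the mixed option at longer durations.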

10.
This study explored whether loud auditory stimuli could be used as functional punishing stimuli in place of electric shock. Three experiments examined the effect of a loud auditory stimulus on rats' responding maintained by a concurrent reinforcement schedule. In Experiment 1, overall response rate decreased when a concurrent 1.5-s tone presentation schedule was superimposed on the concurrent variable interval (VI) 180-s, VI 180-s reinforcement schedule. In contrast, response rate increased when a click presentation schedule was added. In Experiment 2, the extent of the response suppression with a 1.5-s tone presentation varied as a function of the frequency of the reinforcement schedule maintaining responses; the leaner the schedule employed, the greater the response suppression. In Experiment 3, response suppression was observed to be inversely related to the duration of the tone; response facilitation was observed when a 3.0-s tone was used. In Experiments 1 and 2, a preference shift towards the alternative with richer reinforcement was observed when the tone schedule was added. In contrast, the preference shifted towards the leaner alternative when the click or longer-duration stimulus was used. These results imply that both the type and duration of a loud auditory stimulus, as well as the reinforcement schedule maintaining responses, have a critical role in determining the effect of the stimuli on responding. They also suggest that a loud auditory stimulus can be used as a positive punisher in a choice situation for rats when the duration of the tone is brief and the reinforcement schedule maintaining responses is lean.

11.
The peak interval (PI) procedure is commonly used to evaluate animals' ability to produce timed intervals. It consists of presenting fixed interval (FI) schedules in which some of the trials are replaced by extended non-reinforced trials. Responding will often resume (resurge) at the end of the non-reinforced trials unless precautions are taken to prevent it. Response resurgence was replicated in rats and pigeons. Varying the durations of the FI and the non-reinforced probe trials showed that resurgence depended on the time at which reinforcement was expected. Timing of both the normal time to reinforcement and the subsequent time to reinforcement during the probe trials followed Weber's law. A quantitative model of resurgence is described, suggesting how animals respond to the signaling properties of reinforcement omission. Model results were simulated using a stochastic binary counter.
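For context, the Weber's-law (scalar timing) property referred to here is conventionally stated as follows; this is the standard textbook formulation, not the abstract's specific binary-counter model. If $T$ is the trained time to reinforcement and $\hat{T}$ is a subject's estimate of it on a given trial, then
$$\mathbb{E}[\hat{T}] \approx T \quad\text{and}\quad \mathrm{SD}[\hat{T}] \approx \gamma\, T,$$
so the coefficient of variation $\gamma$ is roughly constant across interval values and timing functions superimpose when plotted on a relative time axis $t/T$.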

12.
In Experiment 1, each of three humans knowledgeable about operant schedules used mouse clicks to respond to a "work key" presented on a monitor. On a random half of the presentations, work-key responses that completed a variable ratio (VR) 12 produced a tone. After five tones, the work key was replaced by two report keys. Pressing the right or left report key, respectively, added or subtracted ¥50 from a counter and produced the work key. On the other half of the presentations, a variable interval (VI) associated with the work key was defined so its interreinforcer intervals approximated the time it took to complete the variable ratio. After five tone-producing completions of this schedule, the report keys were presented. Left or right report-key presses, respectively, added or subtracted ¥50 from the counter. Subjects achieved high yen totals. In Experiment 2, the procedure was changed by requiring an interresponse time after completion of the variable interval that approximated the duration of the reinforced interresponse time on the variable ratio. Prior to beginning, subjects were shown how a sequence of response bouts and pauses could be used to predict schedule type. Subjects again achieved high levels of accuracy. These results show humans can discriminate ratio from interval schedules even when those schedules provide the same rate of reinforcement and reinforced interresponse times.

13.
Pigeons were trained in a concurrent-chains procedure in which the terminal-link schedules in each session were either fixed-interval (FI) 10 s FI 20 s or FI 20 s FI 10 s, as determined by a pseudorandom binary series. The initial link was a variable-interval (VI) 10-s schedule. Training continued until initial-link response allocation stabilized about midway through each session and was sensitive to the terminal-link immediacy ratio in that session. The initial-link schedule was then varied across sessions between VI 0.01 s and VI 30 s according to an ascending and descending sequence. Initial-link response allocation was a bitonic function over the full range of durations. Preference for the FI 10-s terminal link at first increased as programmed initial-link duration varied from 0.01 to 7.5 s, and then decreased as initial-link duration increased to 30 s. The bitonic function poses a potential challenge for existing models of steady-state choice, such as delay-reduction theory (DRT) [Fantino, E., 1969. Choice and rate of reinforcement. J. Exp. Anal. Behav. 12, 723-730], which predict a monotonic function. However, an extension of Grace and McLean's [Grace, R.C., McLean, A.P., 2006. Rapid acquisition in concurrent chains: evidence for a decision model. J. Exp. Anal. Behav. 85, 181-202] decision model predicted the bitonic function, and may ultimately provide an integrated account of choice in concurrent chains under both steady-state and dynamic conditions.
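To see why delay-reduction theory predicts a monotonic effect of initial-link duration, its standard choice equation (Fantino, 1969) can be written as follows; this is the textbook form, not a reproduction of the authors' model fits:
$$\frac{B_L}{B_L + B_R} = \frac{T - t_L}{(T - t_L) + (T - t_R)},$$
where $B_L$ and $B_R$ are initial-link response rates, $t_L$ and $t_R$ are the terminal-link durations (here 10 s and 20 s), and $T$ is the expected total time to primary reinforcement from the onset of the initial links; the expression applies when both numerator terms are positive. Holding $t_L = 10$ s and $t_R = 20$ s fixed, lengthening the shared initial link only increases $T$, which drives the predicted proportion monotonically toward indifference (0.5), so a bitonic preference function falls outside this account.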

14.
Four pigeons responded under a progressive-delay procedure. In a signaled-delay condition, a chained variable interval (VI) 30-s progressive time (PT) 4-s schedule was arranged; in an unsignaled-delay condition, a tandem VI 30-s PT 4-s schedule was arranged. Two pigeons experienced a signaled-unsignaled-signaled sequence, whereas two pigeons experienced an unsignaled-signaled-unsignaled sequence. Effects of saline and d-amphetamine were determined under each condition. At intermediate doses (1.0 and 1.78 mg/kg), delay functions were shallower, area under the curve was increased, and, when possible, break points were increased compared to saline; these effects were not systematically related to signaling conditions. These effects on control by delay often were accompanied by decreased response rates at 0 s. These results suggest that stimulus conditions associated with the delay may not play a crucial role in the effects of d-amphetamine and other stimulants on behavior controlled by reinforcement delay.

15.
Reward magnitude and delay to reward were independently manipulated in two separate experiments examining risk-sensitive choice in rats. A dual-running-wheel apparatus was used, and the tangential force resistance required to displace both wheels was low (50 g) for half of the subjects and high (120 g) for the remaining subjects. Concurrent FI 30-s and FI 60-s schedules delivered equivalent amounts of food reward per unit time (i.e. 5 and 10 pellets of food, respectively), and these conditions served as the baseline treatment for all subjects. Variability, either in reward magnitude or delay, was introduced on the long-delay (60-s) schedule during the second phase. All subjects were returned to the baseline condition in the third phase, and variability was introduced on the short-delay (30-s) interval schedule during phase four. The subjects were again returned to the baseline condition in the fifth and final phase, ultimately yielding a five-phase ABACA design. Original baseline performance was characterized by a slight short-delay interval preference, and this pattern of performance was recovered with each subsequent presentation of the baseline condition. Overall, the data obtained from the reward-magnitude and delay-to-reward manipulations were indistinguishable; subjects experiencing the low response-effort requirement behaved in a risk-indifferent manner and subjects experiencing the high response-effort requirement preferred the variable schedule. Implications for the daily energy budget rule in risk-sensitive foraging are discussed in light of these findings.

16.
The term "sensory reinforcer" has been used to refer to sensory stimuli (e.g. light onset) that are primary reinforcers, in order to differentiate them from other, more biologically important primary reinforcers (e.g. food and water). Acquisition of snout-poke responding for a visual stimulus (5-s light onset) under fixed ratio 1 (FR 1), variable-interval 1 min (VI 1 min), or variable-interval 6 min (VI 6 min) schedules of reinforcement was tested in three groups of rats (n = 8/group). The VI 6-min schedule of reinforcement produced a higher response rate than the FR 1 or VI 1-min schedules of visual-stimulus reinforcement. One explanation for greater responding on the VI 6-min schedule relative to the FR 1 and VI 1-min schedules is that the reinforcing effectiveness of light onset habituated more rapidly in the FR 1 and VI 1-min groups than in the VI 6-min group. The inverse relationship between response rate and the rate of visual-stimulus reinforcement is opposite to results from studies with biologically important reinforcers, which indicate a positive relationship between response and reinforcement rate. Rapid habituation of reinforcing effectiveness may be a fundamental characteristic of sensory reinforcers that differentiates them from biologically important reinforcers, which are required to maintain homeostatic balance.

17.
The literature on risk-sensitive foraging theory provides several accounts of species that fluctuate between risk-averse and risk-prone strategies. The daily energy budget rule suggests that shifts in foraging strategy are precipitated by changes in the forager's energy budget. Researchers have attempted to alter the organism's energy budget using a variety of techniques, such as food deprivation, manipulation of ambient temperatures, and delays to food reward; however, response-effort manipulations have been relatively neglected. A choice preparation with rats, using a wheel-running response, examined risk-sensitive preferences when both response effort and reward amounts were manipulated. Concurrently available reinforcement schedules (FI 60-s and VI 60-s) yielded equivalent food amounts per unit time in all treatments. Two levels of response effort (20 or 120 g tangential resistance) and two levels of reward amount (three or nine pellets) were combined to form four distinct response-effort/reward-amount pairings. Increasing reward amounts significantly shifted choice toward the FI schedule in both response-effort conditions. The incidence of choice preference and the magnitude of shifts in choice were greater for the high response-effort conditions than for the low response-effort conditions. Implications of the significant interaction between response effort and reward amount are discussed in terms of a general energy-budget model.
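For readers unfamiliar with the daily energy budget rule invoked here, the standard shortfall-minimization formulation (a textbook statement, not the authors' specific model) is the following. If intake under an option is approximately normal with mean $\mu$ and standard deviation $\sigma$, and $R$ is the intake required to meet the day's energy budget, the forager should choose the option minimizing the probability of shortfall,
$$\Pr(\text{intake} < R) = \Phi\!\left(\frac{R - \mu}{\sigma}\right),$$
where $\Phi$ is the standard normal distribution function. When $\mu > R$ this favors the low-variance (risk-averse) option; when $\mu < R$ it favors the high-variance (risk-prone) option. On this reading, raising response effort effectively raises $R$ (or lowers net intake $\mu$), which is one way to interpret stronger shifts in choice under the high-effort conditions.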

18.
Behavioral momentum theory is an evolving theoretical account of the strength of behavior. One challenge for the theory is specifying the role of signal stimuli in determining response strength. This study evaluated the effect of an unsignaled delay between the initial link and terminal link of a two-link chain schedule on resistance to change, using a multiple schedule of reinforcement. Pigeons were presented with two different signaled-delay-to-reinforcement schedules. Both schedules employed a two-link chain schedule with a variable-interval 120-s initial link followed by a 5-s fixed-time terminal-link schedule. One of the schedules included a 5-s unsignaled delay between the initial link and the terminal link. Resistance to change was assessed with two separate disruption procedures: extinction and adding a variable-time 20-s schedule of reinforcement to the inter-component interval. Baseline responding was lower in the schedule with the unsignaled delay, but resistance to change for the initial link was unaffected by the unsignaled delay. The results suggest that not all unsignaled delays are equal in their effect on resistance to change.

19.
The article deals with response rates (mainly running and peak or terminal rates) on simple and on some mixed-FI schedules and explores the idea that these rates are determined by the average delay of reinforcement for responses occurring during the response periods that the schedules generate. The effects of reinforcement delay are assumed to be mediated by a hyperbolic delay-of-reinforcement gradient. The account predicts that (a) running rates on simple FI schedules should increase with increasing rate of reinforcement, in a manner close to that required by Herrnstein's equation, (b) improving temporal control during acquisition should be associated with increasing running rates, (c) two-valued mixed-FI schedules with equiprobable components should produce complex results, with peak rates sometimes being higher on the longer component schedule, and (d) effects of reinforcement probability on mixed-FI schedules should affect the response rate at the time of the shorter component only. All these predictions were confirmed by data, although effects in some experiments remain outside the scope of the model. In general, delay of reinforcement as a determinant of response rate on FI and related schedules (rather than temporal control on such schedules) seems a useful starting point for a more thorough analysis of some neglected questions about performance on FI and related schedules.
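The two expressions this abstract builds on are conventionally written as follows (standard textbook forms, not the article's specific parameterization). A hyperbolic delay-of-reinforcement gradient weights a reinforcer delivered $d$ seconds after a response as
$$v(d) = \frac{1}{1 + K d},$$
with $K$ a decay parameter, and Herrnstein's equation relates absolute response rate $B$ to obtained reinforcement rate $R$ as
$$B = \frac{k\,R}{R + R_e},$$
where $k$ is the asymptotic response rate and $R_e$ is the rate of background (extraneous) reinforcement. Prediction (a) is plausible on this account because shortening the FI both raises $R$ and shrinks the average delay $d$ between running-rate responses and food, so the gradient-weighted value of responding rises roughly hyperbolically with reinforcement rate.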

20.
Nicotine has been found to produce dose-dependent increases in impulsive choice (preference for smaller, sooner reinforcers relative to larger, later reinforcers) in rats. Such increases could be produced by either of two behavioral mechanisms: (1) an increase in delay discounting (i.e., exacerbating the impact of differences in reinforcer delays), which would increase the value of a sooner reinforcer relative to a later one, or (2) a decrease in magnitude sensitivity (i.e., diminishing the impact of differences in reinforcer magnitudes), which would increase the value of a smaller reinforcer relative to a larger one. To isolate which of these two behavioral mechanisms was likely responsible for nicotine's effect on impulsive choice, we manipulated reinforcer delay and magnitude using a concurrent variable-interval (VI 30 s, VI 30 s) schedule of reinforcement with two groups of Long-Evans rats (n = 6 per group). For one group, choices were made between a 1-s delay and a 9-s delay to 2 food pellets. For a second group, choices were made between 1 pellet and 3 pellets. Nicotine (vehicle, 0.03, 0.1, 0.3, 0.56 and 0.74 mg/kg) produced dose-dependent decreases in preference for large versus small magnitude reinforcers and had no consistent effect on preference for short versus long delays. This suggests that nicotine decreases sensitivity to reinforcer magnitude.
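One conventional way to formalize the distinction between the two candidate mechanisms (an illustration only; the functional form and parameter names are assumptions, not the study's fitted model) is a discounted-value expression with separate delay and magnitude parameters,
$$V = \frac{A^{s}}{1 + kD},$$
where $A$ is reinforcer magnitude, $D$ its delay, $k$ indexes delay discounting, and $s$ indexes magnitude sensitivity. Mechanism (1) is an increase in $k$, which would mainly shift choice in the delay group (1 s vs. 9 s); mechanism (2) is a decrease in $s$, which would mainly shift choice in the magnitude group (1 vs. 3 pellets). The reported dose-dependent loss of preference for the larger magnitude, with no consistent change in delay preference, fits the second pattern.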
