Similar literature (20 results)
1.
Transformation of the expectation phenomenon into temporal regulation is usually achieved by suppressing preliminary factors or by physically modifying them. Our studies show that such a transformation can be obtained in dogs using the Kupalov paradigm with the presentation of additional stimuli. These stimuli, physically identical to the signals that interrupt expectation, are randomly introduced within the temporal limit. The absence of reinforcement in response to the additional stimulus impels the animal to include temporal regulation in its behaviour, and an additional negative discriminative stimulus promotes the expression of the active character of inhibition. These circumstances make our pattern similar to DRRD (differential reinforcement of response duration). To evaluate the merits of this procedure, the influence of an anxiolytic (diazepam) and a neuroleptic (clozapine) on the stabilized reaction of the experimental animals was studied. The lengthening of response durations by clozapine and their shortening by diazepam, as well as the dose-dependent influence of these substances on response frequency, confirm the results of previous studies of DRRD and DRL (differential reinforcement of low rate of responses) and demonstrate the differential pharmacological sensitivity of our procedure.

2.
This study evaluated the effect of a signal on resistance to change using a multiple schedule of reinforcement. Experiment 1 presented pigeons with three schedules: a signaled delay to reinforcement schedule (a two-link chain schedule with a variable-interval 120-s initial link followed by a 5-s fixed-time schedule), an unsignaled delay schedule (a comparable two-link tandem schedule), and an immediate, zero-delay variable-interval 125-s schedule. Two separate disruption procedures assessed resistance to change: extinction and adding a variable-time 20-s schedule of reinforcement to the inter-component interval. Resistance to change tests were conducted twice, once with the signal stimulus (the terminal link of the chain schedule) present and once with it absent. Results from both disruption procedures showed that signal absence reduced resistance to change for the pre-signal stimulus. In probe choice tests, subjects strongly preferred the signal stimulus over the unsignaled stimulus and exhibited no reliable preference when given a choice between the signal stimulus and the immediate stimulus. Experiment 2 presented two equal signaled schedules where, during resistance to change tests, the signal remained for one schedule and was removed for the second. Resistance to change was consistently lower when the signal was absent.
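The immediate component's VI 125-s value appears chosen so that the overall programmed time to food roughly matches the chain and tandem components (VI 120 s plus a 5-s terminal delay). A minimal sketch of that arithmetic, assuming exponentially distributed VI intervals for illustration:

import random

def mean_time_to_food(vi_mean, terminal_delay, n=100_000):
    # Average programmed time from component onset to food delivery:
    # an exponential variable-interval initial link plus a fixed terminal delay.
    return sum(random.expovariate(1.0 / vi_mean) + terminal_delay
               for _ in range(n)) / n

# Chain and tandem components: VI 120 s followed by a 5-s delay (~125 s).
print(mean_time_to_food(120, 5))
# Immediate component: VI 125 s with no delay (~125 s).
print(mean_time_to_food(125, 0))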

3.
Past studies using the concurrent-chain procedure showed that pigeons and humans generally prefer an unsegmented schedule to a segmented schedule. This finding is ostensibly inconsistent with theories of conditioned reinforcement such as delay-reduction theory. In the present study with humans, two changes in the basic segmented schedule were implemented to resolve this inconsistency. The first change was that in the segmented schedule the terminal-link stimulus (S+ stimulus) changed late in the terminal link, close to reinforcement presentation. The second change was that the presentation of the segmenting stimulus, S+, was brief, allowing a reinstatement of the early terminal-link stimulus, which is contiguous with reinforcement. Our data constitute the first demonstration of preference for the segmented schedule when a brief S+ is correlated with a greater reduction in delay to reinforcement.
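Delay-reduction theory ties a terminal-link stimulus's conditioned reinforcing value to the improvement in expected time to food that its onset signals. A rough sketch of that calculation, with invented interval values rather than the schedule parameters used in this study:

def delay_reduction(mean_time_to_food, time_remaining_at_onset):
    # Delay-reduction theory: a stimulus's conditioned reinforcing value is
    # proportional to T - t, the reduction in expected time to food signaled
    # by its onset (T = average time to food from trial onset, t = time left).
    return mean_time_to_food - time_remaining_at_onset

T = 60.0  # hypothetical average time to reinforcement
print(delay_reduction(T, time_remaining_at_onset=30.0))  # segmenting stimulus early in the link
print(delay_reduction(T, time_remaining_at_onset=5.0))   # brief S+ close to food: larger reduction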

4.
Key pecking of pigeons was maintained by a fixed-interval (FI) 61-s schedule. The effects of resetting and nonresetting unsignaled delays of reinforcement were then examined. The resetting delay was programmed as a differential-reinforcement-of-other-behavior schedule, and the nonresetting delay as a fixed-time schedule. Three delay durations (0.5, 1, and 10 s) were examined. Overall response rates were decreased by 1- and 10-s delays and increased by 0.5-s delays. Response patterns changed from positively accelerated to more linear when resetting or nonresetting 10-s delays were imposed, but remained predominantly positively accelerated when resetting and nonresetting 0.5- and 1-s delays were in effect. In general, temporal control, as measured by quarter-life values, changed less than overall response rates when delays of reinforcement were in effect. Thus, the response patterns controlled by FI schedules are more resilient to the nominally disruptive effects of delays of reinforcement than are the corresponding overall response rates.
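Quarter-life, the temporal-control index used here, is the elapsed portion of the fixed interval by which one quarter of that interval's responses have occurred. One way it might be computed from response times within a single FI 61-s interval (illustrative values only):

def quarter_life(response_times, interval=61.0):
    # Elapsed time, as a fraction of the interval, at which 25% of the
    # interval's responses have been emitted.
    times = sorted(t for t in response_times if 0.0 <= t <= interval)
    if not times:
        return None
    k = max(1, round(0.25 * len(times)))
    return times[k - 1] / interval

# Positively accelerated responding (responses bunched late) gives a high
# quarter-life; roughly linear responding gives a value near 0.25.
print(quarter_life([40, 45, 50, 52, 55, 57, 58, 59, 60, 60.5]))  # ~0.74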

5.
Behavioral momentum theory is an evolving theoretical account of the strength of behavior. One challenge for the theory is specifying the role of signal stimuli in determining response strength. This study evaluated the effect of an unsignaled delay between the initial link and terminal link of a two-link chain schedule on resistance to change using a multiple schedule of reinforcement. Pigeons were presented with two signaled delay-to-reinforcement schedules. Both employed a two-link chain schedule with a variable-interval 120-s initial link followed by a 5-s fixed-time terminal link. One of the schedules included a 5-s unsignaled delay between the initial link and the terminal link. Resistance to change was assessed with two separate disruption procedures: extinction and adding a variable-time 20-s schedule of reinforcement to the inter-component interval. Baseline responding was lower in the schedule with the unsignaled delay, but resistance to change for the initial link was unaffected by it. The results suggest that not all unsignaled delays are equal in their effect on resistance to change.
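Behavioral momentum theory is often formalized by expressing disrupted responding as a proportion of baseline, with resistance growing with the reinforcement rate obtained in a component's presence. A hedged sketch of that relation (the exponent and disruptor value are placeholders, not fitted to these data):

def log_proportion_of_baseline(disruptor_x, reinforcer_rate_r, b=0.5):
    # One common behavioral-momentum form: log(Bx/B0) = -x / r**b, so
    # responding falls off less, for the same disruptor, in the component
    # with the richer reinforcement rate.
    return -disruptor_x / (reinforcer_rate_r ** b)

baseline_rate = 60.0  # hypothetical baseline responses per minute
for r in (30.0, 120.0):  # reinforcers per hour in two components
    proportion = 10 ** log_proportion_of_baseline(1.0, r)
    print(r, round(baseline_rate * proportion, 1))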

6.
This study explored whether loud auditory stimuli could be used as functional punishing stimuli in place of electric shock. Three experiments examined the effect of a loud auditory stimulus on rats' responding maintained by a concurrent reinforcement schedule. In Experiment 1, overall response rate decreased when a concurrent 1.5-s tone-presentation schedule was superimposed on the concurrent variable-interval (VI) 180-s, VI 180-s reinforcement schedule. In contrast, response rate increased when a click-presentation schedule was added. In Experiment 2, the extent of response suppression with a 1.5-s tone presentation varied as a function of the frequency of reinforcement maintaining responding; the leaner the schedule employed, the greater the response suppression. In Experiment 3, response suppression was inversely related to the duration of the tone, and response facilitation was observed when a 3.0-s tone was used. In Experiments 1 and 2, preference shifted towards the alternative with the richer reinforcement when the tone schedule was added. In contrast, preference shifted towards the leaner alternative when the click or the longer-duration stimulus was used. These results imply that both the type and the duration of a loud auditory stimulus, as well as the reinforcement schedule maintaining responding, play a critical role in determining the stimulus's effect on responding. They also suggest that a loud auditory stimulus can be used as a positive punisher in a choice situation for rats when the tone is brief and the maintaining reinforcement schedule is lean.

7.
Reinforcement omission effects (ROEs), indicated by higher response rates after nonreinforced trials in a partial reinforcement schedule, have been interpreted as transient behavioral facilitation after nonreinforcement, induced by primary frustration, and/or transient behavioral inhibition after reinforcement, induced by demotivation or temporal control. The size of the ROEs should depend directly on reinforcement magnitude. The present experiment aimed to clarify the relationship between reinforcement magnitude and the omission effects by manipulating the magnitude linked to discriminative stimuli in a partial-reinforcement FI schedule. The results showed that response rates were higher after omission than after reinforcement delivery. Moreover, response rates were highest immediately after omission of the larger-magnitude reinforcer rather than the smaller one. These data are interpreted in terms of a multiple-process account of ROEs: behavioral facilitation after nonreinforcement and transient behavioral inhibition after reinforcement.

8.
A probability analysis was carried out of the appearance of single elements of rat behaviour during extinction of a conditioned alimentary motor reflex. The dynamics of effector behavioural components under a sudden cessation of reinforcement (the usual extinction schedule) was compared with cessation of reinforcement signalled by a previously differentiated stimulus and with cessation preceded by a stimulus initially unknown to the animal. If the cessation of reinforcement is signalled by a previously differentiated (negative) stimulus, the animals "lose the aim" in response to it, which is revealed in a rapid and complete reduction of all elements of goal-directed alimentary behaviour. Evidently, the differentiation signal actualises the memory trace of "nonreinforcement" formed in the animal's previous negative experience; this is revealed in accelerated inhibition of the alimentary motor reflex under extinction.

9.
In discrete trials, pigeons were presented with two alternatives: to wait for a larger reinforcer, or to respond and obtain a smaller reinforcer immediately. The choice of the former was defined as self-control, and the choice of the latter as impulsiveness. The stimulus that set the opportunity for an impulsive choice was presented after a set interval from the onset of the stimulus that signaled the waiting period. That interval increased or decreased from session to session so that the opportunity for an impulsive choice became available either more removed from or closer in time to the presentation of the larger reinforcer. In three separate conditions, the larger reinforcer was delivered according to either a fixed interval (FI) schedule, a fixed time (FT) schedule, or a differential reinforcement of other behavior (DRO) schedule. The results showed that impulsive choices increased as the opportunity for such a choice was more distant in time from presentation of the larger reinforcer. Although the schedule of the larger reinforcer affected the rate of response in the waiting period, the responses themselves had no effect on choice unless the responses postponed presentation of the larger reinforcer.

10.
Pigeons' responses were recorded in successive 15-s subintervals of 60-s components of several multiple variable-interval schedules of food reinforcement. In the standard multiple schedule or successive discrimination, discriminative stimuli were present throughout each component. In the delayed discrimination or memory procedure, red or green stimuli were present in the first 15 s of components and were followed by a white stimulus for the remainder of both components. Ratios of responses in the first 15 s of the two components, where discriminative stimuli were present, were sensitive to ratios of reinforcers obtained in the two components, to the same extent in both multiple and memory procedures. In both procedures, sensitivity to reinforcement decreased systematically over component subintervals, but to a greater extent in the memory procedure where discriminative stimuli were absent. The reduction in sensitivity with time since presentation of prior discriminative stimuli in the memory procedure was therefore influenced by two main factors: delayed stimulus control by the discriminative stimuli presented earlier in the component, and a decrease in sensitivity to reinforcement with increasing time since component alternation.
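Sensitivity to reinforcement in this kind of analysis is usually the slope of the generalized matching relation between log response ratios and log obtained-reinforcer ratios across the two components. A minimal estimation sketch with fabricated ratios:

import numpy as np

def matching_sensitivity(response_ratios, reinforcer_ratios):
    # Generalized matching law: log(B1/B2) = a*log(R1/R2) + log(c).
    # Returns the sensitivity a and bias c from a least-squares fit.
    slope, intercept = np.polyfit(np.log10(reinforcer_ratios),
                                  np.log10(response_ratios), 1)
    return slope, 10 ** intercept

print(matching_sensitivity(response_ratios=[0.4, 0.8, 1.0, 1.6, 2.8],
                           reinforcer_ratios=[0.25, 0.5, 1.0, 2.0, 4.0]))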

11.
The effect of signals on resistance to change was evaluated using pigeons responding on a three-component multiple schedule. Each component contained a variable-interval initial link followed by a fixed-time terminal link. One component was an unsignaled-delay schedule, and two were equivalent signaled-delay schedules. After baseline training, resistance to change was assessed through (a) extinction and (b) adding free food to the intercomponent interval. During these tests, the signal stimulus from one of the signaled-delay components (SIG-T) was replaced with the initial-link stimulus from that component, converting it to an unsignaled-delay schedule. That signal stimulus was added to the delay period of the unsignaled-delay component (UNS), converting it to a signaled-delay schedule. The remaining signaled component remained unchanged (SIG-C). Resistance-to-change tests showed that removing the signal had a minimal effect on resistance to change in the SIG-T component compared to the unchanged SIG-C component, except for one block during free-food testing. Adding the signal to the UNS component significantly increased response rates, suggesting that this component had low response strength. Interestingly, this effect was in the opposite direction from what is typically observed. The results are consistent with the conclusion that the signal functioned as a conditioned reinforcer and inconsistent with a generalization-decrement explanation.

12.
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

13.
Four rats were exposed to two different stimuli (either lights or tones), each stimulus being correlated with independent probabilities of water delivery in a temporally defined schedule. The schedule consisted of a 60-s T cycle with successive 30-s t(D) and t(-) subcycles; t(D) was correlated with a probability of water delivery of 1.0 and t(-) with a probability of 0.0. The schedule was maintained for 180 sessions and extended for 25 additional sessions in which the stimulus in t(-) was omitted. The four rats showed low frequencies of responding, response frequency being slightly higher in t(-) than in t(D). The percentage of lost reinforcers was independent of response frequency. The rats that lost fewer reinforcers were those that obtained more water deliveries during the first 15 cycles of each session. These results show that stimulus control does not develop in limited-hold temporal schedules, and that the effectiveness of the response-reinforcer relation may depend on the initial contact with reinforcers in the first cycles of the session.
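A reinforcer in this schedule is presumably "lost" when no response occurs before the 30-s t(D) subcycle ends (the limited hold). As a toy calculation only, assuming responses were spread through the cycle as a Poisson process at the observed overall rate (an assumption, not the paper's analysis), the expected loss fraction would depend strongly on response rate:

import math

def expected_lost_fraction(responses_per_min, t_d=30.0):
    # Probability of emitting no response during the 30-s t(D) subcycle
    # when responding is modeled as a Poisson process.
    return math.exp(-(responses_per_min / 60.0) * t_d)

for rpm in (0.5, 2.0, 6.0):
    print(rpm, round(expected_lost_fraction(rpm), 3))

That losses were in fact independent of response frequency suggests responding was not distributed this way, in line with the authors' emphasis on early contact with reinforcers in the session.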

14.
The article deals with response rates (mainly running and peak or terminal rates) on simple and on some mixed-FI schedules and explores the idea that these rates are determined by the average delay of reinforcement for responses occurring during the response periods that the schedules generate. The effects of reinforcement delay are assumed to be mediated by a hyperbolic delay-of-reinforcement gradient. The account predicts that (a) running rates on simple FI schedules should increase with increasing rate of reinforcement, in a manner close to that required by Herrnstein's equation, (b) improving temporal control during acquisition should be associated with increasing running rates, (c) two-valued mixed-FI schedules with equiprobable components should produce complex results, with peak rates sometimes being higher on the longer component schedule, and (d) effects of reinforcement probability on mixed-FI schedules should affect the response rate at the time of the shorter component only. All these predictions were confirmed by data, although effects in some experiments remain outside the scope of the model. In general, delay of reinforcement as a determinant of response rate on FI and related schedules (rather than temporal control on such schedules) seems a useful starting point for a more thorough analysis of some neglected questions about performance on FI and related schedules.
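The two quantitative ingredients named here are a hyperbolic delay-of-reinforcement gradient and Herrnstein's single-alternative equation. A sketch with placeholder parameters (k, K and Re are illustrative, not the article's fitted values):

def delayed_value(amount, delay_s, k=0.2):
    # Hyperbolic delay-of-reinforcement gradient: V = A / (1 + k*D).
    return amount / (1.0 + k * delay_s)

def herrnstein_rate(rft_per_hour, k_max=100.0, r_e=20.0):
    # Herrnstein's equation: B = K*R / (R + Re).
    return k_max * rft_per_hour / (rft_per_hour + r_e)

# Running rate should rise as the FI shortens (higher reinforcement rate),
# and responses emitted closer to food carry more weight under the gradient.
for fi_s in (30, 61, 120, 240):
    print(fi_s, round(herrnstein_rate(3600.0 / fi_s), 1),
          round(delayed_value(1.0, delay_s=fi_s / 2.0), 3))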

15.
The differential duration threshold of cats was investigated by the method of limits in a schedule requiring discrimination of empty durations. The standard stimulus was 4 s long throughout the experiment. The comparison stimulus was reduced from 10 to 5 s in 1-s steps across successive blocks of 5 sessions. Standard and comparison stimuli, delimited by 50-ms auditory signals, were presented equiprobably in a random sequential order across trials. After a 2-s delay, an auditory signal indicated that reinforcement was available upon a response on one of two levers. Weber fractions around .25 were obtained. Strong response biases developed in most cats. Some consequences of the inhibition of responding induced by the procedure are considered.
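For scale, with the 4-s standard used here a Weber fraction of about .25 corresponds to a just-discriminable difference of roughly 1 s (ΔT/T = 1 s / 4 s = .25), so comparison durations of about 5 s and longer could be reliably distinguished from the standard.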

16.
Two major dimensions of any contingency of reinforcement are the temporal relation between a response and its reinforcer, and the relative frequency of the reinforcer given the response versus when the response has not occurred. Previous data demonstrate that time, per se, is not sufficient to explain the effects of delay-of-reinforcement procedures; needed in addition is some account of the events occurring in the delay interval. Moreover, the effects of the same absolute time values vary greatly across situations, such that any notion of a standard delay-of-reinforcement gradient is simplistic. The effects of reinforcers occurring in the absence of a response depend critically upon the stimulus conditions paired with those reinforcers, in much the same manner as has been shown with Pavlovian contingency effects. However, it is unclear whether the underlying basis of such effects is response competition or changes in the calculus of causation.
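The second dimension, reinforcer frequency given the response versus in its absence, is often summarized by the contingency measure ΔP. A minimal illustration with invented event counts:

def delta_p(rft_with_response, response_opportunities,
            rft_without_response, no_response_opportunities):
    # Contingency: P(reinforcer | response) - P(reinforcer | no response).
    return (rft_with_response / response_opportunities
            - rft_without_response / no_response_opportunities)

print(delta_p(30, 40, 5, 60))   # positive: responding raises reinforcer probability
print(delta_p(10, 40, 15, 60))  # zero: reinforcers equally likely either way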

17.
Amphetamine and its analogs have been shown to affect operant behavior maintained on a differential-reinforcement-of-low-rate (DRL) schedule. The aim of the present study was to investigate which specific component of the DRL response is affected by d-amphetamine. The acute effects of d-amphetamine on a DRL task were compared with those of the selective dopamine D1 and D2 receptor antagonists SCH23390 and raclopride, respectively. Pentylenetetrazole and ketamine were also used as reference drugs for comparison with d-amphetamine as a psychostimulant. Rats were trained to press a lever for water reinforcement on a DRL 10-s schedule. Acute treatment with d-amphetamine (0, 0.5, and 1.0 mg/kg) significantly increased the response rate and decreased reinforcement in a dose-related fashion. It also caused a horizontal leftward shift in the inter-response time (IRT) distribution at the doses tested. Such a shift was confirmed by a significant decrease in peak time, while mean peak rate and burst responses remained unaffected. In contrast, both SCH23390 (0, 0.05, and 0.10 mg/kg) and raclopride (0, 0.2, and 0.4 mg/kg) significantly decreased total, non-reinforced, and burst responses. The de-burst IRT distributions were flattened, as shown by dose-related decreases in mean peak rate for both dopamine antagonists, but no dramatic shift in peak time was detected. Interestingly, neither pentylenetetrazole (0, 5, and 10 mg/kg) nor ketamine (0, 1, and 10 mg/kg) disrupted DRL performance. It is thus conceivable that d-amphetamine at the doses tested affects the temporal regulation of DRL behavior, an effect derived from its action as a psychostimulant. Taken together, these data suggest that different behavioral components of the DRL task are differentially sensitive to pharmacological manipulation.
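The peak time and peak rate referred to here are typically taken from the inter-response-time (IRT) distribution after burst responses (very short IRTs) are removed. A rough sketch of that computation, with arbitrary burst-cutoff and bin settings:

import numpy as np

def irt_peak(irts, burst_cutoff=0.5, bin_width=1.0, max_irt=30.0):
    # De-burst the IRTs, bin them, and return the midpoint of the tallest
    # bin (peak time) and the proportion of IRTs it contains (peak rate).
    kept = np.asarray([t for t in irts if burst_cutoff <= t <= max_irt])
    edges = np.arange(burst_cutoff, max_irt + bin_width, bin_width)
    counts, edges = np.histogram(kept, bins=edges)
    props = counts / counts.sum()
    i = int(np.argmax(props))
    return (edges[i] + edges[i + 1]) / 2.0, float(props[i])

# A leftward shift in peak time with unchanged peak rate is the pattern the
# authors attribute to d-amphetamine's effect on temporal regulation.
print(irt_peak(np.random.default_rng(0).normal(10.5, 2.0, 500)))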

18.
Temporal regulation in cats was studied using operant conditioning techniques; milk was delivered according to either a two-lever response duration schedule or a two-lever DRL schedule. After an equal number of experimental sessions, the subjects on the DRL schedule reached larger delay values than the subjects on the response duration schedule. Hypotheses concerning the intervention of proprioceptive feedback in the temporal regulation of motor behavior are discussed.

19.
Dopaminergic models based on the temporal-difference learning algorithm usually do not differentiate trace from delay conditioning. Instead, they use a fixed temporal representation of elapsed time since conditioned stimulus onset. Recently, a new model was proposed in which timing is learned within a long short-term memory (LSTM) artificial neural network representing the cerebral cortex (Rivest et al. in J Comput Neurosci 28(1):107–130, 2010). In this paper, that model's ability to reproduce and explain relevant data, as well as its ability to make interesting new predictions, are evaluated. The model reveals a strikingly different temporal representation between trace and delay conditioning since trace conditioning requires working memory to remember the past conditioned stimulus while delay conditioning does not. On the other hand, the model predicts no important difference in DA responses between those two conditions when trained on one conditioning paradigm and tested on the other. The model predicts that in trace conditioning, animal timing starts with the conditioned stimulus offset as opposed to its onset. In classical conditioning, it predicts that if the conditioned stimulus does not disappear after the reward, the animal may expect a second reward. Finally, the last simulation reveals that the buildup of activity of some units in the networks can adapt to new delays by adjusting their rate of integration. Most importantly, the paper shows that it is possible, with the proposed architecture, to acquire discharge patterns similar to those observed in dopaminergic neurons and in the cerebral cortex on those tasks simply by minimizing a predictive cost function.
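For orientation, the dopamine-like signal in these models is the temporal-difference prediction error; the conventional account uses a fixed serial representation of time since CS onset, which the LSTM model replaces with learned timing. A minimal tabular TD(0) sketch of the fixed-representation case (trial structure and parameters are illustrative):

import numpy as np

def td_trial(values, reward_step=8, alpha=0.1, gamma=0.98):
    # One Pavlovian trial of tabular TD(0). values[t] is V(t) over a fixed
    # serial representation of time since CS onset; delta is the analogue
    # of the phasic dopamine signal in these models.
    deltas = np.zeros(len(values))
    for t in range(len(values) - 1):
        reward = 1.0 if t == reward_step else 0.0
        deltas[t] = reward + gamma * values[t + 1] - values[t]
        values[t] += alpha * deltas[t]
    return deltas

values = np.zeros(12)
for _ in range(500):          # repeated CS-US pairings
    deltas = td_trial(values)

# After training, V(t) ramps up toward the reward time and the error at the
# reward step has largely vanished (the reward is predicted).
print(np.round(values, 2))
print(round(float(deltas[8]), 3))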

20.
The effects of drugs on punished responding depend on interactions among a large number of experimental variables. Among these variables are the drug history of the animal, the dose of the drug administered, the type of stimulus used to punish responding, the intensity and duration of the punishing stimulus, the schedule of presentation of the punishing stimulus, the control rate and pattern of punished responding, the schedule of positive reinforcement maintaining the punished responding, the species of animal, the deprivation state of the animal, the behavioral history of the animal, and the nature of the required response. Although it is not known how all of these variables interact to determine the effect of drugs on punished responding, there is evidence that many of these variables are important as determinants of drug effects. The task facing behavioral pharmacologists studying drug effects on punished responding is to determine under what conditions drugs produce their characteristic effects on punished responding.
