Answer to Question 1
e
Answer to Question 2
Under continuous reinforcement, every occurrence of the target behavior is reinforced. For example, the rat in the Skinner box receives one food pellet every time it presses the lever. Learning occurs quickly under continuous reinforcement, but extinction also occurs quickly.

With a fixed-ratio schedule of reinforcement, the reinforcer is given after a predetermined number of responses has been made. For example, under an FR-3 schedule, the rat in the Skinner box receives a food pellet after every third bar press.

Under a variable-ratio schedule, the number of correct responses required for reinforcement varies around some predetermined average. For example, under a VR-5 schedule, the rat in the Skinner box may have to press the lever 8 times to earn a reinforcer on one occasion, while on another occasion the first bar press is reinforced. Over a large number of trials, the required number of bar presses averages out to 5. Variable-ratio schedules usually produce a very high, steady rate of responding and are quite resistant to extinction.

Under a fixed-interval schedule, reinforcement is given for the first correct response after a fixed amount of time has passed. For example, under an FI-15 schedule, the rat in a Skinner box receives a food pellet for the first bar press after a 15-second interval has elapsed. Fixed-interval schedules frequently produce a scalloped response pattern, in which the frequency of responding drops after a reinforcer is delivered and then increases near the end of the interval. Elderly people sometimes display this pattern of behavior when checking the mail: they watch for the letter carrier and check their mailboxes several times as the time for mail delivery approaches, but once the letter carrier has come, they stop checking the mail until it is almost time for the next day's delivery.

Under a variable-interval schedule, the amount of time that must pass before a response is reinforced varies from occasion to occasion. For example, a VI-30 schedule means that the period between reinforcements varies around an average of 30 seconds; on some trials the interval is shorter, on others longer. VI schedules tend to produce slow, steady response rates and tend to be more resistant to extinction than behaviors reinforced on a fixed-interval schedule.
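Because each schedule is just a rule for deciding whether a given response earns a reinforcer, the contingencies can be made concrete with a small simulation. The following Python sketch is illustrative only: the class names, the respond method, and the way the variable schedules draw their requirements are assumptions for this example, not a standard API. Continuous reinforcement corresponds to FR-1.

    import random

    class FixedRatio:
        """Deliver a reinforcer after every n-th response (FR-n); FR-1 is continuous reinforcement."""
        def __init__(self, n):
            self.n = n
            self.count = 0

        def respond(self):
            self.count += 1
            if self.count >= self.n:
                self.count = 0
                return True  # reinforcer delivered
            return False

    class VariableRatio:
        """Deliver a reinforcer after a response count that varies around n (VR-n)."""
        def __init__(self, n):
            self.n = n
            self.count = 0
            self.required = self._draw()

        def _draw(self):
            # Uniform on 1..2n-1, so the long-run average requirement is n.
            return random.randint(1, 2 * self.n - 1)

        def respond(self):
            self.count += 1
            if self.count >= self.required:
                self.count = 0
                self.required = self._draw()
                return True
            return False

    class FixedInterval:
        """Reinforce the first response made after t seconds have elapsed (FI-t)."""
        def __init__(self, t):
            self.t = t
            self.last_reinforced = 0.0

        def respond(self, now):
            if now - self.last_reinforced >= self.t:
                self.last_reinforced = now
                return True
            return False

    class VariableInterval:
        """Reinforce the first response after an interval that varies around t seconds (VI-t)."""
        def __init__(self, t):
            self.t = t
            self.last_reinforced = 0.0
            self.wait = self._draw()

        def _draw(self):
            # Uniform on [0.5t, 1.5t], so the average interval is t.
            return random.uniform(0.5 * self.t, 1.5 * self.t)

        def respond(self, now):
            if now - self.last_reinforced >= self.wait:
                self.last_reinforced = now
                self.wait = self._draw()
                return True
            return False

    # FR-3: every third bar press is reinforced.
    fr3 = FixedRatio(3)
    print([fr3.respond() for _ in range(6)])  # [False, False, True, False, False, True]

    # FI-15: only the first press after 15 s is reinforced.
    fi15 = FixedInterval(15)
    print([fi15.respond(now) for now in (5, 10, 16, 17, 31)])  # [False, False, True, False, True]

Note how the FI-15 trace mirrors the scalloped pattern described above: presses made right after a reinforcer (here at 17 s) go unrewarded, while presses near the end of the interval pay off.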