Paradigm

-Experimental Pattern

-Conceptual framework through which a scientist views the research in his field

Four Basic Experimental Paradigms

  • Habituation and Sensitization
  • Classical Conditioning
  • Instrumental Conditioning
  • Operant Conditioning

Habituation

-the decrease of an innate reaction to a stimulus due to repetition

-the unconditioned response decreases

-e.g. in conditioning, a tone that is presented over and over eventually elicits a weaker and weaker response

-more effective with the repetition of a weak stimulus than a strong one

-shows stimulus generalization and spontaneous recovery

Dishabituation

-occurs after an unconditioned stimulus has been repeatedly presented and the unconditioned response has habituated

-Pavlov’s description – disinhibition: a recovery of the response, due to presentation of a strong stimulus like a loud noise

-a temporary increase in the strength of the response which leaves the underlying habituation intact

Extinction

-A learned response to a conditioned stimulus grows weaker because the conditioned stimulus is presented without the unconditioned stimulus

-Elimination of the UCS and presentation of the CS alone results in extinction

Spontaneous Recovery

-if time is allowed to pass following extinction and the CS is again presented alone, the CR will show spontaneous recovery

Sensitization:

-The opposite of habituation

-The unconditioned response increases in strength

-does not have long lasting effects

-more effective with the repetition of a strong stimulus

-A higher form of learning than habituation

Example of Habituation and Sensitization

Habituation: Your neighbor’s radio is playing at a low level; you get accustomed to it and it gradually stops bothering you

Sensitization: Your neighbor’s radio is playing very loudly and you become more and more distracted
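A minimal sketch of these two effects in Python (illustrative only; the starting strength and the decay/growth factors are assumptions, not values from the notes):

```python
# Toy model: repeated presentation of a weak stimulus habituates the unconditioned
# response (it shrinks), while repetition of a strong stimulus sensitizes it (it grows).
# Initial strength and factors below are made-up illustrative values.

def repeat_stimulus(initial_strength, factor, presentations):
    """Return the response strength recorded on each successive presentation."""
    strengths = []
    strength = initial_strength
    for _ in range(presentations):
        strengths.append(round(strength, 2))
        strength *= factor
    return strengths

# Weak stimulus (e.g., a quiet radio): factor < 1, the response fades -- habituation.
print(repeat_stimulus(10.0, 0.7, 6))
# Strong stimulus (e.g., a loud radio): factor > 1, the response grows -- sensitization.
print(repeat_stimulus(10.0, 1.3, 6))
```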

Pseudoconditioning

-a result that is mistaken for conditioning

Classical Conditioning

-Pavlov and Twitmyer

-Also known as: Pavlovian conditioning and respondent conditioning

-Process by which a neutral stimulus (CS) comes to be associated with another stimulus (UCS) that elicits a response (UCR)

-CS + UCS = UCR

Unconditioned Stimulus (UCS)

-biologically potent stimulus that evokes an unlearned or reflexive reaction

-e.g. food

Conditioned Stimulus (CS)

-biologically weak stimulus, initially neutral

-the CS may evoke an orienting response but not the strong response evoked by the UCS

Unconditioned Response (UCR)

-the unlearned response triggered by the UCS

-powerful and reflexive

-e.g. salivation to food

Conditioned Response (CR)

-Elicited by the CS and represents the learned behavior

Temporal Arrangements of CS and UCS

Simultaneous: when the CS and UCS happen at the same time

Delay: the CS stays on until the UCS is presented (a delay of 0.5-1.0 second is most effective)

Trace: the CS goes on and off some time before the UCS is presented

Backward: the UCS precedes the CS (difficult to achieve)

Temporal: there is no explicit CS; the UCS is simply repeated at regular intervals
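The timing relations above can be laid out explicitly. The sketch below only illustrates the definitions; the times in seconds are invented for the example:

```python
# Each entry gives (CS onset, CS offset, UCS onset) in seconds for one trial.
# Values are illustrative only; temporal conditioning is omitted since it has no CS.
arrangements = {
    "simultaneous": (0.0, 1.0, 0.0),  # CS and UCS begin at the same time
    "delay":        (0.0, 1.5, 1.0),  # CS stays on until the UCS is presented
    "trace":        (0.0, 0.5, 2.0),  # CS goes on and off, then a gap, then the UCS
    "backward":     (2.0, 3.0, 0.0),  # UCS precedes the CS (difficult to achieve)
}

for name, (cs_on, cs_off, ucs_on) in arrangements.items():
    print(f"{name:12s} CS {cs_on:.1f}-{cs_off:.1f} s, UCS at {ucs_on:.1f} s")
```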

Higher-Order Conditioning

-Established in 2 stages

-Refers to a situation in which a previously neutral stimulus (e.g., a buzzer) is paired with an established conditioned stimulus (e.g., a light that has been conditioned with food to produce salivation) until it produces the same conditioned response as that conditioned stimulus.

  • CS1 (light) + UCS (food) = salivation
  • CS2 (buzzer) + CS1 (light) + no UCS = salivation to CS2

Compound Stimulus Conditioning

-a compound stimulus, composed of CS1 and CS2, is paired with the UCS

-configuring: when the animal only responds to the compound as a whole and not individual parts

Blocking

-prior conditioning of one component of the compound stimulus apparently prevents or blocks the conditioning of the other component

-e.g. if CS1 alone is paired with the UCS in the first phase and the CS1 + CS2 compound is paired with the UCS in the second phase, CS1 will elicit the conditioned response (CR) whereas CS2 will not.

Instrumental Conditioning

-Edward Thorndike

-The behavior learned is instrumental in obtaining some desired outcome or reward (reinforcement)

-Trial-by-trial learning

-Ex: Puzzle Box Experiment

Thorndike’s Puzzle Box Experiment

-a hungry cat is placed in a puzzle box

-food is placed outside of the box

-the cat must press a lever to escape and get the food

Escape Latency: the amount of time it took for the correct response to occur

-Over time the cat’s latency became shorter and shorter (that is, it took the cat less and less time to press the lever and get the food)

-The reinforcers: the food and escape from the confines of the box

Difference Between Instrumental Conditioning and Classical Conditioning

-Instrumental: the reinforcer (such as food) is only presented when an animal first makes an appropriate response

-Classical: the reinforcer (unconditioned stimulus) does not depend on whether or not the animal first responds

Complex Maze

-start box: where the animal is initially placed

-goal box: contains the reward

-pathway: links the start and the goal boxes

-experimenter measures the number of errors and the speed of the animal

T-Maze

-shaped like a T; has a start box at its base and a single choice point branching off toward two goal boxes

-the experimenter places one reward, like sugar water, in one goal box and plain water in the other goal box

-used to study preferences and discrimination learning

The Straight-Alley Maze

-start box and goal box with runway in between

-measures response latency (time) and speed

-can also be used to study discrimination learning (go – no go discrimination: a tone is sounded while the animal is in the start box – high tone = food, low tone = empty)

Shuttle Box

-Studies the effects of aversive stimuli such as electric shock

-2 compartments separated by a barrier over which the animal must jump

-Escape learning: with no warning, the animal makes a response that terminates the aversive stimulus once it has begun (e.g. jumping over the barrier)

-Avoidance learning: the animal predicts that the aversive stimulus is there, usually because of a warning signal – by responding, the animal can avoid contact with the aversive stimulus

Reinforcement

-a manipulation that, if it follows a particular response, increases the tendency of that response to occur again

Positive Reinforcer

-the presentation of a stimulus

Negative Reinforcer

-the removal or termination of a stimulus

-ex: escape learning (the animal is shocked and escapes; the termination of the shock is the negative reinforcer)

Primary Reinforcers

-stimuli that are reinforcing because of the animal’s genetic makeup rather than experience

Primary Positive Reinforcers

-appetitive stimuli that a deprived animal will approach, such as food or water

Primary Negative Reinforcers

-aversive stimuli that an animal will learn to avoid (such as a shock)

Punishment

-following a response with the presentation of an aversive stimulus

-i.e. response → aversive stimulus

Reinforcement Parameters

Amount of Reward:

-the larger the reward, the greater the speed

-the larger the reward, the shorter the latency

Delay of Reward:

-the time between performance of the response and administration of reinforcement

Grice’s Rat Study on Black-White Discrimination

-the shorter the delay, the more quickly the animals learned

-the animals with a 10-second delay of reward showed no evidence of improvement over the 50% chance rate even after several hundred trials

Classical and Instrumental Conditioning Both Use:

-extinction

-spontaneous recovery

Continuous Reinforcement

-Rewarding an animal on every trial (100 percent)

Partial Reinforcement

-Rewarding the animal on some fraction of the trials

Partial Reinforcement Effect (PRE)

-partial reinforcement groups extinguish more slowly than those of a group receiving continuous reinforcement

-Ex: Bower’s Straight-Alley Runway Experiment: the running response of a group of rats that had received 100% reinforcement extinguished more rapidly than that of rats that had received 50% reinforcement or less. Therefore, partial reinforcement = slower extinction

Operant Conditioning

-B. F. Skinner

-e.g. Skinner Box

-animal can make a response at any time

Skinner Box

-consists of a chamber, a floor, a food dispensing machine, and a manipulandum (a lever)

-secondary stimuli (discriminative stimuli): a tone or light

-behavior is recorded by a cumulative recorder as a cumulative curve (if the animal responds at a fast rate, the curve is steep; at a slow rate, the curve is gradual; see the sketch below)
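As a rough illustration of the cumulative curve (the response counts per minute are invented), the record is just a running total of responses over successive time bins:

```python
# Cumulative record sketch: the curve plots the running total of responses over time.
from itertools import accumulate

fast_responder = [5, 6, 5, 7, 6]  # many responses per minute -> steep curve
slow_responder = [1, 0, 2, 1, 1]  # few responses per minute -> gradual curve

print("fast:", list(accumulate(fast_responder)))  # [5, 11, 16, 23, 29]
print("slow:", list(accumulate(slow_responder)))  # [1, 1, 3, 4, 5]
```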

Free Operant

-a response like pressing a lever, which the subject can repeat as often as desired

Two Areas of Investigation in Operant Conditioning

-schedules of reinforcement

-investigation of control of behavior by discriminative stimuli

Reinforcement Schedule

-a rule relating the presentation of reward to the behavior of the animal

-continuous reinforcement: the animal receives a reward every time the correct response is made

-intermittent schedules: not all responses are reinforced

-ratio schedules: there is a predetermined ratio of responses to reinforcements (the more responses per unit time, the more reinforcements are received)

-fixed-ratio schedule: every nth response is reinforced (if n=5, then the animal must make 5 responses to get a reward)

-variable-ratio schedule: the animal must make a certain number of responses to obtain the reward but n varies from one reward to the next (if n=5 the first time, then n might equal another number the next time) [ex: game of chance]

-interval schedules: regardless of behavior, a certain period of time must follow a reward before another reward can be obtained (strictly limited time)

-fixed-interval schedules: responses are rewarded no more often than once every t seconds (on a 60-second schedule, the animal can receive at the most one reinforcer every 60 seconds)

[scallop: following each reward, there is a pause and then a gradual increase in response rate]

-variable-interval schedules: a certain time interval must elapse after each reward before another reward can be obtained, but the length of time varies
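A minimal sketch of these intermittent schedules as reward rules; the ratio, interval, and response times below are illustrative assumptions, not values from the notes:

```python
import random

def fixed_ratio(n, responses):
    """Fixed ratio: every nth response is reinforced."""
    return [(i + 1) % n == 0 for i in range(responses)]

def variable_ratio(mean_n, responses):
    """Variable ratio: reward after a varying number of responses averaging mean_n
    (like a game of chance)."""
    return [random.random() < 1.0 / mean_n for _ in range(responses)]

def fixed_interval(t, response_times):
    """Fixed interval: reinforce the first response made at least t seconds after the
    previous reward, so at most one reinforcer per t seconds."""
    rewarded, last_reward = [], float("-inf")
    for time in response_times:
        if time - last_reward >= t:
            rewarded.append(time)
            last_reward = time
    return rewarded

def variable_interval(mean_t, response_times):
    """Variable interval: like fixed interval, but the required wait after each reward
    varies around mean_t."""
    rewarded, last_reward = [], float("-inf")
    wait = random.uniform(0.5 * mean_t, 1.5 * mean_t)
    for time in response_times:
        if time - last_reward >= wait:
            rewarded.append(time)
            last_reward = time
            wait = random.uniform(0.5 * mean_t, 1.5 * mean_t)
    return rewarded

print(fixed_ratio(5, 12))                          # rewards on the 5th and 10th responses
print(variable_ratio(5, 12))                       # on average every 5th response rewarded
print(fixed_interval(60, [10, 30, 70, 90, 140]))   # [10, 70, 140]: at most one reward per 60 s
print(variable_interval(60, [10, 30, 70, 90, 140]))
```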

Extinction is fastest for:

-continuous reinforcement

Extinction is slowest for:

-variable-ratio and variable-interval schedules

DRL Schedules (differential reinforcement of a low rate of responding)

-reinforcement can only occur when an animal responds at a slow rate
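A sketch of the DRL rule (the required spacing and the response times are invented for illustration): a response pays off only if enough time has passed since the previous response.

```python
def drl(required_gap, response_times):
    """DRL sketch: reinforce a response only if at least required_gap seconds have
    passed since the previous response, so only slow responding is rewarded."""
    rewarded, last_response = [], float("-inf")
    for time in response_times:
        if time - last_response >= required_gap:
            rewarded.append(time)
        last_response = time
    return rewarded

print(drl(10, [0, 5, 16, 20, 35]))  # [0, 16, 35]: closely spaced responses go unrewarded
```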

Response Differentiation

-deliberate manipulation of characteristics of an animal’s response, such as its force

Stimulus Control

-if a stimulus is present whenever responding leads to reinforcement and is absent when it does not, the animal learns to respond only when the stimulus is present

Discriminative Stimuli (SD and SΔ)

SD

-the stimulus that signals availability of reward

-response rates are higher the more the test stimulus resembles the SD

SΔ

-the stimulus that signals non-availability of reward

Peak Shift

-there is a shift in the peak of the generalization gradient

-a behavioral response bias arising from discrimination learning in which animals display a directional, but limited, preference for or avoidance of unusual stimuli

Fading

-does not produce a peak shift

-an initial prompting to perform an action is gradually withdrawn until the need for it fades away

-produces errorless discrimination learning

Animal Psychophysics

-the investigation of the sensory capabilities of animals

Summary: A Comparison of Paradigms

UCS: a stimulus reflexively eliciting an innate response

Habituation: a decrease in the strength of the UCR

Sensitization: an increase in the strength of the UCR

Classical Conditioning: CS + UCS

Instrumental Conditioning: animal makes a response and is rewarded, either by positive or negative reinforcers

Operant Conditioning: the animal makes a response which is then reinforced; the response is freely available and there are no discrete trials, and reinforcement increases the rate or frequency with which the response is made

Habituation and Sensitization Compared with Conditioning

-Both display stimulus generalization and fading

-Habituation is like extinction of a conditioned response: response strength decreases with repetition, but it also shows spontaneous recovery over time

-Dishabituation, produced by an intense extraneous stimulus, is like disinhibition in classical conditioning

Classical Conditioning vs. Instrumental and Operant Conditioning

-The UCS in Classical Conditioning plays a role similar to the reward in instrumental conditioning

-In classical conditioning, the reinforcer is presented just after the CS (contingent on the stimulus)

-In instrumental conditioning, the reinforcer is presented just after the response (contingent on the response)

-Instrumental conditioning will only occur if the response that is to be conditioned does not easily habituate

OR (orienting response):

-when an animal hears or feels something and orients itself in a particular way, e.g. perks its ears or looks up

Autoshaping:

-a light is turned on shortly before the pigeons are given food

-the pigeons naturally peck (unconditioned response) at the food (unconditioned stimulus)

-through learning, the pigeons come to peck (conditioned response) at the light source (conditioned stimulus) that predicts food

Secondary Reinforcement: A Mixed Paradigm

-reinforcing power is acquired through learning

-secondary reinforcement refers to a situation in which a stimulus reinforces a behavior after it has been associated with a primary reinforcer

-Can be positive or negative

-Secondary Negative reinforcer: pairing a neutral stimulus with a primary negative reinforcer, such as a shock

-Money is one example of secondary reinforcement. Money can be used to reinforce behaviors because it can be used to acquire primary reinforcers such as food, clothing, shelter and other such things.

-Animal trainers sometimes use clickers as a type of secondary reinforcement. After pairing the sound of a clicker with praise or treats, the sound of the clicker alone can eventually work as a reinforcer.

Saltzman’s Secondary Reinforcement Experiment

  • rats are fed in the goal box
  • rats are given 15 trials of running a maze, with the choice between 2 paths
  • the rats were not fed in the maze, but they learned to pick the correct path that was associated with where they were fed originally

*the secondary reinforcers lose their power if they are not occasionally paired with primary reinforcers

Fear Conditioning

-the paradigm for demonstrating secondary negative reinforcement is often referred to as fear conditioning

-ex: the animal becomes afraid of a certain place because it was originally shocked there

Watson’s Little Albert Experiment:

-the baby performed an instrumental response (crawling away) to escape the white rat, which had become a secondary negative reinforcer through pairing with the loud noise

-an example of fear conditioning through secondary negative reinforcement