The Rescorla-Wagner model, published in 1972, is the most influential formal model of classical conditioning. It captures the key insight that learning is driven by prediction error — organisms learn when events are surprising and stop learning when outcomes are fully predicted.
The Learning Rule
On each trial, the change in the associative strength of CSₐ is

ΔVₐ = αₐβ(λ − ΣV)

where:
Vₐ = associative strength of CSₐ
αₐ = salience of CSₐ (0 to 1)
β = learning rate for the US (0 to 1)
λ = maximum conditioning supported by the US
ΣV = total associative strength of all CSs present
The term (λ − ΣV) is the prediction error: the discrepancy between the maximum conditioning the US supports and what is currently predicted. When multiple conditioned stimuli are present, they compete for the fixed pool of associative strength set by λ, which naturally explains phenomena like blocking and overshadowing.
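The update rule can be sketched directly in code. The parameter values below (α = 0.3, β = 0.5, λ = 1.0) are illustrative choices, not values from the text:

```python
# Minimal sketch of the Rescorla-Wagner update rule.
# Illustrative parameters: alpha=0.3, beta=0.5, lam=1.0.

def rw_update(v, alpha, beta, lam, v_total):
    """One trial's update: dV = alpha * beta * (lam - sum of V for all CSs present)."""
    return v + alpha * beta * (lam - v_total)

# Acquisition with a single CS: V climbs toward lam,
# and the increments shrink as the prediction error shrinks.
v = 0.0
for trial in range(10):
    v = rw_update(v, alpha=0.3, beta=0.5, lam=1.0, v_total=v)
print(round(v, 3))  # → 0.803
```

Because the increment is proportional to the remaining error, each trial closes a fixed fraction of the gap to λ, producing the familiar negatively accelerated learning curve.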
Predictions and Successes
The model elegantly explains acquisition (V approaches λ asymptotically), extinction (V approaches 0 when λ = 0), blocking (no learning to a redundant predictor), and the fact that learning decelerates as V approaches λ. Its prediction error formulation directly inspired the temporal difference (TD) learning algorithm in reinforcement learning and the discovery that dopamine neurons encode reward prediction errors.
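The blocking prediction falls out of the shared prediction error, as a short simulation shows. The two-phase design and the parameter values here are illustrative assumptions:

```python
# Hedged sketch of blocking: pretrain CS A with the US, then present the
# compound AB with the US. Because A already predicts the US, the shared
# prediction error is near zero and B acquires almost no strength.
# Parameters (alpha=0.3, beta=0.5, lam=1.0) are illustrative.

alpha, beta, lam = 0.3, 0.5, 1.0
v_a, v_b = 0.0, 0.0

# Phase 1: A alone is paired with the US until V_A is near asymptote.
for _ in range(50):
    v_a += alpha * beta * (lam - v_a)

# Phase 2: the compound AB is paired with the US;
# both stimuli update from the same prediction error.
for _ in range(50):
    error = lam - (v_a + v_b)
    v_a += alpha * beta * error
    v_b += alpha * beta * error

print(round(v_a, 2), round(v_b, 2))  # → 1.0 0.0
```

B ends the experiment with essentially zero associative strength despite 50 pairings with the US: its redundancy, not its pairing history, determines what is learned.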
Limitations
The model cannot account for latent inhibition (pre-exposure retards conditioning), spontaneous recovery, or configural learning (where a compound AB has different properties than A or B alone). These limitations motivated successor models including Pearce-Hall (variable attention) and Mackintosh (attention to the best predictor).