Bayesian models of learning treat the learner as performing approximate Bayesian inference — combining prior beliefs with observed data to compute posterior beliefs. This framework provides a normative benchmark against which human learning can be evaluated: it specifies what an ideal learner would believe given the same information, allowing researchers to identify where and how human learning deviates from optimality.
The Bayesian Framework
Prior: P(θ) — belief about hypothesis θ before seeing the data
Likelihood: P(data|θ) — probability of the data given hypothesis θ
Posterior: P(θ|data) = P(data|θ)·P(θ) / P(data)
Predictive: P(x_new|data) = ∫ P(x_new|θ)·P(θ|data)dθ
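These updates can be made concrete with a small worked example. The sketch below uses a Beta-Binomial model (an illustrative choice, not one drawn from the text): the hypothesis θ is a coin's probability of heads, the prior is Beta(a, b), and conjugacy gives the posterior and predictive in closed form, so the normalizer P(data) never needs to be computed explicitly. The specific prior counts and data are assumed for illustration.

```python
# Beta-Binomial conjugate updating (illustrative sketch).
# Hypothesis θ = probability of heads; prior Beta(a, b).
a, b = 2.0, 2.0          # assumed prior pseudo-counts
heads, tails = 7, 3      # assumed observed data

# Posterior: Beta(a + heads, b + tails).
# Conjugacy absorbs the normalizer P(data) into the updated counts.
a_post, b_post = a + heads, b + tails

# For this model the posterior predictive P(x_new = heads | data)
# equals the posterior mean a_post / (a_post + b_post).
predictive_heads = a_post / (a_post + b_post)
print(predictive_heads)  # 9/14 ≈ 0.643
```

Note that the predictive probability (9/14) differs from the raw empirical frequency (7/10): the prior still pulls the estimate toward 0.5, and its influence shrinks as data accumulate.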
Advantages over Error-Driven Models
Unlike the Rescorla-Wagner model, Bayesian models naturally represent uncertainty about learned parameters. A Bayesian learner knows not just the expected outcome but how confident it should be in that expectation. This uncertainty tracking enables optimal exploration (seek information where uncertainty is highest), appropriate learning rates (learn more from surprising events, less from expected ones), and one-shot learning (a single dramatic event can be highly informative).
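The link between uncertainty and learning rate can be sketched with a simple Gaussian model (an assumed example, not a specific model from the text): a learner tracks a scalar reward mean with known observation noise. Each Bayesian update is a Kalman-filter step whose gain acts as an effective learning rate, large while the learner is uncertain and shrinking as evidence accumulates.

```python
# Uncertainty-driven learning rates: Gaussian mean tracking (assumed setup).
obs_var = 1.0        # assumed observation noise variance
mu, var = 0.0, 10.0  # prior mean and variance (high initial uncertainty)

gains = []
for reward in [5.0, 5.2, 4.8, 5.1]:   # assumed observations
    gain = var / (var + obs_var)      # learning rate = relative uncertainty
    mu = mu + gain * (reward - mu)    # big update when uncertain (one-shot learning)
    var = var * (1 - gain)            # uncertainty shrinks with each observation
    gains.append(gain)
    print(f"gain={gain:.3f}  mu={mu:.3f}  var={var:.3f}")
```

The first observation alone moves the estimate most of the way to the observed value (gain ≈ 0.91), while later gains decline, which is exactly the "learn more from surprising events, less from expected ones" behavior described above; a fixed-learning-rate error-driven model would treat every trial identically.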
Bayesian models of classical conditioning, causal learning, and category learning have shown that many phenomena previously attributed to associative mechanisms can also be understood as rational inference — though the two frameworks often make similar predictions, making them empirically difficult to distinguish.