
Bayesian Inference in Learning

Bayesian inference models of learning treat belief updating as the rational combination of prior knowledge with new evidence, providing an ideal-observer framework for understanding how organisms learn.

P(θ|data) ∝ P(data|θ) · P(θ)

This framework treats the learner as performing approximate Bayesian inference — combining prior beliefs with observed data to compute posterior beliefs. It provides a normative benchmark against which human learning can be evaluated: it specifies what an ideal learner would believe given the same information, allowing researchers to identify where and how human learning deviates from optimality.
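The posterior update can be made concrete with a small grid-based ideal-observer sketch. The coin-bias task, the grid size, and all numbers below are illustrative choices, not from the text:

```python
import numpy as np

# Minimal ideal-observer sketch (illustrative, hypothetical task):
# grid-based Bayesian updating of belief about a coin's bias theta.
theta = np.linspace(0.01, 0.99, 99)   # hypothesis grid for theta
prior = np.ones_like(theta)           # flat prior P(theta)
prior /= prior.sum()

def update(belief, heads):
    """One Bayesian step: posterior is proportional to likelihood x prior."""
    likelihood = theta if heads else (1 - theta)
    posterior = likelihood * belief
    return posterior / posterior.sum()   # normalize by P(data)

belief = prior
for flip in [1, 1, 0, 1, 1]:          # observed data: 4 heads, 1 tail
    belief = update(belief, flip)

print(round(theta[belief.argmax()], 2))  # prints 0.8, the MAP estimate (4/5)
```

Each pass through `update` is one application of Bayes' rule; the normalization step corresponds to dividing by P(data).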

The Bayesian Framework

Prior: P(θ) — beliefs about the hypothesis before seeing data
Likelihood: P(data|θ) — probability of the observed data given hypothesis θ
Posterior: P(θ|data) = P(data|θ)·P(θ) / P(data)
Posterior predictive: P(x_new|data) = ∫ P(x_new|θ)·P(θ|data) dθ
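For conjugate models all four quantities have closed forms. A Beta-Bernoulli worked example, with parameters chosen purely for illustration:

```python
# Beta-Bernoulli conjugate model (hypothetical numbers for illustration).
# Prior: theta ~ Beta(a, b); data: k successes in n Bernoulli trials.
a, b = 1.0, 1.0        # Beta(1, 1) = uniform prior over theta
k, n = 7, 10           # observed data: 7 successes out of 10 trials

# Posterior: Beta(a + k, b + n - k); conjugacy makes the update exact,
# so no explicit integration over theta is needed.
a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)

# Posterior predictive P(success | data) integrates over theta; for the
# Beta-Bernoulli model this integral equals the posterior mean.
p_next = posterior_mean

print(posterior_mean)  # 8/12, i.e. about 0.667
```

The same prior-to-posterior bookkeeping applies to any conjugate pair; only the update rule for the parameters changes.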

Advantages over Error-Driven Models

Unlike the Rescorla–Wagner model, Bayesian models naturally represent uncertainty about learned parameters: a Bayesian learner knows not just the expected outcome but also how confident it should be in that expectation. This uncertainty tracking enables optimal exploration (seek information where uncertainty is highest), appropriate learning rates (learn more from surprising events, less from expected ones), and one-shot learning (a single dramatic event can be highly informative).
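The adaptive-learning-rate point can be sketched with a Gaussian conjugate learner, whose update is error-driven in form (like Rescorla–Wagner) but whose rate is set by the learner's own uncertainty. The noise and prior values below are illustrative assumptions:

```python
# Sketch of uncertainty-weighted learning (Gaussian conjugate / Kalman-style).
# The update looks like Rescorla-Wagner, but the "learning rate" (gain) is
# computed from the learner's uncertainty instead of being a fixed alpha.
# All numeric values here are illustrative assumptions.
obs_var = 1.0                 # assumed known observation noise
mean, var = 0.0, 10.0         # wide prior = high initial uncertainty
rates = []

for outcome in [1.0, 1.2, 0.9, 1.1, 1.0]:
    gain = var / (var + obs_var)       # effective learning rate
    mean += gain * (outcome - mean)    # error-driven form, adaptive rate
    var = (1 - gain) * var             # uncertainty shrinks with each datum
    rates.append(round(gain, 3))

print(rates)  # the effective learning rate falls as certainty grows
```

Early outcomes move the belief a lot (high gain under high uncertainty); later outcomes move it little, which is the "appropriate learning rates" behavior described above.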

Bayesian models of classical conditioning, causal learning, and category learning have shown that many phenomena previously attributed to associative mechanisms can also be understood as rational inference — though the two frameworks often make similar predictions, making them empirically difficult to distinguish.
