Kalman Filter as Learning Model
The Kalman filter, originally developed for engineering applications, has been applied to psychological learning as a model of adaptive belief updating in changing environments. It extends Bayesian inference to sequential estimation, providing an optimal rule for tracking a hidden state that drifts over time — such as the changing reward probability in a volatile environment.
Predict: σ²_pred = σ²_prev + σ²_drift
Kalman gain: K = σ²_pred / (σ²_pred + σ²_obs)
Update: μ_new = μ_pred + K · (observation − μ_pred)
Uncertainty: σ²_new = (1 − K) · σ²_pred
Here σ²_drift is the variance of the assumed random walk in the hidden state, so prediction uncertainty grows between observations before each update shrinks it again.
The Kalman gain K plays the role of the learning rate. Crucially, K is not fixed but adapts automatically: when prediction uncertainty is high (σ²_pred large), K approaches 1 and the learner relies heavily on the new observation. When observations are noisy (σ²_obs large), K is small and the learner relies more on prior predictions. This adaptive learning rate captures the empirical finding that people learn more from surprising outcomes in volatile environments.
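The update rule and the gain's dependence on uncertainty can be sketched in a few lines. This is a minimal one-dimensional illustration, not a model from any particular paper; the parameter names (`sigma2_drift`, `sigma2_obs`) and their values are illustrative choices.

```python
def kalman_step(mu, sigma2, observation, sigma2_drift=0.1, sigma2_obs=1.0):
    """One trial of a 1D Kalman filter tracking a drifting hidden value."""
    # Predict: uncertainty grows because the hidden state may have drifted.
    sigma2_pred = sigma2 + sigma2_drift
    # Kalman gain: the effective learning rate on this trial.
    K = sigma2_pred / (sigma2_pred + sigma2_obs)
    # Update: shift the estimate toward the observation in proportion to K.
    mu_new = mu + K * (observation - mu)
    # Posterior uncertainty shrinks after incorporating the observation.
    sigma2_new = (1 - K) * sigma2_pred
    return mu_new, sigma2_new, K

# High prior uncertainty -> large gain (fast learning from the observation);
# low prior uncertainty -> small gain (reliance on the prior prediction).
_, _, k_uncertain = kalman_step(0.0, 10.0, observation=1.0)
_, _, k_confident = kalman_step(0.0, 0.1, observation=1.0)
```

Running the two calls shows the gain adapting exactly as described: with prior variance 10 the gain is close to 1, while with prior variance 0.1 it is small, even though the observation and observation noise are identical.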
Psychological Applications
Kalman filter models have been applied to reward learning, social learning, anxiety (where σ²_drift may be overestimated, leading to excessive updating), and perceptual tracking. They provide a principled alternative to fixed-learning-rate models such as Rescorla-Wagner and Q-learning, and capture individual differences in learning through variation in the model's noise parameters rather than a free learning-rate parameter.
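The claim that noise parameters stand in for a learning rate can be made concrete: iterating the predict-update cycle drives the gain to an asymptotic value set by the ratio of drift variance to observation noise. The sketch below assumes the same 1D random-walk model as the equations above; the specific parameter values are illustrative, and the link to anxiety (a larger assumed σ²_drift yielding persistent over-updating) is the interpretation stated in the text, not a fitted result.

```python
def asymptotic_gain(sigma2_drift, sigma2_obs, n_iter=100):
    """Iterate predict/update steps until the Kalman gain settles."""
    sigma2 = 1.0  # arbitrary starting uncertainty; the fixed point is independent of it
    K = 0.0
    for _ in range(n_iter):
        sigma2_pred = sigma2 + sigma2_drift          # predict: drift inflates uncertainty
        K = sigma2_pred / (sigma2_pred + sigma2_obs)  # gain = effective learning rate
        sigma2 = (1 - K) * sigma2_pred                # update: observation shrinks it
    return K

# A learner assuming a volatile world (large drift variance) settles on a
# persistently high gain; one assuming a stable world settles on a low gain.
k_volatile = asymptotic_gain(sigma2_drift=0.5, sigma2_obs=1.0)
k_stable = asymptotic_gain(sigma2_drift=0.01, sigma2_obs=1.0)
```

Because the asymptotic gain is fully determined by σ²_drift and σ²_obs, fitting those two variances to behavior plays the role that a free learning-rate parameter plays in Rescorla-Wagner-style models, while also predicting trial-by-trial changes in updating that a fixed rate cannot.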