The speed-accuracy tradeoff (SAT) has long been recognized as a fundamental property of decision making, but it was the drift diffusion model (DDM) that provided its most complete mechanistic explanation. In the DDM, the tradeoff emerges naturally from a single parameter: the boundary separation (a), which determines how much evidence must accumulate before a response is triggered.
Boundary Separation and the SAT
When boundary separation is large, the noisy evidence accumulation process must travel a greater distance before reaching a boundary, so responses are slow but the drift signal has time to dominate the noise, producing high accuracy. When boundary separation is small, the process reaches a boundary quickly, but noise has a larger influence relative to the signal, producing more errors.
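This mechanism is easy to see in simulation. The sketch below is a minimal Euler-Maruyama discretization of the diffusion process; all parameter values (drift, noise, step size, boundary settings) are illustrative choices, not values from the text.

```python
import random

def simulate_ddm(v, a, s=1.0, dt=0.002, rng=None):
    """Simulate one DDM trial; returns (correct, decision_time).

    Euler-Maruyama discretization: each step adds drift v*dt plus
    Gaussian noise with standard deviation s*sqrt(dt).
    """
    rng = rng or random.Random()
    x = a / 2.0                       # start midway between the boundaries
    t = 0.0
    sd = s * dt ** 0.5                # noise scales with sqrt(dt)
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, sd)
        t += dt
    return x >= a, t                  # upper boundary = correct response

def mean_performance(v, a, n=1000, seed=0):
    """Average accuracy and decision time over n simulated trials."""
    rng = random.Random(seed)
    trials = [simulate_ddm(v, a, rng=rng) for _ in range(n)]
    acc = sum(correct for correct, _ in trials) / n
    mean_dt = sum(t for _, t in trials) / n
    return acc, mean_dt

# Narrow boundaries: fast but error-prone; wide boundaries: slow but accurate
acc_narrow, rt_narrow = mean_performance(v=1.0, a=0.5)
acc_wide, rt_wide = mean_performance(v=1.0, a=2.0)
```

Running the two conditions reproduces the tradeoff described above: widening the boundaries slows the simulated responses while raising accuracy.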
E[RT] = (a/(2v)) × tanh(va/(2s²)) + Ter
v = drift rate, a = boundary separation, s = diffusion noise, Ter = non-decision time. (In the unbiased DDM with starting point a/2, correct and error responses share this mean RT.)
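The mean-RT expression for the unbiased DDM can be evaluated directly; in this minimal sketch the specific parameter values, including the non-decision time of 0.3 s, are illustrative assumptions.

```python
import math

def mean_rt(v, a, s=1.0, ter=0.3):
    """Mean RT of the unbiased DDM (start point a/2): decision time plus Ter."""
    return (a / (2.0 * v)) * math.tanh(v * a / (2.0 * s ** 2)) + ter

# Widening the boundary separation slows the predicted response
fast = mean_rt(v=1.0, a=0.5)   # small boundary separation
slow = mean_rt(v=1.0, a=2.0)   # large boundary separation
```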
Critically, the DDM predicts a specific form for the SAT: as boundary separation varies, the tradeoff between mean RT and accuracy traces out a curve that closely matches the empirical speed-accuracy tradeoff function (SATF), an exponential approach to a limit. This is no coincidence: the DDM's architecture naturally produces this shape because accuracy is a saturating function of the amount of evidence accumulated.
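The predicted curve can be traced numerically using the standard closed-form results for the unbiased DDM, where accuracy is the logistic function 1/(1 + exp(-va/s²)); parameter values here are illustrative.

```python
import math

def ddm_predictions(v, a, s=1.0, ter=0.3):
    """Closed-form accuracy and mean RT for the unbiased DDM."""
    acc = 1.0 / (1.0 + math.exp(-v * a / s ** 2))
    rt = (a / (2.0 * v)) * math.tanh(v * a / (2.0 * s ** 2)) + ter
    return acc, rt

# Sweeping boundary separation from 0.1 to 4.0 traces the model's SAT curve:
# RT grows steadily while accuracy saturates toward (but never reaches) 1.
satf = [ddm_predictions(v=1.0, a=a10 / 10.0) for a10 in range(1, 41)]
```

Plotting accuracy against mean RT from this sweep gives the saturating, exponential-approach-like shape described above.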
Decomposing Experimental Effects
The DDM's most powerful contribution to understanding the SAT is its ability to decompose observed performance into latent cognitive components. When an experimental manipulation changes both RT and accuracy, the DDM can determine whether the change reflects:
Evidence quality (drift rate v): A change in the signal-to-noise ratio of the evidence. Stimulus difficulty and discriminability primarily affect drift rate.
Response caution (boundary separation a): A change in how much evidence is required. Speed/accuracy emphasis instructions primarily affect boundary separation.
Non-decision processes (Ter): A change in encoding or motor execution time. These change RT without affecting accuracy.
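The three signatures above can be illustrated with the closed-form predictions; the baseline parameter values below are illustrative assumptions chosen only to make the pattern visible.

```python
import math

def predict(v, a, s=1.0, ter=0.3):
    """Closed-form accuracy and mean RT for the unbiased DDM."""
    acc = 1.0 / (1.0 + math.exp(-v * a / s ** 2))
    rt = (a / (2.0 * v)) * math.tanh(v * a / (2.0 * s ** 2)) + ter
    return acc, rt

base     = predict(v=1.0, a=1.5)              # baseline condition
harder   = predict(v=0.5, a=1.5)              # lower drift: slower AND less accurate
cautious = predict(v=1.0, a=2.5)              # wider boundary: slower but MORE accurate
later    = predict(v=1.0, a=1.5, ter=0.45)    # longer Ter: slower, accuracy unchanged
```

Each manipulation leaves a distinct fingerprint in the (accuracy, RT) pair, which is what lets the model attribute an observed change to the right latent component.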
From a reward-rate perspective, there is an optimal boundary separation that maximizes the number of correct responses per unit time. Bogacz et al. (2006) showed that the DDM implements the statistically optimal decision procedure (the sequential probability ratio test) and that the optimal boundary separation depends on task parameters like drift rate, error cost, and inter-trial interval. Participants often approximate but do not perfectly achieve this optimum, typically being slightly too cautious.
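The existence of an interior optimum can be shown with a simplified reward-rate objective (correct responses per unit total time, with an assumed inter-trial interval); Bogacz et al.'s full analysis uses a richer expression, so this sketch is illustrative only.

```python
import math

def reward_rate(a, v=1.0, s=1.0, ter=0.3, iti=1.0):
    """Correct responses per unit time under a simplified reward-rate objective."""
    acc = 1.0 / (1.0 + math.exp(-v * a / s ** 2))
    dt = (a / (2.0 * v)) * math.tanh(v * a / (2.0 * s ** 2))
    return acc / (dt + ter + iti)

# Grid search: very narrow boundaries waste trials on errors, very wide
# ones waste time, so reward rate peaks at an intermediate separation.
grid = [i / 100.0 for i in range(1, 501)]
a_opt = max(grid, key=reward_rate)
```

A participant who sets a wider boundary than a_opt is "too cautious" in exactly the sense described above: accuracy rises slightly, but the extra decision time costs more reward than the accuracy gain recovers.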
Neural Implementation
Neuroscientific studies have supported the DDM's account of the SAT. Forstmann et al. (2008) used fMRI to show that speed emphasis (narrow boundaries) is associated with increased pre-stimulus baseline activity in the striatum, effectively reducing the distance to threshold. Single-neuron recordings in monkeys show that speed emphasis lowers the threshold firing rate at which neurons trigger a saccadic eye movement, directly corresponding to reduced boundary separation in the model.
The DDM's boundary separation parameter thus provides a bridge between the behavioral SAT and its neural implementation, unifying observations across multiple levels of analysis within a single coherent mathematical framework.