The speed-accuracy tradeoff (SAT) is one of the most pervasive phenomena in cognitive psychology. Across virtually every task domain — perception, memory, language, reasoning — participants can trade speed for accuracy by adjusting their response caution. This tradeoff is not merely an empirical nuisance; it reveals the temporal dynamics of information processing and provides a window into the mechanism by which decisions are made.
Characterizing the Tradeoff
At the macro level, the SAT can be characterized by a speed-accuracy tradeoff function (SATF) that plots accuracy against processing time. Wickelgren (1977) proposed that the SATF follows an exponential approach to a limit:
d'(t) = λ(1 − e^(−β(t − δ))), for t > δ
d'(t) = 0, for t ≤ δ

where:
λ = asymptotic accuracy
β = rate of approach to asymptote
δ = intercept (time at which accuracy departs from chance)
This three-parameter function captures the key features of the SAT: accuracy remains at chance for an initial period (δ), then rises at rate β toward an asymptotic level λ. Different experimental manipulations selectively affect different parameters: stimulus quality affects the rate β, while the amount of available information affects the asymptote λ.
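The three-parameter function above is straightforward to sketch in code. The parameter values below are illustrative choices, not values from the text:

```python
import math

def satf(t, lam, beta, delta):
    """Wickelgren's exponential SATF: accuracy (d') as a function of
    processing time t, with asymptote lam, rate beta, and intercept delta."""
    if t <= delta:
        return 0.0  # accuracy remains at chance before the intercept
    return lam * (1.0 - math.exp(-beta * (t - delta)))

# Accuracy rises from chance toward the asymptote as processing time grows:
for t in (0.2, 0.5, 1.0, 3.0):
    print(f"d'({t}) = {satf(t, lam=2.5, beta=3.0, delta=0.3):.3f}")
```

At long lags the function saturates near λ, so manipulations that change λ shift the ceiling while manipulations that change β shift how quickly that ceiling is reached.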
Micro vs. Macro Tradeoff
The SAT manifests at two levels. The macro tradeoff occurs when participants adopt different strategies across blocks of trials — responding quickly in speed-emphasis blocks and carefully in accuracy-emphasis blocks. The micro tradeoff occurs within a single condition, where faster responses on individual trials tend to be less accurate. Both levels are captured by evidence accumulation models like the drift diffusion model, which explains the macro tradeoff via changes in boundary separation and the micro tradeoff via the stochastic nature of the accumulation process.
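The macro tradeoff can be demonstrated with a minimal random-walk simulation of the drift diffusion model. All parameter values here (drift, boundary separations, step size) are illustrative assumptions, not values from the text:

```python
import math
import random

def ddm_trial(drift, boundary, dt=0.002, noise=1.0, rng=random):
    """Simulate one drift-diffusion trial with symmetric boundaries at
    ±boundary/2 around a start point of 0. Returns (correct, decision_time)."""
    x, t, half = 0.0, 0.0, boundary / 2.0
    while abs(x) < half:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return (x >= half), t  # positive drift means the upper boundary is correct

rng = random.Random(0)
results = {}
for a, label in [(0.8, "speed emphasis"), (2.0, "accuracy emphasis")]:
    trials = [ddm_trial(drift=0.3, boundary=a, rng=rng) for _ in range(1000)]
    acc = sum(c for c, _ in trials) / len(trials)
    mdt = sum(t for _, t in trials) / len(trials)
    results[label] = (acc, mdt)
    print(f"{label}: accuracy={acc:.2f}, mean decision time={mdt:.3f}s")
```

Narrowing the boundary separation, as a speed-emphasis instruction is assumed to do, yields faster but less accurate decisions; widening it yields the reverse.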
Failing to account for the SAT can lead to fundamentally wrong conclusions. If Condition A produces faster but less accurate responses than Condition B, we cannot conclude that A is "easier." The conditions may differ only in response caution, not in the quality of information processing. Only by mapping the full SAT function — or by using a model that separates evidence quality from response caution — can we determine whether processing truly differs between conditions.
Experimental Methods
Researchers induce the SAT experimentally using several techniques:

- Speed/accuracy emphasis instructions: simple, but yields only two points on the SATF.
- Response signals at variable lags: forces responses at specific time points, mapping out the full SATF.
- Response deadlines: responses must occur before a deadline, truncating the RT distribution.

The response signal method, developed by Reed (1973) and refined by Wickelgren, Corbett, and Dosher (1980), is considered the gold standard for mapping complete SAT functions.
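Response-signal data are typically summarized by fitting the exponential SATF at each lag. A minimal sketch of such a fit, using hypothetical d' values at seven signal lags (the data and starting values are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def satf(t, lam, beta, delta):
    # Exponential approach to a limit; clipped to chance (0) before delta.
    return lam * (1.0 - np.exp(-beta * np.clip(t - delta, 0.0, None)))

# Hypothetical response-signal data: d' at seven signal lags (seconds).
lags = np.array([0.1, 0.2, 0.35, 0.5, 0.8, 1.2, 2.0])
dprime = np.array([0.0, 0.1, 0.9, 1.4, 2.0, 2.3, 2.45])

# Least-squares fit of the three SATF parameters.
params, _ = curve_fit(satf, lags, dprime, p0=[2.0, 2.0, 0.1])
lam, beta, delta = params
print(f"asymptote λ={lam:.2f}, rate β={beta:.2f}, intercept δ={delta:.2f}")
```

Comparing fitted λ, β, and δ across conditions is what allows the selective-influence claims above (e.g., stimulus quality on rate, available information on asymptote) to be tested.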
The SAT is intimately connected to sequential sampling models of decision making. In the drift diffusion model, the SAT arises naturally: wider boundary separation requires more evidence (slower, more accurate), while narrower separation requires less (faster, less accurate). This provides a mechanistic account of why the tradeoff exists and predicts its specific quantitative form.
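The quantitative form this predicts can be seen in the standard closed-form results for an unbiased Wiener diffusion (drift v, noise s, boundaries at 0 and a, start at a/2); the parameter values below are illustrative assumptions:

```python
import math

def ddm_predictions(v, a, s=1.0):
    """Closed-form accuracy and mean decision time for an unbiased
    diffusion with drift v, noise s, and boundary separation a."""
    acc = 1.0 / (1.0 + math.exp(-v * a / s**2))          # P(correct)
    mdt = (a / (2.0 * v)) * math.tanh(v * a / (2.0 * s**2))  # mean decision time
    return acc, mdt

# Widening the boundary separation raises accuracy and slows responses:
for a in (0.8, 1.5, 2.5):
    acc, mdt = ddm_predictions(v=0.3, a=a)
    print(f"a={a}: accuracy={acc:.3f}, mean decision time={mdt:.3f}s")
```

Both quantities increase monotonically with boundary separation a, which is exactly the macro tradeoff: response caution buys accuracy at the cost of time.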