In 1954, Paul Fitts published a paper that established one of the few genuine laws in psychology. He demonstrated that the time required to rapidly move to a target region is a linear function of the index of difficulty (ID), defined as the logarithm (base 2) of twice the movement amplitude divided by the target width. This law has held across an extraordinary range of tasks, effectors, and contexts for over seven decades.
The Mathematical Formulation
MT = a + b · ID,  where  ID = log₂(2A / W)
and MT = movement time, A = amplitude (distance), W = target width, and a, b are empirical constants
The constants a and b are empirically estimated via linear regression. The intercept a captures a baseline movement time, and the slope b (measured in ms/bit) reflects the information-processing capacity of the motor system. The index of difficulty ID is measured in "bits" — a direct connection to Shannon's information theory that Fitts explicitly drew upon.
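The regression described above can be sketched in a few lines. The (ID, MT) pairs below are illustrative values, not data from any real experiment; the ordinary least-squares formulas recover the intercept a and slope b directly.

```python
# Hypothetical (ID, MT) measurements from a pointing experiment:
# ID in bits, MT in milliseconds. Values are illustrative only.
trials = [(1.0, 250.0), (2.0, 340.0), (3.0, 455.0), (4.0, 545.0), (5.0, 660.0)]

def fit_fitts(trials):
    """Ordinary least-squares fit of MT = a + b * ID."""
    n = len(trials)
    mean_id = sum(i for i, _ in trials) / n
    mean_mt = sum(m for _, m in trials) / n
    cov = sum((i - mean_id) * (m - mean_mt) for i, m in trials)
    var = sum((i - mean_id) ** 2 for i, _ in trials)
    b = cov / var                 # slope in ms/bit
    a = mean_mt - b * mean_id     # intercept in ms
    return a, b

a, b = fit_fitts(trials)
print(f"a = {a:.1f} ms, b = {b:.1f} ms/bit")  # a = 142.5 ms, b = 102.5 ms/bit
```

With real data one would also report the fit's r², which for Fitts' Law experiments is typically above 0.9.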
Information-Theoretic Interpretation
Fitts framed aimed movement as a communication channel: the motor system must transmit enough information to specify the target location within the required tolerance. The throughput of this channel, often called the index of performance (IP = ID / MT), is remarkably constant across different ID levels, typically around 4–10 bits/second for hand movements. This information-theoretic interpretation was later refined by MacKenzie (1992), who proposed the Shannon formulation: ID = log₂(A/W + 1), which better handles extreme values and has superior statistical properties.
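The two formulations and the throughput measure can be compared numerically. The example below uses an arbitrary 256-pixel movement to a 32-pixel target as an assumed illustration:

```python
import math

def id_fitts(A, W):
    """Original Fitts formulation: ID = log2(2A / W)."""
    return math.log2(2 * A / W)

def id_shannon(A, W):
    """MacKenzie's Shannon formulation: ID = log2(A/W + 1)."""
    return math.log2(A / W + 1)

def throughput(A, W, mt_seconds):
    """Index of performance IP = ID / MT, in bits/second (Shannon ID)."""
    return id_shannon(A, W) / mt_seconds

# A 256-px movement to a 32-px target:
print(id_fitts(256, 32))         # 4.0 bits
print(id_shannon(256, 32))       # log2(9) ≈ 3.17 bits
print(throughput(256, 32, 0.6))  # ≈ 5.28 bits/s
```

One of the "superior statistical properties" mentioned above is visible here: the Shannon ID stays positive even when A < W, whereas log₂(2A/W) goes negative once A < W/2.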
Fitts' Law became a cornerstone of human-computer interaction (HCI) design. It predicts the time to click on-screen targets, guiding decisions about button sizes, menu layouts, and interactive element placement. The law explains why targets at screen edges are easy to hit (effectively infinite width), why pie menus are faster than linear menus, and provides the theoretical basis for the ISO 9241-9 standard for evaluating pointing devices.
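A minimal sketch of how a designer might use the law to predict target-acquisition times, assuming hypothetical device constants (a = 100 ms, b = 150 ms/bit) rather than measured ones:

```python
import math

# Assumed calibration constants for some pointing device (illustrative only):
A_MS = 100.0          # intercept, ms
B_MS_PER_BIT = 150.0  # slope, ms/bit

def predicted_mt(distance_px, width_px):
    """Predict acquisition time via MT = a + b * log2(A/W + 1)."""
    ID = math.log2(distance_px / width_px + 1)
    return A_MS + B_MS_PER_BIT * ID

# Because only the ratio A/W enters the ID, doubling a button's width
# saves exactly as much time as halving its distance:
print(predicted_mt(800, 20))  # small, far button
print(predicted_mt(800, 40))  # twice as wide
print(predicted_mt(400, 20))  # half as far -- same prediction as above
```

This ratio property is also why edge and corner targets are so effective: the cursor cannot overshoot past the screen boundary, so the target's effective width becomes very large and the ID very small.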
Underlying Mechanisms
Several models have been proposed to explain why Fitts' Law holds. The iterative corrections model (Crossman & Goodeve, 1963/1983) proposes that movements consist of a series of submovements, each covering a constant proportion of remaining distance. The stochastic optimized-submovement model (Meyer et al., 1988) derives Fitts' Law from optimal motor planning under signal-dependent noise — the nervous system plans movements to minimize time while keeping the probability of missing the target below a criterion. This latter derivation shows that Fitts' Law is not merely an empirical regularity but can emerge from optimality principles.
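The iterative-corrections account can be illustrated with a toy simulation, assuming each submovement covers a fixed proportion p of the remaining distance (p = 0.5 here is an arbitrary choice). If each submovement takes constant time, MT is proportional to the submovement count, which grows linearly with log₂(2A/W):

```python
import math

def submovement_count(A, W, p=0.5):
    """Toy iterative-corrections model: each submovement covers a fixed
    proportion p of the remaining distance; movement ends once the
    remaining error falls inside the target (below W/2)."""
    remaining, n = A, 0
    while remaining >= W / 2:
        remaining *= (1 - p)
        n += 1
    return n

# Submovement count tracks ID = log2(2A/W) step for step:
for A, W in [(64, 16), (128, 16), (256, 16), (256, 8)]:
    print(A, W, submovement_count(A, W), math.log2(2 * A / W))
```

Doubling the distance or halving the target width each add exactly one corrective submovement, which is the logarithmic structure the law captures.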
The law extends beyond simple hand movements to foot movements, head movements, eye saccades (with modifications), and even movements by people with motor disabilities, attesting to its remarkable generality as a description of the speed-accuracy tradeoff in the motor system.