Mathematical Psychology

Item Information Function

The item information function quantifies how much statistical information a single item provides about the latent trait at each ability level, serving as the IRT analogue of reliability.

I_i(θ) = a_i² × P_i(θ) × Q_i(θ)

In item response theory, the concept of information replaces and generalizes classical test theory's reliability coefficient. While reliability is a single number summarizing measurement precision across the entire score range, the information function describes precision as a continuous function of the latent trait θ. An item provides maximum information near its difficulty level and diminishing information as the examinee's ability moves away from that point. This trait-dependent precision is one of IRT's most powerful features.

Definition and Derivation

Item Information (general form): I_i(θ) = [P′_i(θ)]² / (P_i(θ) × Q_i(θ))

For the 2PL: I_i(θ) = a_i² × P_i(θ) × Q_i(θ)

where P_i(θ) = probability of correct response
Q_i(θ) = 1 − P_i(θ)
a_i = discrimination parameter

The item information function is derived from the Fisher information for the item's contribution to the likelihood. It equals the expected square of the first derivative of the log-likelihood with respect to θ, or equivalently, the negative of the expected second derivative. For the 2PL model, this simplifies to a² × P(θ) × Q(θ), which is maximized when P(θ) = Q(θ) = 0.50 — that is, when θ equals the item difficulty b. At this point, the maximum information is a²/4.
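As a small sketch, the 2PL information function and its maximum at θ = b can be checked numerically (the parameter values below are illustrative, not from the text):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info_2pl(theta, a, b):
    """Item information I(theta) = a^2 * P * Q for the 2PL model."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

a, b = 1.5, 0.0  # hypothetical discrimination and difficulty
# Information peaks at theta = b, where P = Q = 0.5 and I = a^2 / 4.
print(item_info_2pl(b, a, b))        # 1.5^2 / 4 = 0.5625
print(item_info_2pl(b + 1.0, a, b))  # smaller as theta moves away from b
```

Evaluating the function on a grid of θ values and plotting it reproduces the familiar bell-shaped information curve centered at b.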

Test Information and Standard Error

Under the assumption of local independence — that item responses are conditionally independent given θ — the test information function is simply the sum of item information functions across all items:

I(θ) = Σ_i I_i(θ)

SE(θ) = 1 / √I(θ)

Reliability at θ: ρ(θ) ≈ 1 − 1/I(θ) (assuming θ is scaled to unit variance)

The standard error of the ability estimate is the reciprocal of the square root of the test information. This provides a direct, ability-specific measure of measurement precision. A test may measure very precisely in the middle of the ability range (where most items provide information) but poorly at the extremes. This insight is invisible in classical test theory, which reports a single reliability coefficient.

Information and Adaptive Testing

The item information function is the engine of computerized adaptive testing. At each step, the CAT algorithm selects the item that provides maximum information at the examinee's current estimated ability level. By concentrating measurement where it matters most, adaptive tests achieve the same precision as conventional tests with far fewer items — typically 50% fewer. The information function also guides test assembly: automated test assembly algorithms select item sets that meet content specifications while maximizing information at critical score points (e.g., near pass/fail cut-scores).
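The maximum-information selection rule can be sketched as follows; the three-item pool and its parameters are hypothetical, and a real CAT would re-estimate θ after each response:

```python
import math

def item_info(theta, a, b):
    """2PL item information: a^2 * P * Q."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical item pool: id -> (a, b).
pool = {
    "item1": (0.8, -1.5),
    "item2": (1.4, 0.0),
    "item3": (1.1, 1.2),
}

def select_next_item(theta_hat, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    candidates = {k: v for k, v in pool.items() if k not in administered}
    return max(candidates, key=lambda k: item_info(theta_hat, *candidates[k]))

print(select_next_item(0.1, set()))  # item2: high a, difficulty near theta_hat
```

The same greedy criterion also shows why CAT spends items efficiently: item2 is informative near θ = 0.1 but would lose out to item3 for a high-ability examinee.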

Information under Different Models

The form of the information function depends on the IRT model. Under the 1PL (Rasch) model, all items have equal discrimination, so all item information functions have the same maximum height (a²/4, which equals 0.25 when a is fixed at 1) and differ only in location. Under the 3PL model, the guessing parameter reduces information, especially at lower ability levels, because correct responses from low-ability examinees are ambiguous — they may reflect ability or guessing.
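This reduction can be made concrete with Birnbaum's 3PL information function, I(θ) = a² × (Q/P) × [(P − c)/(1 − c)]², which reduces to the 2PL expression when c = 0. A sketch with illustrative parameter values:

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability: c + (1 - c) * logistic(a * (theta - b))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_info_3pl(theta, a, b, c):
    """3PL item information (Birnbaum):
    I = a^2 * (Q / P) * ((P - c) / (1 - c))^2.
    With c = 0 this reduces to the 2PL form a^2 * P * Q."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return a * a * (q / p) * ((p - c) / (1.0 - c)) ** 2

a, b = 1.5, 0.0  # hypothetical item parameters
for theta in (-2.0, 0.0, 2.0):
    no_guess = item_info_3pl(theta, a, b, 0.0)   # equals 2PL information
    guess = item_info_3pl(theta, a, b, 0.25)     # guessing parameter c = 0.25
    print(f"theta={theta:+.1f}  c=0: {no_guess:.3f}  c=0.25: {guess:.3f}")
```

The loss of information is most dramatic at low θ, where guessing contaminates correct responses, while at high θ the two curves nearly coincide.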

For polytomous items (scored in more than two categories), information functions are computed from the category response functions using the same Fisher information principle. The graded response model, partial credit model, and generalized partial credit model each yield distinct information function shapes. Polytomous items generally provide more information per item than dichotomous items because each response carries more statistical information about the latent trait. The item information function thus provides a unified, model-based metric for evaluating and comparing items regardless of their scoring format.



