Mathematical Psychology

Coding Theory

Shannon's source and channel coding theorems establish the fundamental limits of data compression and reliable communication, with applications to understanding neural coding and cognitive representation.

L* ≥ H(X) / log₂ D  (optimal average length of a D-ary code; for binary codes, L* ≥ H(X))

Shannon's coding theorems form the twin pillars of information theory. The source coding theorem (1948) establishes that data can be compressed to an average code length no shorter than the entropy of the source, while the channel coding theorem guarantees reliable communication at rates up to channel capacity. Together, these theorems define the fundamental limits of information processing and have inspired influential models of neural coding and cognitive representation.

Source Coding

Shannon's Source Coding Theorem

Entropy: H(X) = −Σ p(xᵢ) · log₂ p(xᵢ)

Average code length: L = Σ p(xᵢ) · l(xᵢ)

L* ≥ H(X): no uniquely decodable code has average length below the entropy

Huffman coding achieves: H(X) ≤ L_Huffman < H(X) + 1

Source coding (data compression) assigns shorter codewords to more probable symbols and longer codewords to less probable symbols. Shannon proved that no uniquely decodable code can have an average length less than the source entropy H(X). Huffman (1952) provided an algorithm for constructing optimal prefix-free codes that come within one bit of the entropy bound. Shannon-Fano-Elias coding and arithmetic coding can approach the entropy bound even more closely for long sequences.
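The entropy bound and Huffman's construction can be checked directly. The sketch below (an illustrative heap-based implementation; the function name and the dyadic probabilities are chosen for the example, not taken from the text) builds a Huffman code and compares its average length to H(X):

```python
import heapq
import math

def huffman_code(probs):
    """Build a Huffman prefix code from a {symbol: probability} dict.

    Returns a dict mapping each symbol to its binary codeword.
    """
    # Heap entries: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least probable subtrees, prepending 0/1 to their codewords
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
H = -sum(p * math.log2(p) for p in probs.values())          # entropy H(X)
L = sum(probs[s] * len(w) for s, w in code.items())          # average length
# For dyadic probabilities (powers of 1/2), Huffman meets the bound exactly:
print(H, L)  # both 1.75
```

With non-dyadic probabilities the average length falls strictly inside the window H(X) ≤ L < H(X) + 1 guaranteed by Huffman's result.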

Channel Coding

Channel coding adds structured redundancy to messages so that errors introduced by the channel can be detected and corrected. Shannon proved that for any rate R below channel capacity C, codes exist that achieve arbitrarily low error probability. This seemingly paradoxical result — that redundancy can simultaneously allow both compression and error correction — was revolutionary. Practical error-correcting codes, from Hamming codes to turbo codes and low-density parity-check codes, have progressively approached Shannon's theoretical limits.
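The Hamming code mentioned above is small enough to demonstrate end to end. This sketch (using the standard systematic generator and parity-check matrices for the (7,4) code; the encode/decode helpers are illustrative names) shows redundancy correcting a single flipped bit:

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I], arithmetic mod 2
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]])
H = np.array([[1,1,0,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]])

def encode(data4):
    """Map 4 data bits to a 7-bit codeword (3 bits of redundancy)."""
    return (data4 @ G) % 2

def decode(received7):
    """Correct any single bit error, then return the 4 data bits."""
    s = (H @ received7) % 2
    if s.any():
        # The syndrome equals the column of H at the error position
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        received7 = received7.copy()
        received7[err] ^= 1
    return received7[:4]  # systematic code: first 4 bits are the data

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
corrupted = cw.copy()
corrupted[2] ^= 1            # channel flips one bit
recovered = decode(corrupted)
print(recovered)             # the original message [1 0 1 1]
```

The code's rate is R = 4/7; Shannon's theorem says such tradeoffs can, with longer and cleverer codes, push the error probability arbitrarily low at any rate below capacity.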

Neural Coding as Source Coding

The nervous system faces a problem analogous to source coding: representing the high-dimensional sensory world using a limited number of neurons and spikes. Barlow (1961) proposed that sensory neurons implement efficient coding by reducing redundancy in neural representations, analogous to compression. Laughlin (1981) showed that the contrast-response function of fly photoreceptors matches the transformation predicted by histogram equalization — an optimal source code for maximizing information about natural scenes given the neuron's limited dynamic range.
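Laughlin's histogram-equalization principle can be illustrated numerically. In this sketch the "natural stimulus" distribution is a stand-in Gaussian sample (not fly-eye data), and the optimal response function is the empirical cumulative distribution, which spreads responses uniformly across the neuron's limited output range:

```python
import bisect
import random

random.seed(0)
# Hypothetical "natural contrast" samples standing in for measured stimuli
stimuli = sorted(random.gauss(0.0, 1.0) for _ in range(10_000))

def optimal_response(x):
    """Empirical CDF: fraction of stimuli below x, mapped onto [0, 1]."""
    return bisect.bisect_left(stimuli, x) / len(stimuli)

# Quantize the response into 8 discrete levels (a limited dynamic range).
# Because the transfer function is the CDF, every level is used equally often,
# which maximizes the response entropy and hence the information conveyed.
levels = [int(optimal_response(s) * 8) for s in stimuli]
counts = [levels.count(k) for k in range(8)]
print(counts)  # approximately 1250 per level
```

Any other monotone transfer function would use some response levels more than others, lowering the output entropy below its log₂ 8 = 3-bit maximum.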

Applications to Cognitive Science

Coding theory has influenced cognitive science in several ways. The concept of "chunking" (Miller, 1956) can be understood as a recoding operation that groups items into higher-order units, effectively compressing information and expanding the apparent capacity of working memory. In language processing, speakers tend toward efficient coding: more frequent words are shorter (Zipf's law of abbreviation) and more predictable words are articulated more rapidly, consistent with a communication system optimized near the source coding bound.
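Miller's recoding idea can be made concrete with a toy example (the particular bit string below is arbitrary): regrouping binary digits into octal chunks leaves the information content unchanged while reducing the number of items a limited-span memory must hold.

```python
# Recode 21 binary digits as 7 octal "chunks" (3 bits per chunk)
bits = "101001110010101101001"                      # 21 items to remember
chunks = [bits[i:i + 3] for i in range(0, len(bits), 3)]
octal = "".join(str(int(c, 2)) for c in chunks)     # e.g. "101" -> "5"
print(len(bits), "binary items ->", len(octal), "octal chunks:", octal)
```

The information is the same 21 bits either way; what changes is the number of retrieval units, which is what a span of roughly seven items constrains.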

In neural population coding, the question of how populations of neurons represent information can be framed in coding-theoretic terms. Rate codes, temporal codes, and population codes correspond to different encoding strategies with different capacity-complexity tradeoffs. The study of neural codes continues to draw on Shannon's framework to ask how efficiently the brain represents and transmits information.

References

  1. Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x
  2. Huffman, D. A. (1952). A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40(9), 1098–1101. doi:10.1109/JRPROC.1952.273898
  3. Barlow, H. B. (1961). Possible principles underlying the transformations of sensory messages. In W. A. Rosenblith (Ed.), Sensory Communication (pp. 217–234). MIT Press. doi:10.7551/mitpress/9780262518420.003.0013
  4. Laughlin, S. (1981). A simple coding procedure enhances a neuron's information capacity. Zeitschrift für Naturforschung C, 36(9–10), 910–912. doi:10.1515/znc-1981-9-1040
  5. Cover, T. M., & Thomas, J. A. (2006). Elements of Information Theory (2nd ed.). Wiley-Interscience. doi:10.1002/047174882X
