In electrical engineering, computer science, and information theory, channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.

## Formal definition

Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.

Let X represent the space of signals that can be transmitted, and Y the space of signals received, during a block of time over the channel. Let

- $p_{Y|X}(y|x)$

be the conditional distribution function of Y given X. Treating the channel as a known statistical system, $p_{Y|X}(y|x)$ is an inherent fixed property of the communications channel (representing the nature of the noise in it). Then the joint distribution

- $p_{X,Y}(x,y)$

of X and Y is completely determined by the channel and by the choice of

- $p_X(x) = \int_y p_{X,Y}(x,y)\,dy$

the marginal distribution of signals we choose to send over the channel. The joint distribution can be recovered by using the identity

- $p_{X,Y}(x,y) = p_{Y|X}(y|x)\,p_X(x)$
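As an illustrative sketch (not from the original article), the relationships above can be checked numerically for a small discrete channel. The binary symmetric channel and its crossover probability of 0.1 are assumptions chosen for the example; NumPy is assumed available.

```python
import numpy as np

# Hypothetical example: a binary symmetric channel with crossover
# probability 0.1, chosen purely for illustration.
p = 0.1

# Conditional distribution p_{Y|X}(y|x): rows indexed by x, columns by y.
p_y_given_x = np.array([[1 - p, p],
                        [p, 1 - p]])

# A chosen input (marginal) distribution p_X.
p_x = np.array([0.5, 0.5])

# Joint distribution via the identity p_{X,Y}(x, y) = p_{Y|X}(y|x) p_X(x).
p_xy = p_y_given_x * p_x[:, None]

# Summing the joint distribution over y recovers the marginal p_X,
# the discrete analogue of the integral above.
print(np.allclose(p_xy.sum(axis=1), p_x))  # prints True
```

The discrete sum over y here plays the role of the integral $\int_y$ in the continuous formulation.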

Under these constraints, one then maximizes the amount of information that can be communicated over the channel. The appropriate measure for this is the mutual information $I(X;Y)$; its maximum over input distributions is called the channel capacity and is given by

- $C = \sup_{p_X} I(X;Y)$
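The supremum over input distributions can be approximated numerically for a small channel. The following sketch (an assumption-laden illustration, not part of the original article) grid-searches the input distribution of a binary symmetric channel and compares the result with the known closed form $C = 1 - H(\varepsilon)$ for that channel; NumPy is assumed available.

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete channel and input distribution p_X."""
    p_xy = p_y_given_x * p_x[:, None]   # joint p_{X,Y}(x, y)
    p_y = p_xy.sum(axis=0)              # output marginal p_Y(y)
    mask = p_xy > 0                     # avoid log(0) terms
    return float((p_xy[mask] *
                  np.log2(p_xy[mask] / np.outer(p_x, p_y)[mask])).sum())

# Illustrative channel: binary symmetric with crossover probability 0.1.
eps = 0.1
p_y_given_x = np.array([[1 - eps, eps],
                        [eps, 1 - eps]])

# Approximate C = sup over p_X of I(X;Y) by a grid search on p_X(0).
grid = np.linspace(0.001, 0.999, 999)
capacity = max(mutual_information(np.array([q, 1 - q]), p_y_given_x)
               for q in grid)

# Known closed form for this channel: C = 1 - H(eps), with H the
# binary entropy function; the grid search should agree with it.
h = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
print(round(capacity, 4), round(1 - h, 4))
```

The maximum is attained at the uniform input distribution, as expected for a symmetric channel.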

## Noisy-channel coding theorem

The noisy-channel coding theorem states that for any ε > 0 and for any rate R less than the channel capacity C, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε for a sufficiently long code. Conversely, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.

## Example application

An application of the channel capacity concept to an additive white Gaussian noise channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:

- $C = B \log\left(1 + \frac{S}{N}\right)$

C is measured in bits per second if the logarithm is taken in base 2, or in nats per second if the natural logarithm is used, assuming B is in hertz. The signal and noise powers S and N are measured in watts or volts², so the signal-to-noise ratio here is expressed as a power ratio, not in decibels (dB); since figures are often cited in dB, a conversion may be needed. For example, 30 dB is a power ratio of $10^{30/10} = 10^3 = 1000$.

## See also

- Bandwidth (computing)
- Bandwidth (signal processing)
- Bit rate
- Code rate
- Error exponent
- Exformation
- Hartley's law
- Nyquist rate
- Negentropy
- Redundancy
- Sender, Encoder, Decoder, Receiver
- Shannon–Hartley theorem
- Spectral efficiency
- Throughput

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Sunday June 22, 2008 at 18:26:16 PDT (GMT -0700)

View this article at Wikipedia.org - Edit this article at Wikipedia.org - Donate to the Wikimedia Foundation
