Designing Encodings



A Noisy Channel


In the familiar communication model to the left, we've inserted an extra agent into the middle of the channel, between transmitter and receiver. This agent simulates noise, altering the message probabilistically before it is received. When you repeatedly press "Go," notice that a fraction of the original message is incorrectly received. Because of this, every transmitted input message yields a probability distribution over a set of possible output messages. The task of the receiver is to recover the input from the distorted output.
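For concreteness, here is a minimal sketch of such a noisy channel, assuming a binary symmetric channel in which each bit is flipped independently with some small probability; the flip probability and the sample message are purely illustrative, not taken from the simulation itself.

```python
import random

def noisy_channel(message, flip_prob=0.1):
    """Simulate a binary symmetric channel: each bit is flipped
    independently with probability flip_prob."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in message]

# Transmit a short binary message and see which bits were corrupted.
sent = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(sent, flip_prob=0.1)
errors = sum(s != r for s, r in zip(sent, received))
print(f"sent     {sent}")
print(f"received {received}")
print(f"{errors} of {len(sent)} bits received incorrectly")
```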

In order to combat noise, we can employ redundancy. We noted earlier that English text can be recovered even when many letters are randomly deleted. In the binary channel, we can combat noise, for instance, by employing repetition: for every A or B in the model to the left, we could send three 1's or 0's in a row instead of just one. In this sense, there is a duality between compression and transmission. Compression is the act of reducing redundancy; transmission involves adding redundancy to combat noise. Unfortunately, adding redundancy through repetition is rather inefficient. To lower the probability of error, longer and longer encodings are required, so the rate of transmission decreases quickly. In other words, if we try to combat noise through repetition, increasing the rate at which we want to transmit information always leads to a higher chance of error.
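Here is a small sketch of the repetition scheme just described, assuming three-fold repetition with majority-vote decoding over a binary symmetric channel; the flip probability of 0.1 is illustrative.

```python
import random

def encode_repetition(bits, n=3):
    """Encode each bit by repeating it n times (rate 1/n)."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(bits, n=3):
    """Decode by majority vote over each block of n received bits."""
    return [1 if sum(bits[i:i + n]) > n // 2 else 0
            for i in range(0, len(bits), n)]

def bsc(bits, p):
    """Binary symmetric channel: flip each bit with probability p."""
    return [bit ^ 1 if random.random() < p else bit for bit in bits]

# With flip probability 0.1, a single bit is wrong 10% of the time,
# but a 3x repetition block is mis-decoded only when 2 or 3 of its
# copies are flipped: 3(0.1^2)(0.9) + 0.1^3 = 2.8%.  The error rate
# drops, but so does the rate of transmission -- to 1/3.
message = [random.randint(0, 1) for _ in range(10000)]
received = decode_repetition(bsc(encode_repetition(message), 0.1))
errors = sum(m != r for m, r in zip(message, received))
print(f"bit error rate after 3x repetition: {errors / len(message):.4f}")
```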

Shannon surprised the research community when he discovered that there are better ways to combat noise, for which this is not the case. In fact, given a noisy channel, there is a characteristic rate, R, such that any information source with entropy (i.e. rate of information flow) less than R can be encoded so that it can be transmitted across the channel with arbitrarily few errors. At rates above R, some positive rate of error is unavoidable.

Note that if there is no noise, this rate R reduces to the channel capacity, the maximum rate at which information flows across a noiseless channel. Because of this, we take R to be the logical generalization of the channel capacity, C, for a noisy channel. If we let X be the channel input and Y be the channel output, Shannon demonstrates that this rate is equal to:

[5]    $C = \max \left[ H(X) - H(X \mid Y) \right]$

where

[6]    $H(X \mid Y) = -\sum_{x,y} p(x,y)\,\log p(x \mid y)$

is the entropy of X given Y, the uncertainty of the input knowing the output. This is the uncertainty added by the noise. The maximum is taken over all possible sources of the input X. So the capacity is the maximum, over possible sources, of the entropy of the source minus the uncertainty of the source given the output.
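As a concrete illustration of equation [5] (our own example, not from the text), the following sketch numerically maximizes H(X) - H(X|Y) over input distributions for a binary symmetric channel with crossover probability 0.1; the maximum occurs at the uniform input and works out to 1 - H(0.1), roughly 0.531 bits per channel use.

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def mutual_information_bsc(q, flip):
    """H(X) - H(X|Y) for a binary symmetric channel with input
    distribution (q, 1-q) and crossover probability `flip`."""
    # Joint distribution p(x, y) over the four input/output pairs.
    joint = {(0, 0): q * (1 - flip), (0, 1): q * flip,
             (1, 0): (1 - q) * flip, (1, 1): (1 - q) * (1 - flip)}
    p_y = {y: sum(p for (x, yy), p in joint.items() if yy == y) for y in (0, 1)}
    h_x = entropy([q, 1 - q])
    # H(X|Y) = sum over y of p(y) * H(X | Y=y)
    h_x_given_y = sum(p_y[y] * entropy([joint[(x, y)] / p_y[y] for x in (0, 1)])
                      for y in (0, 1) if p_y[y] > 0)
    return h_x - h_x_given_y

flip = 0.1
# Maximize over input distributions; the best input is uniform, giving
# capacity 1 - H(0.1), about 0.531 bits per channel use.
capacity = max(mutual_information_bsc(q / 100, flip) for q in range(1, 100))
print(f"capacity of BSC(p={flip}): {capacity:.3f} bits per use")
```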

What does Shannon's achievement mean for practical communication? The capacity R of any noisy channel is given by equation [5]. If an information source produces information at any rate less than R, we can encode the source so that arbitrarily few errors occur in transmission. Shannon gives a general method of constructing such codes, making use of the notion of typical sequences. Unfortunately, Shannon's method is not practical for real channels, and researchers are still searching for practical codes that achieve the performance his theorem promises.
