Designing Encodings

Noisy Channel Example

Note: the simulation referred to in this slide requires the Java applet version of this page. You may be directed to download the latest version of the Java plug-in.

In a few cases, Shannon's theorem from the last slide can be demonstrated explicitly. In the example of the binary channel on the left, the noise agent alters the transmitted signal in a very particular way. The noise acts independently on each successive block of seven symbols: with equal probability (1/8 each), the noise agent either reverses one bit out of the seven or leaves the block untouched. Without the noise, the channel capacity would be 1 bit per time unit, or 7 bits per block of seven.

With noise, we must subtract H(X|Y), the uncertainty in X given Y. Given a block of seven output symbols Y, there are 8 equally probable possibilities for the input X, so

H(X|Y) = log2 8 = 3 bits per seven time units.

Calculating the capacity per seven time units:

C = 7 - H(X|Y) = 7 - 3 = 4 bits,

or 4/7 of a bit per time unit.

Shannon's theorem implies that if a source produces less than 4/7 bits per time unit, we can transmit the information across our channel with arbitrarily low probability of error. In fact, for this case, we can transmit 4 bits every 7 time steps with no errors at all.
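The capacity arithmetic can be replayed in a few lines of Python. This is only a sketch of the calculation; the channel and noise model are exactly as described above:

```python
import math

# Noiseless capacity: 1 bit per time unit, i.e. 7 bits per block of seven.
noiseless_bits = 7

# The noise has 8 equally likely outcomes per block (flip one of the
# seven bits, or flip nothing), so the receiver's residual uncertainty
# about the input is H(X|Y) = log2(8) = 3 bits per block.
h_x_given_y = math.log2(8)

capacity_per_block = noiseless_bits - h_x_given_y   # 4 bits per 7 time units
capacity_per_symbol = capacity_per_block / 7        # 4/7 bit per time unit

print(capacity_per_block, capacity_per_symbol)
```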

To do this, we append three "parity bits" to each block of four input symbols, following the procedure of R. Hamming. Each extra bit is computed as the parity of a particular subset of the input bits and inserted into the signal. These subsets are chosen so that any two resulting code words differ in at least three places. Thus, two different inputs can never produce the same output, even after the noise acts. Moreover, using linear algebra techniques, the original signal can be recovered efficiently from the noisy output. The example society to the left demonstrates this method: press "Go" repeatedly and the message from the source will be transmitted without error. The details of the procedure follow below.
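As a concrete sketch of this procedure, here is one standard Hamming(7,4) layout in Python, with parity bits at positions 1, 2, and 4. The bit ordering and function names are illustrative choices, not necessarily those used by the applet:

```python
def encode(data):
    """Hamming(7,4): map 4 data bits to a 7-bit code word.

    Positions 1, 2, 4 (1-indexed) carry parity bits; 3, 5, 6, 7 carry data.
    """
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]


def decode(word):
    """Recompute the three parity checks; the syndrome they form is the
    1-indexed position of the flipped bit (0 means no bit was flipped)."""
    s = ((word[0] ^ word[2] ^ word[4] ^ word[6])
         + 2 * (word[1] ^ word[2] ^ word[5] ^ word[6])
         + 4 * (word[3] ^ word[4] ^ word[5] ^ word[6]))
    if s:
        word = word[:]
        word[s - 1] ^= 1       # correct the single error
    return [word[2], word[4], word[5], word[6]]
```

For example, encode([1, 0, 1, 1]) gives [0, 1, 1, 0, 0, 1, 1], and flipping any single bit of that word still decodes to [1, 0, 1, 1]. The syndrome trick is the linear-algebra shortcut mentioned above: the three parity checks spell out, in binary, exactly which position the noise touched.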

Sequences of four bits, written in the alphabet of the source, {A, B}, are translated into sequences of seven bits in the alphabet of the channel, {0, 1}. When the seven bits pass through the noisy channel, the noise flips one of the bits at random (or none at all). But the receiver isn't fooled: it uses the redundancy the transmitter added to recover a decoded sequence of four bits identical to the ones that originated at the source. Notice that after many time steps, the transmitter and the receiver grow to be about 7/4 times the size of the source and the decoder, since the signal that passes through the channel is 7/4 times longer than the original message.
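Under the same assumed Hamming(7,4) layout (restated here so the block stands alone), the no-errors claim can be checked exhaustively: each of the 16 possible messages survives all eight equally likely noise outcomes:

```python
from itertools import product


def encode(d):
    # Assumed Hamming(7,4) layout: parity bits at positions 1, 2, 4.
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1,
            d2 ^ d3 ^ d4, d2, d3, d4]


def decode(c):
    # Syndrome = 1-indexed position of the flipped bit (0 = no flip).
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6])
         + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6]))
    if s:
        c = c[:]
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]


# The noise has 8 equally likely outcomes per block: flip one of the
# seven bits, or flip nothing.  Every message survives all of them.
for msg in product([0, 1], repeat=4):
    word = encode(list(msg))
    for outcome in range(8):        # 0..6: flip that bit; 7: no flip
        noisy = word[:]
        if outcome < 7:
            noisy[outcome] ^= 1
        assert decode(noisy) == list(msg)
print("all 16 messages x 8 noise outcomes decoded without error")
```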

Shannon's theorem implies that we have found the most efficient coding. It is not possible to transmit information from our random source across this particular channel at a faster rate, unless we allow a certain fraction of errors.
