2020-10-09: Repetition Codes

This is the first in what I plan to be a series of posts about coding theory, looking at different techniques.

I am working through two books as background for this series.

What is coding theory?

Coding theory attempts to solve the problem of reliably transmitting information from one location (or person), the transmitter, to a second location (or person), the receiver. The information is transmitted across a channel. The challenge is that this channel does not transmit information reliably: it may introduce noise or other distortions.

For example, the transmitter may want to send the message "HELLO WORLD". The channel may introduce some noise, and the receiver then gets the message "HFLLP VOPLD".

A typical way of tackling this problem is to recode the message to include some redundancy. In this way, the channel probably cannot alter all the redundant parts of the message, and so the receiver can reconstruct what has been sent. This idea leads us to our first code.

Binary Repetition Code

For the time being, I will simplify the type of messages that can be transmitted. These messages will be sequences of binary digits, e.g. "100110". The noise introduced by the channel means that bits may be flipped, e.g. "101010" is a noisy version of the first binary message.

As we are talking about the channel, let us define how noise is added to a message. We make the following assumptions to define a _binary symmetric channel_:

- each transmitted bit is flipped (a 0 received as a 1, or a 1 as a 0) with probability p
- each bit is affected by noise independently of all the others
- the flip probability p is the same for 0s and 1s (hence "symmetric"), and p is less than 0.5

As an example, if we want to transmit the message "100110", what is the probability that it will be received without errors? As p is the probability that an error will occur in any bit, there is a (1-p) probability that any individual bit will be received without error.

As the bits are affected independently, for the six bits in the message that means a (1-p)^6 probability that the whole message is received correctly.

Therefore, for a message of length 6 bits, and p = 0.01 (i.e. noise affects 1 in 100 of the transmitted bits), the probability that the message is received correctly is 0.99^6 ≈ 0.9415: around 6% of such messages arrive with at least one error.
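
As a standalone check of this arithmetic, here is a minimal Unicon sketch (using Unicon's ^ operator for exponentiation):

procedure main ()
  p := 0.01
  # probability that all six bits pass through the channel unflipped
  write ("P(6 bits correct) = ", (1 - p) ^ 6) # approximately 0.9415
end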

The binary repetition code aims to reduce the error rate by sending each bit several times. For example, instead of transmitting the bit 1, we will transmit the bits 111, and instead of transmitting the bit 0, we will transmit the bits 000: the 111 and 000 are called the codewords.

The binary repetition code of length 3 looks like this:

0 -> 000
1 -> 111

So the message "100110" is encoded as: "111000000111111000".

What is the advantage of this? The main one is that we can correct a certain number of errors.

For example, if the message "101" is received, we can see that an error has occurred (because "101" is neither of our codewords, "000" nor "111"). However, it is closer to "111" than it is to "000", and so we can decode the message into the codeword "111" with a reasonable expectation that we have decoded it correctly.

Notice how we rely on p being less than 0.5 here: each bit is more likely to be received correctly than to be flipped, so the received word is more likely to be close to the transmitted codeword than to the other one. (If p were greater than 0.5, nearest-codeword decoding would usually pick the wrong codeword.)

The "closeness" of the codeword to either all 1s or all 0s is defined as its weight, simply the sum of non-zero bits. If the weight is greater than half of the number of bits, then it is closer to being all 1s than to all 0s.

The weight of "101" is 2, and 2 is greater than 3/2=1.5, so "101" is closer to "111".

Word Error Probability

The word error probability is the probability that a transmitted codeword is decoded incorrectly at the receiver.

In the case without any coding, we were transmitting single bits: in effect, the 1 and 0 bits are their own codewords. We could call this the case without redundancy, where the repetition code is of length 1. The word error probability here is simply p, the probability that a given bit will be received in error.

In the case when the repetition code is of length 3, as above, the word error probability can be calculated as follows. First, consider "000": which received words will be closer to "000" than to "111"? These are "000", "001", "010" and "100". So the probability that "000" is decoded correctly is the sum of the probabilities that each of these is received: (1-p)^3 for no bits flipped, plus 3p(1-p)^2 for exactly one of the three bits flipped.

Which, with some rearranging, means the probability that "000" is decoded correctly = (1-p)^3 + 3p(1-p)^2 = (1-p)^2 (1+2p)

Thus, the word error probability is 1 - (1-p)^2 (1+2p) = 3p^2 - 2p^3

(The same value is reached when considering "111", by symmetry.)

Some predictions, with p = 0.01:

size 1: Perr = p = 0.01
size 3: Perr = 3p^2 - 2p^3 = 0.000298
size 5: Perr = 10p^3(1-p)^2 + 5p^4(1-p) + p^5 ≈ 0.0000099

(For size 5, a word error occurs when three or more of the five bits are flipped.)
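
These figures can be cross-checked by summing the binomial probabilities of a majority of the bits being flipped. Here is a minimal standalone Unicon sketch (the word_error and binomial procedures are illustrative helpers, separate from the simulation code below):

procedure binomial (n, k)
  # number of ways of choosing k items from n
  result := 1
  every i := 1 to k do
    result := result * (n - i + 1) / i
  return result
end

procedure word_error (n, p)
  # a length-n repetition codeword is decoded wrongly when more
  # than half of its n bits are flipped by the channel
  total := 0.0
  every k := (n / 2 + 1) to n do
    total +:= binomial (n, k) * p ^ k * (1 - p) ^ (n - k)
  return total
end

procedure main ()
  every n := ![1, 3, 5] do
    write ("size ", n, ": Perr = ", word_error (n, 0.01))
end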

Notice how the probability of receiving an error reduces as the amount of repetition increases; the more redundancy there is in the message, the easier it is to correct for noise.

Simulation

We can now write some code to implement this repetition code scheme, and try to reproduce the predicted word error rates. Of course, any language could be used here, but I'm using Unicon.

Messages will be stored as lists of integers, 0 or 1 as appropriate. We begin with a procedure to calculate the weight of a given list of integers:

procedure weight (lst)
  total := 0
  every total +:= !lst
  return total
end
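
For example, weight([1, 0, 1]) returns 2, matching the hand calculation above.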

Oddly, I cannot find a standard way to compare two lists in Unicon, so I added:

procedure equal_lists (lst1, lst2)
  # succeeds if both lists have the same size and elements, fails otherwise
  if *lst1 === *lst2 then {
    every i := key(lst1) do {
      if lst1[i] ~=== lst2[i] then fail
    }
  } else
    fail

  return
end
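
Note that equal_lists follows Icon's success/failure convention rather than returning a boolean: equal_lists([1, 0, 1], [1, 0, 1]) succeeds, while equal_lists([1, 0, 1], [1, 1, 1]) fails.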

The following two important procedures perform the actual encoding or decoding. The first produces the repetition codeword, and the second uses the weight to convert a codeword to its likely original value:

procedure encode (val, size)
  # build the codeword by repeating the bit 'size' times
  result := []

  every 1 to size do {
    push (result, val)
  }

  return result
end

procedure decode (val, size)
  # majority vote: decode to 1 when more than half the bits are 1s
  return if weight(val) <= size/2 then
    0
  else
    1
end
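
For example, decode([1, 0, 1], 3) returns 1: the weight of [1, 0, 1] is 2, which is more than half of 3, so the word decodes to the nearest codeword, "111".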

The following two procedures encode or decode an entire message. They are probably not needed, given that our simulation only tests individual codewords, but they complete the code.

Decoding a message means dividing a possibly long stream of binary digits into blocks of the same size as our repetition code, e.g. the coded message "111000000111111000" must be divided into "111", "000", "000", "111", "111" and "000", and then each individual block decoded.

procedure encode_msg (msg, size)
  result := []

  every bit := !msg do {
    # append this bit's codeword to the result
    result := result ||| encode (bit, size)
  }

  return result
end

procedure decode_msg (msg, size)
  result := []

  every i := 1 to *msg by size do {
    # decode each block of 'size' bits back into a single bit
    result := put (result, decode (msg[i+:size], size))
  }

  return result
end
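
As a quick round trip: decode_msg(encode_msg([1, 0, 0, 1, 1, 0], 3), 3) returns [1, 0, 0, 1, 1, 0], matching the "111000000111111000" example above.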

The transmit procedure takes the message and randomly inverts each bit, using the given p:

procedure transmit (msg)
  p := 0.01 # symbol error probability

  every i := key(msg) do {
    if ?0 < p then msg[i] := 1 - msg[i] # add noise: ?0 is a random real in [0, 1)
  }

  return msg
end

Finally, the main procedure runs the simulation. For each of the three repetition sizes, it encodes a random single-bit message, transmits it, and decodes the result, repeating this for N iterations and counting how often the decoded message matches the original. The procedure then reports the overall word error probability for each repetition size:

procedure main (arglist)
  N := 1000000 # number of iterations

  every size := ![1,3,5] do {
    correct := 0

    every 1 to N do {
      msg := [ ?[0, 1] ]  # msg is a single bit
      received := decode_msg (transmit (encode_msg (msg, size)), size)
      if equal_lists(msg, received) then correct +:= 1
    }

    write ("Size           = ", size)
    write ("N              = ", N)
    write ("Number correct = ", correct)
    write ("Perr(C)        = ", 1-real(correct)/N)
    write ()
  }
end

Results are reasonably close to the predictions, for 1 million iterations (comments added to show the expected values):

$ unicon -C codes.icn
$ ./codes
Size           = 1
N              = 1000000
Number correct = 990027
Perr(C)        = 0.00997300000000001    # expected 0.01

Size           = 3
N              = 1000000
Number correct = 999710
Perr(C)        = 0.0002900000000000125  # expected 0.000298

Size           = 5
N              = 1000000
Number correct = 999992
Perr(C)        = 8.000000000008001e-06  # =0.000008 expected 0.000010
