###

Question:

I have a binary stream with a very high error rate: each bit has a 50% chance of being flipped. The errors do not occur in bursts and are completely random, so Reed–Solomon codes wouldn't work well.

Which scheme or algorithm should I apply to the stream? I don't care about the overhead at all.

This is all theoretical, so there's no point in asking if I could just reduce the error of the stream.

**EDIT**

Don't say it's not possible; the very first answer tells you it is possible, per the noisy-channel coding theorem.

###

Solution:1

The noisy-channel coding theorem says you can actually achieve the Shannon capacity for the channel. It does **not** say the channel has nonzero capacity!

If you **randomize** 100% of the bits in the channel, 50% of them will happen to match their original values, so randomizing every bit is the same as flipping a random 50% of them. It should be obvious that you can't send any data over such a channel -- its Shannon capacity is zero.
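
The equivalence between "randomize every bit" and "flip half the bits" is easy to check empirically. A quick simulation (names and parameters are mine, not from the answer):

```python
import random

rng = random.Random(1)
original = [rng.randint(0, 1) for _ in range(100_000)]

# "Randomize" every bit: replace each with a fresh uniform random bit.
randomized = [rng.randint(0, 1) for _ in original]

# Each replacement matches the original bit half the time,
# so the observed flip rate sits right around 50%.
flipped_fraction = sum(a != b for a, b in zip(original, randomized)) / len(original)
assert abs(flipped_fraction - 0.5) < 0.01
```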

###

Solution:2

If the error rate is 50%, then that's just random noise, isn't it? Consider trying to transmit a single bit: even if you send an *infinite* stream of the right bit, with a 50% error rate you'll receive half 1s and half 0s whether the right bit is 1 or 0.

If it's actually less than 50% (e.g. 50% of the bits will be "random" rather than "flipped") then you could just repeat the data - transmit each bit 128 times and, for each 128 bits received, work out which value you got more of. That's the simple-to-code, hugely inefficient, not mathematical at all solution :)
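
The repeat-and-vote idea above can be sketched in a few lines. A minimal implementation, assuming a flip probability below 50% (function names and the 25% test rate are my own choices):

```python
import random

def encode(bits, n=128):
    """Repeat each data bit n times."""
    return [b for b in bits for _ in range(n)]

def channel(bits, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def decode(received, n=128):
    """Majority vote over each group of n received copies."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(100)]
noisy = channel(encode(data), p=0.25, rng=rng)
# Below 50% noise, the majority vote recovers the data with
# overwhelming probability; at exactly 50% it returns coin flips.
assert decode(noisy) == data
```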

###

Solution:3

Well, the whole point of Reed–Solomon error correction is that most real-world errors occur in bursts, so you interleave and de-interleave the data. If your errors are completely random, i.e. independent from bit to bit, then just adding redundancy to the stream in a straightforward, mathematically efficient way will work. One thing you could look at is some kind of hidden Markov model, such as a trellis code. This is basically just a mathematically efficient way of adding redundancy.
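
One concrete trellis code is a convolutional code decoded with the Viterbi algorithm. A minimal sketch of a standard rate-1/2, constraint-length-3 code (generators 7 and 5 in octal) with hard-decision decoding; the function names are mine, not from the answer:

```python
import itertools

G = (0b111, 0b101)  # generator polynomials of a standard rate-1/2, K=3 code

def _branch(state, b):
    """Shift input bit b into the register; return (next_state, output pair)."""
    reg = (b << 2) | state
    out = [bin(reg & g).count("1") % 2 for g in G]
    return reg >> 1, out

def conv_encode(bits):
    state, out = 0, []
    for b in bits + [0, 0]:          # two tail bits return the encoder to state 0
        state, pair = _branch(state, b)
        out += pair
    return out

def viterbi_decode(received):
    INF = float("inf")
    cost = [0.0, INF, INF, INF]      # encoder starts in state 0
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_cost = [INF] * 4
        new_paths = [[]] * 4
        for s, b in itertools.product(range(4), (0, 1)):
            if cost[s] == INF:
                continue
            ns, expected = _branch(s, b)
            d = cost[s] + sum(x != y for x, y in zip(r, expected))
            if d < new_cost[ns]:
                new_cost[ns], new_paths[ns] = d, paths[s] + [b]
        cost, paths = new_cost, new_paths
    return paths[0][:-2]             # terminated code ends in state 0; drop tail

data = [1, 0, 1, 1, 0, 0, 1, 0]
rx = conv_encode(data)
rx[5] ^= 1                           # one channel error
assert viterbi_decode(rx) == data
```

This code has free distance 5, so the decoder survives scattered errors, but like every code it still fails once the flip rate reaches 50%.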

Also, have a look at the noisy channel coding theorem. Strictly speaking, it doesn't apply to digital data, but if your source of these bits is some analog process, or if you could model your bits *as if* they were the result of some analog process, it could give you some insight into what the best you could do might be. This would prevent you from wasting time trying to do better than is mathematically possible.
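
For the binary symmetric channel the question describes, the theorem's "best you could do" has a closed form: C = 1 − H(p), where H is the binary entropy function. A small sketch (the function name is mine):

```python
import math

def bsc_capacity(p):
    """Capacity in bits per channel use of a binary symmetric
    channel with flip probability p: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

# Capacity vanishes exactly at p = 0.5, matching the other answers.
assert abs(bsc_capacity(0.5)) < 1e-12
```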

###

Solution:4

As the channel approaches a 50% real noise rate, it becomes impossible to transmit any information at all. As Jon Skeet's answer notes, if the error rate is anything less than 50% noise, then you can get data through by sending longer redundant bursts of the intended data and statistically evaluating the result to some level of confidence in the original value. The required burst length and confidence level would then be derived from a characterization of the noise. Understand, however, that what you are doing here is effectively lowering the data rate to improve the net signal-to-noise ratio of the transmitted stream.
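
The "needed burst length for a given confidence" can be computed exactly from the binomial tail of a majority vote. A sketch, assuming independent flips with probability p (names are mine, not from the answer):

```python
from math import comb

def majority_error(n, p):
    """Probability that a majority vote over n (odd) noisy copies
    of a bit picks the wrong value."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def repetitions_needed(p, eps):
    """Smallest odd repetition count driving the per-bit error below eps.
    As p approaches 0.5 this blows up without bound."""
    n = 1
    while majority_error(n, p) > eps:
        n += 2
    return n
```

For example, a noisier channel demands a longer burst for the same confidence: `repetitions_needed(0.3, 1e-3)` is much larger than `repetitions_needed(0.1, 1e-3)`.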

In your question, you might have ruled this out as an option, but a better encoding scheme might be based on the relative presence (or absence) of the data stream itself. In other words, to transmit a binary one, send an alternating stream of 1/0; to send a zero, send nothing, or perhaps a constant level. The idea is that sending (and receiving) anything represents one state, and sending (and receiving) nothing represents the other state. This would effectively resemble a type of bipolar encoding of the data.
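
The presence/absence idea can be sketched as follows, on a clean channel only; this illustrates the encoding itself, not robustness against the question's 50% flips, and the chip count and names are my own assumptions:

```python
def presence_encode(bits, chips=8):
    """A 1 becomes an alternating chip pattern; a 0 becomes a constant level."""
    out = []
    for b in bits:
        out += [i % 2 for i in range(chips)] if b else [0] * chips
    return out

def presence_decode(stream, chips=8):
    """Decode by counting level transitions within each chip window:
    many transitions means 'signal present' (a 1), few means a 0."""
    bits = []
    for i in range(0, len(stream), chips):
        w = stream[i:i + chips]
        transitions = sum(w[j] != w[j + 1] for j in range(len(w) - 1))
        bits.append(1 if transitions > (chips - 1) // 2 else 0)
    return bits
```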

###

Solution:5

If your error rate is 50%, the received bit stream **is** random and bears **no correlation** to the original bit stream. It is as though you XORed the stream with a completely random bit stream; the result is completely random, and there is nothing you can do about it.

The flip rate must be below 50% for any scheme to work. Of course, it could be **above** 50%, but then you can simply invert the stream first and then process it as if the error rate were below 50%.

If the errors are completely random and very frequent (e.g. 25% of the bits are flipped), it is very difficult to come up with a robust error detection scheme. You need to add a significant amount of redundancy.

###

Solution:6

Have you looked into turbo codes?

-- MarkusQ

Doh! I misread that as 50% randomized, not 50% flipped.

###

Solution:7

If *exactly* 50% of the bits are flipped in any given transmission, rather than each bit being flipped with 50% probability, you can send one bit of information per two-bit transmission -- send a 0 as 00 and a 1 as 01. Exactly one of the two bits gets flipped: if the first bit of the received codeword is 1, the flip hit the first bit, so the second bit arrived intact; if the first bit is still 0, the second bit was the one flipped, so invert it.
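
A tiny sketch of this exactly-one-flip decoder (function names are mine):

```python
def encode(bit):
    return (0, bit)            # 0 -> 00, 1 -> 01

def decode(r):
    first, second = r
    if first == 1:             # the mandatory flip hit the first bit,
        return second          # so the second bit arrived intact
    return 1 - second          # otherwise the second bit was the one flipped

# Exhaustively check every data bit against every possible single flip.
for bit in (0, 1):
    for pos in (0, 1):
        r = list(encode(bit))
        r[pos] ^= 1            # channel flips exactly one of the two bits
        assert decode(tuple(r)) == bit
```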
