A quick introduction to soft (fuzzy) logic – part I

You can’t have your cake and eat it, a well-known proverb says. The state of this proverbial cake is strictly determined – it exists in exactly one of two possible states. This is very similar to the rules of Boolean logic – there are only two (binary) distinct values a logic sentence can take: true or false, 1 or 0.

Real-world problems, including measurements, might not always have a strictly defined logical value. Imagine a blue square. The statement “the square is blue” is obviously true, whereas “the square is green” is undeniably false. But what if the square were turquoise, a shade that mixes both colors? What would the logical value of “the square is blue” be then? False? Or maybe something in between? This is exactly where Boolean logic fails, but soft logic offers a solution.

We can think of soft logic as an extension of the natural, two-valued logic. Instead of limiting the possible values to the simple set {0, 1}, we can use the whole range in between – the closed interval [0, 1]. Of course, the values 0 and 1 still mean false and true; all the magic happens for the values in between. This gives soft logic more of a probabilistic meaning.

How does this apply to M17, one may ask. Decoding the M17 baseband (4FSK) relies on accurate symbol-to-dibit mapping, called symbol slicing. This can be done in two different ways:

  • hard slicing
  • soft slicing

Hard slicing is a simple, straightforward method. For n possible symbol levels, the range of expected values is divided into n bands. Then, each symbol value is binned depending on which band it falls into. For 2FSK, n is 2, and the process basically simplifies to determining the sign of the value (with zero usually treated as positive – we are not mathematicians). As a result, we get a series of ones and zeros (or pairs, or even triplets of bits, for higher values of n).

Hard slicing principle for a 4FSK symbol stream – simple binning gets the job done.
In this case, the symbol sequence is determined to be +3, +1, +3, -3, -3, -1, +3, -1, -1, +1, +3.
This corresponds to a bitstream of 0100011111(…).
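
To make the binning concrete, below is a minimal C sketch of a hard slicer for the 4FSK case, using the dibit mapping implied by the figure (+3 → 01, +1 → 00, -1 → 10, -3 → 11). The thresholds and function names are illustrative only, not taken from any particular M17 implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Hard-slice one 4FSK symbol into a dibit (MSB, LSB).
 * Decision thresholds sit halfway between the nominal levels
 * -3, -1, +1, +3, i.e. at -2, 0 and +2.
 * Mapping (as in the figure): +3 -> 01, +1 -> 00, -1 -> 10, -3 -> 11. */
void hard_slice(float symbol, uint8_t *msb, uint8_t *lsb)
{
    *msb = (symbol < 0.0f) ? 1 : 0;                      /* sign decides the first bit   */
    *lsb = (symbol < -2.0f || symbol >= 2.0f) ? 1 : 0;   /* outer levels set the second  */
}

int main(void)
{
    /* The symbol sequence from the figure above. */
    const float symbols[] = {+3, +1, +3, -3, -3, -1, +3, -1, -1, +1, +3};

    for (size_t i = 0; i < sizeof(symbols) / sizeof(symbols[0]); i++)
    {
        uint8_t msb, lsb;
        hard_slice(symbols[i], &msb, &lsb);
        printf("%d%d", msb, lsb);                        /* prints 0100011111...         */
    }
    printf("\n");
    return 0;
}
```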

Can you imagine what would happen if the signal weren’t so clean – if it had a much worse SNR? That’s right – some of the symbols could be binned improperly, causing errors.

Soft slicing relies on probability. There is no more naïve binning; now, each logic value (or bit) can be anything between 0 and 1. Fixed-point arithmetic is very useful here, as dealing with floats is not always an option, and operating on 16- or even 32-bit integer values is just fast.

Soft slicing turns symbols into probabilities (likelihood).
Since we deal with dibits here (each symbol carries two bits of information), each symbol decodes to a pair of values (red and green piecewise-linear functions).
The result is (0.0, 1.0), (0.0, 0.0), (0.0, 1.0), (1.0, 1.0), etc.
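
As an illustration, here is a minimal C sketch of such a piecewise-linear soft slicer. The exact break points, as well as the 16-bit fixed-point convention (0x0000 = 0.0, 0xFFFF = 1.0), are assumptions made for this example rather than a reference to any particular M17 library.

```c
#include <stdint.h>
#include <stdio.h>

/* Soft bits are 16-bit fixed-point values: 0x0000 = 0.0, 0xFFFF = 1.0
 * (an assumed convention for this example). */

/* Clamp a float to the 0..1 range and convert to fixed point. */
static uint16_t to_fixed(float x)
{
    if (x < 0.0f) x = 0.0f;
    if (x > 1.0f) x = 1.0f;
    return (uint16_t)(x * 65535.0f + 0.5f);
}

/* Soft-slice one 4FSK symbol into a pair of bit likelihoods.
 * First (red) value: likelihood that the first bit is 1 - rises
 * linearly from 0 at symbol +1 to 1 at symbol -1 (the sign decides).
 * Second (green) value: likelihood that the second bit is 1 - rises
 * linearly from 0 at |symbol| = 1 to 1 at |symbol| = 3. */
void soft_slice(float symbol, uint16_t *bit0, uint16_t *bit1)
{
    float mag = (symbol < 0.0f) ? -symbol : symbol;

    *bit0 = to_fixed((1.0f - symbol) * 0.5f);   /* piecewise-linear in the sign  */
    *bit1 = to_fixed((mag - 1.0f) * 0.5f);      /* piecewise-linear in the level */
}

int main(void)
{
    /* Same clean symbol sequence as before. */
    const float symbols[] = {+3, +1, +3, -3, -3, -1, +3, -1, -1, +1, +3};

    for (size_t i = 0; i < sizeof(symbols) / sizeof(symbols[0]); i++)
    {
        uint16_t b0, b1;
        soft_slice(symbols[i], &b0, &b1);
        /* For noiseless symbols this prints (0.0, 1.0), (0.0, 0.0), ... */
        printf("(%.1f, %.1f) ", b0 / 65535.0f, b1 / 65535.0f);
    }
    printf("\n");
    return 0;
}
```

For clean symbols this reproduces the pairs listed above; for a noisy symbol such as +2.1, the second value lands around 0.55, expressing the slicer’s reduced confidence in that bit.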

The soft slicing method is only useful when soft-decision decoders follow it in the chain; otherwise it brings no advantage to the system.

Both examples use the same symbol sequence, and there is no noise present in either of them. We will add some next time to spice things up a bit. That’s when all the fun begins.

To be continued.
