Machine Learning Equalisation (ML EQ)
Continuing from last month, I’ve had some success using machine learning for equalisation (ML EQ), and have some simulation results that show improved performance at low SNRs with a narrow band (600 Hz RF bandwidth) waveform and no pilot symbols. The narrow bandwidth was suggested by experiments around training several different configurations and seeing which ones performed best. I had to put some effort into developing an accurate model of the phase distortion due to the HF channel, timing and frequency offsets, and noise. I also developed a better time domain test framework to make comparisons as close as possible to a real world, practical waveform.
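To give a feel for the channel impairments involved, here is a minimal sketch of applying a frequency offset (which rotates phase over time), a timing offset, and AWGN to a complex baseband signal. This is not the actual RADE test framework; the function name, sample rate, and offset values are illustrative assumptions.

```python
import numpy as np

def apply_channel(tx, fs=8000, freq_offset_hz=10.0, timing_offset=3,
                  snr_db=0.0, rng=np.random.default_rng(0)):
    """Apply simple HF-style impairments to a complex baseband signal:
    a frequency offset (progressive phase rotation), an integer sample
    timing offset, and complex AWGN at a requested SNR."""
    n = np.arange(len(tx))
    # frequency offset: phase advances linearly with time
    rx = tx * np.exp(2j * np.pi * freq_offset_hz * n / fs)
    # crude timing offset: delay by an integer number of samples
    rx = np.concatenate([np.zeros(timing_offset, dtype=complex), rx])[:len(tx)]
    # complex AWGN scaled for the requested SNR
    sig_power = np.mean(np.abs(tx) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(len(rx)) +
                                        1j * rng.standard_normal(len(rx)))
    return rx + noise
```

A real simulation would also include HF multipath fading and fractional-sample timing error, but this captures the basic impairments the receiver has to cope with.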
Here is a plot of model “loss” versus peak SNR (PNR) that compares RADE V1 (model19 check3) with the latest 10 carrier prototype. A low “loss” is good, with 0.3 being roughly the limit for usable speech communication. We are seeing 2-3 dB gains at low SNRs over RADE V1, with intelligible speech down to -5 dB. The new waveform has a 600 Hz RF bandwidth (about 4 times the spectral efficiency of SSB) and a 0 dB peak to average power ratio (PAPR), which suggests it can pass through efficient non-linear power amplifiers (unusual for any analog or digital HF speech waveform).
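For readers unfamiliar with PAPR: it is the ratio of the waveform's peak power to its average power. A constant-envelope signal (such as pure FM or FSK) has a PAPR of 0 dB, which is what lets it run through a saturated power amplifier without distortion. A minimal sketch of the measurement:

```python
import numpy as np

def papr_db(x):
    """Peak to average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(np.max(power) / np.mean(power))

# a constant-envelope signal has 0 dB PAPR
n = np.arange(1000)
tone = np.exp(2j * np.pi * 0.1 * n)
print(papr_db(tone))  # -> 0.0
```

By contrast, classical OFDM with many carriers typically measures a PAPR of several dB, which is why a 0 dB result from the trained waveform is notable.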

In the plot above, compare AWGN (blue versus yellow) and fading (red versus purple). Here is the spectrum (please ignore the red lines for now):

Curiously, I am not sure if this is still some version of QAM over OFDM, or if we have “trained” another waveform that is less sensitive to phase, e.g. frequency shift keying (FSK). That spectrum looks a lot like 2FSK to me, and not much like the traditional “flat top” OFDM. The ML network can generate any set of complex valued symbols on an OFDM time-frequency grid, so it could conceivably do QAM, M-FSK, ASK, insert its own pilot symbols or tones, or some time varying combination of the above!
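To make the “time-frequency grid” idea concrete, here is a minimal sketch of turning a grid of complex symbols (as a network might emit) into a time-domain waveform: each row of the grid is placed on a set of carriers in an IFFT and a cyclic prefix is prepended. The FFT size, prefix length, and carrier placement here are illustrative assumptions, not RADE's actual parameters.

```python
import numpy as np

def ofdm_modulate(grid, nfft=64, cp_len=16, carrier_start=1):
    """Map a (num_symbols, num_carriers) grid of complex symbols onto
    OFDM symbols: place carriers in an nfft-point IFFT, prepend a
    cyclic prefix, and concatenate the results."""
    num_syms, num_carriers = grid.shape
    out = []
    for s in range(num_syms):
        spec = np.zeros(nfft, dtype=complex)
        spec[carrier_start:carrier_start + num_carriers] = grid[s]
        sym = np.fft.ifft(spec) * nfft        # time-domain OFDM symbol
        out.append(np.concatenate([sym[-cp_len:], sym]))  # cyclic prefix
    return np.concatenate(out)

# e.g. a 10 carrier grid of 4 unit-magnitude symbols
grid = np.exp(2j * np.pi * np.random.default_rng(0).random((4, 10)))
tx = ofdm_modulate(grid)
print(len(tx))  # 4 * (64 + 16) = 320 samples
```

The point is that the grid itself places no constraint on the constellation: if the network emits constant-magnitude symbols whose phase steps mimic a frequency shift, the result looks like FSK rather than QAM.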
This work is still some distance away from a practical waveform. For example, I need to figure out how to do timing and coarse frequency offset estimation, and fine frequency tracking – all starting from a clean slate and without the usual classical DSP bag of tricks, as there are no pilot symbols. As a risk minimisation exercise I am inclined to develop just enough of these missing algorithms so I can do a “stored file” over the air test sooner rather than later (same strategy as last year).
Documentation and FreeDV 700D ASR Testing
A lot of work went into RADE V1 in 2024 and it all happened quite quickly. It’s important to document the work so far as a resource for others, a springboard for further development, and as a way to promote FreeDV. This month I’ve been working steadily on two conference papers (one on the HF RADE work and one on BBFM).
As part of the documentation work I’ve added FreeDV 700D to the automatic speech recognition (ASR) testing I discussed in Dec 2024. The y-axis is the word error rate (WER), plotted against signal to noise ratio (SNR). A high WER means we can’t understand many of the received words, so a low WER is the goal. This plot is rather “busy”, but let's just focus on the solid lines (AWGN or non fading channel) for now. Green is the 700D word error rate.

You can see the sharp “knee” in the 700D curve around -2 dB; this is where the FEC falls over, and is common for systems that use FEC. In contrast the SSB (blue) and RADE (red) curves show a smoother change in performance with SNR – a more desirable outcome for a speech system. The best case/high SNR WER for 700D is pretty high compared to SSB and RADE – which is consistent with user feedback (some operators struggle with the speech quality of 700D). However experience has shown that trained operators can use 700D OK, just like trained SSB operators can communicate with low SNR SSB. There’s a window between about -1 and 4 dB SNR where 700D has a lower WER than SSB on AWGN (and presumably slow fading) channels.
Now let's look at the fast fading channel (MPP) performance, the dotted lines. At low SNRs (e.g. around 5 dB SNR), 700D is within a few dB of SSB, but at high SNRs SSB has a lower WER. These results are consistent with our on air experiences.
At all SNRs, RADE V1 has a lower WER than 700D, and indeed SSB.
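For reference, the WER metric used in these tests is the standard one: the minimum number of word substitutions, insertions, and deletions needed to turn the ASR transcript into the reference, divided by the number of reference words. A minimal sketch using a Levenshtein distance over word sequences:

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + insertions + deletions) / reference words,
    computed with a standard Levenshtein distance over word sequences."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between first i ref words, first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[len(r)][len(h)] / len(r)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
# one substitution over six words, 1/6
```

In practice the testing pipeline uses an ASR model's transcripts as the hypothesis against known reference text, averaged over many utterances at each SNR.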
BBFM fading channel training
Also in this rather busy month I’ve progressed the base band FM (BBFM) project, which uses machine learning to send speech over FM radios commonly used for VHF/UHF land mobile radio (LMR) (see BBFM posts). Over the past few months we have worked up a new set of math expressions for the BBFM channel, based on a “back to basics” study of FM demodulation from some textbooks that are about the same age as us! I used our new expressions to train a ML model that includes simulated LMR fading in the training dataset.
It’s early days, but we’ve obtained remarkably good results when simulating the fading from a vehicle at 60 km/hr at a very low -120 dBm. This is the received signal level where most existing LMR radios start to fall over even on benign, non-faded AWGN channels. Like this time last year with the HF version of RADE I’m getting “too good to be true” vibes, so I have reached out to Tibor Bece to back up these results with some bench tests on real UHF radio hardware.
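For readers curious what “simulated LMR fading at 60 km/hr” involves: mobile channels are commonly modelled as Rayleigh fading with a Doppler spread set by the vehicle speed and carrier frequency. Below is a generic sum-of-sinusoids (Jakes-style) sketch of such a fading process – it is not the BBFM project's new channel expressions, and the 450 MHz carrier is just an assumed UHF LMR frequency.

```python
import numpy as np

def rayleigh_fading(num_samples, fs=8000, speed_kmh=60.0, fc_hz=450e6,
                    num_paths=16, rng=np.random.default_rng(0)):
    """Sum-of-sinusoids (Jakes-style) Rayleigh fading gain sketch.
    Maximum Doppler follows from vehicle speed and carrier frequency."""
    fd = (speed_kmh / 3.6) / 3e8 * fc_hz  # max Doppler, ~25 Hz here
    t = np.arange(num_samples) / fs
    g = np.zeros(num_samples, dtype=complex)
    for _ in range(num_paths):
        theta = rng.uniform(0, 2 * np.pi)  # angle of arrival per path
        phi = rng.uniform(0, 2 * np.pi)    # initial phase per path
        g += np.exp(1j * (2 * np.pi * fd * np.cos(theta) * t + phi))
    return g / np.sqrt(num_paths)  # normalise to unit mean power

fading = rayleigh_fading(8000)  # one second of fading gain
```

Multiplying a transmitted signal by this gain produces the deep, rapid fades (tens of times per second at UHF and 60 km/hr) that the ML model now sees during training.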
RADE Ham Club Presentation
Continuing with the documentation theme, I worked up a slide deck for RADE, pitched at a technical level designed to be “Ham friendly” for presentation at clubs and Ham conferences. The first presentation I made using these slides is now on YouTube and is a useful resource if you are interested in RADE. See the RADE Ham Club Presentation post.