David Mar 2025 – ML EQ, 700D ASR, RADE Presentation, BBFM fading

Machine Learning Equalisation (ML EQ)

Continuing from last month I’ve had some success using machine learning for equalisation (ML EQ), and have some simulation results that show improved performance at low SNRs with a narrow band (600 Hz RF bandwidth) waveform and no pilot symbols. The narrow bandwidth was suggested by experiments training several different configurations and seeing which ones performed best. I had to put some effort into developing an accurate model of the phase distortion due to the HF channel, timing and frequency offsets, and noise. I also developed a better time domain test framework to make comparisons as close as possible to a real world, practical waveform.

Here is a plot of model “loss” versus peak SNR (PNR) that compares RADE V1 (model19 check3) with the latest 10 carrier prototype. A low “loss” is good, with 0.3 being roughly the limit for usable speech communication. We are seeing 2-3dB gains at low SNRs over RADE V1, with intelligible speech down to -5 dB. The new waveform has a 600 Hz RF bandwidth (about 4 times the spectral efficiency of SSB), and a 0dB peak to average power ratio (PAPR) – which suggests it can pass through efficient non-linear power amplifiers (unusual for any analog or digital HF speech waveform).
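The 0 dB PAPR claim is easy to check numerically. Here is a small sketch (my own illustration, not project code; all parameters are arbitrary) comparing a constant-envelope signal with a naive sum of 10 random-phase carriers:

```python
import numpy as np

def papr_db(x):
    """Peak to average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

n = np.arange(1000)
# a constant envelope signal (e.g. FSK-like) has 0 dB PAPR by construction
constant_env = np.exp(1j * 2 * np.pi * 0.1 * n)

# summing 10 random-phase carriers (OFDM-like) gives a much higher PAPR
rng = np.random.default_rng(0)
multi_carrier = sum(
    np.exp(1j * (2 * np.pi * (0.05 + 0.01 * k) * n + rng.uniform(0, 2 * np.pi)))
    for k in range(10)
)

print(f"constant envelope PAPR: {papr_db(constant_env):.1f} dB")
print(f"10 carrier PAPR:        {papr_db(multi_carrier):.1f} dB")
```

A 0 dB PAPR waveform lets a transmitter run its PA at full power continuously, which is where the roughly 5 dB advantage over compressed SSB (quoted elsewhere in these updates) comes from.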

In the plot above, compare AWGN (blue versus yellow) and fading (red versus purple). Here is the spectrum (please ignore the red lines for now):

Curiously, I am not sure if this is still some version of QAM over OFDM, or if we have “trained” another waveform that is less sensitive to phase, e.g. frequency shift keying (FSK). That spectrum looks a lot like 2FSK to me, and not much like the traditional “flat top” OFDM. The ML network can generate any set of complex valued symbols on an OFDM time-frequency grid so it could conceivably do QAM, M-FSK, ASK, insert its own pilot symbols or tones, or some time varying combination of the above!
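To make the “the network can emit anything” point concrete, here is a toy sketch of arbitrary complex symbols being placed on an OFDM time-frequency grid (the parameters Nc, Nsym and Nfft are illustrative assumptions, not the RADE values):

```python
import numpy as np

Nc, Nsym, Nfft = 10, 8, 64        # carriers, OFDM symbols, FFT size (assumed)
rng = np.random.default_rng(1)

# "network output": unconstrained complex symbols on the time-frequency grid --
# could resemble QAM, FSK, pilots, or anything in between
grid = rng.normal(size=(Nsym, Nc)) + 1j * rng.normal(size=(Nsym, Nc))

freq = np.zeros((Nsym, Nfft), dtype=complex)
freq[:, 1:Nc + 1] = grid                  # map the grid onto adjacent bins
time_domain = np.fft.ifft(freq, axis=1)   # one IFFT per OFDM symbol

# by contrast, a "2FSK-like" grid has exactly one active carrier per symbol
fsk_grid = np.zeros((Nsym, Nc), dtype=complex)
fsk_grid[np.arange(Nsym), rng.integers(0, 2, size=Nsym)] = 1.0
```

The OFDM modulator is indifferent to what the grid contains, which is why training is free to discover an FSK-like solution if that is what minimises the loss.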

This work is still some distance away from a practical waveform. For example I need to figure out how to do timing and coarse frequency offset estimation, and fine frequency tracking – all starting from a clean slate and without the usual classical DSP bag of tricks as there are no pilot symbols. As a risk minimisation exercise I am inclined to develop just enough of these missing algorithms so I can do a “stored file” over the air test sooner rather than later (same strategy as last year).

Documentation and FreeDV 700D ASR Testing

A lot of work went into RADE V1 in 2024 and it all happened quite quickly. It’s important to document the work so far as a resource for others, a springboard for further development, and as a way to promote FreeDV. This month I’ve been working steadily on two conference papers (one on the HF RADE work and one on BBFM).

As part of the documentation work I’ve added FreeDV 700D to the automatic speech recognition (ASR) testing I discussed in Dec 2024. The y-axis is the word error rate (WER), plotted against signal to noise ratio (SNR). A high WER means we can’t understand many of the received words, so a low WER is the goal. This plot is rather “busy”, but let’s just focus on the solid lines (AWGN or non fading channel) for now. Green is the 700D word error rate.

You can see the sharp “knee” in the 700D curve around -2dB; this is where the FEC falls over, and is common for systems that use FEC. In contrast the SSB (blue) and RADE (red) curves show a smoother change in performance with SNR – a more desirable outcome for a speech system. The best case/high SNR WER for 700D is pretty high compared to SSB and RADE – which is consistent with user feedback (some operators struggle with the speech quality of 700D). However experience has shown that trained operators can use 700D OK, just like trained SSB operators can communicate with low SNR SSB. There’s a window between about -1 and 4 dB SNR where 700D has a lower WER than SSB on AWGN (and presumably slow fading) channels.

Now let’s look at the fast fading channel (MPP) performance, the dotted lines. At low SNRs (e.g. around 5dB SNR), 700D is within a few dB of SSB, but at high SNRs SSB has a lower WER. These results are consistent with our on air experiences.

At all SNRs, RADE V1 has a lower WER than 700D, and indeed SSB.

BBFM fading channel training

Also in this rather busy month I’ve progressed the base band FM (BBFM) project, which uses machine learning to send speech over FM radios commonly used for VHF/UHF land mobile radio (LMR) (see BBFM posts). Over the past few months we have worked up a new set of math expressions for the BBFM channel, based on a “back to basics” study of FM demodulation from some textbooks that are about the same age as us! I used our new expressions to train a ML model that includes simulated LMR fading in the training dataset.

It’s early days, but we’ve obtained remarkably good results when simulating the fading from a vehicle at 60 km/hr at a very low -120 dBm. This is the received signal level where most existing LMR radios start to fall over even on benign, non-faded AWGN channels. Like this time last year with the HF version of RADE I’m getting “too good to be true” vibes, so I have reached out to Tibor Bece to back up these results with some bench tests on real UHF radio hardware.

RADE Ham Club Presentation

Continuing with the documentation theme I worked up a slide deck for RADE; pitched at a technical level designed to be “Ham friendly” for presentation at clubs and Ham conferences. The first presentation I made using these slides is now on YouTube and is a useful resource if you are interested in RADE. See the RADE Ham Club Presentation post.

RADE Ham Club Presentation

RADE is very new technology, so I’ve been putting some effort into documenting and explaining it. At the AREG March 2025 meeting I delivered a presentation on RADE, which has been kindly recorded and placed on YouTube:

Here are the RADE Presentation slides in PDF format.

This talk is pitched at a “Ham friendly” technical level for clubs and Ham conferences. It includes a section explaining how machine learning is different from classical radio design, where we “train” an AM radio diode detector as an example. I hope this is a useful resource for others who wish to present on RADE at their radio clubs and conferences, and for those who just want to learn a little more about RADE.

David Feb 2025 – Papers, ML EQ prototyping

RADE Documentation

One of the aims of this project is to document our work to a professional standard. This month I spent some time working on two research papers, one on the HF RADE work, and one on the baseband FM (BBFM) work. I’m also working on a presentation of RADE technology aimed at Hams, that I will present at my local AREG club in March.

The RADE work has been moving pretty fast over the past 12 months, so I’ve found writing up the work beneficial to help me collect my thoughts and prepare for further development. It’s very new technology, and a lot of people are curious about how RADE technology works. This all means it takes some time and effort to explain. Another good reason to document this work is to get it out of my head and into a form that others can work with in future (another one of our grant aims).

We hope to publish the papers later this year. By writing the papers we also hope to promote the project and help communicate our work at a professional level to commercial companies who may be interested in integrating RADE technology into their products.

ML Equalisation

RADE is a mixture of classical DSP and ML signal processing. One interesting design choice is how to partition the design – which chunks of signal processing should use old school DSP, and which ML?

For RADE V1 we use ML to generate QAM symbols, but classical DSP to “equalise” these symbols at the receiver. You can think of equalisation as removing any phase and frequency offsets in the received signal – a bit like fine tuning a SSB receiver. In regular data modems equalisation stops the received modem constellation from rotating or spinning, so the correct bits can be demodulated.
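For readers unfamiliar with equalisation, here is a minimal numpy sketch of the classical pilot-based idea (illustrative only, not the RADE V1 receiver code; all values are made up): a known pilot symbol gives the channel phase, and derotating by that estimate stops the constellation spinning.

```python
import numpy as np

rng = np.random.default_rng(2)
data = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 100)))  # QPSK
pilot = 1 + 0j                              # known symbol sent with the data

phase = 1.1                                 # unknown channel phase (radians)
noise = 0.05 * (rng.normal(size=101) + 1j * rng.normal(size=101))
rx = np.concatenate(([pilot], data)) * np.exp(1j * phase) + noise

phase_est = np.angle(rx[0] / pilot)         # channel estimate from the pilot
eq = rx[1:] * np.exp(-1j * phase_est)       # derotate the data symbols
```

Real HF equalisers also have to track the phase as it changes over time and frequency, which is why pilots are normally scattered across the OFDM grid.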

For RADE V2 I am prototyping the use of ML for the equalisation. The curves below show the performance of various schemes I have tested so far, with RADE V1 (blue) as the control. The “loss” is a measure of distortion, lower is better. You can see the loss decrease as SNR increases, just as we would expect.

Now in practice RADE V1 actually requires about 3dB more SNR once we add the classical DSP equalisation, so for a fair comparison it should be shifted 3dB to the right, making all of the curves within a few dB of each other. So the ML network is indeed performing the equalisation function, but it’s too early to say if we have something that can outperform the classical DSP approach used in RADE V1.

The yellow curve is intriguing – it suggests that with the right network we can get better speech quality than RADE V1 at high SNRs.

More work is required to work through the equalisation question, and it’s very much R&D rather than engineering, which makes the timeline for RADE V2 hard to predict. More next month…

David Jan 2025 – SNR estimation, Bandwidth, EQ, 2024 in review

At the start of this month I did battle with the problem of SNR estimation on the RADE V1 signal. As I have mentioned previously, this had some challenges due to the lack of structure in the RADE constellation. After a few false starts I managed to get something viable running using the properties of the pilot symbols. The plot below shows the estimated SNR against the actual SNR for a range of channels. In the -5 to 10dB range (of most interest to us) it’s within 1dB for all but the MPP (fast fading) channel, where the reported estimate reads a few dB lower than the actual (note that Es/No is roughly the same as SNR in this example).

I’ve started work on RADE V2, where we hope to use lessons learned from RADE V1 to make some improvements and develop a “stable” waveform for general Ham use. This month I have made some progress in jointly optimising the PAPR and bandwidth of the RADE signals. For regulatory purposes, the bandwidth of signals like OFDM is often specified in terms of the “occupied bandwidth” (OBW) that contains 99% of the power. The figure below shows the spectrum of a 1000 symbols/s signal, with the 1235 Hz 99% OBW marked in red.
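For reference, a 99% OBW measurement can be sketched as follows (my own illustration; the formal regulatory measurement procedure has more detail). The band edges are the frequencies that exclude 0.5% of the total power on each side:

```python
import numpy as np

def occupied_bw_99(x, fs):
    """99% occupied bandwidth: excludes 0.5% of total power at each band edge."""
    psd = np.fft.fftshift(np.abs(np.fft.fft(x)) ** 2)
    f = np.fft.fftshift(np.fft.fftfreq(len(x), 1 / fs))
    cdf = np.cumsum(psd) / np.sum(psd)
    lo = f[np.searchsorted(cdf, 0.005)]   # lower band edge
    hi = f[np.searchsorted(cdf, 0.995)]   # upper band edge
    return hi - lo

# sanity check on a 1 kHz real tone in a little noise: a real tone occupies
# +/-1 kHz of the two-sided spectrum, so the OBW should come out near 2000 Hz
fs, n = 8000, 8000
t = np.arange(n) / fs
rng = np.random.default_rng(4)
x = np.cos(2 * np.pi * 1000 * t) + 1e-3 * rng.normal(size=n)
```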


Machine Learning Equalisation

Also for RADE V2, I have been prototyping ML based equalisation, and have obtained good results for some examples using the BER of QPSK symbols as a metric. The plot below shows the BER against Eb/No for the classical DSP (blue), and two candidate ML equalisers (red and green, distinguished by different loss functions). The channel had random phase offsets for every frame, which the equaliser had to correct. The three equalisers have more or less the same performance.

These results show the equalisation function can be performed by ML networks, with equivalent performance to classical DSP.
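As a sanity check on BER-versus-Eb/No curves like these, an ideal QPSK receiver over AWGN should land on the textbook curve Pb = 0.5·erfc(√(Eb/No)). A quick simulation (illustrative, not the project test code):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(5)
nsym, EbNo_dB = 50_000, 4.0                  # assumed test point
bits = rng.integers(0, 2, (nsym, 2))         # 2 bits per QPSK symbol
sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # Es = 1

EbNo = 10 ** (EbNo_dB / 10)
No = 1 / (2 * EbNo)                          # Eb = Es/2 = 0.5, so No = Eb/(Eb/No)
noise = np.sqrt(No / 2) * (rng.normal(size=nsym) + 1j * rng.normal(size=nsym))
rx = sym + noise                             # perfect equalisation assumed

rx_bits = np.stack([rx.real > 0, rx.imag > 0], axis=1).astype(int)
ber = np.mean(rx_bits != bits)
theory = 0.5 * erfc(sqrt(EbNo))              # textbook QPSK BER on AWGN
```

An ML equaliser that matches this curve, despite random per-frame phase offsets, is doing its job; falling short of it quantifies the equalisation loss.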

Project Management

Quite a bit of admin this month, including time spent recruiting prospective new PLT members, updating budgets, and writing our annual report. Not as much fun as playing with machine learning, but necessary to keep the project running smoothly.

It was time to write our annual report for the ARDC who have kindly funded this project for the last two years. Writing this report underlined what a good year we had in 2024, some highlights:

  • The development and Beta release of the Radio Autoencoder RADE V1, which is well on the way to meeting our goals of being competitive with SSB at high and low SNRs. Special thanks to Jean-Marc Valin for your mentoring and vision on this project!
  • The BBFM project, paving the way for high quality speech on VHF/UHF land mobile radio (LMR) applications, in collaboration with Tibor Bece and George Karan.
  • New data modes to support FreeDATA, in collaboration with Simon DJ2LS.
  • The release of ezDV and continued maintenance of freedv-gui largely by Mooneer’s efforts.
  • Peter Marks joining our Project Leadership Team. He’s already making a big impact – thanks Peter!

David Dec 2024 – Testing RADE with Automatic Speech Recognition

An important goal of our project is improved speech quality over SSB at both low and high SNRs. We have anecdotal reports of good performance of RADE compared to SSB, but need an objective, controlled way of comparing performance. For speech systems this generally means ITU-T P.800 or P.808 standards based subjective testing. However this is complex and requires skills, experience and resources not available to our team.

A few months ago Simon, DJ2LS, suggested the use of Automatic Speech Recognition (ASR). More recently, when discussing the issue of subjective testing, Jean-Marc Valin also suggested ASR and provided suggestions for a practical test system. So I spent much of December building up a framework for ASR tests.

The general idea is to take a dataset of speech samples, pass them through simulations of SSB and RADE over HF radio channels, then use an ASR engine to detect the words in the received speech. A post processing system then compares the detected words to the original words and determines the word error rate (WER) as a performance metric. Our work uses the Librispeech dataset, and the Whisper ASR system.
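The WER metric itself is a word-level edit distance. Here is a minimal sketch (the project’s actual post-processing and text normalisation may differ):

```python
def wer(ref, hyp):
    """Word error rate: word-level edit distance divided by reference length.

    A simplified sketch -- real ASR scoring also normalises punctuation,
    numbers, etc. before comparing.
    """
    r, h = ref.lower().split(), hyp.lower().split()
    d = list(range(len(h) + 1))      # distance from empty reference to hyp[:j]
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,             # missed reference word (deletion)
                       d[j - 1] + 1,         # extra hypothesis word (insertion)
                       prev + (rw != hw))    # substitution (or exact match)
            prev = cur
    return d[-1] / max(len(r), 1)
```

For example, one wrong word in a six word sentence gives a WER of 1/6, or about 17%.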

These sentences are complex English sentences, spoken quickly with no contextual cues. I have trouble understanding many of them on the first listen. This is a much tougher test than the typical low SNR Amateur Radio contact where someone shouts their callsign 5 times then reports “5 by 9”. For example, here is one sample from the Librispeech dataset processed with SSB/RADE/original (listen to the original last); SSB and RADE were at about 6dB SNR on a MPP (fading) channel.

The plot below shows some initial results over 500 sentences. The x-axis is receiver SNR measured in a 3kHz noise bandwidth. The y-axis is the word error rate (WER). Green is RADE, and blue SSB. The solid lines are for an AWGN channel, the dashed lines the multi-path poor (MPP) fading channel. The dots (placed arbitrarily on the x-axis) in the lower right are controls, e.g. the FARGAN synthesizer used by RADE with no encoding, 4kHz band limited speech, and the original, clean speech.


A low word error rate (WER), say 5%, would correspond to an effortless “armchair copy”; a 30% WER could be at the limit of practical voice communication (1 in 3 words can’t be understood). The distance between the RADE and SSB curves shows the benefits of RADE, at least using this test.

For example, if you draw a line across the 10% WER level, RADE achieves this (dashed MPP curves) at 3dB, SSB at 12dB. The x-axis doesn’t include the PAPR advantage of RADE, which is roughly an additional 5dB when using a transmitter with the same peak power output (depending on how hard the SSB is compressed).

Also this month I have been working on SNR measurement of received RADE signals. This is quite challenging, due to the lack of structure in the ML-generated RADE constellation. At present I’m attempting a classical DSP approach using the pilot symbols. This will be the last feature we will add to RADE V1, as we’d like to use the lessons learned to start designing RADE V2.

David Nov 2024 – BBFM OTA tests & C Port & EOO Text

This month I conducted a successful test of the Baseband FM (BBFM) waveform, over a short UHF radio link on my bench. This demonstrates high quality, 8000 Hz audio bandwidth speech being transmitted over the air (OTA) using commodity FM UHF radios and machine learning. It’s early days, but the speech quality already appears competitive with analog FM and any VHF/UHF digital voice system I am aware of.

Here is a sample of the “ideal” BBFM audio (a perfect channel), and the audio through the UHF radio link. The initial word “G” is missing due to a sync issue that will be cleaned up soon.

The experimental system was a Yaesu FT-817 with a Rigblaster USB sound interface as the transmitter into a dummy load, and a Yaesu VX3 handheld with a USB dongle sound card as the receiver. I used the Python “passband” modem developed last month so the signal could be sent over the regular 300-3000 Hz audio bandwidth that commodity FM radios provide (i.e. no DC coupling to the FM modulator or special mods).

To test the modem I can send BPSK symbols instead of ML symbols – in this case I could see a bit of distortion on the scatter diagram. However when I plug the ML symbols back in the audio sounds just fine, indicating the system is quite robust to noise as expected. It’s early days so I haven’t set the deviation carefully or fine tuned the system, but this is a fine start.

C Port of Core ML

The next chunk of work from November was a C port of the Python core encoder and decoder at the heart of the RADE system. Fortunately, this is very close to RDOVAE that is now part of Opus, so much of the Opus ML code could be re-used, with the main change being a new set of weights. The C port resulted in a significant drop in CPU load, in fact it’s now hard to measure on my laptop.

Profiling suggests the remaining receiver Python DSP now dominates the CPU load. However I am reluctant to port this to C as (a) it’s complicated so this would take months and (b) I have some improvements planned for RADE V2 which, if successful, will make much of this DSP unnecessary.

End of Over Text

Unlike earlier FreeDV modes, RADE V1 does not at present have a way of sending small amounts of text over the channel (alongside the voice). This is particularly useful for “spotting” RADE signals, e.g. on FreeDV Reporter and PSK Reporter. We have plans for a text system in RADE V2, but this is several months away. As an interim solution for RADE V1, we are building up a text system that uses the currently empty “End of Over” frame to send digital data. It turns out we have room for 180 bits there, so every time an over ends, a chunk of text can be sent by the system. I have developed the modem DSP side of this, and it seems to work OK on simulated fading channels at moderate SNRs (e.g. 6dB).
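As a rough illustration of the bit budget, a simple 6-bit character packing would fit 30 characters of spotting text into a 180-bit frame. The alphabet and packing below are assumptions for illustration only, not the actual frame format:

```python
# hypothetical 6-bit alphabet for spotting text (space, A-Z, 0-9, punctuation)
CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/-."

def pack(text, nbits=180):
    """Pack up to nbits//6 characters into an nbits-long bit list (zero padded)."""
    bits = []
    for c in text.upper()[: nbits // 6]:
        v = CHARS.index(c)
        bits += [(v >> i) & 1 for i in range(5, -1, -1)]   # 6 bits, MSB first
    return bits + [0] * (nbits - len(bits))

def unpack(bits):
    out = ""
    for k in range(0, len(bits), 6):
        v = sum(b << (5 - i) for i, b in enumerate(bits[k:k + 6]))
        out += CHARS[v]
    return out.rstrip()                  # zero padding decodes to trailing spaces
```

Thirty characters is comfortably enough for a callsign and grid square, which is what spotting services like PSK Reporter need.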

Conference Paper

Finally, I have been working on a conference paper on the HF RADE system. This is new technology for HF speech communications, and combines several disparate technologies, e.g. machine learning, speech coding, OFDM, and HF radio. So I am putting in some effort to document and publish the work in a paper, hopefully at a conference (TBD) in 2025.

David’s Oct 2024 Update – Real Time RADE & BBFM

The working acronym for the Radio Autoencoder has been changed from RADAE to the more user-friendly RADE (pronounced “raid”).

This month I continued working on getting RADE V1 into a form where it can be used in real time. The hybrid Python/C model seems to be working out quite well, meeting our goal of enabling hams to use the waveform early, and allowing us to spot any significant bugs that we may have missed with the stored file test campaign. It also makes development really fast and efficient compared to coding in C.

To support the RADE release I wrote a RADE Introduction document, that covers the basics of the mode and a description of the waveform. Thank you to the test team for providing feedback on this document, and Yuichi, JH0VEQ, for a Japanese translation.

Initial reports of on air performance are encouraging, and match our experience from the stored file test campaign. This represents a significant step towards our goals for our ARDC funded project:

  • Improve speech quality to a level comparable to commercial codecs.
  • Develop a “rag chew” FreeDV mode with subjective speech quality comparable to SSB at high SNRs.
  • Improve low SNR operation such that FreeDV is superior to SSB over poor HF channels.

We are making good progress on all three goals, although it would be useful to perform some formal subjective tests to support the anecdotal reports. There is some work we could do to improve the usability of real world RADE, e.g. reduce PTT turn around delays, improved acquisition, and integration into SDRs.

RADE V1 is an interim step, and we need to keep moving forward. While it’s a neat way to get the mode “on air” quickly, the hybrid Python model is not the end goal, nor is it intended for widespread packaging and distribution. The next step will be a C port of the core RADE encoder/decoder, which will significantly lower the CPU load and bring us one step closer to a more general purpose C library version of RADE, suitable for distribution and integration into SDRs.

The Baseband FM (BBFM) work (see demos from last month) is also moving along nicely. This project is aimed at high quality speech over VHF/UHF radio. This month I have been developing a single carrier PSK modem that can be used over DC coupled or bandpass filtered FM radio channels. This will support on air experiments of high quality speech using off the shelf FM radios and handsets.

David’s FreeDV update – September 2024

From mid-August to mid-September, we conducted a Radio Autoencoder (RADAE) test campaign in two phases (a) stored files and (b) a prototype real time system. Ten people joined our test group, with many submitting stored file and real time test results. In particular I would like to thank Mooneer K6AQ, Walter K5WH, Rick W7YC, Yuichi JH0VEQ, Lee BX4ACP, and Simon DJ2LS for posting many useful samples, and for collecting samples of voices other than their own to test.

We are quite pleased with the results, here is a summary:

  1. It works well with most speakers, with the exception of one voice tested.  We will look into that issue over the next few months.
  2. Some of the samples suggest acquisition issues on certain very long distance channels, but this issue seems to be an outlier, perhaps an area for further work.
  3. RADAE works well on high and low SNR channels.  In both cases the speech quality is competitive with SSB.
  4. It works on local (groundwave), NVIS, and International DX channels. It works well for (most) males and females, across several languages.
  5. Prototype real time/PTT tests suggest it also works well for real time QSOs, no additional problems were encountered compared to the stored files tests. Mooneer will tell you more about that in his September report!

Selected Samples

I estimate we collected around 50 samples, here are just a few that I have selected as good examples. I apologise that I don’t have room to present samples from all our testers, however your work is much appreciated and has contributed greatly to moving this project forward.

Our stored file test system sent SSB and RADAE versions immediately after each other, so the channel is approximately the same. Both SSB and RADAE have the same peak power, and the SSB is compressed to around 6dB Peak to Average Power Ratio (PAPR). In each audio sample below, SSB is presented first.

Here is a sample of Joey K5CJ, provided by Rick W7YC. The path is over 13,680km, from Texas, USA to New South Wales, Australia (VK2), on just 25W. Measured SNR was 4dB. Note the fading in the spectrogram, you can hear RADAE lose sync then recover through the fade.

Using another sample of Joey, K5CJ (also at 25W), Rick has provided a novel way to compare the samples:

He writes:

RADAE is in the (R) channel & analog SSB is in the (L) left channel. Listen using stereo speakers, and slide the balance control L-R to hear the impact. Or, listen to it on your smart phone & alternately remove the L & R earbuds – wow. It demonstrates how very well RADAE does over a 13,680 km path!

Here is Lee, BX4ACP, sending signals from Taiwan to Thailand in a mixture of English and Chinese using 100W. The measured SNR was 5dB, and frequency selective “barber pole” fading can be seen on the spectrogram.

Here is Yuriko (XYL of Yuichi JH0VEQ) using 100W over a 846 km path from Niigata Prefecture to Oita Prefecture in Japan. The reported SNR was just 2dB. From the spectrogram of the RADAE signal, the channel looks quite benign with no obvious fading. However I note the chirp at the start has a few “pieces missing”, which suggests the reported SNR was lower than the SNR experienced by the RADAE signal a few seconds later.

Next Steps for HF RADAE

Encouraged by these results, the FreeDV Project Leadership Team (PLT) has decided to press on with the real time implementation of RADAE, and integration into freedv-gui, so that any ham with a laptop and rig interface can enjoy the mode. This work will take a little time, and involves porting (or linking) some of the Python code to C. Once again, we’ll start with a small test team to get the teething problems worked out before making a general release.

ML Applied to Baseband FM

To date the Radio Autoencoder has been applied to the HF radio channel and OFDM radio architectures. We have obtained impressive results when compared to classical DSP (vocoders + FEC + OFDM modems) and analog (SSB).

A common radio architecture for Land Mobile Radio (LMR) at VHF and UHF is the baseband FM (BBFM) radio, which is used for analog FM, M17, DMR, P25, DStar, C4FM etc. For the digital modes, the bits are converted to baseband pulses (often multi-level) that are fed into an analog FM modulator, passed through the radio channel, and converted back into a sequence of pulses by an analog FM demodulator. Channel impairments include AWGN and Rayleigh fading due to vehicle movement. Unlike HF, low SNR operation is not a major requirement; instead voice quality, spectral occupancy (channel spacing), flat fading, and the use of a patent free vocoder are key concerns.
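The pulses-through-FM signal chain described above can be sketched with a toy simulation (rectangular pulses and arbitrary parameters chosen for illustration; real systems use proper pulse shaping, and BBFM replaces the discrete levels with ML-generated continuously valued symbols):

```python
import numpy as np

fs, fd, spb = 8000, 3000, 8          # sample rate, peak deviation (Hz), samples/symbol (assumed)
rng = np.random.default_rng(6)

levels = np.array([-1.0, -1 / 3, 1 / 3, 1.0])   # 4-level pulses (2 bits/symbol)
pulses = rng.choice(levels, 200)
m = np.repeat(pulses, spb)           # rectangular pulse shaping, for simplicity

# FM modulator: integrate frequency to get phase; constant envelope output
phase = 2 * np.pi * fd / fs * np.cumsum(m)
tx = np.exp(1j * phase)

# channel: a little AWGN (no fading in this toy version)
rx = tx + 0.01 * (rng.normal(size=m.size) + 1j * rng.normal(size=m.size))

# FM demodulator (discriminator): phase difference between adjacent samples
disc = np.angle(rx[1:] * np.conj(rx[:-1]))
demod = disc * fs / (2 * np.pi * fd)  # rescale back to pulse amplitudes

mid = demod[spb // 2 :: spb]          # sample mid-symbol to recover the pulses
```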

We have been designing a hybrid machine learning (ML) and DSP system to send high quality voice over the BBFM channel. This is not intended to be a new protocol like those listed above, rather a set of open source building blocks (equivalent to vocoder, modulation and channel coding) that could be employed in a next generation LMR protocol.

It’s early days, but here are some samples from our simulated BBFM system, with an analog FM simulation for comparison.

Original

BBFM, CNR=20dB

BBFM, CNR=20dB, Rayleigh Fading at 60 km/hr

Analog FM, CNR=20dB

CNR=20dB is equivalent to a Rx level of -107dBm (many LMR contacts operate somewhat above that). The analog FM sample has a 300-3100Hz audio bandwidth, 5kHz deviation, and some Hilbert compression. For the BBFM system we use a pulse train at 2000 symbols/s, that has been trained using a simulation of the BBFM channel. As with HF RADAE, the symbols tend to cluster at +/-1, but are continuously valued. Compared to the HF work, we have ample link margin, which can be traded off for spectral occupancy (channel spacing and adjacent channel interference).

This work is moving quite quickly, so more next month!

David’s FreeDV Update – August 2024

Many digital voice systems have the ability to send small amounts of digital data in parallel with the compressed voice. For example in FreeDV we allocate a few bits/frame for call sign and grid square (location) information. This is a bit complex with RADAE, as we don’t actually send any “bits” over the system – it’s all analog PSK symbols.

So I’ve worked out a way to inject 25 bits/s of data into the ML network alongside the vocoder features. The ML magic spreads these bits across OFDM carriers and appears to do some sort of error protection, as I note the BER is quite low and it shows some robustness to multipath. I can tune the bit error rate (BER) by adjusting the loss function and bit rate; a few percent BER at low SNRs (where the voice link falls over) is typical.

The plot below shows the “loss” (RMS error) of the vocoder features as a function of SNR (energy per symbol / noise density, Es/No). The vertical axis is the mean square error of the vocoder features through the system – lower is better. It’s useful for comparing networks.

So “red” is model17, which is our control with no auxiliary data. Yellow was my first attempt at injecting data, and purple the final version. You can see purple and red are almost on top of each other, which suggests the vocoder speech quality has barely changed, despite the injection of the data. Something for nothing? Or perhaps this suggests the data bits consume a small amount of power compared to the vocoder features.

Much of this month was spent preparing for the August test campaign. I performed a dry run of some over the air (OTA) tests, leading to many tweaks and bug fixes. As usual, I spent a lot of time on making acquisition reliable. Sigh.

The automated tests (ctests) were invaluable, as they show up any effects of tuning one parameter on other system functions. They also let me test in simulation, rather than finding obscure problems through unrepeatable OTA tests. The loss function is a very useful measure for trapping subtle issues. A useful objective measure of speech quality is something I have been missing in many years of speech coding development. It’s sensitive to small errors, and saves a lot of time with listening tests.

I have developed a test procedure for the stored file phase of the August 2024 test campaign. The first phase of testing uses stored files (just like the April test campaign) but this time using the new PAPR optimised waveform and with a chirp header that lets us measure SNR. To make preparation and processing easier, I have developed a web based system for processing the Tx and Rx samples. This means the test team can decode RADAE samples by themselves, without using the command line Linux tools. A test team of about 10 people has been assembled and a few of them have already posted some interesting samples (thanks Yuichi, Simon, and Mooneer).

If you would like to actively participate in RADAE testing, please see this post.

The next phase of testing is real time PTT. The Python code runs in real time, so I have cobbled together a bash script based system (ptt_test.sh) – think of it as a crude command line version of freedv-gui. It works OK for me – I can transmit in real time using my IC-7200 to KiwiSDRs, and receive off air from the IC-7200. By using loop back sound devices I can also receive from a KiwiSDR. The script only runs on Linux and requires some knowledge of sound cards, but if I can find a few Linux-savvy testers we can use ptt_test.sh to obtain valuable early on-air experience with RADAE. This is an opportunity for someone to make the first live RADAE QSO.

An interesting side project was working with Mooneer to establish the feasibility of running RADAE on ezDV. Unfortunately, this looks unlikely. Modern machine learning systems really require a bit more CPU (like a 1GHz multi-core machine). Fortunately, this sort of CPU is pretty common now (e.g. a Raspberry Pi or cell phone). Once RADAE matures, we will need to reconsider our options for a “headless” adapter type platform.

Radio Auto Encoder Test Team

We are ready to start another test campaign for the radio autoencoder (RADAE). This will consist of stored file tests (like the April campaign), and some real time PTT testing. The draft test procedure is here.

If you would like to join the team testing RADAE, please reach out to us directly or via the comments below.