To use FreeDV with commercial radios we have developed a series of “rig adapters” such as the SM1000 and now ezDV. These are embedded devices that run “headless” (no GUI) and connect between your SSB radio and a microphone/headset, allowing the radio to run FreeDV.
Our latest prototype speech waveform is RADAE, which is showing promise of improved voice quality and robustness over our existing FreeDV modes and indeed SSB. RADAE uses machine learning (ML) and requires significantly more CPU and memory than existing FreeDV modes.
We would like to know if we can run RADAE on the ezDV, which is based around an ESP32-S3.
The RADAE “stack” consists of the RADAE encoder and decoder, and the FARGAN vocoder. The RADAE encoder and decoder each require around 80 MMAC/s (million multiply-accumulates per second) and 1 Mbyte of RAM for the weights. The FARGAN vocoder (used only on receive) requires 1 Mbyte of weights and around 300 MMAC/s of CPU, so the receive-side CPU load is dominated by FARGAN. As the weights are quantised to 8 bits, the MACs can use 8-bit multiply-accumulates, which suits many machines with 8-bit SIMD support.
In practice, you want plenty of overhead, so for a 300 MMAC/s algorithm a machine with more than 3x this capability will make the port “easy” (e.g. a recompile with a little SIMD assembly language for the heavy lifting). It also leaves room to tweak the algorithm and run other code on the same machine without worrying about real-time issues. If the CPU is struggling, you will spend a great deal of time optimizing the code and the algorithm, time that could be better spent elsewhere.
ezDV is based on an ESP32-S3 CPU, which has two cores running at about 240 MHz, 512 kbytes of local (fast) memory, and 8 Mbytes of slower PSRAM accessed over a SPI bus. It does have hardware acceleration for integer multiply-accumulates.
To answer our question, we developed a simple test program to characterize the ESP32-S3. Many ML operations are “dot products”: multiplying two vectors together element by element and summing the results. So we generated a 1 Mbyte matrix in PSRAM and performed a dot product with it one “row” at a time; the other input vector was held in fast internal memory. The idea was to exercise both the CPU and the memory access performance in a way similar to RADAE, but without the hassle of porting the entire algorithm across.
Results for the 1 Mbyte matrix in PSRAM:

Data type | SIMD? | Raw time (us) | MMAC/s |
---|---|---|---|
int8 | No | 34887 | 30 |
int8 | Yes | 8490 | 123 |
int16 | No | 47608 | 22 |
int16 | Yes | 16563 | 63 |
int16 | Yes (using ESP-DSP matrix multiply) | 16479 | 63 |
int32 | No | 42689 | 24 |
Results for a smaller matrix that fits in internal RAM:

Data type | SIMD? | Raw time (us) | MMAC/s |
---|---|---|---|
int8 | No | 486 | 33 |
int8 | Yes | 84 | 195 |
int16 | No | 631 | 25 |
int16 | Yes | 98 | 167 |
int16 | Yes (using ESP-DSP matrix multiply) | 88 | 186 |
int32 | No | 419 | 39 |
Here is the source code for the program used to measure the ezDV performance.
As shown above, the performance of the matrix multiplication operation on the ESP32-S3 is highly dependent on the size of the matrices involved. For matrices that fit entirely within internal RAM (either because the data fits within the internal RAM-backed PSRAM cache without many cache misses, or because it was originally allocated entirely within internal RAM), performance is fairly reasonable for a microcontroller. In other applications, the ESP32-S3 is able to perform inference on smaller ML models with good performance.
Unfortunately, with larger matrices the system becomes memory bandwidth limited extremely quickly. For instance, with int16 data ESP-DSP’s matrix multiplication function is slightly more performant than handwritten SIMD assembly when the dataset fits entirely in internal RAM, but both are limited to approximately the same MMAC/s once the system repeatedly has to go out to PSRAM. int8 with SIMD additionally performs 2x better than int16 because each MAC fetches only half as many bytes from PSRAM.
These results suggest we will not be able to run the RADAE stack on ezDV. While unfortunate, it is useful to reach this conclusion early so we can consider alternatives for an adapter-style implementation of RADAE.
We thought this characterization testing might be useful for others using the ESP32 for ML and other CPU-intensive applications, so as part of our open source project philosophy we have written it up here to share.
This post was jointly written by Mooneer and David.