The downsampling experiment

As I mentioned in my previous post, my “optimizations” of the multi-mode receiver code caused a sudden loss in performance. This was a big surprise, because all I did was replace two filters with a single one, which I would have expected to yield a performance gain and certainly not a loss. What also changed was that the sample rate going into the demodulators went from 50 ksps to 250 ksps, leaving it to the demodulators to downsample back to 50 ksps. I suspected that this might have caused the increased CPU load, and I set up a simple experiment to confirm it.

The setup is a flow graph that is as simple as possible while still being representative of the receiver: a USRP source, a complex band pass filter, an AM demodulator and a spectrum scope. The only difference between the two setups is where the 250 ksps → 50 ksps downsampling happens: in Method 1 we downsample already in the band pass filter, while in Method 2 we downsample in the AM demodulator.
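The actual flow graphs are built in GNU Radio, but the difference between the two methods can be sketched with plain NumPy/SciPy. The tap count, cutoff frequency and input signal below are hypothetical; the point is only where the decimation by 5 takes place:

```python
import numpy as np
from scipy import signal

fs_in = 250_000          # input sample rate (ksps as in the experiment)
fs_out = 50_000          # rate the demodulator wants
decim = fs_in // fs_out  # 5

# Hypothetical filter: 64 real taps, 10 kHz cutoff.
taps = signal.firwin(64, 10_000, fs=fs_in)

# One second of complex baseband noise as a stand-in for the USRP source.
rng = np.random.default_rng(0)
x = rng.standard_normal(fs_in) + 1j * rng.standard_normal(fs_in)

# Method 1: decimating filter -- only every 5th output sample is computed.
y1 = signal.upfirdn(taps, x, up=1, down=decim)

# Method 2: filter at the full 250 ksps, then the "demodulator" decimates.
y2 = signal.lfilter(taps, 1.0, x)[::decim]
```

Both paths produce the same 50 ksps signal; the difference is that Method 2 computes five times as many filter outputs only to throw most of them away.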


Downsampling experiment: Method 1


Downsampling experiment: Method 2

As you can see in the CPU load graph below, the difference between the two methods is very significant. On the computer the experiment was executed on, this practically means that with Method 1 we could have gone up to a 4 Msps USRP rate, while with Method 2 we could only have gone up to 1 Msps.
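The size of the gap is roughly what a back-of-the-envelope count of multiply–accumulate operations predicts. Assuming a hypothetical N-tap FIR filter, a decimating filter only computes outputs at the low rate, while a full-rate filter does the same per-output work five times as often:

```python
n_taps = 64           # hypothetical filter length
fs_full = 250_000     # full input rate in samples/s
decim = 5

# Method 1: outputs computed at 50 ksps only.
macs_method1 = n_taps * fs_full // decim   # MACs per second
# Method 2: outputs computed at the full 250 ksps, most then discarded.
macs_method2 = n_taps * fs_full

print(macs_method2 // macs_method1)  # → 5
```

A 5× reduction in filter work lines up well with the observed jump in usable USRP rate from 1 Msps to 4 Msps.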

Experiment results

I will now go back to the GQRX receiver and modify it to use Method 1.