Applying filtering to the output signal

In the recent discussion about testing phono stages I realized that the RIAA filter weighting is applied to the QA40x input waveform rather than to the sig gen output waveform, as is done in the real world. Feeding a phono stage a constant-amplitude signal introduces an unrealistic 40 dB range, leaving the signal at 20 Hz 100X larger, relative to a real cartridge signal, than the one at 20 kHz.

Is there a way to add the filtering to the output chirp tone (before the DUT) rather than at the QA40x input (after the DUT)?


When I test a phono stage, I use a level of -46 dBV (5 mV) for MM. You want it to be a constant level so you can see how well the phono stage adheres to the RIAA equalization slope, which is easy to do if you add RIAA weighting with your QA40x. The device below may be of some interest to you for some of your testing:

For a 5 mV input signal @ 1 kHz, the input @ 20 Hz should be 0.5 mV and @ 20 kHz should be 50 mV. Applying the RIAA filter after the fact doesn’t help this. I currently use a reverse RIAA filter, but the problem with those is accuracy and consistency. The left and right channels of mine have ±0.2 dB variations from each other at some points in the filter, and that makes getting accuracy a moving target. Being able to use a data-based filter takes one source of error out of the equation, but it has to happen at the source.
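Those numbers fall straight out of the standard RIAA time constants (3180 µs, 318 µs, 75 µs). A quick sketch in plain Python, assuming the textbook playback transfer function, confirms the roughly ±20 dB of pre-emphasis around 1 kHz:

```python
import math

# Standard RIAA playback (de-emphasis) time constants, in seconds
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f):
    """Unnormalized magnitude of the RIAA playback curve, in dB."""
    w = 2 * math.pi * f
    num = math.hypot(1.0, w * T2)                            # zero at 318 us
    den = math.hypot(1.0, w * T1) * math.hypot(1.0, w * T3)  # poles at 3180 us, 75 us
    return 20 * math.log10(num / den)

ref = riaa_playback_db(1000.0)  # normalize so 1 kHz is 0 dB

for f in (20.0, 1000.0, 20000.0):
    rel_db = riaa_playback_db(f) - ref
    # A realistic cartridge signal carries the inverse (record) pre-emphasis,
    # so the drive level is the 5 mV reference scaled by the inverse of playback gain
    level_mv = 5.0 * 10 ** (-rel_db / 20)
    print(f"{f:7.0f} Hz: playback {rel_db:+6.2f} dB rel. 1 kHz -> drive {level_mv:6.2f} mV")
```

This lands close to the 0.5 mV / 50 mV round numbers (about 0.54 mV and 48 mV); the exact 20 Hz to 20 kHz span is ~38.9 dB, commonly quoted as 40 dB.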


With all due respect, we are talking about vinyl, so I would not care about ±0.2 dB unless it is more of a personal challenge to achieve greater accuracy, or, I suppose, if you were designing something to sell, it would look nice. Also, at 1 kHz the RIAA curve has 0 dB gain or loss.


Starting with intrinsic error that can be avoided is simply bad practice. It doesn’t take too many “oh, it’s only 0.2 dB”s in a signal/measurement chain to add up to something substantial. One of the reasons I got the QA40x was to eliminate a possible source of error, only to realize that it may introduce another source of measurement error. Plotting distortion vs. frequency, for example, becomes meaningless when the filter is applied after the fact.


Hi @dave, there’s not a way to add the weighting to the output prior to the data hitting the DAC. But you should be able to reduce your errors with a combination of absolute signal level and FFT size.

Take a look at loopback frequency response normalized to 0 dB with an FFT size of 64k and a -60 dBV level (note that a shorter FFT means less time spent at each frequency, and more noise).
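The FFT-size tradeoff is easy to put numbers on. Assuming a 48 kHz sample rate (an assumption on my part; adjust if you run the analyzer at a higher rate), bin width and acquisition time per FFT work out as:

```python
fs = 48_000  # sample rate in Hz (assumed; the QA40x supports higher rates too)

for n_fft in (8_192, 16_384, 65_536):
    bin_width = fs / n_fft   # Hz per FFT bin: narrower bins reject more noise
    acq_time = n_fft / fs    # seconds of signal captured per acquisition
    print(f"{n_fft:>6}-point FFT: {bin_width:6.3f} Hz/bin, {acq_time:5.2f} s per acquisition")
```

So a 64k FFT gives sub-Hz bins but spends about 1.4 s per acquisition, which is why the longer FFT trades speed for a lower noise floor.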

Let’s look again at the -60 dBV level with 1/48th octave smoothing (see the FR context menu). This doesn’t help much at the low end, but it cleans up the high end a lot.
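Fractional-octave smoothing is just a frequency-proportional moving average: each point becomes the mean of the bins within ±1/96 octave of it (for 1/48-octave smoothing). A minimal sketch of the idea, my own illustration rather than the QA40x’s exact algorithm:

```python
def smooth_fractional_octave(freqs, mags_db, fraction=48):
    """Replace each point with the mean of all bins within +/- 1/(2*fraction) octave."""
    half = 2 ** (1.0 / (2 * fraction))  # half-window expressed as a frequency ratio
    out = []
    for f in freqs:
        lo, hi = f / half, f * half
        window = [m for fi, m in zip(freqs, mags_db) if lo <= fi <= hi]
        out.append(sum(window) / len(window))
    return out

# Example: a flat 0 dB response with a single 3 dB noise spike at 1 kHz
freqs = [1000.0 * 2 ** (i / 192) for i in range(-20, 21)]  # 1/192-octave spacing
mags = [0.0] * len(freqs)
mags[20] = 3.0
smoothed = smooth_fractional_octave(freqs, mags, fraction=48)
# The spike is averaged down toward the flat neighbors; flat regions stay flat
```

This is why smoothing helps more at the high end of a log sweep: the windows there span many FFT bins, so more noise gets averaged out.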

And then take a look at the same with a level of -40 dBV (no smoothing):

That’s really clean! And the uncertainty is reduced quite a bit even without smoothing. If you run it 5 times and put them all on a single graph, you can see it’s pretty tight:

OK, so the key is to ensure you are hitting the ADC with at least -40 dBV or so. If your pre-amp for some reason cannot handle a higher input, then I think a simple amp can help here. Here’s a -60 dBV sweep into a 30 dB low-noise amp with 1/48th octave smoothing:

So, the summary is to make sure your FFT is large enough (64k or so), and then try to keep the level into the ADC better than -40 dBV. And if you can’t do that, then try a little smoothing and/or run the DUT into a fixed-gain amp and then run that into the analyzer. And of course, make sure you are on the most sensitive setting on the analyzer.

PS. The FR measurement is a very sensitive complex division operation, and so very small changes in input (due to noise) will manifest as sizable changes in apparent gain.
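A toy model makes that sensitivity concrete. If a noise term adds in phase to the output FFT bin, the apparent gain swings by 20·log10(1 + |noise|/|signal|); the numbers below are illustrative, not the QA40x’s internals:

```python
import math

def worst_case_gain_error_db(snr_db):
    """Apparent gain error when a noise term adds in phase to the output FFT bin."""
    noise_ratio = 10 ** (-snr_db / 20)       # |noise| / |signal| in the bin
    return 20 * math.log10(1 + noise_ratio)  # constructive (worst-case) addition

for snr in (20, 40, 60):
    print(f"{snr} dB SNR in the bin -> up to {worst_case_gain_error_db(snr):.3f} dB gain error")
```

With only 20 dB of SNR in a bin you can see ~0.8 dB of apparent gain ripple, which is consistent with the roughly 1 dB of uncertainty visible near the noise floor in the plots above.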


Hey Matt,

I have no problem doing the measurements at individual frequencies. The problem is that for each test the sig gen frequency and amplitude must be adjusted in accordance with the RIAA curve to get a level playing field at the output of the DUT. Simply applying a weighting after the fact only gives realistic signal levels over a small range. It is also unclear to me how the input attenuator comes into play here. It has a 42 dB range, and the pre-weighting signal also has a 40 dB range. This suggests to me that the input atten should be set to 0 dB and the input signal level @ 20 Hz adjusted for best performance on the QA40x. Then by the time you get to 20 kHz, the input attenuator should be set to -36 or -42 dB to attempt to keep the measurement playing field level. It is also important that the input level @ 1 kHz is representative of what the DUT is expected to see.

My goal here is simply quick, accurate measurements of a phono stage. I am beginning to see that the only option for this is to use an external iRIAA filter, measure it against an ideal weighted filter, and then create a new correction file taking into account any variance between the two. This could be done for both the L & R “prefilter” and to correct to a flat frequency response for both channels. As long as everything remains stable over time, a quick, accurate frequency sweep of a phono stage could be done to 90 kHz.
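The bookkeeping for that correction is simple: sweep the iRIAA alone, subtract the ideal inverse-RIAA curve, and store the per-frequency, per-channel residual to apply later. A rough sketch with made-up numbers and a made-up CSV layout, since I don’t know the QA40x calibration file format:

```python
import csv, io

# Hypothetical measured response of an external iRIAA filter, dB rel. 1 kHz,
# stored as (left, right) per frequency -- illustration only, not real data
measured = {20.0: (-19.10, -19.30), 1000.0: (0.00, 0.05), 20000.0: (19.80, 19.50)}

# Ideal inverse-RIAA targets at the same frequencies, dB rel. 1 kHz
ideal = {20.0: -19.27, 1000.0: 0.0, 20000.0: 19.62}

buf = io.StringIO()  # stand-in for a real file on disk
writer = csv.writer(buf)
writer.writerow(["freq_hz", "corr_left_db", "corr_right_db"])
for f in sorted(measured):
    left, right = measured[f]
    # Correction = ideal minus measured, i.e. what to add back in software
    writer.writerow([f, round(ideal[f] - left, 3), round(ideal[f] - right, 3)])

print(buf.getvalue())
```

With a dense sweep instead of three spot frequencies, the same subtraction yields a full correction table for each channel.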


Hi @Dave, so are you using discrete tones to test the amp?

Below is a sweep (using Frequency Response button) of a Pyle PP444 phono preamp. I then normalized the response at 1 kHz to be 0 dBr. This is with -60 dBV input level, and there is no smoothing. The curve is roughly as expected. This measurement takes about 1 minute to get the cables set up, and then it makes a full sweep about twice a second.

Next, I can apply the RIAA Playback Response weighting file and I get the following. Note that if the preamp were perfect, this would be a flat line:

So, this tells us the PP444 is lacking several dB at the low end, and about 1 dB hot at the high end.

If I switch to +/- 1 dB scales, I get the following:

Here we can see the noise is giving us about 1 dB of uncertainty at the upper end of things. You can either eyeball it, apply a bit more signal, apply a bit of smoothing, or use the external amp (all outlined above) to help with the noise. With 1/48th octave smoothing, we get:

So, in short, determining the correctness of an RIAA phono pre should be a very quick job. Literally just a few minutes! Please let me know what you are seeing.

The way you illustrate is how I have played with it. Using an ideal mathematical RIAA model was really appealing to me, and it wasn’t until I realized that the filtering happens after the DUT that things got interesting for me. Essentially my problem is the 40 dB range of input signal variation from a typical record vs. the flat output signal that the QA40x provides. Some will say it shouldn’t matter, but starting with that kind of known error in what is expected to be a precision setup just seems like bad practice to me.

Hi @Dave, the DUT will be linear up to a certain point, and inside that region you can hit it at whatever level you wish and it won’t matter: there will be no amplitude error.

I ran the plug-in “Amp Gain And Distortion vs Amplitude” with a 100 Hz test tone stepped from -80 to -30 dBV on the Pyle PP444 phono pre. The plot is below. You can see that all the way up to -38 dBV input, the amp has perfect gain linearity at that frequency. You can treat it however you wish and there won’t be any gain errors, because you are in the linear region. In other words, as long as you stay in this linear region, you don’t have anything to worry about. But you do have to strike the balance between the risk of the onset of clipping AND not hitting the QA403 with enough amplitude for the measurement to be clean.

In short, I don’t think you have to worry about any error creeping in as long as you are operating the preamp outside the regions where it might exhibit gain compression or where noise begins to dominate.

So you have a 42 dB range of linearity that you need to hit with a signal that has a 40 dB range… not much room for error when you consider that the linearity plot will be a moving target wrt frequency. Running the same test at 20 Hz and 20 kHz should point out my issue. Now add to this the idea that I am looking at phono circuits that use devices where linearity varies with various combinations of applied amplitude and frequency. These situations all involve inductors/transformers at low signal levels, like those found in MC step-up transformers and LCR RIAA correction circuits, and adding a 40 dB range of amplitude on top of a musical-content dynamic range of 70+ dB makes the window of acceptable use rather tiny.

Hi @Dave, there’s much more than 42 dB of range. That is just what is shown on the particular plot. The analyzer itself has probably 142 dB of range. Let me see if the SPKR compression plugin can be ported quickly. That will let you subject the DUT to a chirp with an increasing level and should reveal if there’s anything odd going on at the different levels.

But you can do that manually too: just run a chirp at different levels, and then plot them all on the same graph. In the plot below, this is with output levels of -45, -50, -60, -70, -80 and -90 dBV running into the PP444 phono pre.

You can see from the plots they are all basically the same shape until you get to -90 dBV. That is probably the noise floor of the preamp that is causing the divergence. You’ll also notice hum components getting larger.

But you should be able to do this type of plot on your pre-amp and convince yourself of an output level that has plenty of room. And then use that to characterize the preamp. And as noted early on in the post, if you absolutely positively wanted to reliably dig way, way down, then use smoothing or, if that had too much uncertainty, then use a 30 dB low-noise amp. But even as is, the QA403 can likely characterize your preamp deep into the noise of the preamp.

From the plot above, the green trace (-50 dBV) and the red trace (-60 dBV) look really safe as a place to sweep and ensure you are still squarely in the linear region of the preamp.

The -60 dBV is 1 mV, which is indeed a good place to test, and I have no doubt the QA40x can give the needed resolution, so the linearity of the test setup has never been my concern. What I am trying to look at is the linearity and distortion characteristics of the DUT with applied signal. The whole point of doing the test is to check the real-world behavior, and substituting a signal that is +20 dB @ 20 Hz and -20 dB @ 20 kHz from a typical 1 kHz reference complicates what should be a simple task. Assuming the results of a linearity test are correct because the device you are trying to test is linear is a circular argument. Going back to inserting a passive iRIAA filter has its own set of problems: with the 40 dB loss you now have 80 mV as a maximum drive signal, which is limiting for distortion testing at high frequencies (another area of interest for me).

Assuming the results of a linearity test are correct because the device you are trying to test is linear is a circular argument.

Hi @Dave, not sure I agree. Gain compression is a proxy for distortion. If you want, you can sweep distortion at the various levels and when you see the 2H and 3H rise, you are very close to the compression point. So, whether you want to sweep gain or sweep distortion, you are still looking for the same thing.

Can you show a measurement you have made that you don’t trust?

It is not that I don’t trust the measurement. It is that the measurement methodology is inherently flawed because it presents the DUT with an input signal that it will never see in practice.

This is why I started this topic specifically asking about the possibility of a “pre-filtered” output tone from the QA40x, to avoid the hassle of coming up with a workaround. Ultimately I am looking for a single test setup that will allow me to easily sweep frequency with high accuracy, and I also want to be able to sweep distortion against frequency at varying amplitudes to assess overload characteristics wrt frequency. Adding a correction file to a passive filter at the input is a good start, but then the limited output available to overdrive the DUT will require an additional device for gain, and then you will be testing the distortion spectra of the extra gain stage in addition to the DUT. A pre-filtered output signal would sure make things easy and allow a proper array of tests to be done quickly and easily.


Hi @Dave, use the FR sweep to ensure you have the filter shape correct. And then use tones, at the level you want for each band, to ensure the THD is where you need it to be. Just run a series of low-frequency tones, a series of mid, and a series of high. And for each series, use whatever level you wish at each frequency.

But I just don’t see where pre-distorting the sweep is going to reveal anything new. I’m happy to be convinced if you have some data, however.

The compression plot I mentioned will be ready for the next release. Below, I used it to sweep the PP444 phono pre-amp (starting at -90 dBV output) and you can see there is tons of margin on either side at -55 dBV. And you can clearly see the onset of clipping in the low-end. But with a -55 dBV stimulus, there’s nothing to worry about in terms of errors or uncertainty on this particular preamp.

I am trying to avoid the use of discrete tones and ultimately want the playing field level, so to speak, so the input tones and resultant overload behavior are representative of the signal that is fed to the input of the phono stage. It was Chicago’s post that alerted me to this issue. His results as posted in this image gave me cause for concern on several levels.

According to these plots if I am interpreting them correctly the lowest signals input to the DUT have the most distortion and as signal level goes up (aside from the obvious clipping) the distortion goes down. This is 100% the opposite of what I would expect and I am trying to reconcile the cause of this behavior. (I’m guessing noise?)

[Image: Screen Shot 2022-02-24 at 2.17.45 PM]

I would simply like to recreate this set of plots with an “RIAA-corrected” input to the DUT and see the results. Moving to an iRIAA is of course an option, but then, as mentioned above, I run out of signal to fully clip the stage. I am specifically trying to look at overload characteristics, and while using discrete tones scaled to the right amplitude for each frequency would solve all the issues, getting that info into nice clean plots is quite tedious and time-consuming.

I really do appreciate this discussion and am glad you are here to support your product!


In addition, once I get a good set of THD plots of signal level vs. frequency, I need to then look at the individual harmonic components, but at this point I am just slowly learning to crawl my way through the QA40x capabilities.

Hi @Dave, these are THDN plots via stepped sines, so if your signal is close to the noise floor, then THDN is dominated by noise and will look bad. My guess is those three big humps are too close to the noise, maybe? When the curves get mashed together it’s hard to tease out what is what. But, that can explain why lower levels might look worse.

Here’s an example of that. It’s a perfectly fine low-level static signal; it’s just lost in the noise, so THDN looks really bad. At the other extreme, an overdriven circuit will be awash in harmonics, so its THDN can look bad too.
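A toy model shows why the low levels look worst. With a fixed absolute noise floor and a distortion residual that is a fixed fraction of the signal, THD+N is sqrt(distortion² + noise²)/signal, and the noise term takes over as the level drops. The numbers below are illustrative, not measurements of any real preamp:

```python
def thdn_percent(signal_dbv, dist_frac=0.0005, noise_dbv=-110.0):
    """THD+N (%) with distortion at a fixed fraction of signal plus a fixed noise floor."""
    signal = 10 ** (signal_dbv / 20)  # RMS volts
    noise = 10 ** (noise_dbv / 20)    # RMS volts, fixed absolute noise floor (assumed -110 dBV)
    dist = dist_frac * signal         # 0.05 % distortion residual, scales with signal
    return 100 * (dist ** 2 + noise ** 2) ** 0.5 / signal

for level in (-40, -60, -80, -90):
    print(f"{level:+} dBV signal -> THD+N {thdn_percent(level):.3f} %")
```

With these assumptions, THD+N climbs by roughly a factor of ten for every 20 dB the signal drops once noise dominates, which is exactly the "lower levels look worse" behavior in the stepped plots.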

I think before you start stepped sine tests, you need to first establish what you expect at the limits of your design. For example, start with a spec of what you need for input range level and distortion at 20 Hz, 100 Hz…validate that understanding with tones you manually control so that you can see the spectrum and understand where things are breaking down. And then, once you have that understanding, you can use those limits to inform your stepped sine sweep. If you just start running stepped sine sweeps, you end up with regions that don’t make sense and are confusing.

PS. I think in the exchanges above I was too fast and loose with the terms “swept” and “stepped”, and that confused the language and/or I misunderstood, so I should be clearer:

Stepped Sines are the automated stepping of a sine wave. Your THDN plot is stepped sines.

Frequency Response Chirp is the use of an exponential chirp to learn the frequency response of the DUT.

To further clarify:

Frequency Response via chirp I think is very doable all in one shot on RIAA. But THD and THDN will require tones that are tailored (in amplitude) for each region and you won’t be able to do those in one shot.

It would be a sizeable job to apply the RIAA curve to a chirp. But it might not be so bad to apply RIAA weighting to the Amp THD versus Frequency options (the plugin you used). That is, there could be a check box where you specify “apply RIAA equalization”, and then the level you specify becomes the 1 kHz level, and the other frequencies are compensated accordingly. More study will be needed, but I think I get the issue now.


I looked at a set of stepped plots of THD+N driving an iRIAA and came up with these.

The full output of the QA40x is enough to clip the input at high frequencies. At 6 kHz the distortion decreases as the applied signal decreases, as expected, but below that, at the lower input levels, it appears as if noise dominates.

It seems, watching the FFT, that the dominant noise is all lower in frequency than the fundamental, and just looking at harmonics 2f, 3f, 4f… may be a better way to get valuable info. Is there a way to high-pass the stepped test? Or possibly remove the “N” from the THD+N readings? In this case it appears the problem is that the -40 dBV (10 mV) input signal @ 1 kHz is -60 dBV @ 20 Hz, and to get a 0.1% reading the noise floor of the DUT needs to be 60 dB below that, or at -120 dBV… not an easy task. I am now beginning to understand why many of the technical test records break things down to 20–1000 Hz unequalized and 1000–20,000 Hz with the RIAA filter.
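The arithmetic there checks out. With a -40 dBV drive at 1 kHz and inverse-RIAA pre-emphasis, the 20 Hz drive is about -59 dBV, and a 0.1% reading (which is -60 dB) requires everything else in the band to sit roughly 60 dB below that:

```python
import math

ref_dbv = -40.0                    # 1 kHz drive level (10 mV)
inverse_riaa_20hz_db = -19.27      # standard record pre-emphasis at 20 Hz rel. 1 kHz
drive_20hz_dbv = ref_dbv + inverse_riaa_20hz_db   # about -59.3 dBV at 20 Hz

target_thdn_pct = 0.1
target_db = 20 * math.log10(target_thdn_pct / 100.0)  # 0.1 % == -60 dB

required_floor_dbv = drive_20hz_dbv + target_db       # about -119.3 dBV
print(f"20 Hz drive: {drive_20hz_dbv:.1f} dBV; "
      f"DUT noise+distortion must sit below {required_floor_dbv:.1f} dBV")
```

So the -120 dBV figure above is the exact-arithmetic version of the requirement, which is indeed a very tough noise floor for a phono stage at 20 Hz.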

Another test that would be useful would be to plot individual harmonics wrt frequency. The dip at 4 kHz in the +18 dB plot in the above example is not a measurement error but a point where, for some reason, the 2nd harmonic drops dramatically, and just after that drop the 3rd–7th all jump up equally. The apparent rolloff of distortion in the -12 dB line may be a similar notch happening at a higher frequency.