Testing a codec, Ain Aout

I was not seeing much consistency in my measurements with the APx555; I’m left with a lot of data that varies between runs and days. I would see immediate differences between bar-graph data and interpolated data taken from stepped frequency or level sweeps. These discrepancies showed up mostly where there is an abrupt rise in the data and I try to read the value at 1 kHz from a stepped frequency sweep versus a bar graph fixed at 1 kHz.
I resumed using the QA401 after losing access to the AP, and I’m still trying to organize the AP data and compare it to the QA401’s.
The QA401 GUI is much more visual. Being so, I see some results on some of the eval boards that I can’t make sense of, like below:


I don’t remember seeing this earlier; maybe something died on the eval board, or hopefully it’s an operator error that you’ll catch right away. One plot is at 48 kHz sampling, the other at 192 kHz. This is on the ADAU1701, which has a full scale of 1 Vrms.

I have discovered that the ADAU1701 board has the above issue, while the SSM2603 board has damaged output channels. The output waveforms look OK on the ADAU1701, but perhaps there is clock or chip damage of some sort. The output waveforms of the SSM2603 look bad: highly distorted.
That’s what can happen when there are no buffer op amps on the board… The ADAU1787 and ADAU1772 boards look good. Not sure about the ADAU1401 board yet.

The large skirt at the bottom of the 1 kHz tone suggests something is up, as that’s a lot of jitter. You might have something wrong with the clock/crystal on the DUT board.

Yeah, I wonder what caused all this grief. Power is supplied by USB, and I used both the APx555 and the QA401.
There’s no sign of ESD in the bench area, and the benches are ESD rated; I shouldn’t need to worry that much about an evaluation board. Perhaps the inputs were overdriven by an incorrect setting?

So… the loaner APx555 was returned, I’m back with the QA401, and my mind is fogging out. The results I obtained with the APx555 did not always make sense. Specifically, I am looking at ADAU1787 signal-to-noise results: the QA401 gets 78 dB and the APx555 got 75.4 dB. I don’t understand why both of these are so low. The ADC is spec’d at 96 dB and the DAC at 102 dB, so what brings it down to 75.4 dB?
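Even cascading the two data-sheet figures as uncorrelated noise powers should land near 95 dB, nowhere close to 75 dB. A quick sanity check in Python, using only the spec’d numbers:

    import math

    # Combine the spec'd ADC (96 dB) and DAC (102 dB) SNRs as uncorrelated noise powers
    adc_snr_db, dac_snr_db = 96, 102
    combined = -10 * math.log10(10 ** (-adc_snr_db / 10) + 10 ** (-dac_snr_db / 10))
    print(combined)   # ~95 dB, so the spec'd parts alone don't explain 75 dB

This assumes the two noise contributions are independent and the levels are matched, which a loopback through the DSP may not satisfy, but it shows the parts themselves shouldn’t be the limit.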
I started looking at the signals with a scope compared to the QA401 time display, and I’m confused by it. It says “Input Amp +/-1 Max,” but the Y scaling is ±0.25, and I don’t see how that scaling is set. I back the level off to just before clipping and see ±0.24. The scope sees ±700 mV, close to the spec’d 1.38 Vpp. So 0.24 times what?

Hi @bklein, there can be a lot of reasons. The SNR depends on two measurements. First is the amplitude measurement; you need to verify that it is correct for a given input. Usually SNR will be measured at -1 dBFS input. Was your output level at the specified level? The SNR is usually “best” at a single operating point, and the manufacturer will specify at or near that point.

Next, you need to confirm your noise. Is it as expected? If your amplitude is correct, then you know your noise. Is it more or less than the data-sheet figure? Did you design the circuit yourself? Have you done a noise analysis on all the stages to ensure you won’t “break” the noise floor of the DAC?
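As a sanity check on the arithmetic, here’s a minimal sketch in Python (the 7.8 µVrms noise figure is purely hypothetical, just to illustrate the calculation):

    import math

    def snr_db(signal_vrms, noise_vrms):
        # SNR is the ratio of signal amplitude to noise amplitude, in dB
        return 20 * math.log10(signal_vrms / noise_vrms)

    # Hypothetical example: 0.49 Vrms full-scale signal over 7.8 uVrms of noise
    print(snr_db(0.49, 7.8e-6))   # ~96 dB, matching the ADC's data-sheet figure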

On the QA401, the time-domain display is not absolute; it’s a dBFS measurement. On the QA402 and QA403 it is absolute. So rely on your dBV measurement (in the frequency domain) to learn the amplitude.
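In other words, the time display reads in fractions of full scale, so converting it to volts depends on what full scale is. A rough sketch of the dBFS/dBV arithmetic, using the ADAU1787’s 0.49 Vrms full scale as the example reference:

    import math

    V_FULL_SCALE = 0.49          # Vrms at 0 dBFS (ADAU1787 ADC full scale)

    def dbfs_to_vrms(dbfs):
        # dBFS is relative to full scale: scale the full-scale voltage by the ratio
        return V_FULL_SCALE * 10 ** (dbfs / 20)

    def dbfs_to_dbv(dbfs):
        # dBV is absolute (re 1 Vrms): offset by the full-scale level in dBV
        return dbfs + 20 * math.log10(V_FULL_SCALE)

    print(dbfs_to_vrms(-1))      # ~0.437 Vrms at -1 dBFS
    print(dbfs_to_dbv(-1))       # ~ -7.2 dBV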

I am working with the ADAU1787 right now. Full-scale voltage is 0.49 Vrms; I had to back off to 469.4 mVrms on the L peak meter to get below a distortion threshold. If I then switch from dBV mode to dBFS mode and look at the Gen 1 amplitude, it is set to -12 dBFS. If I were to set it to -1 dBFS it would be distorting badly, so maybe I’m misunderstanding you here. ADI doesn’t say what the best frequency is. Noise is -90.3 dBFS; N+D is -90.6 dBFS. I’m still just using the ADI eval board, which has no active components between the codec and the external I/O.
Maybe SigmaDSP is a can of worms best left alone if one is sensitive to performance characteristics? Using just the parametric sample design from their eval documentation, I was seeing out-of-band distortion. It went away, though, when I took out the filter and connected the ADC directly to the DAC.

Back to crosstalk, using the ADAU1787 at 430 mV, 1 kHz. The APx555 got -101.104 dB in the One Channel Undriven test and -90.8 dB in the One Channel Driven test; I’m not sure why they differ, as I am always just using the two channels. With the QA401, I set the L channel to 430 mV using the L peak measurement on the top screen and get a Vpeak of 13.75 µV on the R channel. Using 20*log10(13.75 µV / 430 mV), I got -149.9 dB. The R channel is muted. Am I missing something?

Believe it or not, I still have not gotten an answer from Analog Devices as to why testing straight through their codecs gives low S/N and THD+N results. Has anyone out there done this? Maybe I’m supposed to equalize levels or something I’m missing.

Take a look at the arrow in the plot below. You can see the left channel was measured at -7.33 dBV, and the right channel measured -97.23 dBV. That means the crosstalk measured in the right channel was 90 dB below the left channel level. Does that make sense?

I think you made a math error. Your formula gives 20*log10(13.75 µV / 430 mV) = -90 dB.
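Both routes to the number agree; a one-line check of the voltage-ratio form and the difference-of-levels form from the plot above:

    import math

    # Voltage-ratio form, from the raw peak readings
    print(20 * math.log10(13.75e-6 / 430e-3))   # ~ -89.9 dB

    # Difference-of-levels form, from the dBV readings in the plot
    print(-97.23 - (-7.33))                     # ~ -89.9 dB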

“Maybe I’m supposed to equalize levels or something I’m missing.”

The silicon vendors are usually very, very good at specifying conditions for measurements. And TI and ADI are both excellent in that regard.

For the part you mentioned, ADI specifies THD+N for the DAC at -1 dBFS (which should yield -93 dB). For the ADC, they specify -88 dB THD+N at -1 dBFS. I think if you ask ADI, they will say “we don’t measure loopback” and will want you to verify the individual pieces.

Note, too, that the DAC output full scale is 1 Vrms, while the ADC input full scale is 0.49 Vrms. So even if you connect them in loopback, you cannot get both to -1 dBFS at the same time.
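To put numbers on it, a sketch using just the two full-scale figures above:

    import math

    DAC_FS = 1.0     # Vrms at 0 dBFS (DAC output)
    ADC_FS = 0.49    # Vrms at 0 dBFS (ADC input)

    dac_out = DAC_FS * 10 ** (-1 / 20)              # DAC at -1 dBFS -> ~0.891 Vrms
    print(20 * math.log10(dac_out / ADC_FS))        # ~ +5.2 dBFS at the ADC, i.e. clipped

    # To hit -1 dBFS at the ADC instead, the DAC has to run at:
    print(-1 + 20 * math.log10(ADC_FS / DAC_FS))    # ~ -7.2 dBFS

So a DAC driven at -1 dBFS overdrives the ADC by about 5 dB, and keeping the ADC at -1 dBFS means running the DAC around -7.2 dBFS.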

Thanks, yeah, I don’t know what I was doing with that equation, but it was wrong.
I find it strange that loopback isn’t spec’d, since the DSP in there is apparently degrading things.