I have a series of evaluation boards for codecs under consideration. I am running through various QA401 test options with the QA401 output connected to the codec input and the codec output connected to the QA401 input. I’m thinking, perhaps naively, that I can obtain the various measurements and compare them against each other to have enough information to choose one of them for our design. We do have the budget to obtain an alternative audio analyzer with digital audio I/O that will allow us to test A/D and D/A separately. I’m learning a lot with this QA401 setup though and would like to take it as far as I can. If you have ideas for a suggested approach I’m all ears. I am following the “Rapid USB DAC Evaluation” paper from your site. From this I now see how the analyzer gets to “know” the generator amplitude to use for 0 dBFS.
Testing the ADAU1772, instead of pulling back 0.2 dB (pg 5) I had to go -4 dB. I get a THD (L) of -76.6 dB (0.01477%), THD+N (L) of -60.1 dB (0.09922%), and a dBV reading (middle of page 6) of 855.7 mVrms.
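(As a sanity check on those paired readings: the dB and % figures are just two forms of the same ratio, so a quick conversion in Python, using the numbers above, should roughly reproduce the percentages the analyzer shows.)

```python
# Convert a THD or THD+N figure in dB (relative to the fundamental) to percent.
def db_to_percent(db):
    return 100 * 10 ** (db / 20)

print(f"{db_to_percent(-76.6):.5f} %")  # ~0.01479 %, essentially the THD reading above
print(f"{db_to_percent(-60.1):.5f} %")  # ~0.09886 %, close to the THD+N reading above
```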
At the bottom of page 6 it says to set -90 dBr to read Noise and Distortion. I do this and get an image similar to the one on page 7, but the THD+N (L) data fields are blank. I don’t get the field outlined in red in the photo either. Why doesn’t my data display match the one in the manual? Oh, and another newbie question: in my setup, should the 50 ohm terminators go on the bottom two Output BNCs or Input BNCs (or do I need 4)?
Bottom two (-) Input BNCs
Hi @bklein, you do NOT want to put terminators on the outputs. If you do that, it can cause the output opamps to run hot. And if they run too hot, they will shut down and it might (probably won’t, but it might) cause a shift in performance inside the opamp.
What are the specific ADCs and DACs you are considering? If you want to verify the performance of those devices on the eval boards, you need to ensure your measurement device is roughly 5-10 dB better. If you just want to verify you have met the spec on your own boards, you could probably get away with a measurement device that is 3-6 dB better.
In the case of the ADAU1772 you mentioned, that appears to have a typical ADC THDN of -93 dB and a typical DAC THDN of -87 dB. So, if you are putting those two devices in line with each other and measuring the combination, then your best-case THDN will probably be -86 dB or so. I think you can measure that with some confidence on the QA401. But you will need to carefully pick your signal levels.
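If it helps to see where that -86 dB estimate comes from, here is the back-of-the-envelope math in Python, treating the ADC and DAC contributions as uncorrelated and summing them on a power basis (the -93/-87 dB inputs are the typical figures quoted above):

```python
import math

def combine_thdn_db(*levels_db):
    """Power-sum several THD+N levels, each given in dB relative to the signal."""
    total = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total)

adc_thdn_db = -93.0   # typical ADC THD+N quoted above
dac_thdn_db = -87.0   # typical DAC THD+N quoted above

print(f"Cascaded THD+N ~= {combine_thdn_db(adc_thdn_db, dac_thdn_db):.1f} dB")
# Cascaded THD+N ~= -86.0 dB
```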
A QA403 could easily measure that, but it’s out of stock
Thanks, Matt
Thanks Matt. The SSM2603 (we use this in our products now), ADAU1401A (same as ADAU1701), ADAU1772, and ADAU1787.
The problem I wrote about with the THD not showing is resolved with the later software version (or maybe it was just a restart?).
Green cursor lines may be gone as well.
There doesn’t seem to be anything in the manual regarding measuring crosstalk.
Was trying to follow the Measuring DAC paper. Now I see that signal is getting into the unused channel from the cabling and I need to ground the input to the eval board for the unused channel.
The eval board has a 3.5mm stereo input jack for inputs. I had two BNC coax cables joined into a “Y” near the 3.5mm stereo plug. I removed the BNC from the R output of the QA401 but that was not enough - a lot of signal was visible on the R channel output. So I also soldered a 47 ohm resistor across the R input to ground. I still get a noticeable R signal. See the attached photo.
Hi @bklein, got it, thanks!
The easiest way to measure crosstalk when you have both channels running into a DUT is to mute one channel.
To do this, go into the GEN1 context menu and select MUTE LEFT or MUTE RIGHT as needed.
When muting is enabled, you’ll see MUTE show up in the upper left of the display to let you know what is going on. If you have selected MUTE LEFT, then the right channel will operate as you’ve specified in the generator setting, but the left channel will be filled with zeros aka silence.
If you are looking at high-speed testing, take a look at multitone. This has a special offset tone inserted at about 1 kHz. For the left channel, it’s a bit below 1 kHz. For the right channel, it’s a bit above 1 kHz. This makes it easy to see if the tone just below 1 kHz on the left channel is showing up on the right channel and vice versa. The multitone will also measure this automatically and report it for you. In the graphic below, you can see the crosstalk is about -109 dB (left to right and right to left).
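For anyone curious how a number like that falls out of the spectrum, here is a rough, purely synthetic Python sketch (the tone frequencies and the -100 dB of injected leakage are made up for illustration and are not the QA401’s actual values): put a tone at one frequency on the left, a tone at a slightly different frequency on the right, and compare the left tone’s level in the right channel’s FFT against its level in the left channel.

```python
import numpy as np

fs, n = 48000, 32768
t = np.arange(n) / fs
f_left, f_right = 970.0, 1030.0          # hypothetical offset tones
leak = 10 ** (-100 / 20)                 # pretend -100 dB of channel-to-channel leakage

left  = np.sin(2 * np.pi * f_left * t)  + leak * np.sin(2 * np.pi * f_right * t)
right = np.sin(2 * np.pi * f_right * t) + leak * np.sin(2 * np.pi * f_left * t)

win = np.hanning(n)

def tone_level(x, f):
    """Peak FFT magnitude in a few bins around frequency f (Hann-windowed)."""
    spec = np.abs(np.fft.rfft(x * win))
    k = int(round(f * n / fs))
    return spec[k - 2:k + 3].max()

# Left-to-right crosstalk: how loud is the left channel's tone in the right channel?
xtalk_db = 20 * np.log10(tone_level(right, f_left) / tone_level(left, f_left))
print(f"L -> R crosstalk ~= {xtalk_db:.1f} dB")   # ~ -100 dB with the leakage above
```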
My software got stuck not generating the L multitone and I had to restart it to fix that.
In Multitone, if I change the Amplitude I get different xTalk results. You chose -30dB, why? A lot of this stuff is over my head.
Hi @bklein, multitone can get you into trouble quickly because the peak-to-average ratio is so high. So, you always want to be on the safe side in terms of amplitude. This plot is from the QA40x, but look at the multitone setting: it’s -10 dBV. But you can see the peak excursion is over a volt. So, even though it seems like a small signal, the peaks can be much, much larger than your average.
If you aren’t under time pressure from the measurement (e.g., not trying to test 60 units per hour per test bay), then I’d rely on single tones and muting the left and right channels in turn.
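To make the peak-to-average point concrete, here is a small Python sketch (pure math, no analyzer involved) showing how the crest factor of a normalized multitone grows with the number of tones, which is why a seemingly modest amplitude setting can still clip a DUT:

```python
import numpy as np

fs, n = 48000, 1 << 16
t = np.arange(n) / fs

for num_tones in (1, 8, 32):
    freqs = np.linspace(100, 20000, num_tones)
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    x /= np.sqrt(np.mean(x ** 2))               # normalize the sum to a fixed RMS of 1.0
    crest_db = 20 * np.log10(np.abs(x).max())   # peak level relative to that RMS
    print(f"{num_tones:3d} tone(s): peak is {crest_db:+.1f} dB above RMS")
# A single sine comes out around +3 dB; the multitone sets come out much higher,
# so the peaks can overload a DUT even when the RMS setting looks small.
```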
In the Rapid DAC paper, page 6, there’s a reference to the level being close to the 1.2 V spec. Spec of what?
Hi @bklein, yes, the 1.2V is the spec of the DUT in that post.
Usually with DACs, your best THDN is achieved around -1dBFS or so, and your best THD is achieved around -10 to -20 dBFS.
If you are just looking at THD and THDN, there’s not much reason to convert to dBr because those are relative measurements.
BUT, to understand the converter’s range of operation, it is important to switch to dBFS. To do that, drive your DAC with a full-scale signal. Below is a loopback with the generator set to -10 dBV:
Then press the dBr button:
Then right-click that button to activate the context menu, and click “Set display peak to 0 dBr”.
That button click will close the dialog, and you should see your peak showing 0 dBr.
Now, you can see everything relative to the converter’s full-scale output. And that makes it easy to compare your measurements with the published specs.
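If the unit bookkeeping helps, here is a minimal Python sketch of the relationships being set up here: dBV is absolute (re 1 Vrms), dBFS is relative to the converter’s full scale, and dBr is relative to whatever reference you pinned with “Set display peak to 0 dBr”. The 1.2 Vrms full scale and the 855.7 mVrms reading are just example numbers pulled from earlier posts (and not necessarily from the same DUT); substitute your own values.

```python
import math

def vrms_to_dbv(vrms):
    return 20 * math.log10(vrms)

def dbv_to_dbfs(level_dbv, full_scale_vrms):
    # dBFS here just means "dB relative to the converter's full-scale output voltage"
    return level_dbv - vrms_to_dbv(full_scale_vrms)

full_scale_vrms = 1.2        # example DUT full-scale spec from the Rapid DAC post
measured_vrms   = 0.8557     # e.g. the 855.7 mVrms reading earlier in the thread

level_dbv = vrms_to_dbv(measured_vrms)
print(f"{level_dbv:+.2f} dBV = {dbv_to_dbfs(level_dbv, full_scale_vrms):+.2f} dBFS")
# Once you pin the full-scale peak to 0 dBr on the analyzer, dBr and dBFS line up.
```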
I figured it out and was editing my post while you were responding (I’m editing this again too)
So once you do have the no-artifact trace (your trace above) with its peak at 0 dBr, and you then kill the Gen1 output, you’re left with noise. Would the RMS L be the S/N (RMS L = -85.2 dBr)? My results look like the 2nd example on page 50. I disable the signal and then cut the range on the high end to eliminate artifacts, and I would expect the RMS dBr to be where the noise resides, but it is much higher: RMS L is -85.3 dBr and SNR is -18.1 dB with no signal, -80.9 with a 1 kHz signal. In a signal-to-noise measurement, is the signal notched out and the artifacts remain, or does it disable the signal and work with noise that is not affected by the signal? What is making the SNR higher than what we visually see on the display?
Hi @bklein, when measuring noise, you want to make sure you are on the most sensitive input range (0 dBV full-scale input usually). The noise floor increases as your input range increases.
So, if you short the L+ and L- inputs, on the most sensitive input range, you should measure an RMS noise level of -115 dBV or so on a QA401, QA402 and QA403. The SSM2603 looks to have 1 Vrms as 0 dBFS and 0 dBV, and a noise level 100 dB (typical) below that. That means that when you measure the full-scale output, you’ll read 0 dBV. Then you set that to REL mode, and that gives you 0 dBr. If you then drive zeros into your DAC, you’d expect to measure noise that is 100 dB below, which would be -100 dBr or -100 dBV.
Now, the noise measurement from Analog Devices is A-weighted, and if you disable A-Weighting, you’d measure 2 dB worse. And so, if you had a noise measurement of -98 dBr = -98 dBV you’d be about where you’d expect.
That measurement is 15 dB worse than the noise of the QA40x, so the QA40x has plenty of margin to measure.
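Putting the arithmetic from the last few paragraphs in one place, under the stated assumptions (1 Vrms = 0 dBV full scale, 100 dB A-weighted SNR, and roughly a 2 dB penalty for removing the weighting):

```python
# Assumptions, per the posts above.
full_scale_dbv   = 0.0      # 1 Vrms full-scale output
snr_a_weighted   = 100.0    # typical SNR, A-weighted
a_weight_penalty = 2.0      # unweighted measurement reads ~2 dB worse

expected_noise_dbv = full_scale_dbv - snr_a_weighted + a_weight_penalty
print(f"Expected unweighted noise ~= {expected_noise_dbv:.0f} dBV (= dBr here)")
# ~ -98 dBV, comfortably above the analyzer's ~ -115 dBV shorted-input floor.
```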
I shorted the L-/L+. In the attached photo it is -114.8 dBV, so good, that matches what you suggest… Hard to see, but the green cursor lines are back, left of 100 Hz and at about -148 dB. Shouldn’t I see a signal approach the -114 dB amplitude to have that SNR though? I would think it would be in the -138 dB area (which would be bogus, but what makes the result higher?). Is this because there was a momentary hit during the averaging in the -115 dB area that was not kept on the display?
Hi @bklein, what you are measuring here is the RMS from 20 Hz to 20 kHz. That means every FFT bin is squared, summed, and then the sqrt is taken of the result. If you change the FFT size, note the RMS reading stays the same BUT the level of the noise floor will change. So, a higher FFT size gives you more bins, but each bin is lower in amplitude. If your FFT gets too small, then you’ll see the noise degrade because of low-frequency noise. So, generally keep the FFT size large enough so that the lower spectrum (around 20 Hz) isn’t too high. Play around with FFT size and RMS of the noise and you’ll figure out the tradeoffs quickly.
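Here is a minimal numpy sketch of that bin math, run on synthetic white noise rather than analyzer data: the band RMS (square, sum, then square root) holds steady across FFT sizes even though the individual bins drop.

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)

for n in (4096, 32768):
    x = rng.normal(scale=0.001, size=n)              # synthetic noise, ~1 mVrms broadband
    spec = np.fft.rfft(x) / n
    bin_rms = np.abs(spec) * np.sqrt(2)              # per-bin RMS amplitude (one-sided)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs >= 20) & (freqs <= 20000)
    band_rms = np.sqrt(np.sum(bin_rms[band] ** 2))   # square, sum, then sqrt
    print(f"FFT {n:6d}: 20 Hz-20 kHz RMS = {20*np.log10(band_rms):6.1f} dB, "
          f"median bin = {20*np.log10(np.median(bin_rms[band])):6.1f} dB")
# The band RMS comes out the same either way; only the per-bin noise level changes.
```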
So, now that you have a meaningful RMS measurement with the input shorted, connect the analyzer to the output of your DAC and feed your DAC all zeros and share that graph.
I don’t have any way of “feeding the DAC zeroes” using the serial audio interface of the codec but I may be able to do it within SigmaDSP. I’ll try.
The plan is to obtain another analyzer in the near future but learn what I can with the QA401 until then. I’m learning a lot but still much of this confuses me. We were hoping to compare results between the codecs, but some results are way different - probably due to the test parameters I set. I was trying the ADAU1701. Also, I may have found a bug or misunderstanding with the SigmaDSP programming of the ADAU1772: add a parametric filter and the noise jumps and the center frequency is 2x what it should be (a 48 kHz vs. 96 kHz sample-frequency issue). Not a QA401 problem - it proved there is one with the codec programming…
Are you running the DAC as master or slave? If you have clocks and framing present (e.g., slave), you can short the data input, which will result in all zeros.
I’m not familiar with that part of configuration - but the eval board is being run in default jumper configuration with no connections to external boards, so I would imagine it is in Master configuration.
Hi @bklein, OK, so if the DAC is currently sending out all zeros, then you can make a noise measurement that would be valid. The exception might be if the DAC goes into a low-power mode if it sees a string of all zeros.
So, if you can, make a measurement of the noise in dBV mode and paste it here.
I don’t know what the DAC is doing. I can mute the ADC, the DAC, the PGAs, maybe more but I don’t know if it zeroes the digital signal. ADI still has to get back to me on how.
Hi Matt,
I’ve been away with a demo APx555. I got several measurements with that, and now it’s gone, and I am trying to compare those with what I get with the QA401. Sorry to jump around; that’s my life right now.
With the Multitone test, how is the Amplitude setting determined? Is it a manual process to drop it until the base level of the plot cleans up? It was set pretty well when I first looked at it and I don’t know how it got that value in the first place. The codec I’m looking at is ~0.48 V full scale rather than 1 V. So I started with -6 dB but ended up with something like -14 dB as an amplitude setting. EDIT> I see you discussed this earlier in this thread. So I need to keep the input signal Vp less than full scale for the codec I’m testing…
With the Multitone test, how is the Amplitude setting determined?
Hi @bklein, the setting you specify is the RMS level of the tones. And yes, as noted, the number of tones means it is very easy to overload a DUT because the peak to average can get very high.
Was there a measurement you made with the APx555 that you cannot replicate on the QA40x?