Strange behavior with I2S reference level and ADC input scaling

Setup:

  • QA40x GUI version 1.197
  • I2S interface of the front channel
  • ISO7640FM I2S isolator in a 1:1 configuration
  • Drive the amplifier over I2S
  • Read back the analog output via a 1:1 differential-to-single-ended filter box

Amplifier with 21 dB gain; reference definition of the amplifier: 0 dBFS = 1 Vrms = 0 dBV.

Problem:
I noticed that the voltage measured in the FFT changes with the ADC input attenuator setting, i.e. the attenuation is not automatically compensated in the readout, which is not what I expected. I did not see this when using an analog signal via the ADC instead of the I2S signals.

All automated tests fail, e.g. THD = f(Pout), and they even show far too high output power, not matching the actual output signal.

Amp gain setting: 21 dB
Gen setting -21 dBV, 1 kHz → Vout = 1.4 Vrms (scope), but 1 Vrms was expected
ADC Input Attenuation = 18 dBV → L: RMS Volts = 1.406 Vrms
ADC Input Attenuation = 24 dBV → L: RMS Volts = 2.77 Vrms (scope still 1.4 Vrms)
The 6 dB input attenuator change is not compensated in the readout…
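Nothing instrument-specific here; the numbers above reduce to plain dB arithmetic under the 0 dBFS = 0 dBV = 1 Vrms assumption being tested:

```python
import math

def dbv_to_vrms(dbv: float) -> float:
    """Convert a level in dBV to volts RMS (0 dBV = 1 Vrms)."""
    return 10 ** (dbv / 20)

gen_dbv = -21.0      # generator setting
amp_gain_db = 21.0   # amplifier gain

expected_vrms = dbv_to_vrms(gen_dbv + amp_gain_db)
print(expected_vrms)  # 1.0 Vrms expected at the amp output

# Measured on the scope: 1.4 Vrms, i.e. about +3 dB,
# consistent with a sqrt(2) (peak-vs-RMS) reference mismatch:
error_db = 20 * math.log10(1.4 / expected_vrms)
print(round(error_db, 2))  # 2.92
```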

I also notice this effect when running the automated tests, and it is quite disturbing.

Questions:

  1. Is the reference-level assumption 0 dBFS = 0 dBV = 1 Vrms correct for the QA403 I2S output?
  2. Why does the measured signal not adapt to the input attenuator setting when using I2S?

Am I doing something wrong here?

I noticed in this post that the attenuation issue is fixed in the next release: Changing RMS Voltage when changing attenuation

I changed to version 1.201, and the firmware was upgraded to version 60.

Item 1:
Seems not solved: at -21 dBFS I still expected to get 1 Vrms at the output after the amp (21 dB gain). However, I still get 1.4 V (3 dB off).
Item 2:
Is now closed: in the FFT, the input attenuator scaling is compensated in v1.201.

New issues in the automated tests:
3: AMP Freq Response (Chirp): now reports a strange result. The output signal is seen on the scope with QA40x v1.201, but it does not look normal…
4A: AMP Gain and Distortion versus Amplitude gets stuck at 0 dB input attenuation when Autoset Input Range is enabled. This is logical, but it should be fool-proof: it could easily be solved in software by ignoring this function when the I2S source is selected.
4B: AMP Gain and Distortion versus Amplitude: the first measurement point (-40 dBV input) gives a wrong gain of 36 dB, the 2nd point 24 dB (it should actually be 21 dB). When starting at a lower input level (-46 dBV input) it works OK.

Will test more in the coming days.

Hi @JP-Huijser, for item 1, the GEN1 setting will be used for I2S. However, the units are dBFS instead of dBV. And dBFS has no absolute value as it depends on your hardware. We could make a PC DAC where 0 dBFS = 16.32 dBV or 0 dBFS could be 6.32 dBV. I think you are already aware of this, I just want to make sure for others searching this answer later.

Next, 0 dBFS can mean different things. Some treat it as the maximum peak-to-peak sine wave possible. Others might treat it as the maximum RMS. It really depends on the hardware. In the world of motor drive, motors are often driven by sine waves UNTIL you need to achieve max power. In that case, the sine is distorted into a trapezoid to increase the apparent RMS (since a square wave has an RMS equal to your rail, which is sqrt(2) greater than that of a sine for a given supply).

And this might be happening in your case. If your output is severely clipped, then the sine has been pushed into a trapezoid and your additional RMS is coming from that. But if your sine is still relatively undistorted, then it suggests your real clipping point is sqrt(2) = 3 dB higher than you thought.

If you ask the QA40x to generate a 0 dBFS I2S signal, that will result in a sine with peaks at +/- 1.41, which will show clipping in I2S. You can modify that by setting the output gain to 3.01 dB, I think. Or just remember that -3.01 dB is actually 0 dBFS.
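The 3.01 dB figure is just the sine peak-to-RMS ratio expressed in dB; a quick numeric check (plain Python, nothing QA40x-specific):

```python
import math

# A sine whose RMS equals full scale (the "max RMS" convention) has peaks
# sqrt(2) above that RMS value:
peak = math.sqrt(2) * 1.0
print(round(peak, 3))  # 1.414 -> exceeds the +/-1.0 I2S sample range, i.e. clipping

# The offset between the peak-referenced and RMS-referenced conventions, in dB:
offset_db = 20 * math.log10(math.sqrt(2))
print(round(offset_db, 2))  # 3.01 -> hence "-3.01 dB is actually 0 dBFS"
```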

Hi Matt,
Thanks for your reply.

Well, I understand, but I still think something is not correct with the generated I2S level.
I noticed that at input levels above -3 dBV (automated test) the I2S output from the QA403 is maxed out.

I did a test with reduced gain of my chip, to avoid clipping of the chip and clipping of the input.
Then I measured Gain = f(generator input), which sort of confirms my finding.

Checking the I2S signals from the QA403 with a PicoScope, I notice that when the signal level is above -3 dBV, the decoded I2S signals do not deviate that much between -3 dBV and 0 dBV drive level.

QA403 captures:
Gen -1dBV:

Gen -3dBV:

In this test the outputs are not distorted, but it seems we cannot drive above an input level of -3 dBV when using I2S.

Question:
Is there a REST command to set the Output Gain? I cannot find it in the documentation.

Reason: if it is set in the QA40x GUI, then REST commands like RmsDbv are impacted by it. I want to keep it under control of the Python code.
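As a workaround until an output-gain REST command is confirmed to exist, the offset can be kept entirely in the Python code: leave the GUI Output Gain at 0 dB (so RmsDbv readings are unaffected) and shift the generator setpoint in the script instead. A sketch; the constant and helper are my own names, not part of the QA40x REST API:

```python
# Assumption (per the discussion above): -3.01 dBFS is true full scale
# (sine peaks at +/-1.0 I2S units), so the generator setpoint must be
# shifted down by 3.01 dB to make "0 dBFS = 0 dBV = 1 Vrms" hold.
FS_OFFSET_DB = -3.01

def gen_setpoint(target_dbv: float) -> float:
    """Translate a desired level (dBV, 0 dBFS = 0 dBV convention) into the
    generator amplitude to send over REST, applying the offset here instead
    of via the GUI Output Gain control."""
    return target_dbv + FS_OFFSET_DB

print(gen_setpoint(-21.0))  # -24.01 -> should yield 1 Vrms after 21 dB of amp gain
```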


I noticed that at input levels above -3 dBV (automated test) the I2S output from the QA403 is maxed out.

Yes, this is correct. At -3 dBV = -3 dBFS, you are generating a sine that is +/- 1.0 (not volts, I2S units).

Since you are writing your own code, you could easily change things to ensure -3 is max. But for the next software release, we can change it so that 0 dBV results in 0 dBFS with sine tips at +/- 1.0 in I2S. Let me know if you think that change should happen.

Hi Matt,

That would be great if you can make such a change. It makes things cleaner and matches expectations.

Questions:

  • Is there a REST command to set the output gain?
  • Is it possible to use the I2S input as well?

My amp has a data output as well, which needs to be characterized.
It is mapped onto the same FS and BCK.
TDM is also supported, up to 8 slots at fs = 48 kHz or 4 slots at fs = 96 kHz.
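As a side note on the two TDM modes: they imply the same bit clock. A quick check, assuming 32-bit slots (that slot width is an assumption on my side):

```python
def bck_hz(slots: int, bits_per_slot: int, fs_hz: int) -> int:
    """TDM bit clock = slots x bits per slot x frame rate (fs)."""
    return slots * bits_per_slot * fs_hz

print(bck_hz(8, 32, 48_000))  # 12288000
print(bck_hz(4, 32, 96_000))  # 12288000 -> same 12.288 MHz BCK in both modes
```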

Some more info maybe interesting for enhancement in future QA products:

  • In mobile applications the I2S/TDM level has already moved to 1.8 V as standard, and for new platforms it is moving to 1.2 V.
  • APx supports TDM down to 1.8 V but does not support 1.2 V yet.

Thanks.

Hi Matt,

FYI:
Found another item which maybe should be considered (or is deliberately not stored):
When I use I2S, the status of that specific button is not stored.
So at the next start-up or settings load, I2S needs to be enabled manually again.
Thanks.

Hi @JP-Huijser, yes, there are a few settings that don’t persist across sessions for various reasons. For example, Idle Tone generation will always revert to off when you re-start the app. This is so you aren’t surprised by an 18 dBV signal going into a phono preamp. Averaging is also turned off, along with pre-buf settings. I will review. I think the entire front-panel subsystem is off by default at startup, including the front-panel power.