Hi @DaveD,
The code measures the system gain, and then uses the output level for the test plus the system gain to determine the appropriate full-scale input range. The assumption is that the gain is relatively constant over the region of operation.
And thus the target input range is the output level + any input gain you specified + the measured system gain. That value is divided by 6, the ceiling is taken, the result is converted to an int and multiplied by 6, and finally it is clamped to 0 to 42 dBV.
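If it helps, here is a minimal sketch of that selection logic in Python. The function and parameter names are mine, not the actual implementation; it just restates the arithmetic above:

```python
import math

def pick_input_range_dbv(output_level_dbv, input_gain_db=0.0, sys_gain_db=0.0):
    """Round the expected input level up to the next 6 dB step,
    clamped to the 0..42 dBV set of ranges."""
    target = output_level_dbv + input_gain_db + sys_gain_db
    range_dbv = int(math.ceil(target / 6.0)) * 6  # next 6 dB step at or above target
    return max(0, min(42, range_dbv))             # clamp to 0..42 dBV
```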
I’ll be a bit long-winded here in order to make sure this is correct:
Case 1:
Single ended loopback (sysGain of zero), no input gain specified:
The output gen is set to 0 dBV, and the full scale input is set to 6 dBV. This is correct.
Now set the offset to 12 dBV, and the full scale input is set to 12 dBV. This is correct.
Case 2: Balanced loopback (sysGain of 6 dB), no input gain specified:
The full scale input is set to 18 dBV. This is correct.
Case 3: Balanced loopback (sysGain of 6 dB), with 6 dB of input gain specified. Run the same test as last time:
Full scale input is set to 18 dBV. This is correct: we have a sysGain of 6 dB, we specified a 12 dB adder, and the output level is 0 dBV.
Case 4 (this is a bit closer to what you were seeing, but without the amp). Single ended, no input/output gain specified:
This picks 0 dBV full scale input. This is correct.
And again with -5 dBV:
This picks a +6 dBV input range: because we specified a +6 dB adder, -5 + 6 = 1, and that rounds up to 6 (due to the ceiling).
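For reference, that last case works out the same way with the sketch above:

```python
>>> pick_input_range_dbv(-5.0, input_gain_db=6.0)  # -5 + 6 = 1, ceil -> 6 dBV
6
```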
Is that the same issue you are seeing?
I think the tooltip message could be fixed and improved to take the determined system gain into account. If that is done, what is your preference on how this should operate?