Seeking more information on nRF52833 ADC, reference, and buffers

Hi,

We are developing a product that reads RTDs and thermistors using the nRF52833's internal ADC. We're working on compensating for the intrinsic errors of the ADC and gain buffers, and we have some questions that are not covered in the product specification.

Do the ADC gain buffers have offsets associated with them? The datasheet says the ADC has an intrinsic offset of ±2 LSB at 10-bit resolution, but it doesn't mention whether there are voltage offsets associated with the buffers.

Is there any procedure to internally determine the ADC buffer gains? For example, connecting the buffer inputs directly to the reference voltage?

Can you provide any more detail on what the ADC's internal calibration routine does or what it accounts for?

Are there any details on how the ADC input buffer gain is implemented?

Are there any specs on channel isolation in sequential measurement mode?

Is there a spec for temperature drift of the VDDH/5 divider?

Thank you!

Parents
  • Sorry, another question:

    In Product Spec v1.7, section 6.21.2.3 (SAADC Scan mode) it says "The time it takes to sample all channels is less than the sum of the conversion time of all enabled channels. The conversion time for a channel is defined as the sum of the acquisition time tACQ and the conversion time tCONV."

    Why is the total conversion time less than the sum of the channels? What is skipped? What is the expected conversion time?

    Thanks

  • Do the ADC gain buffers have offsets associated with them? The datasheet says the ADC has an intrinsic offset of ±2 LSB at 10-bit resolution, but it doesn't mention whether there are voltage offsets associated with the buffers.

    1) The input buffer of the ADC has auto-zero for offset. The offset of the ADC core should be scaled with the gain. 

    Is there any procedure to internally determine the ADC buffer gains? For example, connecting the buffer inputs directly to the reference voltage?

    2) Only for VDD as the reference. There is an undocumented TASKS_CALIBRATEGAIN task that connects the input to VDD/4 and uses gain 1/4.

    Can you provide any more detail on what the ADC's internal calibration routine does or what it accounts for?

    3) The offset calibration shorts the ADC input, and a digital feedback loop then tunes a current DAC in the SAR comparator to compensate for the offset. The calibration may introduce a noise component if TASKS_CALIBRATEOFFSET is triggered often.
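
    For reference, a minimal blocking sketch of running that calibration, using the nRF MDK register names (an illustration only, assuming the SAADC is enabled and idle):

        #include "nrf.h"   /* nRF MDK device header */

        /* Run the SAADC offset calibration once and wait for it to
           finish. Per the answer above, trigger this sparingly (e.g.
           at boot or after a large temperature change), since the
           calibration itself can add a noise component. */
        static void saadc_calibrate_offset(void)
        {
            NRF_SAADC->EVENTS_CALIBRATEDONE = 0;
            NRF_SAADC->TASKS_CALIBRATEOFFSET = 1;
            while (NRF_SAADC->EVENTS_CALIBRATEDONE == 0)
            {
                /* busy-wait; an application could __WFE() here instead */
            }
            NRF_SAADC->EVENTS_CALIBRATEDONE = 0;
        }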

    Are there any details on how the ADC input buffer gain is implemented?

    4) Sorry, no details, but it's a switched-capacitor gain stage with high common-mode rejection.

    Are there any specs on channel isolation in sequential measurement mode?

    5) There is only one core ADC; the muxing is done through T-gates (transmission gate, pull-down, transmission gate), so there should be very little cross-coupling between channels.

    Is there a spec for temperature drift of the VDDH/5 divider?

    6) No, but it's a resistive divider, so as long as the ADC input settles, the ratio should not change with temperature (the resistors themselves drift with temperature, but their relative values do not). The ADC temperature coefficient will dominate.

    jdub said:
    Also, are there any details on the accuracy of the internal VDD/4 divider, which can be used as an ADC reference? We are also finding that when using the VDD/4 reference, our signals are much noisier. Any specs on that noise, or advice for reducing it?

    7) The noise you're seeing is probably coming from VDD. There is a low-pass filter inside the ADC, but its corner is in the MHz range.
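
    One possible mitigation (a suggestion beyond the answer above, not something the answer states) is the SAADC's hardware oversampling, which averages conversions in hardware and suppresses noise that is uncorrelated between samples. A minimal sketch with the MDK register names; note that oversampling only supports a single enabled channel:

        /* Average 16 conversions per result; BURST makes one SAMPLE
           task run all 16 conversions back-to-back. */
        NRF_SAADC->OVERSAMPLE = SAADC_OVERSAMPLE_OVERSAMPLE_Over16x
                                << SAADC_OVERSAMPLE_OVERSAMPLE_Pos;
        NRF_SAADC->CH[0].CONFIG |= SAADC_CH_CONFIG_BURST_Enabled
                                   << SAADC_CH_CONFIG_BURST_Pos;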

    jdub said:
    Is the VDDH/5 input only enabled when REG0 is on? I'm working on a device where VDDH and VDD are tied together. The 1% spec on the VDDH/5 divider is better than the 3% spec on the ADC's input gain buffers, so I have been trying to use the VDDH/5 input. But it behaves like it's floating (the reading seems arbitrary, and the value decreases with sample rate, implying a loading effect).

    8) That does not matter; the VDDH/5 divider goes through the input gain buffers. I'm not sure it's enabled unless the 5 V regulator (REG0) is enabled.
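
    For reference, selecting the internal VDDH/5 divider as a channel input looks like this with the MDK names (a sketch only, subject to the REG0 caveat above):

        /* Route the internal VDDH/5 divider to channel 0's positive
           input; the signal still passes through the input gain buffer. */
        NRF_SAADC->CH[0].PSELP = SAADC_CH_PSELP_PSELP_VDDHDIV5
                                 << SAADC_CH_PSELP_PSELP_Pos;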

    jdub said:
    Does the ADC sequence converter fail if you use the CH[X].CONFIG register's RESP and RESN settings to set the inputs at VDD/2 while setting CH[X].PSEL and CH[X].NSEL to Not Connected? I can take a valid measurement in this configuration if I sample as a one-off, but if I make this measurement part of a sequence, the ADC never finishes.

    9) You need to enable PSEL/NSEL without connecting them to anything. Try writing 0xFE to both.
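
    A sketch of that workaround with the MDK register names (0xFE is the undocumented enable value suggested above; the VDD/2 levels come from RESP/RESN):

        NRF_SAADC->CH[0].PSELP = 0xFE;   /* enabled, not routed to a pin */
        NRF_SAADC->CH[0].PSELN = 0xFE;
        NRF_SAADC->CH[0].CONFIG =
              (SAADC_CH_CONFIG_RESP_VDD1_2 << SAADC_CH_CONFIG_RESP_Pos)
            | (SAADC_CH_CONFIG_RESN_VDD1_2 << SAADC_CH_CONFIG_RESN_Pos)
            | (SAADC_CH_CONFIG_MODE_Diff   << SAADC_CH_CONFIG_MODE_Pos);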

    jdub said:

    In Product Spec v1.7, section 6.21.2.3 (SAADC Scan mode) it says "The time it takes to sample all channels is less than the sum of the conversion time of all enabled channels. The conversion time for a channel is defined as the sum of the acquisition time tACQ and the conversion time tCONV."

    Why is the total conversion time less than the sum of the channels? What is skipped? What is the expected conversion time?

    10) Not sure why it's written like this; it is indeed confusing. We expect the total conversion time to be N × (tACQ + tCONV).
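
    For example, with four enabled channels, tACQ = 10 µs, and the specified tCONV < 2 µs, a full scan should complete in just under 4 × 12 µs = 48 µs.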

Children
  • Thank you Ketil, I hope Nordic will publish more details in the next version of the product spec :)

  • Sorry, one other direct question came up, and then a high-level request.

    Direct ADC question:
    The INL is listed as 4.7 bits, which is quite significant. Do you have any sense of whether the error at a given part of the transfer function is stable, or does it shift with time/temperature/supply voltage? I.e., can we characterize the non-linearity for a given part and rely on that profile to calibrate future readings?

    High-level question:
    Do you have any application notes related to reading an RTD or other high-precision, low-sensitivity resistance sensor?

    In our application we are very space-constrained and must do so with a simple voltage-divider setup, with a fixed resistor on top and the temperature sensor on the bottom. By taking differential measurements with the same gain settings across the different legs of the voltage divider, we can cancel out any tolerance errors in the references or the gains of the ADC and buffers. But our final temperature reading still has more error than we expect, so we are trying to find ways to improve it.
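
    The arithmetic we rely on is roughly the following sketch (the names and the fixed-resistor value are illustrative only):

        #include <stdint.h>

        #define R_FIXED_OHMS 1000.0f   /* known top resistor, example value */

        /* code_rtd and code_top are differential SAADC results taken
           across the RTD (bottom leg) and the fixed resistor (top leg)
           with identical gain and reference settings, so gain and
           reference cancel in the ratio:
           R_rtd / R_fixed = V_rtd / V_fixed. */
        static float rtd_resistance_ohms(int16_t code_rtd, int16_t code_top)
        {
            return R_FIXED_OHMS * ((float)code_rtd / (float)code_top);
        }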

    Thank you again

  • Could you clarify the specific resistance range (at min to max temperature) of the RTD you are measuring?

  • jdub said:
    Direct ADC question:
    The INL is listed as 4.7 bits, which is quite significant. Do you have any sense of whether the error at a given part of the transfer function is stable, or does it shift with time/temperature/supply voltage? I.e., can we characterize the non-linearity for a given part and rely on that profile to calibrate future readings?

    In 12-bit mode the INL is 4.7 LSB, not bits; in 10-bit mode it's around 1 LSB. The errors will likely be systematic. It is possible to compensate for some of the effects, but it requires per-device calibration. See, for example, https://ieeexplore.ieee.org/document/7993659
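
    As an illustration only (not a Nordic-provided routine), a per-device scheme could store the measured error at a handful of codes and interpolate between them at run time:

        #include <stdint.h>

        /* Hypothetical INL correction for single-ended 12-bit results:
           errors (in LSB) measured once per device at codes
           0, 256, ..., 4096 against a trusted reference source. */
        #define INL_POINTS 17
        static int8_t inl_err_lsb[INL_POINTS];   /* filled during factory cal */

        static int16_t saadc_correct_inl(int16_t code)
        {
            int32_t idx  = code / 256;            /* table segment, 0..15 */
            int32_t frac = code % 256;            /* position within segment */
            int32_t err  = inl_err_lsb[idx]
                         + ((inl_err_lsb[idx + 1] - inl_err_lsb[idx]) * frac) / 256;
            return (int16_t)(code - err);
        }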

    jdub said:
    Do you have any application notes related to reading an RTD or other high-precision, low-sensitivity resistance sensor?

    No, sorry

    jdub said:
    In our application we are very space-constrained and must do so with a simple voltage-divider setup, with a fixed resistor on top and the temperature sensor on the bottom. By taking differential measurements with the same gain settings across the different legs of the voltage divider, we can cancel out any tolerance errors in the references or the gains of the ADC and buffers. But our final temperature reading still has more error than we expect, so we are trying to find ways to improve it.

    If you're using a resistive input, it's important to select a tACQ that is long enough for your source impedance; see the product specification.
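
    For example, selecting the longest acquisition time for a high-impedance divider looks like this with the MDK register names (the source-resistance-vs-tACQ table is in the product specification):

        /* Select tACQ = 40 us on channel 0 for a high-impedance source. */
        NRF_SAADC->CH[0].CONFIG =
              (NRF_SAADC->CH[0].CONFIG & ~SAADC_CH_CONFIG_TACQ_Msk)
            | (SAADC_CH_CONFIG_TACQ_40us << SAADC_CH_CONFIG_TACQ_Pos);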
