
Effective impedance of ADC channel in single-shot mode with oversampling in burst mode

Hi there,

I have a working battery sampling circuit in my prototype (Vdd = 1.8V, battery swing = 4.2V to 2.9V), designed using the app note at https://devzone.nordicsemi.com/b/blog/posts/measuring-lithium-battery-voltage-with-nrf52.  Now that we're revising the design for production, I need to save every joule possible, which means keeping the quiescent current through the resistor divider as low as possible.  The divider's total resistance is currently ~6Mohm => ~650nA @ 3.8V nominal.

I found ADC performance a bit flaky in pure single-shot mode (understandably so), so I have implemented the channel in burst mode with 8x oversampling - it works very nicely.  Sampling is performed at roughly 1-minute intervals.
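
For context, here's a minimal register-level sketch of the channel setup described above (burst mode, 8x oversampling, 3us acquisition).  The input pin (AIN0), gain, and reference selection are illustrative assumptions, not taken from my actual schematic:

    #include "nrf.h"

    static volatile int16_t saadc_result;

    void saadc_setup(void)
    {
        NRF_SAADC->RESOLUTION = SAADC_RESOLUTION_VAL_12bit << SAADC_RESOLUTION_VAL_Pos;

        /* With BURST enabled, a single TASKS_SAMPLE triggers 8 back-to-back
         * conversions and EasyDMA stores one averaged result.              */
        NRF_SAADC->OVERSAMPLE = SAADC_OVERSAMPLE_OVERSAMPLE_Over8x
                                << SAADC_OVERSAMPLE_OVERSAMPLE_Pos;

        NRF_SAADC->CH[0].PSELP = SAADC_CH_PSELP_PSELP_AnalogInput0
                                 << SAADC_CH_PSELP_PSELP_Pos;
        NRF_SAADC->CH[0].CONFIG =
            (SAADC_CH_CONFIG_GAIN_Gain1_4  << SAADC_CH_CONFIG_GAIN_Pos)   |
            (SAADC_CH_CONFIG_REFSEL_VDD1_4 << SAADC_CH_CONFIG_REFSEL_Pos) |
            (SAADC_CH_CONFIG_TACQ_3us      << SAADC_CH_CONFIG_TACQ_Pos)   |
            (SAADC_CH_CONFIG_MODE_SE       << SAADC_CH_CONFIG_MODE_Pos)   |
            (SAADC_CH_CONFIG_BURST_Enabled << SAADC_CH_CONFIG_BURST_Pos);

        NRF_SAADC->RESULT.PTR    = (uint32_t)&saadc_result;
        NRF_SAADC->RESULT.MAXCNT = 1;
        NRF_SAADC->ENABLE = SAADC_ENABLE_ENABLE_Enabled << SAADC_ENABLE_ENABLE_Pos;
    }

    void saadc_sample_once(void) /* called roughly once per minute */
    {
        NRF_SAADC->TASKS_START = 1;
        while (!NRF_SAADC->EVENTS_STARTED) { /* wait */ }
        NRF_SAADC->EVENTS_STARTED = 0;

        NRF_SAADC->TASKS_SAMPLE = 1;             /* kicks off the 8x burst */
        while (!NRF_SAADC->EVENTS_END) { /* wait */ }
        NRF_SAADC->EVENTS_END = 0;

        NRF_SAADC->TASKS_STOP = 1;
    }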

The app note states that "the sampling frequency should be chosen such that the SAADC input impedance is much larger than the resistor values in the voltage divider", and goes on to state that Rinput = 1 / (fsample * Csample), with Csample = 2.5pF.  The app note then assumes a rate of 1 sample per second to arrive at an effective input impedance of 400Gohm.  That analysis appears to assume pure single-shot mode, once each second, which I feel is neither particularly realistic nor energy efficient.

Assuming I've set my acquisition time to 3us (meaning burst mode can sample at ~5us intervals), the result is a 200kHz sampling frequency - albeit for only 40us.  At 200kHz, the resultant Rinput = 2Mohm, which is less than the total network resistance of 6Mohm in my implementation (!).  This suggests that the simplification for estimating Rinput doesn't hold in this case.
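
As a quick sanity check on that arithmetic, here's a throwaway calculation in plain C (values taken from the app note's Csample = 2.5pF and the burst timing estimated above):

    #include <stdio.h>

    /* Back-of-envelope check of the app note's Rinput = 1 / (fsample * Csample)
     * simplification at the two operating points discussed above. */
    int main(void)
    {
        const double c_sample = 2.5e-12;    /* internal sampling capacitor [F]     */
        const double f_slow   = 1.0;        /* 1 sample/s, as in the app note [Hz] */
        const double f_burst  = 1.0 / 5e-6; /* ~5us per sample in burst mode [Hz]  */

        printf("Rinput @ 1 Hz    : %.0f Gohm\n", 1.0 / (f_slow  * c_sample) / 1e9); /* 400 */
        printf("Rinput @ 200 kHz : %.1f Mohm\n", 1.0 / (f_burst * c_sample) / 1e6); /* 2.0 */
        return 0;
    }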

I could test various resistor values by trial and error, but I'd prefer an analytical method.  I was hoping Nordic could provide a relation for estimating the effective input impedance when the ADC is used in single-shot burst mode, as a function of acquisition time, sample interval, and oversampling ratio.

Thanks in advance for the guidance!

-Z

  • OK - so I had a think about this.  Really, the question is how much voltage droop the external 10nF buffer capacitor (Cext) suffers during a single sample burst.

    For the purpose of the analysis, I've assumed that:

    1.  Cext does not charge at all during an acquisition cycle.

    2.  Cext fully charges between acquisition cycles.

    3.  The resistance seen at the high-potential side of Cext is the parallel combination of the parasitic input resistance (~1Mohm) and the internal ADC resistor ladder (160kohm) - both from the SAADC electrical specification in the datasheet => ~138kohm.

    With a 3us acquisition time and 8x oversampling, we are effectively drawing from Cext for 24us - any recovery gaps in between would only recharge Cext, so treating the burst as one continuous 24us draw is the most conservative approach.

    Using the standard capacitor discharge relationship V(t) = V0 * exp(-t/τ), where:

       V0 = 1.8V (max), 1.2V (min)

       t = 24us

       τ = R * Cext = 138E3 * 10E-9 = 1.38E-3 s

    This yields V(24us) ≈ 0.983 * V0, meaning we lose ~1.7% of the voltage during a single burst => acceptable error.  The only other check is ensuring that Cext has sufficient time to charge back up between samples, which is trivial at one sample per minute.
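
    For anyone who wants to re-run the numbers, here's a throwaway check of the arithmetic above in plain C (the resistor values are the datasheet figures quoted in assumption 3; treat it as a sketch, not a reference implementation):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            const double r_input  = 1.0e6;      /* parasitic input resistance [ohm]   */
            const double r_ladder = 160.0e3;    /* internal ADC resistor ladder [ohm] */
            const double c_ext    = 10.0e-9;    /* external buffer capacitor Cext [F] */
            const double t_burst  = 8 * 3.0e-6; /* 8 samples x 3us acquisition [s]    */

            /* Parallel combination seen by Cext, discharge time constant,
             * and fractional droop over one 24us burst.                   */
            double r_par = (r_input * r_ladder) / (r_input + r_ladder);
            double tau   = r_par * c_ext;
            double droop = 1.0 - exp(-t_burst / tau);

            printf("R_par = %.1f kohm, tau = %.2f ms, droop = %.2f %%\n",
                   r_par / 1e3, tau * 1e3, droop * 100.0); /* ~137.9, ~1.38, ~1.72 */
            return 0;
        }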

    So it looks like I answered my own question.  If there are any glaring holes in my analysis, please advise!

    - Z
