nRF52832 SAADC offset calibration documentation

Hello all,

My use of nrf_drv_saadc causes failures due to an incompatible driver state. Seeing that other people have had various issues with it, before spending too much time on this (or creating a distributable repro), I would like to know the following:

  1. The SDK API reference notes that one should check the PS to know when the offset calibration should be done. However, the latest PS does not contain any information on this (at least searching for 'calibration' yielded nothing). How often should offset calibration be done? If it is temperature related, what kind of temperature change requires rerunning the offset calibration?
  2. What is the impact of never doing offset calibration (the PS seems to be silent on this)? A ballpark of the expected error is acceptable at this point (we have other error sources in our system, so I am just trying to get a fix on priority).
  3. Where can I find the duration of the offset calibration (avg/max)? It is not listed in the PS.
  4. How long is an offset calibration valid? If SAADC is uninitialized and then initialized again later, is the offset calibration lost in the meantime?
  5. My mode of operation is such that SAADC is uninitialized most of the time (uninit of the driver is called). Once in a while, SAADC is initialized and 4 channels are initialized, sampled, and deinitialized in a row (BURST, x64 oversampling, one channel at a time; SCAN+DMA cannot be used since channels occasionally get swapped). I have extended this model by requesting offset calibration after SAADC is initialized, but before any channel is initialized. Calibration seems to take about 4 ms, and after the callback for it arrives, the next conversion start returns the failure code. No nrf_drv_saadc functions are called from within the callback (since it runs in an ISR and nrf_drv_saadc does not look very safe to use from one); they are run later from regular context. Is there anything especially wrong in this process? I have tried sprinkling "aborts" here and there, but nothing really changes much. A sketch of the sequence is below this list.
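
A minimal sketch of the sequence described in point 5, assuming the SDK 12.x nrf_drv_saadc API (AIN0, the busy-wait loops, and the function names are illustrative; in the real code the flags are consumed from thread context after a FreeRTOS reschedule):

    #include "nrf_drv_saadc.h"
    #include "app_error.h"

    static volatile bool m_cal_done;
    static volatile bool m_seq_done;

    static void saadc_cb(nrf_drv_saadc_evt_t const * p_evt)
    {
        /* Flags only; no nrf_drv_saadc calls from this ISR context. */
        if (p_evt->type == NRF_DRV_SAADC_EVT_CALIBRATEDONE)
            m_cal_done = true;
        else if (p_evt->type == NRF_DRV_SAADC_EVT_DONE)
            m_seq_done = true;
    }

    void acquire_one_channel(nrf_saadc_value_t * p_result)
    {
        nrf_drv_saadc_config_t cfg = NRF_DRV_SAADC_DEFAULT_CONFIG;
        cfg.oversample = NRF_SAADC_OVERSAMPLE_64X;           /* x64 oversampling */
        APP_ERROR_CHECK(nrf_drv_saadc_init(&cfg, saadc_cb));

        /* Offset calibration after driver init, before any channel init. */
        m_cal_done = false;
        APP_ERROR_CHECK(nrf_drv_saadc_calibrate_offset());
        while (!m_cal_done) { /* yield in the real application */ }

        nrf_drv_saadc_channel_config_t ch =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0);
        ch.burst = NRF_SAADC_BURST_ENABLED;   /* one trigger -> 64 conversions */
        APP_ERROR_CHECK(nrf_drv_saadc_channel_init(0, &ch));

        m_seq_done = false;
        APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(p_result, 1));
        APP_ERROR_CHECK(nrf_drv_saadc_sample());  /* the failure appears here after calibration */
        while (!m_seq_done) { }

        APP_ERROR_CHECK(nrf_drv_saadc_channel_uninit(0));
        nrf_drv_saadc_uninit();
    }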

Driver versions tried: 12.1 (the SDK version we actually use), and I have backported all of the "bugfixes" targeting the SAADC driver up to 14.1, but the issue remains.

Without calibration, sampling works without issues (without SCAN).

Hints appreciated & br.

  • Hi,

    1. It is recommended to at least do calibration on startup and on major temperature changes. How often you should calibrate will depend on your application needs, but every 1-2 degrees of change should be ok.

    2. This will be chip dependent: your samples will be offset by the offset error in each reading. How this affects your readings compared to the ideal value will depend on the other error sources. I would recommend that you input 0 V to the SAADC, take some samples, do offset calibration, and take new samples. Then compare the samples from before and after calibration to see the offset error (a rough sketch of this is shown after this list).

    3. The time used by offset calibration depends on the configured acquisition time. We do not have any values available for this in the PS, but you can find some typical values in the table on this GitHub page.

    4. The calibration is valid until chip reset.

    5. What do you mean by "SCAN+DMA cannot be used since channels are swapped occasionally"? What failure code is returned? The low-power SAADC example on our GitHub shows how to do offset calibration with an SAADC that is turned off between samples to save power.
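
    For point 2, a minimal sketch of that measurement, assuming SDK 12.x nrf_drv_saadc, a channel already initialized on an input tied to 0 V, and an event handler that sets the m_done/m_cal_done flags (all names illustrative):

        static volatile bool m_done, m_cal_done;  /* set in the SAADC event handler */

        /* Average n single conversions of the initialized channel. */
        static int32_t sample_avg(uint16_t n)
        {
            int32_t sum = 0;
            nrf_saadc_value_t v;
            for (uint16_t i = 0; i < n; i++)
            {
                m_done = false;
                APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(&v, 1));
                APP_ERROR_CHECK(nrf_drv_saadc_sample());
                while (!m_done) { }
                sum += v;
            }
            return sum / n;
        }

        void measure_offset_error(void)
        {
            int32_t before = sample_avg(32);

            m_cal_done = false;
            APP_ERROR_CHECK(nrf_drv_saadc_calibrate_offset());
            while (!m_cal_done) { }

            int32_t after = sample_avg(32);
            /* (before - after) approximates the offset error in LSB. */
        }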

    Let me know if you have some follow-up questions or comments.

    Best regards,

    Jørgen

  • Thanks for the prompt reply, much appreciated!

    I'm working on something else for a while and hope to return to this issue later. As we can't easily feed external reference voltages with our design, I will try the example code on a devkit and attempt to recreate the problem case there (in a way that allows code sharing).

    The example code (referred to in your answer) only calls saadc_init once, however (it does not call nrf_drv_saadc_uninit between sampling runs, which is what I'm doing). The reason I do init+uninit is to avoid extra power consumption. The SDK API documentation does not say whether having nrf_drv_saadc_init called has a power impact, so it felt safer to call nrf_drv_saadc_uninit after each acquisition run. If this is not needed (i.e. there is no power impact from calling _init only once at startup), that would be nice to know.

    The "SCAN+DMA" issue is not really related to the offset calibration issue, sorry for the confusion. It was the ideal mode of operation that we used for acquisition in general, but had to switch to single channel acquisition with BURST since with SCAN+DMA the channels sometimes get swapped (noticed that other people have had similar issue in some cases, but there was no answer that worked for us). But, this is a separate issue and not related to the offset calibration.

    Will report back later (perhaps in a week or two).

    br, ak.

  • You are correct, it does not call uninit, but it uses the low_power configuration mode introduced in SDK 12, which will stop the SAADC between samples to limit the EasyDMA current. This works mostly in the same way as your uninit/init method.
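
    A minimal sketch of enabling that mode, assuming the SDK 12.x driver config (the saadc_event_handler name is illustrative):

        nrf_drv_saadc_config_t cfg = NRF_DRV_SAADC_DEFAULT_CONFIG;
        cfg.low_power_mode = true;  /* driver stops SAADC/EasyDMA between conversions */
        APP_ERROR_CHECK(nrf_drv_saadc_init(&cfg, saadc_event_handler));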

    If you have not already seen it, please have a look at this thread about the issue with SCAN mode.

  • Hello again. Ah, this is good to know (re low_power). The impact/operation of the low_power mode is not documented in the SDK API reference, and I've mostly been using the defaults in NRF_DRV_SAADC_DEFAULT_CONFIG, so it has been off so far.

    As to the SCAN mode issue, I've seen that answer, but sadly it doesn't seem to apply to our situation. I don't start new sampling from the callback but from regular context after a FreeRTOS reschedule (some microseconds later), so we should be safe from that particular race. Perhaps I'll make an isolated repro case for this later as well, but at the moment it's unlikely we'll switch back to that model: there is the t_acq >= 10 us anomaly with SCAN, and no BURST support in that mode, so SAMPLE has to be driven via PPI (a sketch of that setup follows, for reference), and with safe timing margins this results in much longer total sampling times than we now get with BURST + hardware oversampling. We do now get larger temporal de-correlation between samples across channels than we'd like, but perhaps that is not a big issue for our current application.
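
    For reference, a minimal sketch of the PPI-driven SAMPLE setup mentioned above, assuming SDK 12.x nrf_drv_timer/nrf_drv_ppi (TIMER1 and the 400 us interval are illustrative):

        #include "nrf_drv_ppi.h"
        #include "nrf_drv_timer.h"
        #include "nrf_drv_saadc.h"
        #include "app_error.h"

        static const nrf_drv_timer_t m_timer = NRF_DRV_TIMER_INSTANCE(1);
        static nrf_ppi_channel_t     m_ppi_ch;

        static void timer_cb(nrf_timer_event_t event_type, void * p_context) { }

        void saadc_ppi_sampling_init(void)
        {
            nrf_drv_timer_config_t tcfg = NRF_DRV_TIMER_DEFAULT_CONFIG;
            APP_ERROR_CHECK(nrf_drv_timer_init(&m_timer, &tcfg, timer_cb));

            /* Compare event every 400 us, counter cleared on match (margin over t_acq). */
            uint32_t ticks = nrf_drv_timer_us_to_ticks(&m_timer, 400);
            nrf_drv_timer_extended_compare(&m_timer, NRF_TIMER_CC_CHANNEL0, ticks,
                                           NRF_TIMER_SHORT_COMPARE0_CLEAR_MASK, false);

            /* Route the timer compare event to the SAADC SAMPLE task. */
            APP_ERROR_CHECK(nrf_drv_ppi_init());
            APP_ERROR_CHECK(nrf_drv_ppi_channel_alloc(&m_ppi_ch));
            APP_ERROR_CHECK(nrf_drv_ppi_channel_assign(
                m_ppi_ch,
                nrf_drv_timer_compare_event_address_get(&m_timer, NRF_TIMER_CC_CHANNEL0),
                nrf_drv_saadc_sample_task_get()));
            APP_ERROR_CHECK(nrf_drv_ppi_channel_enable(m_ppi_ch));

            nrf_drv_timer_enable(&m_timer);
        }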

  • Ok, let me know when you have had time to look at this again. By the way, oversampling will give correct results when combined with SCAN mode if BURST is enabled for all channels, as described in this thread.
