nRF5340 SAADC TASKS_CALIBRATEOFFSET calibration time missing in the product specification

Hello, the nRF5340's SAADC has a temperature-dependent offset according to the product specification. The offset can be measured via the TASKS_CALIBRATEOFFSET task (I suppose the resulting correction is applied automatically within the SAADC).

I could not find any information about the calibration time in the product specification (v1.5). Did I overlook it, or search in the wrong place? There is information about acquisition time and conversion time, but not about offset calibration time.

Is there any information available? If it is already in the product specification, please let me know where. Otherwise, please let me know here and, if possible, include it in the product specification (also for the nRF54 series). This information is important for modelling the delays in real-time systems during a re-calibration cycle.

Is the offset calibration time static or is it dynamic with a worst case upper limit?

One more short question regarding the input voltage: is it safe to apply voltages that exceed the full-scale reference voltage to an analog input pin? E.g., the internal reference voltage and gain are configured for a 2.4 V full-scale range, and 3.0 V is applied at the pin. My understanding from the Absolute Maximum Ratings section is that this scenario is physically safe for the SoC (the AIN pins have the same limits as other pins).

Best regards,
Michael

  • Hi Michael,

    I haven't heard from the team yet, but out of curiosity, what is your concern with the time here?

    Is it sufficient for you to wait for the calibration-completed event, or do you need to plan with strict timing?

    Best regards,
    Hieu

  • Hi Hieu,

    We are using the ADC for a regulation loop, possibly running for a long time without being idle or performing a power cycle. If we enable re-calibration during normal operation, even if it happens very rarely, we have to take the delay into account as a worst-case scenario for our real-time system.

    Let's say, if the self-calibration time (including acquisition time, if there is any for self-calibration) is below ~20 µs, it would be perfect. If it is below 500-800 µs, it would be acceptable. If it is a couple of ms, we might have to take extra measures to allow the customer to decide between re-calibration and low-latency operation.

    So, the best case would be if the calibration interval fit into our real-time time slices. But it would also be possible to wait for the completion event (skipping measurements over multiple time slices). What we need is a worst-case value that we can use in our calculations and algorithms.

    By the way, does the ADC's offset also drift over time? Or only over temperature? The PS says it changes over temperature. Wouldn't it then be possible to measure temperature-specific offsets beforehand (during SoC production) and program compensation parameters into the SoC already?

    Best regards,
    Michael

  • Hi Michael,

    I understand the concern now. I am still waiting for a reply from the relevant team.
    You will be updated as soon as I hear anything.

    Best regards,

    Hieu

  • Hi Michael,

    My apologies for the long wait.

    The calibration time can be calculated with this equation:

    (tCONV + tACQ) * 2^(OVERSAMPLE) * 6

    There will also be a few clock cycles from when the calibration is completed to when the event is generated.

    If the SAADC is not already powered up, then you also need to add tPWRUP.
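
    For illustration only (example values assumed by me, not taken from the PS): with tACQ = 3 µs, tCONV ≈ 2 µs, and an OVERSAMPLE register value of 4 (i.e. 16x oversampling), the formula gives (2 µs + 3 µs) * 2^4 * 6 = 480 µs.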

    Best regards,

    Hieu

  • Hi Hieu, thanks a lot for looking into this topic! This really helps.

    I have two questions left:

    - Which tACQ is used? The one I select in the CONFIG register? (Would the calibration process then depend on my external circuit, which connects the analog input through a certain impedance?) And if the acquisition time is based on the configuration register, are there any requirements or suggestions on how to set it for optimal calibration?

    - Is OVERSAMPLE the integer value that is written into the OVERSAMPLE register, or the actual oversampling factor, which is 2^(register value)? I think the former would make sense, considering your formula.

    In short, the calibration time should be around six times the usual sampling time, assuming that the tACQ from the configuration register is used...

  • Hi Michael,

    puz_md said:
    - Which tACQ is used? The one I select in the CONFIG register? (Would the calibration process then depend on my external circuit, which connects the analog input through a certain impedance?) And if the acquisition time is based on the configuration register, are there any requirements or suggestions on how to set it for optimal calibration?

    The value selected in the CONFIG register is used, yes. For calibration, you could temporarily set TACQ to 3 µs. If you do so, it should be reconfigured for normal ADC operation afterwards, based on your input impedance, as you already know (see the sketch at the end of this reply).

    puz_md said:
    - Is OVERSAMPLE the integer value that is written into the OVERSAMPLE register, or the actual oversampling factor, which is 2^(register value)? I think the former would make sense, considering your formula.

    Please use the integer OVERSAMPLE value from the OVERSAMPLE register.

    puz_md said:
    In short, the calibration time should be around six times the usual sampling time, assuming that the tACQ from the configuration register is used...

    This is basically true, yes.
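
    If it helps for planning, below is a minimal bare-metal sketch of such a re-calibration cycle. It is a sketch only: it assumes direct register access with the Nordic MDK bitfield names, uses CH[0] purely as an example channel, and should be checked against your SDK headers and the secure/non-secure instance mapping on the nRF5340. The nrfx driver also provides nrfx_saadc_offset_calibrate() if you prefer the driver API.

        #include <nrf.h>  /* device header providing the NRF_SAADC register map */

        /* Re-calibrate the SAADC offset using a short 3 us acquisition time,
         * then restore the previous channel configuration. */
        static void saadc_recalibrate(void)
        {
            uint32_t config = NRF_SAADC->CH[0].CONFIG;  /* save current config */

            /* Temporarily select TACQ = 3 us for calibration. */
            NRF_SAADC->CH[0].CONFIG =
                (config & ~SAADC_CH_CONFIG_TACQ_Msk) |
                (SAADC_CH_CONFIG_TACQ_3us << SAADC_CH_CONFIG_TACQ_Pos);

            /* Trigger offset calibration and wait for the done event. The wait
             * is bounded by (tCONV + tACQ) * 2^OVERSAMPLE * 6 plus a few clock
             * cycles, per the formula above. */
            NRF_SAADC->EVENTS_CALIBRATEDONE = 0;
            NRF_SAADC->TASKS_CALIBRATEOFFSET = 1;
            while (NRF_SAADC->EVENTS_CALIBRATEDONE == 0)
            {
                /* Busy-wait; a real-time system would use the IRQ instead. */
            }
            NRF_SAADC->EVENTS_CALIBRATEDONE = 0;

            /* Restore the original acquisition time for normal sampling. */
            NRF_SAADC->CH[0].CONFIG = config;
        }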

  • Thanks a lot for the detailed information!
