Context: in my application, I need to know rather precisely when the SAADC is taking a sample. So I did a test and discovered something interesting.
I'm triggering the SAADC from a timer and measuring the total conversion time, from the end of the call to `nrf_drv_saadc_sample()` to the entry into `adc_event_handler()`. What I found was a constant overhead of 5 µs, except when SAADC_ACQTIME was set to 5 µs or 3 µs, in which case the overhead was 6 µs and 8 µs respectively.
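For reference, here is roughly how the measurement is set up (a minimal sketch, not my exact code; the pin number, the single-ended AIN0 channel, and the use of the legacy `nrf_drv_saadc` API are just for illustration):

```c
#include "nrf_drv_saadc.h"
#include "nrf_gpio.h"
#include "app_error.h"

#define TIMING_PIN      17   /* hypothetical scope pin, board-dependent */
#define SAMPLES_IN_BUF  1

static nrf_saadc_value_t m_buffer[SAMPLES_IN_BUF];

/* T1: pin goes low at entry into the event handler. */
static void adc_event_handler(nrf_drv_saadc_evt_t const * p_event)
{
    nrf_gpio_pin_clear(TIMING_PIN);

    if (p_event->type == NRF_DRV_SAADC_EVT_DONE)
    {
        /* Re-queue the buffer for the next conversion. */
        APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(p_event->data.done.p_buffer,
                                                     SAMPLES_IN_BUF));
    }
}

static void saadc_init(nrf_saadc_acqtime_t acq_time)
{
    nrf_saadc_channel_config_t ch_config =
        NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0);
    ch_config.acq_time = acq_time;   /* NRF_SAADC_ACQTIME_3US ... NRF_SAADC_ACQTIME_40US */

    APP_ERROR_CHECK(nrf_drv_saadc_init(NULL, adc_event_handler));
    APP_ERROR_CHECK(nrf_drv_saadc_channel_init(0, &ch_config));
    APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(m_buffer, SAMPLES_IN_BUF));

    nrf_gpio_cfg_output(TIMING_PIN);
}

/* Called from the timer handler. T0: pin goes high right after
 * nrf_drv_saadc_sample() returns, so the pulse width on the scope is T1 - T0. */
static void trigger_one_sample(void)
{
    APP_ERROR_CHECK(nrf_drv_saadc_sample());
    nrf_gpio_pin_set(TIMING_PIN);
}
```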
In the table below, T0 is the time at which the call to `nrf_drv_saadc_sample()` returned (thus initiating the conversion) and T1 is the time at which `adc_event_handler()` is called (measured by toggling a GPIO pin at each point and reading the pulse width on an oscilloscope):
| SAADC_ACQTIME | T1 - T0 |
| ------------- | ------- |
| 3 µs          | 11 µs   |
| 5 µs          | 11 µs   |
| 10 µs         | 15 µs   |
| 15 µs         | 20 µs   |
| 20 µs         | 25 µs   |
| 40 µs         | 45 µs   |
So my question: is there a reason that setting the acquisition time to NRF_SAADC_ACQTIME_3US or NRF_SAADC_ACQTIME_5US results in a longer overhead? Does the extra overhead happen before the ADC starts to sample, or after the sampling is complete?