
ADC sampling frequency 2.5kHz - limitations

There have been several topics on the limitations of the ADC sampling rate while a BLE connection is active.

However, I didn't see notes applicable to the following case:

nRF52832, sampling rate 2500 Hz (0.4 ms period). I am using one channel only and a timer event to start the sampling, with no EasyDMA and no double-buffering. In this case, should I expect any limitations on the sampling rate due to the handling of BLE communication? Also, there has been advice to stop the radio while sampling for cleaner ADC output. Is that applicable at such a sampling rate, and where can I find instructions or an example of how to implement it?

Thanks

  • Hey kont40,

    If you do not use EasyDMA and double buffering, you will get an unknown penalty on the sample rate, because the SoftDevice has the highest execution priority during BLE events. With DMA, the execution priority is irrelevant, since you can process the samples at a later stage when the CPU is available.
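    A minimal sketch of that approach, assuming the nRF5 SDK 14 nrf_drv_saadc driver; the analog input, buffer size, and handler names are illustrative assumptions, not taken from this thread:

    ```c
    /* SAADC with EasyDMA and double buffering (nRF5 SDK 14, nrf_drv_saadc).
     * Analog input and buffer size are illustrative assumptions. */
    #include "nrf_drv_saadc.h"
    #include "app_error.h"

    #define SAMPLES_PER_BUFFER 32   /* assumed buffer size */

    static nrf_saadc_value_t m_buffers[2][SAMPLES_PER_BUFFER];

    static void saadc_handler(nrf_drv_saadc_evt_t const * p_event)
    {
        if (p_event->type == NRF_DRV_SAADC_EVT_DONE)
        {
            /* Samples are already in RAM via EasyDMA, so they can be
             * processed whenever the CPU becomes available. */
            // process(p_event->data.done.p_buffer, p_event->data.done.size);

            /* Hand the finished buffer back so sampling never stalls. */
            APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(p_event->data.done.p_buffer,
                                                         SAMPLES_PER_BUFFER));
        }
    }

    void saadc_init(void)
    {
        nrf_saadc_channel_config_t ch =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0); /* assumed pin */

        APP_ERROR_CHECK(nrf_drv_saadc_init(NULL, saadc_handler));
        APP_ERROR_CHECK(nrf_drv_saadc_channel_init(0, &ch));

        /* Queue both buffers; the driver swaps them automatically, so the
         * SAADC keeps sampling while the application drains the other one. */
        APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(m_buffers[0], SAMPLES_PER_BUFFER));
        APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(m_buffers[1], SAMPLES_PER_BUFFER));
    }
    ```

    Each conversion would still be triggered per sample, e.g. by calling nrf_drv_saadc_sample() from a timer handler, or by a PPI channel from a TIMER compare event to the SAADC SAMPLE task.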

    You cannot halt the radio while the SoftDevice is active, but you can use the Radio Timeslot API to schedule tasks that are guaranteed to execute when the SoftDevice is not using the radio. 
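    A minimal sketch of requesting such a timeslot through the Radio Timeslot API (nrf_soc.h); the timeslot length, timeout, and the work done inside the slot are illustrative assumptions:

    ```c
    #include "nrf_soc.h"
    #include "app_error.h"

    static nrf_radio_request_t                      m_request;
    static nrf_radio_signal_callback_return_param_t m_return;

    /* Runs at the highest interrupt priority; keep it short. */
    static nrf_radio_signal_callback_return_param_t *
    timeslot_callback(uint8_t signal_type)
    {
        switch (signal_type)
        {
            case NRF_RADIO_CALLBACK_SIGNAL_TYPE_START:
                /* From here until the slot ends, the SoftDevice is
                 * guaranteed not to use the radio: sample the SAADC now. */
                m_return.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_END;
                break;

            default:
                m_return.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_NONE;
                break;
        }
        return &m_return;
    }

    void timeslot_start(void)
    {
        APP_ERROR_CHECK(sd_radio_session_open(timeslot_callback));

        m_request.request_type               = NRF_RADIO_REQ_TYPE_EARLIEST;
        m_request.params.earliest.hfclk      = NRF_RADIO_HFCLK_CFG_NO_GUARANTEE;
        m_request.params.earliest.priority   = NRF_RADIO_PRIORITY_NORMAL;
        m_request.params.earliest.length_us  = 5000;    /* assumed slot length */
        m_request.params.earliest.timeout_us = 100000;  /* assumed scheduling window */

        APP_ERROR_CHECK(sd_radio_request(&m_request));
    }
    ```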

    Cheers,

    Håkon.

  • Dear haakonsh,

    Thanks for the reply. I apologize for the long message below.

     

    1) Above I notice an inaccuracy both in my initial statement and in the comment that follows it: we cannot use the SAADC without EasyDMA. The approach to saving energy is instead to initialize/uninitialize the ADC every time before and after sampling, which is what I am doing.

     

    2) As to the more important question about sampling-frequency accuracy and the related latency when sampling is started from a timer event, below I express my understanding:

     

    If we set the sampling-frequency timer (e.g. TIMER1) and the SAADC interrupt priority to level 2, then we only experience latency from SoftDevice interrupts at levels 0 and 1, since the TIMER1 interrupt cannot preempt those SoftDevice interrupts.

    Given that, the only choice is to use EasyDMA, initialize it each time in the TIMER1 interrupt before starting the sampling, and uninitialize it each time in the SAADC event handler after reading the current sample. In that scheme we do not need double buffering; a single buffer holding only the current sample is sufficient (see the sketch below).
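    A minimal sketch of that single-sample scheme, again assuming the SDK 14 nrf_drv_saadc driver (whose interrupt priority is set via SAADC_CONFIG_IRQ_PRIORITY in sdk_config.h); handler names are illustrative, and the per-sample init/uninit power optimization is omitted for brevity:

    ```c
    #include "nrf_drv_saadc.h"
    #include "app_error.h"

    static nrf_saadc_value_t m_sample;   /* single-sample EasyDMA buffer */

    static void saadc_handler(nrf_drv_saadc_evt_t const * p_event)
    {
        if (p_event->type == NRF_DRV_SAADC_EVT_DONE)
        {
            nrf_saadc_value_t value = p_event->data.done.p_buffer[0];
            // process(value);  /* must finish before the next TIMER1 tick */

            /* Re-arm the one-sample buffer for the next conversion. */
            APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(&m_sample, 1));
        }
    }

    void saadc_single_sample_init(void)
    {
        nrf_saadc_channel_config_t ch =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0); /* assumed pin */

        APP_ERROR_CHECK(nrf_drv_saadc_init(NULL, saadc_handler));
        APP_ERROR_CHECK(nrf_drv_saadc_channel_init(0, &ch));
        APP_ERROR_CHECK(nrf_drv_saadc_buffer_convert(&m_sample, 1));
    }

    /* Called from the TIMER1 compare interrupt at the chosen sample rate. */
    void timer1_handler(void)
    {
        /* Triggers the SAADC SAMPLE task; the result lands in m_sample via
         * EasyDMA, and saadc_handler runs when the conversion is done. */
        APP_ERROR_CHECK(nrf_drv_saadc_sample());
    }
    ```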

     

    Assume the device acts as a peripheral and is already connected.

     

    Then we consider SoftDevice-induced latency as described here:

    http://infocenter.nordicsemi.com/index.jsp?topic=%2Fcom.nordic.infocenter.s132.sds%2Fdita%2Fsoftdevices%2Fs130%2Fprocessor_avail_interrupt_latency%2Fexception_mgmt_sd.html

     

    tISR(4) should not take place, since an interrupt of level 4 would not preempt the timer/SAADC interrupts at levels 2 or 3. Thus, in the worst case, the latency before sampling starts will be approx. 178 µs (not 258 µs). As for reading the data from the SAADC, the latency of reading the result is not important, provided we read it before the next sampling instant.

     

    Once TIMER1 is initialized in timer mode, a one-time latency does not affect the timing accuracy of subsequent interrupt events. This implies that, for some applications, the effect of those occasional one-time latencies of 178 µs can be ignored.

     

    Also, SoftDevice interrupts at levels 0 and 1 will interrupt the application at periods equal to the connection interval. With an SAADC sampling frequency of 2000 Hz, the sampling period is 500 µs. If the connection interval is set to 10 ms (10 000 µs), then one out of every 20 samples will experience the SAADC latency.

    During a connection event, the number of interrupts is determined by the total length of the data to send and by MAX_PACKET_PAYLOAD_SIZE. For a given amount of data, a larger MAX_PACKET_PAYLOAD_SIZE means fewer SoftDevice interruptions during the connection event; in the ideal case, when only one packet has to be sent, no additional SoftDevice interrupts take place in the connection event. Consequently, if the peripheral has only a single packet to send, the length of that packet has no influence on the SoftDevice latency.

    Please, confirm whether above statements are correct.

     

    3) Measures to avoid the SoftDevice latency of 178 µs:

    If we succeed in synchronizing the TIMER1 events with the connection events, the start of sampling could happen before a connection event takes place, which would be a way to completely avoid the influence of the SoftDevice latencies on the sampling start time. Is this correct? If so, how can it be implemented technically? I am missing an appropriate example (SDK 14.0.0, SoftDevice v5.0).

    Also, "when the SoftDevice is not using the radio" (i.e. a timeslot) should be intervals not only between two subsequent connection events, but also between sending subsequent packets during a connection event, or other states of the BLE protocol. So what we need is to tie the TIMER1 events (start of sampling) to happen at a fixed interval before a connection event when such an event is upcoming. Is there a simple way to achieve that without using Radio Timeslot API (e.g., upon establishing a connection to detect the time of 1 ms before a connection event occurrence and start TIMER1 at that time) ?

    Please, suggest.  

    If we have a way to detect when a sample has been taken with a SoftDevice-induced latency, we could apply some corrective measure to that sample. So, is there a way to detect when latency occurs during sampling?

  • There are also the Radio Notification events from the SoftDevice, which send an event to the application before a radio event occurs.

    See the Radio Notification Event Handler API reference.
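    A minimal sketch of enabling those notifications, assuming the sd_radio_notification_cfg_set() call from nrf_soc.h and the SWI1 interrupt that radio notifications use on nRF52; the 800 µs distance, priority, and handler body are illustrative assumptions:

    ```c
    #include "nrf_soc.h"
    #include "nrf_nvic.h"
    #include "app_error.h"

    void radio_notification_init(void)
    {
        /* Ask the SoftDevice to raise SWI1 ~800 us before each radio event. */
        APP_ERROR_CHECK(sd_radio_notification_cfg_set(
            NRF_RADIO_NOTIFICATION_TYPE_INT_ON_ACTIVE,
            NRF_RADIO_NOTIFICATION_DISTANCE_800US));

        APP_ERROR_CHECK(sd_nvic_SetPriority(SWI1_EGU1_IRQn, 3)); /* assumed app priority */
        APP_ERROR_CHECK(sd_nvic_EnableIRQ(SWI1_EGU1_IRQn));
    }

    /* Fires ~800 us before the radio becomes active, e.g. before each
     * connection event: a possible place to (re)align TIMER1 or to flag
     * the upcoming samples as potentially delayed. */
    void SWI1_EGU1_IRQHandler(void)
    {
        // align_timer1_or_flag_samples();   /* hypothetical helper */
    }
    ```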
