I have taken the basic ble_app_hrs_with_timeslot example and modified it to measure the timing between successive calls to the timeslot START signal.
Specifically:
- take note of the current time when the START signal fires:
case NRF_RADIO_CALLBACK_SIGNAL_TYPE_START: app_timer_cnt_get(&EVENT_TICKS);
- near the end of the timeslot (on TIMER0 expiry), pend a software interrupt so the main context can run:
NVIC_SetPendingIRQ(SWI3_IRQn); // set this interrupt if you want code execution to proceed in the main thread after timeslot execution
- within the main context, print the difference between successive START timestamps over UART (a trimmed sketch of all of this follows below).
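For reference, here is roughly what the relevant code looks like (trimmed and simplified; EVENT_TICKS/PREV_EVENT_TICKS are my own globals, m_timeslot_request is the request struct the example already uses, and printf stands in for my UART output):

#include <stdint.h>
#include <stdio.h>
#include "nrf.h"
#include "nrf_soc.h"
#include "app_timer.h"

static volatile uint32_t EVENT_TICKS;      // RTC1 ticks captured at this timeslot's START
static volatile uint32_t PREV_EVENT_TICKS; // ticks captured at the previous START

static nrf_radio_signal_callback_return_param_t m_return_param;
extern nrf_radio_request_t m_timeslot_request; // defined elsewhere in the example

nrf_radio_signal_callback_return_param_t * radio_callback(uint8_t signal_type)
{
    switch (signal_type)
    {
        case NRF_RADIO_CALLBACK_SIGNAL_TYPE_START:
            // Timestamp the start of this timeslot
            app_timer_cnt_get((uint32_t *)&EVENT_TICKS);
            m_return_param.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_NONE;
            break;

        case NRF_RADIO_CALLBACK_SIGNAL_TYPE_TIMER0:
            // End of the slot: hand off to the main context and request the next slot
            NVIC_SetPendingIRQ(SWI3_IRQn);
            m_return_param.params.request.p_next = &m_timeslot_request;
            m_return_param.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_REQUEST_AND_END;
            break;

        default:
            m_return_param.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_NONE;
            break;
    }
    return &m_return_param;
}

void SWI3_IRQHandler(void)
{
    uint32_t ticks_diff;

    // Ticks elapsed between the two most recent STARTs (handles RTC wrap-around)
    app_timer_cnt_diff_compute(EVENT_TICKS, PREV_EVENT_TICKS, &ticks_diff);
    PREV_EVENT_TICKS = EVENT_TICKS;

    // RTC1 runs at 32768 Hz with APP_TIMER_PRESCALER = 0, so ticks -> ms:
    printf("%lu\r\n", (unsigned long)(((uint64_t)ticks_diff * 1000) / 32768));
}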
The results are not terribly accurate: with "distance_us" set to 1000 ms, I get the following intervals (in ms):
997, 1004, 1024, 991, 999, 1024, 1024, 1003.
From other posts here, I understand the timing of the timeslot API to be quite accurate (on the order of microseconds). That said, since I'm calculating the difference between successive STARTs, I'm not sure which of my assumptions is making the measurement look this inaccurate.
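For scale, assuming APP_TIMER_PRESCALER = 0 as in the stock example, one app_timer tick is 1/32768 s, about 30.5 us, so tick granularity alone can't explain errors of tens of ms. A quick host-side sanity check of that arithmetic (ticks_to_us is a hypothetical helper, just to show the conversion):

#include <stdint.h>
#include <stdio.h>

// Hypothetical helper: convert app_timer ticks to microseconds,
// assuming RTC1 at 32768 Hz with APP_TIMER_PRESCALER = 0.
static uint32_t ticks_to_us(uint32_t ticks)
{
    return (uint32_t)(((uint64_t)ticks * 1000000) / 32768);
}

int main(void)
{
    printf("1 tick      = %lu us\n", (unsigned long)ticks_to_us(1));     // ~30 us
    printf("32768 ticks = %lu us\n", (unsigned long)ticks_to_us(32768)); // exactly 1 s
    // A 24 ms deviation (e.g. the 1024 ms readings above) corresponds to:
    printf("24 ms       = %lu ticks\n", (unsigned long)(24UL * 32768 / 1000)); // ~786 ticks
    return 0;
}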
Q: Is the timeslot timing actually more accurate than my example shows (i.e., is the error coming from other latencies in my measurement path)?
or
Q: Are there ways to make the measurement more accurate?
Note: I am using the evaluation kit (PCA10001) and SoftDevice 7.1.
Appreciate any insight you might have.
Thanks