
Delay vs. event-driven approach

I was wondering whether it is safe to use a delay (such as the one implemented by nrf_delay_us) to, for example, delay an iteration of a while loop, or whether it is always better to use timers or an event-driven approach to achieve the same thing. On the one hand, I don't want to introduce too many timers, but on the other hand I don't want the delay to negatively impact the behaviour of the rest of the application.

  • Using delays is not a good idea: it is imprecise and an extremely current-intensive way of writing code on most CPUs. You don't write which chip you're working with, but on the nRF51822, for instance, busy-waiting in a delay loop draws ~4.5 mA, while starting an RTC timer (for instance via app_timer) and then going to sleep draws ~3 µA. That is roughly a 1000-fold difference, which in most cases should be reason enough on its own to use the timer approach.

    Additionally, any interrupts that fire while the application sits in a delay loop will significantly affect the timing. If, for instance, a softdevice interrupt, which can block for several milliseconds, arrives while the application hangs in a delay loop, the delay will be extended by that same amount of time. The delay functions themselves are also not necessarily very accurate in the first place, and may introduce significant inaccuracy on their own.

    In summary, I'd strongly recommend not using delay loops in production code; instead, use the RTC-based timers (for instance the app_timer library, as sketched below) for all time-dependent tasks. Even though the numbers above are specific to the nRF51822, the general argument applies to most modern microcontrollers.
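
    For reference, here is a minimal sketch of that timer-based pattern using the nRF5 SDK's app_timer module. This assumes a fairly recent SDK where app_timer_init() and APP_TIMER_TICKS() take no prescaler argument, that a softdevice is enabled (so the LFCLK is running and sd_app_evt_wait() is available), and that softdevice initialisation is done elsewhere; the 500 ms interval and the handler name are placeholders, not taken from the question.

      #include "app_timer.h"
      #include "app_error.h"
      #include "nrf_soc.h"

      APP_TIMER_DEF(m_periodic_timer_id);

      // Runs each time the RTC-based timer expires; put the work that the
      // delayed while-loop body used to do here.
      static void periodic_timeout_handler(void * p_context)
      {
      }

      static void timers_init(void)
      {
          ret_code_t err_code = app_timer_init();
          APP_ERROR_CHECK(err_code);

          err_code = app_timer_create(&m_periodic_timer_id,
                                      APP_TIMER_MODE_REPEATED,
                                      periodic_timeout_handler);
          APP_ERROR_CHECK(err_code);

          // Fire every 500 ms (interval chosen only for illustration).
          err_code = app_timer_start(m_periodic_timer_id, APP_TIMER_TICKS(500), NULL);
          APP_ERROR_CHECK(err_code);
      }

      int main(void)
      {
          timers_init();

          for (;;)
          {
              // Sleep until an interrupt/event wakes the CPU instead of busy-waiting.
              sd_app_evt_wait();
          }
      }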
