<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://devzone.nordicsemi.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Delay vs. event-driven approach</title><link>https://devzone.nordicsemi.com/f/nordic-q-a/1811/delay-vs-event-driven-approach</link><description>I was wondering whether it is safe to use a delay (such as implemented by nrf_delay_us) to, for example, delay an iteration of a while loop, or whether it is always better to use timers or an event-driven approach to achieve the same. On the one hand, I</description><dc:language>en-US</dc:language><generator>Telligent Community 13</generator><lastBuildDate>Fri, 22 Aug 2014 08:23:10 GMT</lastBuildDate><atom:link rel="self" type="application/rss+xml" href="https://devzone.nordicsemi.com/f/nordic-q-a/1811/delay-vs-event-driven-approach" /><item><title>RE: Delay vs. event-driven approach</title><link>https://devzone.nordicsemi.com/thread/7870?ContentTypeID=1</link><pubDate>Fri, 22 Aug 2014 08:23:10 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:c6679986-e88a-4e9e-8814-4640bf79903e</guid><dc:creator>Ricky</dc:creator><description>&lt;p&gt;good answer&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: Delay vs. event-driven approach</title><link>https://devzone.nordicsemi.com/thread/7869?ContentTypeID=1</link><pubDate>Thu, 06 Mar 2014 15:37:18 GMT</pubDate><guid isPermaLink="false">137ad170-7792-4731-bb38-c0d22fbe4515:ca91626b-6b06-43ee-8f0f-a5e7e6e673c1</guid><dc:creator>Ole Morten</dc:creator><description>&lt;p&gt;Using delays is not a good idea, as it is an imprecise and extremely current-intensive way of writing code for most CPUs.
You don&amp;#39;t write which chip you&amp;#39;re working with, but on the nRF51822, for instance, a delay loop will consume ~4.5 mA, while starting an RTC timer (for instance via app_timer) and then going to sleep will consume ~3 µA. That&amp;#39;s a more than 1000-fold difference, which in most cases should be reason enough to use the timer approach.&lt;/p&gt;
&lt;p&gt;Additionally, any interrupts that happen while in a delay loop will significantly affect the timing. If, for instance, a softdevice interrupt fires while the application hangs in a delay loop, and that interrupt has a maximum blocking time of several milliseconds, the delay will be extended by the same amount of time. Also, the delay functions haven&amp;#39;t necessarily been made very accurate in the first place, and may in themselves introduce significant inaccuracy in the delay times.&lt;/p&gt;
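&lt;p&gt;As a rough sketch of the timer approach (assuming the nRF51 SDK&amp;#39;s app_timer module; the exact signatures and the APP_TIMER_PRESCALER setup vary between SDK versions, so treat the names below as illustrative rather than copy-paste ready):&lt;/p&gt;
&lt;pre&gt;// Sketch: periodic work via an RTC-based app_timer plus sleep,
// instead of busy-waiting in nrf_delay_us.
static app_timer_id_t m_tick_timer;    // timer handle (older SDK style)
static volatile bool  m_tick = false;  // set from the timer callback

static void tick_handler(void * p_context)
{
    m_tick = true;  // just flag the event; do the real work in the main loop
}

// In main(), after app_timer_init():
//   app_timer_create(&amp;amp;m_tick_timer, APP_TIMER_MODE_REPEATED, tick_handler);
//   app_timer_start(m_tick_timer,
//                   APP_TIMER_TICKS(100, APP_TIMER_PRESCALER), NULL);
//   for (;;)
//   {
//       sd_app_evt_wait();     // sleep until an event/interrupt occurs
//       if (m_tick)
//       {
//           m_tick = false;
//           do_work();          // hypothetical application work
//       }
//   }
&lt;/pre&gt;
&lt;p&gt;The key point is that the CPU sleeps in sd_app_evt_wait() between ticks instead of spinning, which is where the µA-versus-mA difference comes from.&lt;/p&gt;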
&lt;p&gt;In summary, I&amp;#39;d strongly recommend not using delay loops in production code, but instead using the RTC timers for all time-dependent tasks. Even though the numbers above apply to the nRF51822, the general arguments are the same for most modern microcontrollers.&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item></channel></rss>