
nrf_delay uses NOP(), but it is called from app_timer2.c and drv_rtc.c

Hi, 

I've read on these forums, and confirmed by looking at the machine instructions within the C code, that nrf_delay_us (which, incidentally, is what nrf_delay_ms is built on) is implemented by executing a series of NOP instructions. This is a problem for handling interrupts from other modules promptly, and frankly I'm not sure why it's so deeply integrated into the library.
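For context, here is a minimal sketch of the kind of busy-wait loop being described. This is not the SDK's actual code; the cycles-per-microsecond figure is an assumption for a 16 MHz Cortex-M core, and real implementations calibrate for loop overhead:

    #include <stdint.h>

    // Illustrative busy-wait delay, similar in spirit to nrf_delay_us().
    // The loop burns CPU cycles with NOPs; the iteration count is a
    // placeholder, not the SDK's calibrated value for any particular core.
    static void busy_wait_us(uint32_t us)
    {
        // Assumption: roughly 16 cycles per microsecond at 16 MHz.
        for (volatile uint32_t i = 0; i < us * 16u; i++)
        {
            __asm volatile ("nop");
        }
    }

While this loop runs, the calling context is stuck, which is exactly the blocking behavior at issue here.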

In our project we are using app_timer version 2, which uses drv_rtc.c and the nrf_delay module, for Bluetooth. We are also using a separate timer and are seeing its interrupt be affected by this: the interrupt is supposed to trigger every 1 us, but it actually only triggers every 11 us, I believe because of this NOP() delay.

My question is: What's the best way to get rid of the NOP-based nrf_delay_us implementation? Do I override it?

Thanks for the help

  • Hi,

    It is not possible to do anything sensible with interrupts every 1 us, and certainly not while doing much else. Can you explain in a bit more detail what you intend to do?

    Regarding NOP: app_timer uses the RTC; nrf_delay uses busy waiting with NOPs. That is OK for simple delays that only need to be at least a certain length, for instance during startup. But busy waits are blocking, they are unreliable in the sense that they take longer if an interrupt happens in the meantime, and if called from an interrupt handler they block all interrupts of the same or lower priority. So in short, as a general rule you should not use nrf_delay (or other forms of busy waiting) except in test code and in specific cases where you can see that it does not cause problems.

    So for the specific problem:

    My question is: What's the best way to get rid of the NOP-based nrf_delay_us implementation? Do I override it?

    I do not see a reason for replacing this in nrf_delay. It is intended to be a simple busy-wait function. If you want a different form of wait, you need to use an RTC (like app_timer) or a TIMER, and wait for the interrupt.
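    As a sketch of the interrupt-driven alternative: a single-shot app_timer that runs a handler when the RTC fires, instead of busy waiting. This uses the standard nRF5 SDK app_timer API; the timeout value and handler body are placeholders. Note that since one RTC tick is ~30.5 us, this cannot express the 5 us delay discussed later in this thread:

        #include "app_timer.h"
        #include "app_error.h"

        APP_TIMER_DEF(m_demo_timer);   // statically allocates the timer instance

        // Runs when the timeout expires; do the deferred work here
        // instead of busy waiting for it.
        static void demo_timeout_handler(void * p_context)
        {
        }

        static void demo_timer_setup(void)
        {
            // Assumes app_timer_init() has already been called at startup.
            ret_code_t err = app_timer_create(&m_demo_timer,
                                              APP_TIMER_MODE_SINGLE_SHOT,
                                              demo_timeout_handler);
            APP_ERROR_CHECK(err);

            // Wait ~100 ms without blocking; the RTC interrupt wakes us up.
            err = app_timer_start(m_demo_timer, APP_TIMER_TICKS(100), NULL);
            APP_ERROR_CHECK(err);
        }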

  • So in short, as a general rule you should not use nrf_delay (or other forms of busy waiting) except in test code and in specific cases where you can see that it does not cause problems.

    So if this is the guideline that Nordic is providing, then my question is: why is drv_rtc.c using nrf_delay? I see no way to avoid nrf_delay, even if you use the app_timer.

    I want a one-time 5 us delay, not a continuous 1 us period running for the entirety of the application, and that should be possible on this processor. But the NOP() instructions block the handling of that interrupt, because they are integrated into drv_rtc.c and drv_rtc is integrated into the app_timer.

    Regarding NOP: app_timer uses the RTC; nrf_delay uses busy waiting with NOPs. That is OK for simple delays that only need to be at least a certain length, for instance during startup.

    I agree. So why is drv_rtc.c using it in the middle of the application, and how do I avoid that?

  • I'd like to clarify my previous reply so you can see my problem:

    nrf_delay is used in the function drv_rtc.c::drv_rtc_windowed_compare_set, which is used by app_timer2.c's rtc_schedule, which in turn is called (indirectly) from the interrupt handler! This makes nrf_delay, transitively, a major part of the application, because it is called every time the RTC interrupts.

    Does that illustrate my issue with the driver? Every time the RTC interrupts, a bunch of NOP instructions block any other potential interrupts (and the app_timer relies on this interrupt). The call chain is sketched below.
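    To make that chain concrete, here is a comment-only skeleton of the path being described. The function names are from the SDK source; the flow is paraphrased from this discussion, not copied verbatim (app_timer typically runs on RTC1, since RTC0 is reserved by the SoftDevice):

        // RTC1 interrupt (drv_rtc.c)                 -- hardware RTC event fires
        //   -> app_timer2.c timeout processing
        //     -> rtc_schedule()                       -- picks the next timeout
        //       -> drv_rtc_windowed_compare_set()     -- programs an RTC CC register
        //            -> nrf_delay_us(...)             -- NOP busy wait when the
        //                                                deadline is too close to now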

  • Hi,

    nothingmuchyou said:
    So if this is the guideline that Nordic is providing, then my question is: why is drv_rtc.c using nrf_delay? I see no way to avoid nrf_delay, even if you use the app_timer.

    I should have clarified that. The app_timer (which drv_rtc.c is part of) is based on the RTC, which runs off the 32.768 kHz clock. Given the low frequency of that clock you cannot wait very short times with it (nothing shorter than one RTC tick, i.e. 1/32768 s ≈ 30.5 us). So the app_timer implementation has a special case for handling a timeout that is only 1 tick from now; a sketch of why that case needs a short busy wait follows below. There is no sensible way to implement this differently in the SDK. You could change it to use a TIMER instead, but that would require some changes. (That would also not be acceptable in the SDK context, as the app_timer should not depend on peripherals other than the RTC, but you are of course free to do it in your own project.)
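    As background for that special case: on the nRF5x RTC, writing a compare value of COUNTER or COUNTER + 1 is not guaranteed to generate a COMPARE event, so code that must fire "one tick from now" has to make sure the match is not silently missed. A heavily simplified illustration of that pattern follows; this is my own sketch, not the actual drv_rtc.c source:

        #include <stdint.h>
        #include "nrf.h"
        #include "nrf_delay.h"

        // Illustrative only: arm RTC1 CC[0] to fire 'ticks' from now.
        static void rtc_compare_in(uint32_t ticks)
        {
            uint32_t target = (NRF_RTC1->COUNTER + ticks) & 0x00FFFFFF; // 24-bit counter

            NRF_RTC1->EVENTS_COMPARE[0] = 0;
            NRF_RTC1->CC[0] = target;

            if (ticks < 2)
            {
                // Too close for a guaranteed COMPARE match: busy-wait about
                // one RTC tick (~31 us at 32.768 kHz) -- this is where the
                // nrf_delay_us() call discussed in this thread comes from --
                // then pend the interrupt by hand if the match was missed.
                nrf_delay_us(31);
                if (NRF_RTC1->EVENTS_COMPARE[0] == 0)
                {
                    NVIC_SetPendingIRQ(RTC1_IRQn);
                }
            }
        }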

    nothingmuchyou said:
    I want a one-time 5 us delay, not a continuous 1 us period running for the entirety of the application, and that should be possible on this processor. But the NOP() instructions block the handling of that interrupt, because they are integrated into drv_rtc.c and drv_rtc is integrated into the app_timer.

    I don't understand how the preprocessor can help you with this. You need some way of keeping track of time. If you don't use CPU cycles (like nrf_delay does), then you need to use either the RTC or a TIMER.

    nothingmuchyou said:
    I agree. So why is drv_rtc.c using it in the middle of the application, and how do I avoid that?

    There is no other sensible alternative in this case, as the app_timer is intended to be an RTC-based, low-power, low-frequency timer implementation.

    nothingmuchyou said:
    Does that illustrate my issue with the driver? Every time the RTC interrupts, a bunch of NOP instructions block any other potential interrupts (and the app_timer relies on this interrupt).

    I see your point, but I do not see a better general solution. In your specific use case it might be better to skip the app_timer altogether and use a TIMER (which runs at 16 MHz) to keep track of time. Then you get the granularity you want and can avoid this. In short, you are using the app_timer outside of what it has been designed for. (A sketch of the TIMER approach follows below.)
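    For reference, a minimal sketch of that suggestion, polling a TIMER instance directly via its registers. TIMER1 is chosen here because TIMER0 is reserved by the SoftDevice when Bluetooth is active; the polling style and 32-bit mode (available on nRF52) are illustrative choices, and an interrupt-driven version would enable INTENSET and do the work in TIMER1_IRQHandler instead:

        #include <stdint.h>
        #include "nrf.h"

        // One-shot microsecond wait on TIMER1 at 16 MHz (PRESCALER = 0).
        static void timer1_wait_us(uint32_t us)
        {
            NRF_TIMER1->MODE      = TIMER_MODE_MODE_Timer << TIMER_MODE_MODE_Pos;
            NRF_TIMER1->BITMODE   = TIMER_BITMODE_BITMODE_32Bit << TIMER_BITMODE_BITMODE_Pos;
            NRF_TIMER1->PRESCALER = 0;            // 16 MHz -> 16 ticks per us
            NRF_TIMER1->CC[0]     = us * 16u;

            NRF_TIMER1->EVENTS_COMPARE[0] = 0;
            NRF_TIMER1->TASKS_CLEAR = 1;
            NRF_TIMER1->TASKS_START = 1;

            while (NRF_TIMER1->EVENTS_COMPARE[0] == 0)
            {
                // Spin; an interrupt-driven variant would __WFE() or sleep here.
            }

            NRF_TIMER1->TASKS_STOP = 1;
        }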

  • Given the low frequency of that clock you cannot wait very short times with it (nothing shorter than one RTC tick). So the app_timer implementation has a special case for handling a timeout that is only 1 tick from now.

    I would argue that, given the frequency of the RTC, it shouldn't measure anything less than one period of its clock source. This NOP implementation indicates that you should pick a time source with more granularity for the app_timer if you anticipate that people will need it for that.

    "I don't understand how the preprocessor can help you with this. You need some way of keeping track of time. If you don't use CPU cycles (like nrf_delay), then you need to use either the RTC or TIMER."

    I don't want the preprocessor to help me with this, and I am using the TIMER peripheral for a single 5 us period with a 16 MHz source (pretty reasonable). The problem is that when the app_timer is given a timeout shorter than the RTC's actual tick period, those NOP instructions block the processor and my TIMER interrupt is missed, so the next interrupt doesn't arrive until the timer count has rolled over again, roughly 4.5 minutes later (a 32-bit counter at 16 MHz wraps after 2^32 / 16,000,000 ≈ 268 s). I don't see how this NOP implementation is acceptable at all. Use a faster clock if you need more time granularity. Don't do this.

    I see your point, but I do not see a better general solution. In your specific use case it might be better to skip the app_timer altogether and use a TIMER (which runs at 16 MHz) to keep track of time. Then you get the granularity you want and can avoid this. In short, you are using the app_timer outside of what it has been designed for.

    A better solution would be to use a faster clock source for the app_timer, or to only use the app_timer for periods greater than or equal to the period of your RTC. I am NOT using the app_timer outside of what it has been designed for; the app_timer is used in almost every Bluetooth example in the stack.
