
Minimize blocking time on Rev 3 nRF51822

I've been reading about minimizing the amount of time that the SoftDevice blocks the CPU for, but most of the answers have to do with old revs of the chip and old versions of the stack, and recommend calling an API function which blocks the CPU less. This question is instead about what I can do to prevent hitting the maximum values listed in the SoftDevice spec.

The SoftDevice spec v2.0 says that the "Interrupt latency at the end of a connection event" can be a max of 510μs, and that the "Interrupt latency at the end of an advertising event" can be a max of 440μs. Other events, such as sending and receiving packets, can also block for a couple hundred microseconds.

This answer says that the max blocking time in general is 250μs, while this answer clarifies that 250μs is the typical worst case, with a theoretical worst case of the 510μs mentioned in the SD spec.

My question is: how can I avoid this theoretical worst case, and even the typical worst-case maximum, to get down to the average latency of 80μs mentioned in that second link? I'm trying to service the ADC interrupt as fast as possible to run a control loop, and 80μs is pretty doable whereas 510μs will get pretty dicey.

Thanks!

  • The short answer is to try to send as little data as possible ;)

    The worst case numbers in the SDS typically occur if you are sending 6 packets per connection event (which is the maximum for the S110). If you don't send anything, or only send a single packet per connection event, the interrupt times will be much shorter.

    Exactly how long the interrupts are depends on many factors and isn't specified in detail, but if you have a scope available you could do some measurements on your own. Simply use the old trick of toggling a GPIO quickly in the main loop, and observe on the scope how long the pauses in the toggling are.
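    The toggling trick could be sketched roughly like this (a minimal, hypothetical nRF51 example; the choice of pin 15 is arbitrary, and the SoftDevice/BLE initialization is elided):

    ```c
    #include "nrf_gpio.h"   /* GPIO helpers from the nRF51 SDK */

    #define PROBE_PIN 15    /* any free GPIO; hypothetical choice */

    int main(void)
    {
        nrf_gpio_cfg_output(PROBE_PIN);

        /* ... SoftDevice and BLE stack initialization goes here ... */

        for (;;)
        {
            /* Toggle as fast as possible. While the SoftDevice blocks
               the CPU, the square wave on the scope flatlines; the
               length of the flat section is the blocking time. */
            NRF_GPIO->OUTSET = (1UL << PROBE_PIN);
            NRF_GPIO->OUTCLR = (1UL << PROBE_PIN);
        }
    }
    ```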

    Regards
    Torbjørn

  • Luckily I don't have that much data to send, so hopefully I'll be good on that front!

    I was mostly curious because the max time is highest at the end of a connection event. It wasn't clear to me if that time changed based on how much data was sent (vs. e.g. the 290μs after sending a packet). So if I advertise with very little data and send very little during a connection, all of my times will stay far below max?

    I may just have to try the GPIO toggling, unless I can figure out some way to gate a PWM using the PPI. Turns out you can't have two different GPIOTE blocks controlling the same output pin??

    Thanks!

  • "So if I advertise with very little data and send very little during a connection, all of my times will stay far below max?"

    Yes, I would expect so.

    You cannot have two GPIOTE channels controlling the same output pin, that is correct. We fixed this problem in the nRF52 series by introducing separate SET and CLR tasks for each GPIOTE channel.
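    On the nRF52, gating a PWM-style output via PPI could then look roughly like this (a hypothetical register-level fragment using the CMSIS names from nrf52.h; the pin number and the use of TIMER0 compare events are arbitrary assumptions):

    ```c
    #define PWM_PIN 17  /* hypothetical free GPIO */

    /* One GPIOTE channel in Task mode owns the pin. */
    NRF_GPIOTE->CONFIG[0] =
        (GPIOTE_CONFIG_MODE_Task     << GPIOTE_CONFIG_MODE_Pos) |
        (PWM_PIN                     << GPIOTE_CONFIG_PSEL_Pos) |
        (GPIOTE_CONFIG_POLARITY_None << GPIOTE_CONFIG_POLARITY_Pos) |
        (GPIOTE_CONFIG_OUTINIT_Low   << GPIOTE_CONFIG_OUTINIT_Pos);

    /* PPI channel 0: TIMER0 compare 0 drives the pin high via TASKS_SET. */
    NRF_PPI->CH[0].EEP = (uint32_t)&NRF_TIMER0->EVENTS_COMPARE[0];
    NRF_PPI->CH[0].TEP = (uint32_t)&NRF_GPIOTE->TASKS_SET[0];

    /* PPI channel 1: TIMER0 compare 1 drives the pin low via TASKS_CLR. */
    NRF_PPI->CH[1].EEP = (uint32_t)&NRF_TIMER0->EVENTS_COMPARE[1];
    NRF_PPI->CH[1].TEP = (uint32_t)&NRF_GPIOTE->TASKS_CLR[0];

    NRF_PPI->CHENSET = (1UL << 0) | (1UL << 1);
    ```

    Since both PPI channels target tasks on the same GPIOTE channel, a single channel can both set and clear the pin, which is what the nRF51's single OUT task per channel could not do.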
