
Clarification on S110 SDS "Processor availability and interrupt latency"

I am currently using the MBR, and I know the general interrupt latency due to MBR + SD from Table 26, but I cannot understand how you arrived at those numbers. If the table referred to just the MBR-caused latency, it would make sense, assuming the MBR acts deterministically for every interrupt. But once the SD is involved, I do not understand: wouldn't the SD-caused interrupt latency change depending on whether there is outstanding BLE activity? Are the numbers in Table 26 just an average? If so, what are the maximum and minimum (and therefore the jitter)?

Conversely, I understand that a high-priority application interrupt will interfere with the performance of the BLE upper stack. Is there existing documentation explaining how long I can spend in the high-priority application interrupt before causing a problem?

Thank you very much for reading.

  • That table is only telling you the latency added over and above the usual cycles required by the Cortex-M0 to take an exception. The extra cycles come from exceptions passing through the MBR and SoftDevice vector tables, where each is tested to see whether it should be handled by the MBR or the SD before the user-space interrupt handler is called. That is deterministic because all it is doing is checking the interrupt number / SVC call number: either the MBR/SD processes it (and there is no call into the application), or it does not, and the application handler is called as soon as that is determined.

    The other delays which come from the softdevice having higher interrupt priorities are described elsewhere (that table itself references section 12.2 where those are documented).

    So the latency in that table is just the fixed overhead of having interrupts trampoline through two vector tables before they reach you.

    There is no documented guidance on how long you can spend in an application high-priority interrupt context without disrupting BLE communications. The lower stack, which runs at the highest priority of all, will keep the link going; you can see that from the diagrams in section 12.3 and the text that accompanies them. That should prevent any supervision timeouts, at least.

    The upper stack handles the GATT, ATT and SMP operations: sending out read values, committing write values, and sending confirmations back for operations that require them. So your theoretical limit there is the timeouts of those upper-level protocols, which are generally long. Obviously, if you have something very long-running, it is better to get it out of the interrupt handler and process it later; the stack is fairly robust to having upper-stack processing blocked for a while, although throughput will degrade.
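    As a rough illustration of the trampoline described above, here is a minimal C sketch of checking an interrupt number and forwarding unclaimed interrupts to the application's vector table. All names here (`forward_irq`, `app_vectors`, `softdevice_owns_irq`) are invented for the sketch; the real MBR/SD forwarding works on the actual hardware vector tables and is not application-visible code.

    ```c
    #include <stdint.h>
    #include <stddef.h>

    typedef void (*irq_handler_t)(void);

    static int app_handler_calls = 0;
    static void app_example_handler(void) { app_handler_calls++; }

    /* Simulated application vector table (index = interrupt number). */
    static irq_handler_t app_vectors[32] = { NULL };

    /* Hypothetical ownership check: returns nonzero if the stack
     * claims this interrupt number for itself. */
    static int softdevice_owns_irq(int irqn) {
        return irqn == 0 || irqn == 1;  /* e.g. peripherals reserved by the SD */
    }

    /* The fixed-cost decision Table 26 measures: determine ownership,
     * then either handle internally or jump to the application handler. */
    static void forward_irq(int irqn) {
        if (softdevice_owns_irq(irqn)) {
            return;                     /* handled inside the stack */
        }
        if (app_vectors[irqn] != NULL) {
            app_vectors[irqn]();        /* trampoline into user space */
        }
    }
    ```

    Because the decision is just a comparison on the interrupt number, its cost is the same every time, which is why the added latency in Table 26 is a fixed number rather than an average with jitter.
    
    
    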
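    One common way to keep long-running work out of the interrupt handler, as suggested above, is a flag-and-defer pattern: the ISR only records the event, and the expensive processing happens in main (thread) context where it cannot block the upper stack. This is a generic sketch, not SoftDevice API; the names and the doubling "work" are invented for illustration.

    ```c
    #include <stdbool.h>
    #include <stdint.h>

    static volatile bool event_pending = false;
    static volatile uint32_t event_data = 0;
    static uint32_t processed = 0;

    /* Keep the high-priority ISR short: capture the data,
     * set a flag, and return immediately. */
    static void high_prio_irq_handler(uint32_t data) {
        event_data = data;
        event_pending = true;
    }

    /* Called from the main loop: do the long-running work here,
     * where it runs below the stack's interrupt priorities. */
    static void process_pending_events(void) {
        if (event_pending) {
            event_pending = false;
            processed = event_data * 2;  /* stand-in for the long operation */
        }
    }
    ```

    The main loop simply calls `process_pending_events()` each iteration (or after a wait-for-event), so the time spent in the interrupt itself stays in the microsecond range regardless of how long the deferred work takes.
    
    
    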

  • Thank you for the explanation, RK. I am advocating in my team to get rid of the MBR, but I don't have numbers to support my argument. I expect a significant reduction in latency clock cycles by removing the MBR, but I have not yet found a table that shows the latency with the SD alone. Any idea where I might find it?

  • Hi Henry, getting rid of the MBR will only reduce the interrupt latency by part of what is listed in Table 26, i.e. less than 4 µs. If you compare this to the total interrupt latencies listed in the tables in section 12.3, you will see that the decrease in latency from removing the MBR is negligible.
