
CPU overhead consumed by S110 SoftDevice

Hello,

I'm looking at the nRF51822 as the BLE solution in a peripheral device which will have a number of different functions/features, some of which require a modest amount of near-real-time CPU hand-holding. I therefore need to know more about how the S110 SoftDevice operates, and how much CPU overhead it consumes, before I can make a decision and start committing resources to develop and verify functionality.

Are there any app notes or data available which provide information on S110 performance/behavior in the nRF51822? In particular I need to know:

  1. In a maximum data-transfer-rate scenario, where 20-byte notifications are sent continuously at the minimum 7.5 ms connection interval, what percentage of CPU bandwidth is consumed by the S110? An approximate answer is OK.

  2. During the time a notification is being sent (i.e., while the S110 is actively processing/sending it), does the S110 occupy the CPU 100 %? Or is it possible to have other interrupts serviced at certain points while the notification is being sent/processed?

Thanks very much for any information!

  • All of this should be detailed in the S110 SoftDevice Specification, which I'd recommend you take a close look at.

    I don't have ready-made numbers for 7.5 ms and 1 packet per interval, but with 4 packets per interval at 7.5 ms you should have about 20 % CPU time remaining, and with a 100 ms interval and 1 packet you'll have about 98 % remaining. These numbers are from table 25, and as you can see, for the 7.5 ms / 4-packet scenario most of the time is spent in CPU suspend, which I'd expect to drop to roughly 1/4 if you transfer only one packet instead of four. A very rough estimate would therefore be 11 + 27 + 42/4 ≈ 50 % used, and hence ~50 % free for the application in your use case. Be aware, however, that many Centrals will not allow a 7.5 ms connection interval. You may also find it useful to take a look at this question.
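    The back-of-the-envelope estimate above can be sketched as follows. The percentages come from table 25 of the S110 SDS as quoted; treating the CPU-suspend time as scaling linearly with packet count is an assumption, not a documented figure:

    ```c
    #include <stdio.h>

    /* Rough S110 CPU-usage estimate for 7.5 ms interval, 1 packet per event.
       Inputs are the table-25 figures for the 4-packet case (percent). */
    static double estimate_cpu_used_percent(void)
    {
        double non_suspend_overhead = 11.0 + 27.0; /* SoftDevice processing, % */
        double suspend_4_packets    = 42.0;        /* CPU suspend, 4 packets, % */

        /* Assumption: suspend time for 1 packet is ~1/4 of the 4-packet value. */
        return non_suspend_overhead + suspend_4_packets / 4.0;
    }

    int main(void)
    {
        double used = estimate_cpu_used_percent();
        printf("~%.1f %% used, ~%.1f %% free for the application\n",
               used, 100.0 - used);
        return 0;
    }
    ```

    This prints roughly 48.5 % used / 51.5 % free, i.e. the "~50 % free" figure above.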

    The CPU is suspended whenever there is radio activity, to avoid disrupting the radio in any way. While the CPU is suspended, no application-level interrupts are processed, so they may be delayed. The maximum delay depends on data throughput and is shown in table 25 of the SDS: for 1 packet it's ~1 ms, while for 6 packets it can be almost 6 ms. However, if you have real-time tasks whose timing you control yourself, you should be able to schedule them in between radio events, when no SoftDevice interrupts occur. In those periods you have the CPU to yourself, with no blocking interrupts.
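    As a sketch of the scheduling budget this implies: with a connection interval and a worst-case CPU-suspend time per radio event (the ~1 ms/packet figure quoted above), the remaining window is what an uninterrupted real-time task can use. The helper name and the example durations here are hypothetical, chosen only to illustrate the arithmetic:

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical check: can a task of task_ms run to completion in the
       idle window between radio events?  All times in milliseconds. */
    static bool task_fits_between_events(double conn_interval_ms,
                                         double suspend_ms,
                                         double task_ms)
    {
        return task_ms <= conn_interval_ms - suspend_ms;
    }

    int main(void)
    {
        /* 7.5 ms interval, 1 packet (~1 ms suspend): ~6.5 ms window. */
        printf("5.0 ms task, 1 packet/event: %s\n",
               task_fits_between_events(7.5, 1.0, 5.0) ? "fits" : "does not fit");

        /* Same task with ~6 ms of suspend (heavy traffic) no longer fits. */
        printf("5.0 ms task, 6 packets/event: %s\n",
               task_fits_between_events(7.5, 6.0, 5.0) ? "fits" : "does not fit");
        return 0;
    }
    ```

    In practice you would anchor such tasks to the connection events themselves, since the SoftDevice's radio activity is what defines the window.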

  • Thank you very much Ole, this is exactly the kind of information I was looking for.
