
Multiple peripherals drop the notification rate

Hello,

My setup is the following: Up to 4 peripherals (nrf52832) and the central device is a PC running Windows 10. To make it simpler, I'm just using 2 peripherals at the moment.

When I connect one peripheral I get notifications at a constant rate. If I disconnect this peripheral and connect another one it works the same way as the previous, as expected.

The issue I'm having is that when I connect two peripherals and subscribe to notifications on both of them, the last connected peripheral notifies at the expected rate, but the first one's rate drops significantly.

Each notification is less than 50 bytes long and I am notifying at 100 Hz, so I don't think I'm consuming all the bandwidth. Even when I lower the notification rate to 50 Hz or less, the first peripheral still notifies at a much lower rate; for example, when one peripheral notifies at 100 Hz, the other one notifies at around 16 Hz. I've tried using a Bluetooth 4.2 dongle with no luck.

Does anybody know why this happens?

I've done some research on this, but I think I might have used the wrong keywords because I couldn't find anything, and I doubt I'm the only one with this issue.

Thanks.

  • Hello,

    What are your connection parameters? To make things more complicated, it is the central that decides the connection interval, so you need to look at the application on the central side (Windows 10 with a Bluetooth dongle?).
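
    A rough sketch of how you could log what the central actually granted, from the peripheral's BLE event handler (names follow the nRF5 SDK examples; intervals are reported in 1.25 ms units, the timeout in 10 ms units):

        static void ble_evt_handler(ble_evt_t const * p_ble_evt, void * p_context)
        {
            switch (p_ble_evt->header.evt_id)
            {
                case BLE_GAP_EVT_CONNECTED:
                {
                    // The parameters the central chose for this link.
                    ble_gap_conn_params_t const * p_params =
                        &p_ble_evt->evt.gap_evt.params.connected.conn_params;
                    NRF_LOG_INFO("conn interval %d x 1.25 ms, latency %d, timeout %d x 10 ms",
                                 p_params->min_conn_interval,
                                 p_params->slave_latency,
                                 p_params->conn_sup_timeout);
                } break;

                case BLE_GAP_EVT_CONN_PARAM_UPDATE:
                    // The central may renegotiate later, so log updates too.
                    NRF_LOG_INFO("conn interval updated to %d x 1.25 ms",
                                 p_ble_evt->evt.gap_evt.params.conn_param_update.conn_params.min_conn_interval);
                    break;

                default:
                    break;
            }
        }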

    So you have 100 calls to sd_ble_gatts_hvx() every second to send packets? Perhaps you can try to store up more data, and send larger packets (packet size closer to the MTU size). This will reduce the header/payload ratio, giving you more payload throughput. 
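
    A minimal sketch of that batching idea (not your code, just the shape of it): collect a few 50-byte samples and push them as one larger notification, assuming the ATT MTU has been negotiated large enough to hold the batch:

        #define SAMPLE_SIZE      50
        #define SAMPLES_PER_PKT  4

        static uint8_t  m_batch[SAMPLE_SIZE * SAMPLES_PER_PKT];
        static uint16_t m_batch_len;

        static void sample_ready(uint16_t conn_handle, uint16_t value_handle,
                                 uint8_t const * p_sample)
        {
            memcpy(&m_batch[m_batch_len], p_sample, SAMPLE_SIZE);
            m_batch_len += SAMPLE_SIZE;

            if (m_batch_len == sizeof(m_batch))
            {
                uint16_t               len = m_batch_len;
                ble_gatts_hvx_params_t hvx =
                {
                    .handle = value_handle,
                    .type   = BLE_GATT_HVX_NOTIFICATION,
                    .p_len  = &len,
                    .p_data = m_batch,
                };

                uint32_t err_code = sd_ble_gatts_hvx(conn_handle, &hvx);
                if (err_code == NRF_SUCCESS)
                {
                    m_batch_len = 0;   // queued, start collecting the next batch
                }
                // NRF_ERROR_RESOURCES: SoftDevice queue full; keep the batch and retry later.
            }
        }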

    What you are probably seeing is that one of the connections is using up all the time. Is it always the same peripheral that takes all the bandwidth? Or does it depend on what device the computer (central) connects to first?

    You could try to reduce the connection event length. Set it to half the connection interval if it is higher than this by default.
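
    If you are on a recent SDK/SoftDevice, the event length is part of the connection configuration passed to sd_ble_cfg_set() before the stack is enabled. Something like this sketch (APP_BLE_CONN_CFG_TAG and ram_start are assumed to already exist in your init code; 4 x 1.25 ms = 5 ms, i.e. half of a 10 ms interval):

        ble_cfg_t ble_cfg;
        memset(&ble_cfg, 0, sizeof(ble_cfg));

        ble_cfg.conn_cfg.conn_cfg_tag                     = APP_BLE_CONN_CFG_TAG;
        ble_cfg.conn_cfg.params.gap_conn_cfg.conn_count   = 1;  // concurrent links on this device
        ble_cfg.conn_cfg.params.gap_conn_cfg.event_length = 4;  // 4 x 1.25 ms = 5 ms

        // ram_start: the application RAM base, as used by the rest of the BLE init code.
        APP_ERROR_CHECK(sd_ble_cfg_set(BLE_CONN_CFG_GAP, &ble_cfg, ram_start));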

    Does the second peripheral (the one with low bandwidth) also report that the throughput is low? Do you measure the number of notifications that are queued with sd_ble_gatts_hvx() and compare it to the peripheral that gets all the throughput?

    Best regards,

    Edvin

  • Hi Edvin,

    I lowered both the min and the max connection interval to the same value of 8 * 1.25 ms. Any value lower than that makes the firmware crash. With 8 * 1.25 ms, each peripheral notifies at the expected rate of 100 Hz, even when I have 4 of them connected!
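
    In a typical nRF5 SDK application this corresponds to something like the following (a sketch along the lines of the SDK examples; these are only the peripheral's preferred parameters, the central still decides):

        // 8 x 1.25 ms = 10 ms for both the minimum and the maximum interval.
        #define MIN_CONN_INTERVAL  MSEC_TO_UNITS(10, UNIT_1_25_MS)
        #define MAX_CONN_INTERVAL  MSEC_TO_UNITS(10, UNIT_1_25_MS)
        #define SLAVE_LATENCY      0
        #define CONN_SUP_TIMEOUT   MSEC_TO_UNITS(4000, UNIT_10_MS)

        static void gap_params_init(void)
        {
            ble_gap_conn_params_t gap_conn_params;
            memset(&gap_conn_params, 0, sizeof(gap_conn_params));

            gap_conn_params.min_conn_interval = MIN_CONN_INTERVAL;
            gap_conn_params.max_conn_interval = MAX_CONN_INTERVAL;
            gap_conn_params.slave_latency     = SLAVE_LATENCY;
            gap_conn_params.conn_sup_timeout  = CONN_SUP_TIMEOUT;

            // Only a preference; the central is free to pick different values.
            APP_ERROR_CHECK(sd_ble_gap_ppcp_set(&gap_conn_params));
        }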

    Now I'm going to research how to find the optimal values for the min and max connection interval, but I'd be grateful if you could point me to where I can find this information.

  • Glad you found a working solution.

    It really isn't easy to say what the optimal connection interval is. It depends on the use case. A higher connection interval allows for a higher throughput, given that you use a large MTU, large packets that fill the MTU, and a long connection event length. This way you get the lowest possible header/payload ratio.

    But increasing the connection interval will also increase the latency. Also, things get more complicated when you add more links to the mix, because one link can steal the throughput from another link. 

    If your central supports it, you can try to use the 2 Mbps PHY instead of the standard 1 Mbps. This increases the theoretical maximum payload throughput from ~750 kbps to ~1.3 Mbps.
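
    A sketch of how the peripheral can request the 2 Mbps PHY once connected (S132/S140 v5 or later; the link only actually switches if the dongle also supports LE 2M):

        static void request_2mbps_phy(uint16_t conn_handle)
        {
            ble_gap_phys_t const phys =
            {
                .tx_phys = BLE_GAP_PHY_2MBPS,
                .rx_phys = BLE_GAP_PHY_2MBPS,
            };

            APP_ERROR_CHECK(sd_ble_gap_phy_update(conn_handle, &phys));
            // The outcome is reported later in a BLE_GAP_EVT_PHY_UPDATE event.
        }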

    100 Hz * 50 bytes = 5000 bytes/s = 40 kbps, so you are still nowhere near this limit (750 kbps @ 1 Mbps).

    Are you sure you are handling the application logic correctly when you are queuing packets (using sd_ble_gatts_hvx())? There shouldn't be any packet loss.
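
    The usual pattern is roughly the sketch below: if sd_ble_gatts_hvx() reports that the SoftDevice queue is full, keep the data and retry when the TX complete event arrives, so nothing is silently dropped (the exact error name varies between SoftDevice versions):

        static bool notify_or_hold(uint16_t conn_handle, ble_gatts_hvx_params_t * p_hvx)
        {
            uint32_t err_code = sd_ble_gatts_hvx(conn_handle, p_hvx);

            if (err_code == NRF_SUCCESS)
            {
                return true;               // queued, nothing more to do
            }
            if (err_code == NRF_ERROR_RESOURCES)
            {
                return false;              // queue full: keep the data, resend later
            }
            APP_ERROR_CHECK(err_code);     // anything else is a real application error
            return false;
        }

        // In the BLE event handler:
        //     case BLE_GATTS_EVT_HVN_TX_COMPLETE:
        //         // Room in the queue again: push the next pending notification(s).
        //         break;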

    Check whether you are spending too long in the events where you receive the packets. Try to do as little as possible: only count the events and the packet lengths, but don't do anything with the data. Just for debugging purposes, it would be interesting to see whether this affects the behavior.

    Is it possible for me to reproduce the issue you are seeing using some DKs?
