
Multiple peripherals drop the notification rate

Hello,

My setup is the following: Up to 4 peripherals (nrf52832) and the central device is a PC running Windows 10. To make it simpler, I'm just using 2 peripherals at the moment.

When I connect one peripheral I get notifications at a constant rate. If I disconnect this peripheral and connect another one it works the same way as the previous, as expected.

The issue I'm having is that when I connect two peripherals and subscribe to notifications on both of them, the last connected peripheral notifies at the expected rate, but the first one's rate drops significantly.

The packets I'm notifying are less than 50 bytes long and I'm notifying at 100Hz, so I don't think I'm consuming all the bandwidth. Even when lowering the notification rate to 50Hz or less, I still see the first peripheral notifying at a much lower rate; for example, when notifying at 100Hz with one peripheral, the other one is notifying at 16Hz. I've tried using a Bluetooth 4.2 dongle with no luck.

Does anybody know why this happens?

I've done research on this, but I think I might have used the wrong keywords, because I couldn't find anything, and I doubt I'm the only one with this issue.

Thanks.

  • Hello,

    What are your connection parameters? To make things more complicated, it is the central that decides the connection interval, so you need to look at the application on the central side (Windows 10 with a Bluetooth dongle?).

    So you have 100 calls to sd_ble_gatts_hvx() every second to send packets? Perhaps you can try to store up more data, and send larger packets (packet size closer to the MTU size). This will reduce the header/payload ratio, giving you more payload throughput. 
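
    As a rough sketch of the batching idea (SAMPLE_SIZE, BATCH_SAMPLES and the handle parameters are placeholders for your own values, and the error handling is simplified):

    ```c
    #include <string.h>
    #include "ble_gatts.h"
    #include "nrf_error.h"
    #include "app_error.h"

    #define SAMPLE_SIZE   10                            // hypothetical bytes per sample
    #define BATCH_SAMPLES 4                             // samples per notification
    #define BATCH_SIZE    (SAMPLE_SIZE * BATCH_SAMPLES) // keep this close to MTU - 3

    static uint8_t  m_batch[BATCH_SIZE];
    static uint16_t m_fill;                             // bytes currently buffered

    // Call at the sampling rate; sends one larger notification every
    // BATCH_SAMPLES samples instead of one small packet per sample.
    static void sample_ready(uint16_t conn_handle, uint16_t value_handle,
                             uint8_t const * p_sample)
    {
        memcpy(&m_batch[m_fill], p_sample, SAMPLE_SIZE);
        m_fill += SAMPLE_SIZE;

        if (m_fill < BATCH_SIZE)
        {
            return;                                     // keep buffering
        }

        uint16_t               len = m_fill;
        ble_gatts_hvx_params_t hvx = {0};
        hvx.handle = value_handle;
        hvx.type   = BLE_GATT_HVX_NOTIFICATION;
        hvx.p_len  = &len;
        hvx.p_data = m_batch;

        uint32_t err = sd_ble_gatts_hvx(conn_handle, &hvx);
        if (err == NRF_ERROR_RESOURCES)
        {
            return;   // TX queue full: keep the batch and retry later
        }
        APP_ERROR_CHECK(err);
        m_fill = 0;
    }
    ```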

    What you are probably seeing is that one of the connections is using up all the available radio time. Is it always the same peripheral that takes all the bandwidth, or does it depend on which device the computer (central) connects to first?

    You could try to reduce the connection event length. Set it to half the connection interval if it is higher than this by default.
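
    On the peripheral side, the event length is part of the SoftDevice connection configuration; in the nRF5 SDK it normally comes from NRF_SDH_BLE_GAP_EVENT_LENGTH in sdk_config.h. Setting it explicitly looks roughly like this (the tag, the link count macro, and the value of 6, i.e. 7.5 ms, are just examples from the SDK templates):

    ```c
    #include <string.h>
    #include "ble.h"
    #include "app_error.h"

    // Example: cap each connection event at 6 * 1.25 ms = 7.5 ms.
    // APP_BLE_CONN_CFG_TAG must match the tag used when advertising;
    // call this before sd_ble_enable(), with the SoftDevice RAM start address.
    static void conn_event_length_set(uint32_t ram_start)
    {
        ble_cfg_t ble_cfg;
        memset(&ble_cfg, 0, sizeof(ble_cfg));

        ble_cfg.conn_cfg.conn_cfg_tag                     = APP_BLE_CONN_CFG_TAG;
        ble_cfg.conn_cfg.params.gap_conn_cfg.conn_count   = NRF_SDH_BLE_TOTAL_LINK_COUNT;
        ble_cfg.conn_cfg.params.gap_conn_cfg.event_length = 6;   // units of 1.25 ms

        APP_ERROR_CHECK(sd_ble_cfg_set(BLE_CONN_CFG_GAP, &ble_cfg, ram_start));
    }
    ```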

    Does the second peripheral (the one with low bandwidth) also report that its throughput is low? Do you measure the number of notifications that are queued with sd_ble_gatts_hvx() compared to the one that gets all the throughput?
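
    A simple way to compare the links is to count, on each peripheral, how often the SoftDevice accepts a notification versus rejects it because the TX queue is full. Something like this (the counter names are just illustrative):

    ```c
    #include "ble_gatts.h"
    #include "nrf_error.h"

    // Illustrative instrumentation: compare these two counters between the
    // peripheral that gets full throughput and the one that does not.
    static uint32_t m_hvx_accepted;   // notification queued by the SoftDevice
    static uint32_t m_hvx_busy;       // rejected: TX queue full

    static uint32_t hvx_counted(uint16_t conn_handle, ble_gatts_hvx_params_t * p_hvx)
    {
        uint32_t err = sd_ble_gatts_hvx(conn_handle, p_hvx);

        if (err == NRF_SUCCESS)
        {
            m_hvx_accepted++;
        }
        else if (err == NRF_ERROR_RESOURCES)
        {
            m_hvx_busy++;
        }
        return err;
    }
    ```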

    Best regards,

    Edvin

  • Hello Edvin,

    Thanks for your reply. I'm still fairly new to this.

    My MIN_CONN_INTERVAL is set to 20ms and the MAX_CONN_INTERVAL is set to 200ms. I've read information in the forums about this, but I'm still not sure how to find the optimal values for my application. If I remember correctly, I once lowered the MIN_CONN_INTERVAL to 7.5ms, as some other posts on the forum did, but I couldn't get the code to run. I will now reduce the MIN_CONN_INTERVAL and check it out again.
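
    For reference, the relevant defines in my gap_params_init() look roughly like this (standard SDK template style):

    ```c
    #include "app_util.h"   // MSEC_TO_UNITS, UNIT_1_25_MS

    // The peripheral only *requests* this window; the central picks the
    // actual connection interval from somewhere within it.
    #define MIN_CONN_INTERVAL MSEC_TO_UNITS(20,  UNIT_1_25_MS)   // 20 ms
    #define MAX_CONN_INTERVAL MSEC_TO_UNITS(200, UNIT_1_25_MS)   // 200 ms
    ```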

    Actually I make 50 calls to sd_ble_gatts_hvx() a second, and in every packet I double the amount of information I want to send, so I'm faking the 100Hz rate. When I tried to send 22-byte packets (the MTU was 25) at 100Hz, I got the busy error and never got anywhere close to 100Hz; I got 66Hz at most. I always set NRF_SDH_BLE_GATT_MAX_MTU_SIZE to the size of the data packet I'm sending + 3 (because I read it somewhere on this forum).
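
    If I understand it correctly, the +3 is the ATT notification header (1-byte opcode plus 2-byte attribute handle), so a notification can carry at most MTU - 3 bytes of payload:

    ```c
    // sdk_config.h — example for my 22-byte payload: a notification carries
    // at most MTU - 3 bytes, since the ATT header (1-byte opcode +
    // 2-byte attribute handle) uses the other 3 bytes.
    #define NRF_SDH_BLE_GATT_MAX_MTU_SIZE 25
    ```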

    It's always the first peripheral to connect to the central that drops the rate. I've tried with several peripherals, and it always happens with the first one to connect.

    I don't know how to measure the queued notifications; I'm going to look up how to do it. I calculate the frequency of notifications on the central device by counting the elapsed time and the number of notifications received.
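
    For what it's worth, the rate measurement on the central side is nothing more than this kind of counter (a simplified sketch in plain C; my actual code differs):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    // Simplified sketch: call once per received notification; prints the
    // observed notification rate roughly once per second.
    static void on_notification_received(void)
    {
        static uint32_t count;
        static time_t   window_start;

        time_t now = time(NULL);
        if (window_start == 0)
        {
            window_start = now;
        }
        count++;

        double elapsed = difftime(now, window_start);
        if (elapsed >= 1.0)
        {
            printf("notification rate: %.1f Hz\n", (double)count / elapsed);
            count        = 0;
            window_start = now;
        }
    }
    ```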

    Best regards,
    Pau

  • Hello again,

    I lowered the MIN_CONN_INTERVAL to 10ms, but the issue is still happening: I'm seeing the first sensor at 15Hz and the second at 85Hz.

    Best regards,
    Pau

  • Hi Edvin,

    I lowered both the min and max connection intervals to the same value of 8 * 1.25ms; any value lower than that makes the firmware crash. With 8 * 1.25ms, the notification rate works as expected at 100Hz for each peripheral, even with 4 of them connected!

    Now I'm going to research how to find the optimal values for the min and max connection intervals, but I'd be grateful if you could point me to where I can find this information.

  • Glad you found a working solution.

    It really isn't easy to say what the optimal connection interval is; it depends on the use case. A higher connection interval allows for higher throughput, given that you use a large MTU, large packets (filling the MTU), and a long connection event length. This way you get the lowest possible header/payload ratio.

    But increasing the connection interval will also increase the latency. Things also get more complicated when you add more links to the mix, because one link can steal throughput from another link.

    If your central supports it, you can try to use the 2 Mbps PHY instead of the standard 1 Mbps. This increases the theoretical maximum payload throughput from ~750 kbps to ~1.3 Mbps.
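
    On the nRF side, the switch is requested with sd_ble_gap_phy_update() (available from SoftDevice S132/S140 v5 onwards); the link only ends up on 2 Mbps if the central accepts it:

    ```c
    #include "ble_gap.h"
    #include "app_error.h"

    // Request the 2 Mbps PHY on an established link. The peer may still
    // keep the link on 1 Mbps if it does not support 2 Mbps.
    static void phy_2mbps_request(uint16_t conn_handle)
    {
        ble_gap_phys_t const phys =
        {
            .tx_phys = BLE_GAP_PHY_2MBPS,
            .rx_phys = BLE_GAP_PHY_2MBPS,
        };
        APP_ERROR_CHECK(sd_ble_gap_phy_update(conn_handle, &phys));
    }
    ```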

    100Hz * 50 bytes = 5000 bytes/s = 40 kbps, so you are still nowhere near this limit (750 kbps @ 1 Mbps).

    Are you sure you are handling the application logic correctly when you are queuing packets (using sd_ble_gatts_hvx())? There shouldn't be any packet loss.
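
    The usual pattern is: when sd_ble_gatts_hvx() returns the busy error (NRF_ERROR_RESOURCES on recent SoftDevices), keep the data in your own buffer and flush it when the SoftDevice reports free TX buffers. Roughly (flush_pending_notifications() stands in for your own queue logic):

    ```c
    #include "ble.h"

    // Placeholder for application logic that re-attempts sd_ble_gatts_hvx()
    // for data held back while the TX queue was full.
    extern void flush_pending_notifications(uint16_t conn_handle);

    static void ble_evt_handler(ble_evt_t const * p_ble_evt, void * p_context)
    {
        (void)p_context;

        switch (p_ble_evt->header.evt_id)
        {
            case BLE_GATTS_EVT_HVN_TX_COMPLETE:
                // TX buffers were freed; it is now safe to queue more data.
                flush_pending_notifications(p_ble_evt->evt.gatts_evt.conn_handle);
                break;

            default:
                break;
        }
    }
    ```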

    Check whether you are spending too long in the events where you receive the packets. Try to do as little as possible: only count the number of events and the length of the packets, but don't do anything with the data. Just for debugging purposes, it would be interesting to see whether this affects the behavior.

    Is it possible for me to reproduce the issue you are seeing using some DKs?
