
BLE Protocol for one peripheral connected to multiple centrals

Hi, we have a project that we're currently working on, and we're looking for some information on BLE behaviour that would be very helpful. We are currently using SDK 15.3 and softdevice s132 v6.1.1.

We are looking to implement a setup where we would have our nRF52832 in a peripheral role connected to multiple mobile devices (iOS and Android) with central roles. The main goal of the wireless communication is to have the nRF chip send one notification (containing a single 8-bit value) to each connected device once per connection interval. We would also like to be able to send as many notifies as possible to each connected device (ideally ~20/sec each), i.e. have as small a connection interval as possible. Before trying to work out the timing / BLE scheduling, there are a few questions that I'm unsure about:

  1. Since it is the peripheral doing all of the communication (via notifications), will the softdevice be able to set up the most efficient scheduling to get the notifies out as quickly as possible?
  2. Will the peripheral be able to dictate the connection intervals, or will that be determined by the individual centrals?
  3. I read in the s132 documentation that a notify takes 2.5ms to transmit. Is that correct? If so, does that mean that if the connection interval is 10 ms for 4 connected devices, the peripheral would be able to send out one notify to each device?
  4. In order to minimize the connection interval (to send as many notifies as possible), is it possible to adjust the connection interval depending on how many devices are connected to the peripheral (i.e. adjust the value each time a new device is connected)? Or is it better to have a set value to be able to accommodate the max number of connected devices?
  5. In a situation where there are connected devices receiving notifies from the peripheral and then a new central initiates a new connection, will the notifies stop being sent until that new connection is established?
  6. Along the same line, if we would like the centrals to be able to send messages (writes) to the peripheral, will that interfere with the notifies that the peripheral is trying to send? i.e. Will the writes collide with the notifies and take priority?

I apologize for all of the questions, but any answers to them would be much appreciated! Thanks!

  • Hi Adam

    1. It is the central that is the link layer master, and it decides the timing of each connection. The peripheral can request different connection parameters by issuing a connection parameter update request, but the central will always have the final say.

    The notification latency is primarily decided by the connection interval, which can be anywhere in the 7.5ms to 4s range, and the peripheral should request a range of values suitable to the use case (lower values give lower latency but also higher average current consumption). 

    An nRF52 based central will normally allow the shortest connection interval (7.5ms), but as Darren mentioned this is usually not the case for mobile phones. 

    2. As mentioned above the central decides the connection parameters, but the peripheral can request a change. In our examples you can have the peripheral disconnect if it can't get the right connection parameters, but this can be changed so that it accepts what it gets instead. 

    3. Do you have a link to this comment?
    It doesn't take a full 2.5ms to send a single notification, but 2.5ms is the minimum time needed to serve a single connection event. This means that if your connection interval divided by the number of connections is smaller than 2.5ms you won't be able to serve every connection for each interval, which will increase the latency. 

    4. I think it's better to request the same connection interval regardless of the number of connected devices, as there is quite a bit of overhead involved when changing this. The SoftDevice has a system for handling scheduling conflicts when a device is connected to multiple different centrals. 

    5. No, you can keep sending notifications while establishing a link to a new central, but if the timing of the new connection conflicts with the timing of one of the current connections you might see delays. 

    6. No. Every connection event starts with a packet from the central to the peripheral, followed by a response packet from the peripheral to the central. The peripheral will never send anything without receiving the packet from the central first. 

    Queued writes on the central side will be included in the first packet, while queued notifications in the peripheral would be included in the second packet. 

    Best regards
    Torbjørn

  • Hi ovrebekk,

    Thank you very much for your thorough reply! I appreciate you taking the time to go through this with me.

    1. When requesting the min and max connection intervals, is there anything holding us back from requesting a range that has min = max? Do we run the risk of not being able to establish the connection? i.e. it's better to give the phone a realistic range?

    Also, our board is plugged into power, so fortunately we don't need to be concerned with high current consumption.

    3. I was researching this information here. It didn't explicitly state timing for a notify, but I tried inferring it from available information. Table 23 (on page 55) had timing information for tprep and tp (which had a max value around 1700 us), I also read elsewhere that connection intervals are set in multiples of 1.25 ms (so the next multiple would be 2.5 ms), and finally, on page 81, section 15.10 talked about event lengths, none of which were shorter than 2.5 ms.

    But thank you for confirming that 2.5 ms is the minimum time needed. I understand that if the connection interval divided by the number of connections is less than 2.5 ms, then we can't serve every connection, but if it is equal to 2.5 ms, does that necessarily mean that we can? i.e. do we need to give the system some leeway above the 2.5 ms mark in order to be able to serve each connection? If so, is there a specific value for that?

    4. Thanks for the info; I agree with you that we should start with a set value for the requested connection interval. Is it safe to assume that if we take the maximum number of devices we would like to be able to connect to, multiply that by 2.5 ms (plus potential overhead from question 3), and set that as our maximum requested connection interval, we should have proper scheduling set up to be able to send one notification to each of our connected devices per connection interval?

    5. So there is a chance that our peripheral might be able to get some notifications out while establishing a link to a new central, but there is no guarantee? i.e. it's possible the new connection conflicts with all of the outbound notifications and the notifications will only be sent once the link is established?

    6. I was under the impression that sending a notify did not require initiation from the central and is instead pushed to the central by the peripheral.

    So if we had our scheduling set up with our connection interval filled with notification messages being sent to all connected devices, and then one of the centrals decided to write to the peripheral, would that write collide with the notify, meaning the overlapping notifications would not be sent out?

    Thanks again for all of your help!

    Adam


  • Hi Adam

    1. Setting min = max is allowed, and is the best way if you want to use a specific connection interval. You could argue there is a larger chance the central will ignore the request if you do this, but it shouldn't disconnect. 

    A common compromise between requesting a narrow range or a broader one is to first request the connection interval that you want with min = max, and then send a new request with a broader range if the first request is ignored. 

    3. How long it takes to send a packet (notification or not) depends mainly on the size of the payload. The largest packet you can send contains 244 bytes of data plus around 20 bytes of overhead, which would take about 2.1ms to send. If you only send 20 bytes then you are looking at around 320us of time. 

    The total time also needs to include the poll packet from the central (80 us when there is no data included) and the 150 us packet-to-packet spacing (T_IFS), which gets you pretty close to 2.5 ms in the worst case.

    Adam Gordon said:
    But thank you for confirming that 2.5 ms is that value for minimum time needed. I understand that if connection interval divided by number of connections is less than 2.5ms, then we can't serve every connection,

    You won't be able to serve every connection on every interval. The connections will still be running, but latency will increase because many of the connection events will be lost because of scheduling conflicts. 

    When a Nordic device is the central this will be very predictable, and if there are 2 links occurring within the same 2.5ms interval they will both get every other packet through. 

    When the Nordic device is the peripheral this will be a lot less predictable, since the timing of the different centrals will drift independently of each other.

    Adam Gordon said:
    do we need to give the system some leeway above the 2.5ms mark in order to be able to serve each connection? If so, is there a specific value on that?

    The more leeway you give the lower the packet loss will be, but the best case latency will increase since you use a larger connection interval. 

    Unfortunately I can't predict what the optimal number would be. I would suggest you start testing without any leeway beyond 2.5 ms per connection, and try larger connection intervals if you see poor performance.

    4. I wish it was this simple, but when you're running multiple peripheral connections it isn't. As I explained earlier, the different centrals (phones) do not have synchronized clocks and will drift in time continuously. This means the various links will go in and out of sync over time, and will occasionally conflict with each other.

    The SoftDevice has a very robust scheduler to handle these conflicts to avoid link loss, but every time a conflict occurs one or two of the affected links will get its data delayed. 

    If you could switch roles and have the nRF52 be the central and the phones be peripherals then all the timing would be controlled by the nRF52. In this case you would have much more predictable behavior, but be aware that many older phones don't support the peripheral role. 

    5. It is possible that the notifications will be delayed because of the connection establishment, yes, but it is extremely unlikely they will be delayed for the entire connection establishment procedure (this usually takes several seconds). 

    The first packet of the new connection will have highest priority, since the entire connection depends on it, but beyond this first packet the new connection will have the same priority as all the others. 

    6. 

    Adam Gordon said:
    I was under the impression that sending a notify did not require initiation from the central and is instead pushed to the central by the peripheral.

    If you look at it from the host layer of the BLE stack this is correct. The central host doesn't have to do anything; the notification is initiated by the host in the peripheral.

    The link layer in the central device still needs to send its normal poll packet to maintain the link. In a BLE connection the link layer master (central) will always talk first; that is why it is the master ;)

    Adam Gordon said:
    So if we had our scheduling set up with our connection interval filled with notification messages being sent to all connected devices, and then one of the centrals decided to write to the peripheral, that write collides with the notify and the overlapping notifications would not be sent out?

    No, the notifications will still be sent out. The peripheral would send the notification after receiving the write from the central.

    Best regards
    Torbjørn

  • Hi Torbjørn,

    Thanks again for your detailed reply! It is definitely helping with our understanding!

    3. Since our payload is only 1 byte of data, I guess we can assume that transmission of the notification would take somewhat less than 2.5 ms? If we say (conservatively) it takes ~350us to transmit 20 bytes of overhead + 1 byte data, plus another 80us for poll packet, plus 150us for packet to packet time, that's approximately 580us for the transmission of 1 notify. That should, in theory, give us a lot more headroom to play with to avoid packet loss / latency.

    4. Unfortunately with our setup it is not possible to switch the roles of the devices. I agree that would be ideal for scheduling (although not for our setup :P)

    5/6. So it is safe to assume that notifications will still be sent out throughout the connection establishment process, and also while other devices may be writing to the peripheral, however there may likely be some delays in them being sent. Is that fair to assume?

    So as I understand it from this discussion, I think I will take the following approach while trying to achieve our goal of each connected device receiving ~20 notifications / second:

    • Assume ~2.5ms per notify (potentially less depending on question 3)
    • Start with a max number of connected devices (ex. 4)
    • Determine ideal connection interval for max number of devices (ex. 4 * 2.5ms = 10ms)
    • Request ideal connection interval (min=max) from each connected device (connect to max number of devices)
    • Measure the number of notifies being received by each mobile device
      • If the number of notifies is too low, consider increasing the connection interval
      • If the number of notifies is high enough, great! Consider increasing the max number of connected devices.
      • Repeat step until desired number is achieved.
    • Add in writes from centrals to peripheral and repeat previous step to determine effect on notification rate.
    • Be sure to measure notification rate while new devices are establishing connections.

    Does that seem to be the best approach to take since we unfortunately don't have much control over scheduling as the peripheral?

    Thanks again for your help!

    Adam

  • Hi Adam

    3. Around 500-600 us sounds realistic, yes, but I am not sure it makes a big difference when it comes to latency. The SoftDevice will still reserve the full 2.5 ms for that connection, and use that as a reference when scheduling multiple links (after all, the scheduler can't know the size of the TX and RX packets ahead of time).

    5/6. If the first packet of a new connection conflicts with an existing connection then the existing connection will lose a connection event, yes, leading to a delayed notification packet. 

    If the central writes to a peripheral the only delay will be the time it takes to send the data packet relative to sending an empty packet, which should be a very small difference (8 us per additional byte). 

    I think the procedure you describe to test this out sounds reasonable. If you simply push notifications as quickly as you can from the peripheral side, and add some code on the app side to measure the number of received notifications, then you should be able to benchmark the performance for different connection intervals and under different scenarios. 

    It is worth mentioning that you might be able to send multiple notifications during a single 2.5ms connection event if you are only sending short packets. This should help you catch up whenever a connection loses a connection event because of scheduling conflicts etc. 

    Best regards
    Torbjørn

  • Hi Torbjørn,

    Makes perfect sense! Thanks again for all of your responses and information!

    Regards,

    Adam

  • Hi Adam

    I'm happy to help. The best of luck with your project :)

    Best regards
    Torbjørn
