
SoftDevice configuration for greater throughput using notification and multiple connections

Hi!

We are using the nRF52840 and nRF52832 for a product that requires good data throughput over BLE. (We have picked the S140 and S132 SoftDevices.)

We send this data using notifications only (max 20 bytes each). (It is a two-way data transfer.)

We try not to use ATT_MTU negotiation or DLE (Data Length Extension), since we aim for a solution that works equally well for all devices regardless of stack implementation and DLE support.

  • We might need to support older Android and iOS devices in the future that do not support these features.

I have read that it is possible for a peer (whether central or peripheral) to transmit several notifications during a single connection interval.

I have several questions regarding this behavior and the correct configuration of the SoftDevice.

  • I have read that I must increase the number of BLE packet buffers in the stack to allow greater packet buffering, and thereby make use of this feature.
    • Where can I find this config setting?
    • Is it possible that it is not included in my sdk_config.h file?
    • Can it be one of these?
      • NRF_SDH_BLE_GAP_DATA_LENGTH
      • NRF_SDH_BLE_GAP_EVENT_LENGTH
      • NRF_SDH_BLE_GATT_MAX_MTU_SIZE
  • We will have several connections ongoing most of the time. How will the stack (SoftDevice) handle this situation?
    • Can the packets be sent to the stack in arbitrary order by the different underlying services that send notification packets?
      • Or do we have to burst transmit like this:
        • (peer_1, peer_1, peer_1, peer_1, peer_1,    peer_2, peer_2, peer_2, peer_2, peer_2,    peer_1, peer_1, peer_1, peer_1, peer_1,    ...)
  • What will happen if all the buffered packets in the stack are for peer_2, but the next upcoming connection event is for peer_1? (We have packets to TX for both peers, but happen to buffer the wrong packets in the stack.)
    • I expect that the stack won't be able to send any packets!
      • Could the solution simply be to avoid filling the stack buffers with packets from just one connection?
  • How is this different for a peripheral relative to a central device?

    Our final goal is to send several notifications (6 or more) in both directions (or just one direction) during a single connection interval, with each notification being at most 20 bytes in length.

    I understand that indications cannot be buffered like this and need to be sent one-by-one, since each indication must be confirmed before the next can be sent (effectively one indication every other connection interval).

    I have read the threads below, but I can't tell how to set the buffer size, i.e. how many packets the SoftDevice should be able to accept.

    https://devzone.nordicsemi.com/f/nordic-q-a/34192/how-to-calculate-nrf_sdh_ble_gap_event_length-in-sdk15-for-determinded-nrf_sdh_ble_gatt_max_mtu_size

    https://devzone.nordicsemi.com/f/nordic-q-a/32581/questions-regarding-ble-throughput

    Many thanks. Your time is highly appreciated.

    Best regards

    • Hi

      First off, I would say that you should expect a big performance hit if you don't use DLE. Most modern phones support it, and you might want to do some testing to verify whether a more consistent experience is worth losing that speed. The SoftDevice will revert to not using DLE if the peer doesn't support it, and a completely consistent experience won't happen even with DLE disabled, as some phones will handle more packets per event than others.

      When it comes to the number of packets per CI supported by the SoftDevice, this has changed quite a bit in recent versions. While we originally had a fixed set of buffers determining the maximum number of packets, you are now able to upload data continuously during the connection event, essentially removing any limit on how many packets you can send. The only limiting factors now are the event length, set by the NRF_SDH_BLE_GAP_EVENT_LENGTH define in sdk_config.h, and whatever limit there is in the peer.

      If you only send 20-byte packets, then an event length of 3 (leading to a maximum event duration of 3.75 ms) should be sufficient for at least 6 packets in one direction.
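
      Just to illustrate, this is roughly how the relevant sdk_config.h entries could look for the scenario you describe (20-byte notifications, no DLE, no MTU exchange). The values are assumptions for this example, not a general recommendation:

      // sdk_config.h (excerpt) -- assumed values for the scenario above.

      // Event length in units of 1.25 ms: 3 units -> up to 3.75 ms of
      // radio time per connection event.
      #define NRF_SDH_BLE_GAP_EVENT_LENGTH 3

      // Keep the defaults so that neither DLE nor a larger ATT MTU is
      // used: 27-byte link-layer payload and the minimum ATT MTU of 23.
      #define NRF_SDH_BLE_GAP_DATA_LENGTH 27
      #define NRF_SDH_BLE_GATT_MAX_MTU_SIZE 23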

      The general recommendation for maximizing data throughput is to upload as many packets as possible until the NRF_ERROR_RESOURCES error is returned, wait for the BLE_GATTS_EVT_HVN_TX_COMPLETE event to occur, and then repeat the process. 

      This doesn't really change when running multiple links. For example you can start by maxing out the buffers for peer_1, and then start sending packets to peer_2 once the NRF_ERROR_RESOURCES error occurs for peer_1. 
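
      As a rough sketch of this pattern (the app_* helpers and m_char_handles are placeholders for your own application code, not SDK names):

      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>
      #include "ble_gatts.h"
      #include "nrf_error.h"

      // Hypothetical application helpers and characteristic handles:
      extern bool      app_has_data(uint16_t conn_handle);
      extern uint8_t * app_peek_data(uint16_t conn_handle);
      extern void      app_pop_data(uint16_t conn_handle);
      extern ble_gatts_char_handles_t m_char_handles;

      // Push notifications for one link until the SoftDevice queue is
      // full. When sd_ble_gatts_hvx() returns NRF_ERROR_RESOURCES, stop
      // and wait for BLE_GATTS_EVT_HVN_TX_COMPLETE, then call this again.
      static void send_pending_packets(uint16_t conn_handle)
      {
          uint32_t err_code = NRF_SUCCESS;

          while (err_code == NRF_SUCCESS && app_has_data(conn_handle))
          {
              uint16_t len = 20; // max 20-byte payloads in this scenario

              ble_gatts_hvx_params_t hvx_params;
              memset(&hvx_params, 0, sizeof(hvx_params));
              hvx_params.handle = m_char_handles.value_handle;
              hvx_params.type   = BLE_GATT_HVX_NOTIFICATION;
              hvx_params.p_len  = &len;
              hvx_params.p_data = app_peek_data(conn_handle);

              err_code = sd_ble_gatts_hvx(conn_handle, &hvx_params);
              if (err_code == NRF_SUCCESS)
              {
                  app_pop_data(conn_handle); // packet accepted by the stack
              }
          }
      }

      With multiple links you call the same function once per connection handle, moving on to the next peer when the current one returns NRF_ERROR_RESOURCES.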

      Best regards
      Torbjørn

    • Hi

      Thank you for the quick response. Highly appreciated.

      Regarding DLE, I totally agree with you, and it is being considered for the optimization phase. Our issue is that with DLE enabled in the SoftDevice, we previously had crashes when pairing with Samsung Android 7.0 devices, which required us to disable DLE as a workaround due to a Samsung-side issue. I'm not sure if this is still the case with the latest SoftDevices.

      The only limiting factors now are the event length, set by the NRF_SDH_BLE_GAP_EVENT_LENGTH define in sdk_config.h, and whatever limit there is in the peer.

      If you only send 20-byte packets, then an event length of 3 (leading to a maximum event duration of 3.75 ms) should be sufficient for at least 6 packets in one direction.

      This is very good, but how does it interplay with the connection interval settings?

      • I mean, if the GAP_EVENT_LENGTH is long compared to the connection interval.
        • What is the expected outcome?
          • Shortened GAP_EVENT_LENGTH, possibly?
      While we originally had a fixed set of buffers determining the maximum number of packets, you are now able to upload data continuously during the connection event,

      We can't assume we will make it in time and not miss the CI due to a context switch, even with the highest priority on the thread pushing packets to the stack. The idea is to push a predetermined number of packets to the stack and not react time-critically to the BLE events, only pushing in bursts before the stack buffer has been completely transmitted.

      The general recommendation for maximizing data throughput is to upload as many packets as possible until the NRF_ERROR_RESOURCES error is returned, wait for the BLE_GATTS_EVT_HVN_TX_COMPLETE event to occur, and then repeat the process.

      And this is the core of my question. I understand from your answer that you have removed the limitation on how many packets we can send to the stack by enabling on-the-fly pushes to the stack during a connection interval. I would like to extend and control the number of packets the stack can accept before returning NRF_ERROR_RESOURCES, and thereby not rely on context-switch speed when receiving the BLE_GATTS_EVT_HVN_TX_COMPLETE event. We would like to reduce this race condition and do burst pushes to the stack if possible.

      • I am looking for the config option that sets the size of the internal stack buffer.
      This doesn't really change when running multiple links. For example you can start by maxing out the buffers for peer_1, and then start sending packets to peer_2 once the NRF_ERROR_RESOURCES error occurs for peer_1. 

      I think I understand! But to verify:

      Assuming that the current CI is for peer_1: I understand this as meaning the stack does not search the buffer for more packets destined for peer_1, but transmits the packets in the order they were pushed to the stack. This would make the stack wait until the connection event for peer_2 begins before doing another TX, if the next queued packet is for peer_2 while packets for peer_1 are still in the queue. Correct?

      • You suggest that we can optimize this by sending packets belonging to the same peer in a row, as a workaround (so that they are sorted in the stack buffer)?
        • If I have understood correctly, this seems reasonable.

      Thanks

    • Hi Saeed

      Saeed Ghasemi said:

      This is very good, but how does it interplay with the connection interval settings?

      • I mean, if the GAP_EVENT_LENGTH is long compared to the connection interval.
        • What is the expected outcome?
          • Shortened GAP_EVENT_LENGTH, possibly?

      If you only have one link, this is not a problem. The GAP_EVENT_LENGTH will be capped to the length of the connection interval (CI).

      If you have multiple links and you want to ensure that all links get similar throughput, you don't want the GAP_EVENT_LENGTH to be larger than the connection interval divided by the number of links.

      As an example, if you have a CI of 10 ms and two links, you should set the GAP_EVENT_LENGTH to 4, which means each link gets 5 ms (4 × 1.25 ms).
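
      Expressed in code form (illustrative macro names only, not SDK defines):

      // Connection interval and event length are both configured in
      // units of 1.25 ms. 10 ms = 8 units; shared between 2 links gives
      // 4 units, i.e. 5 ms of radio time per link per interval.
      #define CONN_INTERVAL_UNITS  8   /* 10 ms / 1.25 ms */
      #define NUM_LINKS            2
      #define GAP_EVENT_LENGTH     (CONN_INTERVAL_UNITS / NUM_LINKS) /* = 4 */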

      Saeed Ghasemi said:

      And this is the core of my question. I understand from your answer that you have removed the limitation on how many packets we can send to the stack by enabling on-the-fly pushes to the stack during a connection interval. I would like to extend and control the number of packets the stack can accept before returning NRF_ERROR_RESOURCES, and thereby not rely on context-switch speed when receiving the BLE_GATTS_EVT_HVN_TX_COMPLETE event. We would like to reduce this race condition and do burst pushes to the stack if possible.

      • I am looking for the config option that sets the size of the internal stack buffer.

      After discussing this with the SoftDevice team, it appears the internal buffer will be sized according to the event length you set, to ensure that you can preload all the packets for the next connection event. In other words, you are not required to upload packets during the event to maximize throughput.
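
      For completeness: if you want to control the buffer count directly, the SoftDevice also lets you set the notification queue depth explicitly through sd_ble_cfg_set() and the hvn_tx_queue_size member of ble_gatts_conn_cfg_t. A rough sketch, to be called during stack configuration before the SoftDevice is enabled (APP_BLE_CONN_CFG_TAG and the queue depth of 10 are just example values):

      #include <string.h>
      #include "ble.h"
      #include "app_error.h"

      // Sketch: explicitly size the HVN (notification) TX queue for
      // connections created with APP_BLE_CONN_CFG_TAG. Call after
      // nrf_sdh_enable_request() but before the BLE stack is enabled.
      static void gatts_conn_cfg_set(uint32_t ram_start)
      {
          ble_cfg_t ble_cfg;
          memset(&ble_cfg, 0, sizeof(ble_cfg));

          ble_cfg.conn_cfg.conn_cfg_tag = APP_BLE_CONN_CFG_TAG;
          ble_cfg.conn_cfg.params.gatts_conn_cfg.hvn_tx_queue_size = 10;

          APP_ERROR_CHECK(sd_ble_cfg_set(BLE_CONN_CFG_GATTS, &ble_cfg, ram_start));
      }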


      Saeed Ghasemi said:

      Assuming that the current CI is for peer_1: I understand this as meaning the stack does not search the buffer for more packets destined for peer_1, but transmits the packets in the order they were pushed to the stack. This would make the stack wait until the connection event for peer_2 begins before doing another TX, if the next queued packet is for peer_2 while packets for peer_1 are still in the queue. Correct?

      • You suggest that we can optimize this by sending packets belonging to the same peer in a row, as a workaround (so that they are sorted in the stack buffer)?
        • If I have understood correctly, this seems reasonable.

      I am not quite sure I understand the question, but each connection will have a separate packet queue/buffer. The time at which you upload packets to the buffers for peer_1 and peer_2 doesn't really matter, as long as you fill each buffer before the corresponding connection event occurs.

      Since the connection events for peer_1 and peer_2 will occur in a round-robin fashion, your packet upload code will probably do the same.
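
      In code, that round-robin refill could be driven from the BLE event handler (a sketch; send_pending_packets() is the placeholder per-link upload routine from my earlier reply):

      #include "ble.h"

      // Refill the queue of whichever link just drained. Registered with
      // the SoftDevice handler, e.g. via NRF_SDH_BLE_OBSERVER in SDK 15.
      static void ble_evt_handler(ble_evt_t const * p_ble_evt, void * p_context)
      {
          switch (p_ble_evt->header.evt_id)
          {
              case BLE_GATTS_EVT_HVN_TX_COMPLETE:
                  // One or more notifications for this link went out; top
                  // its queue back up before the next connection event.
                  send_pending_packets(p_ble_evt->evt.gatts_evt.conn_handle);
                  break;

              default:
                  break;
          }
      }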

      Best regards
      Torbjørn

    • Hi Torbjørn!

      This is all good news! I got all I was looking for.

      Thank you very much for your time.

      Kind regards

      Saeed Ghasemi
