
Connection Parameter Selection for a Central Device with 10 Simultaneous Peripheral Connections

Hi,

I am developing both Central and Peripheral devices with nRF52840. For consistency, both are using SDK 15.3.0 and S140 V7.

I am able to connect to peripherals and negotiate an ATT MTU of 247, and I am able to negotiate the 2 Mbps PHY. Logging on both the peripheral and central devices confirms this is working.

My challenge is getting notification data from my peripherals to my central. If I set up (for example) 2 peripherals to notify my central device every 500 mSec with approx 160 bytes of data, all data comes into the BLE event handler (BLE_GATTC_EVT_HVX) in my central device just fine. If I double the data rate on the peripherals (i.e. 160 bytes every 250 mSec from each of the 2 devices), I only get BLE_GATTC_EVT_HVX events notifying me of incoming data from the first peripheral.
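
For reference, notification data arrives in my central roughly like this (a simplified sketch, not my exact code; the handler and log names are illustrative):

    // Simplified sketch of the central's BLE event handler.
    static void ble_evt_handler(ble_evt_t const * p_ble_evt, void * p_context)
    {
        switch (p_ble_evt->header.evt_id)
        {
            case BLE_GATTC_EVT_HVX:
            {
                // conn_handle identifies which peripheral sent this notification.
                ble_gattc_evt_hvx_t const * p_hvx = &p_ble_evt->evt.gattc_evt.params.hvx;
                NRF_LOG_INFO("HVX from conn_handle 0x%04x, %d bytes",
                             p_ble_evt->evt.gattc_evt.conn_handle, p_hvx->len);
            } break;

            default:
                break;
        }
    }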

I believe that the connection intervals are not being set up correctly after each peripheral connection is established.

For a scenario where a central is talking to 10 peripherals and getting notification data from each every 250 mSec, what would good connection interval and slave latency values be? I cannot find a good reference for choosing connection intervals for multiple simultaneous peripheral connections to a central device, where each peripheral notifies the central independently.

Note that my connection event length is 6 x 1.25 mSec, or 7.5 mSec. This should be plenty long enough to transfer an ATT MTU of 247 bytes.

I have been using the default min and max connection intervals of 7.5 and 30 mSec respectively, and a slave latency of 0 (on both central and peripheral devices).
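
For reference, here is roughly how these parameters are expressed in code (a simplified sketch using the SDK's MSEC_TO_UNITS macro; the 4000 mSec supervision timeout shown is illustrative, not my actual value):

    // Sketch: on the peripheral these go to sd_ble_gap_ppcp_set(); on the
    // central the same struct is passed to sd_ble_gap_connect().
    ble_gap_conn_params_t conn_params;
    memset(&conn_params, 0, sizeof(conn_params));
    conn_params.min_conn_interval = MSEC_TO_UNITS(7.5, UNIT_1_25_MS); // 7.5 mSec
    conn_params.max_conn_interval = MSEC_TO_UNITS(30, UNIT_1_25_MS);  // 30 mSec
    conn_params.slave_latency     = 0;
    conn_params.conn_sup_timeout  = MSEC_TO_UNITS(4000, UNIT_10_MS);  // illustrative
    APP_ERROR_CHECK(sd_ble_gap_ppcp_set(&conn_params));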

Suggestions for my scenario would be much appreciated.

Thanks in advance,

Mark J

  • Hello Mark,

    Sorry for the late reply. We are quite short-staffed on support due to summer holidays, and I am sorry I didn't have time to reply before the weekend. Answering your first post first:

    Are you sure that you are only receiving data from one peripheral connection? Perhaps you are spending too much time in the interrupt from the first device, so that events from the second device are never handled. I assume (based on your case history) that you are using the Nordic UART Service (NUS). Do you print all the incoming data on the UART? What happens if you leave the interrupt immediately and just print something like NRF_LOG_INFO("notification received from conn_handle %02x", conn_handle) whenever you receive data? Do you see both devices then?

  • Hi 

    As per my last entry: Increase Supported Links to Peripherals on a Central Device

    ... I took out NUS as it would not provide me a proper reference to the sending connection. I am now using direct connection handles, and this is working better in that I always get the correct reference to the connected peripheral sending me notification data.

    I am leaving the interrupt quickly and running my code on the main thread using a scheduler. Here is my main loop:

      // Enter main loop.
      while (true)
      {
          // Process USB events.
          while (app_usbd_event_queue_process()) { /* Nothing to do */ }

          // Loop-time probe (normally commented out):
          // if (tempCnt++ >= 300000) { SEGGER_RTT_printf(0, "X\n"); tempCnt = 0; }

          // Handle scheduler actions.
          app_sched_execute();

          // Manage power (currently disabled).
          // nrf_pwr_mgmt_run();
      }

    By printing "X" I can see that my loop iterates in < 10 uSec, so I am leaving the interrupt quickly and able to service the scheduler quickly on the main thread. I suspected that scheduling delay and handler execution time (i.e. too long in a handler) might be the issue, but I am now quite convinced it is related to the radio. More in my next reply below.

    IMPORTANTLY - I am not interested in scanning while transferring data. I only scan to set up connections. I scan, stop scanning, connect to my 10 peripherals, and then request that they notify me every 250 mSec. I have to pass a ble_gap_scan_params_t* to sd_ble_gap_connect() when connecting to each peripheral, and I have been using the same scan interval and window within this ble_gap_scan_params_t object. Maybe that is my issue? If I want no scanning after connection, can I set this ble_gap_scan_params_t* to NULL, or to some other value that drastically reduces or totally eliminates scanning after connections?
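
    For reference, my connect call looks roughly like this (a simplified sketch; the scan interval and window values shown are placeholders, not my exact numbers):

      // Sketch: connecting with explicit scan parameters. These are only
      // used by the SoftDevice to find the peer before the connection is
      // established.
      ble_gap_scan_params_t scan_params;
      memset(&scan_params, 0, sizeof(scan_params));
      scan_params.interval  = MSEC_TO_UNITS(100, UNIT_0_625_MS); // placeholder
      scan_params.window    = MSEC_TO_UNITS(50, UNIT_0_625_MS);  // placeholder
      scan_params.timeout   = 0;
      scan_params.scan_phys = BLE_GAP_PHY_1MBPS;

      err_code = sd_ble_gap_connect(&peer_addr, &scan_params,
                                    &conn_params, APP_BLE_CONN_CFG_TAG);
      APP_ERROR_CHECK(err_code);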

    Regards,

    Mark J 

    BLE is a lossless protocol (ensured by ACKing and retransmission), so scanning shouldn't affect this. As long as you are not disconnected, all packets should arrive eventually, and in the same order they were sent.

    Perhaps you can show me how you are sending the data where you see the loss?

    You are still using SDK 15.3.0, right? The reason I ask is that from SDK 16.0.0 they did something to ble_nus_c_string_send() which in my opinion makes it unreliable. The 15.3.0 implementation should be fine.

  • Was off until today.

    If scanning interrupts connection events and these events back up, eventually there must be losses at the app level. This Nordic page (and others) is pretty clear about how connection events can be affected by scanning: Suggested intervals and windows.

    Since sending my last comment, I have tried setting very small scanning windows and this has not improved throughput.

    As stated in our last post and this post, we are not using NUS anymore. We are now able to use the underlying API support to get connection handles and reference connections using these handles. NUS did not give us access to handles; this is all described in the last post. This code works fine if we have slow data updates (2 peripherals notifying us every 500 mSec with 160 bytes of data). The issue is that if these peripherals notify at a 250 mSec period (160 bytes), we can miss all notifications from 1 of the 2 peripherals.

    Please re-read the top of this post as it clarifies how we are sending data. 

    Yes, still using SDK 15.3.0, but as stated not NUS.

    Could this be related to the code servicing BLE notifications not getting enough processing time? My main loop is in an earlier post in this thread. Empirical testing shows that the loop time is < 10 uSec, so I would expect this to be fast enough.

    Would it be faster to poll the peripherals instead of having them use notifications to send data to the central? This seems counter-intuitive, but we need a solution quickly. We have customers waiting for our device firmware with support for 10 peripherals, and this is the last hurdle to completion.
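
    For clarity, by polling I mean something like this on the central (a hypothetical sketch; char_value_handle would come from service discovery):

      // Hypothetical polling alternative: read the characteristic value
      // instead of waiting for a notification. The value arrives later in
      // a BLE_GATTC_EVT_READ_RSP event.
      err_code = sd_ble_gattc_read(conn_handle, char_value_handle, 0);
      APP_ERROR_CHECK(err_code);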

    Mark J

  • Mark J said:
    As stated in our last post and this post, we are not using NUS anymore. We are now able to use the underlying API support to get connection handles and reference connections using these handles.

    I am sorry for the mix-up. Please remember that we handle many cases every day, so when there are days between looking at a ticket, I sometimes forget details.

    The ble_nus_c.c file doesn't forward the connection handle to the event handler. This can easily be fixed by adding:

    ble_nus_c_evt.conn_handle = p_ble_evt->evt.gattc_evt.conn_handle;

    in on_hvx() in ble_nus_c.c. But I'll stop nagging about this now; I mention it just in case you wondered why.

    Can you check that the peripherals sending this data are calling sd_ble_gatts_hvx(), and that it returns NRF_SUCCESS for the data you claim to be missing?

    Note that if the peripheral never actually gets the data through, then the continuous calls to sd_ble_gatts_hvx() will eventually return NRF_ERROR_RESOURCES before disconnecting.
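
    The usual peripheral-side pattern looks something like this (a sketch, not your exact code):

      // Sketch: send one notification and handle a full TX queue.
      uint16_t               len = data_len;   // number of bytes to send
      ble_gatts_hvx_params_t hvx_params;
      memset(&hvx_params, 0, sizeof(hvx_params));
      hvx_params.handle = value_handle;        // characteristic value handle
      hvx_params.type   = BLE_GATT_HVX_NOTIFICATION;
      hvx_params.p_len  = &len;
      hvx_params.p_data = p_data;              // payload buffer

      err_code = sd_ble_gatts_hvx(conn_handle, &hvx_params);
      if (err_code == NRF_ERROR_RESOURCES)
      {
          // TX queue is full: wait for BLE_GATTS_EVT_HVN_TX_COMPLETE and
          // retry, rather than dropping the notification.
      }
      else
      {
          APP_ERROR_CHECK(err_code);
      }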

    Have you tried to sniff the connections? Can you see the data from both devices over the air? The nRF Sniffer should be sufficient for this. You can only sniff one connection at a time, so try to sniff the connection whose data you don't see.

    Best regards,

    Edvin

  • Hi ,

    I apologize if I sounded unappreciative with my note about NUS. I appreciate the assistance.

    I am using APP_ERROR_CHECK(err_code) to check the return value from each sd_ble_gatts_hvx() call, and I am not seeing anything other than NRF_SUCCESS.

    I added some logging around the update of connection parameters and learned that my peripherals were not letting my central drop the connection interval as low as expected. I can change this as required; the issue is that the minimum connection interval supported by the peripheral is too high.

    I have also found and fixed a bug in my peripheral code that led to too few sd_ble_gatts_hvx() calls. That very likely (to be confirmed with more testing) addresses the Bluetooth notification issue. I am now struggling with a USB device comms issue (likely my bug or a Linux CDC driver issue).

    I will need more time to dig into and fix the USB issue before properly testing the Bluetooth notification fix.

    Thanks,

    Mark J

  • Hello Mark,

    Glad to hear that you are on the right track!

    I just wanted to let you know that I am leaving for vacation in the beginning of next week, so if you discover any new issues after the workday tomorrow (Norwegian time), I suggest that you create a new ticket. You can always link to this one for background information.

    Best regards,

    Edvin

  • Hi Edvin,

    I hope that you enjoyed your time off this summer.

    Not sure if it is appropriate to close this off with a verified answer or continue it.

    I have gotten my previous use case working (10 peripherals, each notifying every 250 mSec with 160 bytes of data). For the record, my config is:

    Peripheral and Central:

    Min and max connection interval of 150 mSec.

    Slave latency of 0.

    Connection supervisory timeout of 8000 mSec.

    sdk_config.h - NRF_SDH_BLE_GAP_DATA_LENGTH 251

    sdk_config.h - NRF_SDH_BLE_GATT_MAX_MTU_SIZE 247

    sdk_config.h - NRF_SDH_BLE_GAP_EVENT_LENGTH 6
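
    In sdk_config.h terms, that is (a sketch; NRF_SDH_BLE_CENTRAL_LINK_COUNT is the one value I am inferring here, since 10 links require it):

      #define NRF_SDH_BLE_CENTRAL_LINK_COUNT 10  // inferred; needed for 10 links
      #define NRF_SDH_BLE_GAP_DATA_LENGTH    251
      #define NRF_SDH_BLE_GATT_MAX_MTU_SIZE  247
      #define NRF_SDH_BLE_GAP_EVENT_LENGTH   6   // 6 x 1.25 mSec = 7.5 mSec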

    I have another use case: I need to get more data delivered to my central device every 250 mSec from fewer (4) peripheral devices. Specifically, 2400 bytes from each of the 4 peripherals every 250 mSec. If it is more efficient to get more data less often, I can also live with 9600 bytes every 1 sec from each of the 4 peripherals.

    To get high throughput from a single peripheral in the past, I have had to dramatically increase NRF_SDH_BLE_GAP_EVENT_LENGTH, following the advice in this post: nRF52840 Speed Improvement Post. This confuses me. If I am able to use the 2 Mbit/s PHY and an MTU of 247, I would expect much shorter event lengths to suffice. At 2 Mbit/s, I should be able to send 2400 bytes in approx 12 mSec (2400 bytes x 10 bits/byte / 2000000 bits/s). I recognize that it will take longer than 12 mSec, given that many packets are needed within a connection interval and there is time between these packets, but it should not be much more than 12 mSec. The post linked above suggests increasing NRF_SDH_BLE_GAP_EVENT_LENGTH to 400 units of 1.25 mSec, or 500 mSec. This seems huge.

    Firstly - does my new use case require a new DevZone submission or post? If yes, this reply can serve as an answer and I can create a new post. If not, can you please advise:

    Are my data throughput expectations reasonable?

    What kind of GATT and GAP params are suggested, given that my peripherals will only advertise every 500 mSec and that no scanning will occur on the central while it services these 4 peripheral connections and handles their notifications?

    Thanks for your help as always.

    Mark J

  • Hello Mark,

    Thank you! I did :)

    Let's keep going here for now, as it is still related to the title. 

    So the device receiving data is the central, and the throughput from each peripheral is:

    T1 = 2400 bytes * 8 bits/byte * 4/s = 76,800 bps

    and for all 4 devices:

    T4 = 4 * T1 = 307,200 bps, or about 300 kbps.

    So this seems plausible with the 2 Mbps PHY. However, I don't know what range you are looking at between the central and the peripherals. With 2 Mbps you will start dropping packets quickly as the range increases. The BLE stack will handle retransmissions, but this means lower throughput.

    According to the link you refer to, the maximum throughput you can get from a link comes from a fairly long connection interval combined with a long connection event. That is the case for one link. If you have 4, as in your case, you obviously have a limit on how much time you can spend on each link. I suggest you try setting the connection event length to 1/4 of the connection interval (150/4 mSec). Try that. I don't think it would hurt to increase the connection interval even more; perhaps 400 mSec, and then use a 100 mSec connection event length. A sketch of how the event length is configured follows below.
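
    If you want to set the event length per connection rather than through NRF_SDH_BLE_GAP_EVENT_LENGTH, it would look something like this (a sketch; tag and ram_start handling as in the SDK examples):

      // Sketch: event length of 30 x 1.25 mSec = 37.5 mSec, i.e. roughly
      // 150 mSec / 4, applied before nrf_sdh_ble_enable().
      ble_cfg_t ble_cfg;
      memset(&ble_cfg, 0, sizeof(ble_cfg));
      ble_cfg.conn_cfg.conn_cfg_tag                     = APP_BLE_CONN_CFG_TAG;
      ble_cfg.conn_cfg.params.gap_conn_cfg.conn_count   = NRF_SDH_BLE_TOTAL_LINK_COUNT;
      ble_cfg.conn_cfg.params.gap_conn_cfg.event_length = 30;
      err_code = sd_ble_cfg_set(BLE_CONN_CFG_GAP, &ble_cfg, ram_start);
      APP_ERROR_CHECK(err_code);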

    You can play around with the parameters using the ble_app_att_mtu_throughput example from the SDK. 

    Have you tested any parameters?

    BR,
    Edvin

  • Hi ,

    To answer your first question: range will typically be within 2 to 20 meters. Sometimes this will be outdoors, with only the ground reflecting the signal between central and peripherals (i.e. when they do not have a direct line of sight). That, I think, will be our most challenging setup.

    Our calculations for payload transmission times line up (76800 bps per peripheral). If all packets get through on a 2 Mbps PHY, and ignoring the minimal expected time between packets, that should require 76800/2000000 of each second, or 38.4 mSec per second, to handle the data from each of the 4 peripherals. The only reason for losing data from any of the 4 peripherals would be many retransmissions. Is there a way to limit retransmissions to 3 or 4? It is preferable to lose a packet here or there and still keep data moving from all 4 peripherals.

    The above calculation also begs the question: why set NRF_SDH_BLE_GAP_EVENT_LENGTH to a huge number like 500 mSec when a connection on the 2 Mbps PHY should deliver the event data in far less time?

    I will try your suggestions for connection interval and event times and reply in this thread.

    Thanks,

    Mark J

  • Mark J said:
    The only reason for losing data from any of the 4 peripherals would be many retransmissions. Is there a way to limit retransmissions to 3 or 4?

    Unfortunately, no. That is a limitation in the BLE spec. All packets will be retransmitted until they are either ACKed or the link dies trying. This means that the only way a packet is aborted is if you have a disconnect.

     

    Mark J said:
    Our calculations for payload transmission times line up (76800 bps per peripheral). If all packets get through on a 2 Mbps PHY, and ignoring the minimal expected time between packets, that should require 76800/2000000 of each second, or 38.4 mSec per second, to handle the data from each of the 4 peripherals.

    I don't know how you did these calculations, and they may be correct, but remember that there are headers on the packets, ramp-up time on the radio, and some more HW overhead (which I don't know the details of). See here:

    https://devzone.nordicsemi.com/nordic/power/w/opp/2/online-power-profiler-for-ble

    So basically, the payload throughput you can expect is listed here:

    https://infocenter.nordicsemi.com/topic/sds_s140/SDS/s1xx/ble_data_throughput/ble_data_throughput.html

    Near the bottom you will see the maximum theoretical throughput with the 2 Mbps PHY, which is 1376.5 kbps (NB: bits, not bytes).

    When you divide this between 4 links, you need 4 times the headers, radio ramp-up, and so on, so realistically it is 1376/4 kbps and then a bit less - still roughly 344 kbps per link, comfortably above the 76.8 kbps you need. This is why I suggested setting the event length = conn_interval/4: it gives a more realistic throughput for each link, since each connection can only spend 1/4 of the time on the air.

    As mentioned, throughput on 2 Mbps is more sensitive to distance and radio signal strength than 1 Mbps, so you simply have to test this in the environment where the product will be used.

  • Hi ,

    We have been working with slower BLE comms since I last replied to you. I have since tested with longer connection event times and longer connection intervals as you suggested, but these changes did not improve comms. If I send notifications from my peripheral(s) too quickly, some notifications never make it to my central device.

    First, some background and detail to re-familiarize you with our issue: we have developed 4 IoT sensors and an IoT gateway using nRF52840 modules. We are using S140 and SDK 15.3 on both the sensors (BLE peripheral role) and the gateway (BLE central role). In case it matters, we are using SES for development. We started with NUS, but dropped it due to overcomplexity and difficulty in getting simple connection handles (the silly BLE_NUS_DEF was part of the issue). We are setting up peripheral notifications to the central without issue, and we successfully pass data - until we send notifications too quickly or from too many peripherals at a time.

    To facilitate fast notifications - we have updated the following BLE parameters:

    • ATT MTU size. Selected the max supported 251-byte MTU size within Nordic's sdk_config.h file (#define NRF_SDH_BLE_GATT_MAX_MTU_SIZE 251). Note that MTU size can be negotiated between central and peripheral, but the simpler approach of setting it on both central and peripheral is used. Logged events (after connection) confirm that this is correctly set and used within central and peripheral.
    • GAP PDU packet size. Selected the max supported 251-byte packet size within Nordic's sdk_config.h file (#define NRF_SDH_BLE_GAP_DATA_LENGTH 251). Note that packet size can be negotiated between central and peripheral, but the simpler approach of setting it on both central and peripheral is used. Logged events (after connection) confirm that this is correctly set and used within central and peripheral.
    • Event length in time (#define NRF_SDH_BLE_GAP_EVENT_LENGTH 8, i.e. 8 counts of 1.25 mSec, or 10 mSec). This is fixed in both central and peripheral. 5 mSec was also tested. More on this below.
    • Selected a baseband PHY of 2 Mbps. Any PHY other than 1 Mbps (the default) must be negotiated and requested from the central. Logged events (after connection) confirm that this is correctly set and used within central and peripheral.
    • Data connection event length extension support is enabled. This is not negotiated and is directly set within both central and peripheral (it may only need to be set in the peripheral, but Nordic is unclear). It is set during GAP init with sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &ble_opt) with ble_opt.common_opt.conn_evt_ext.enable = 1; see the sketch after this list. We understand that with connection event length extension in place, multiple packets can be sent within each connection event of every connection interval, whenever there is more data to send (i.e. more than 247 bytes).
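
    For completeness, the enable call we make is (a sketch of exactly the call described in the last bullet above):

      // Sketch: enable connection event length extension at GAP init.
      ble_opt_t ble_opt;
      memset(&ble_opt, 0, sizeof(ble_opt));
      ble_opt.common_opt.conn_evt_ext.enable = 1;
      err_code = sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &ble_opt);
      APP_ERROR_CHECK(err_code);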

    Required sample data throughput at each peripheral can be calculated as follows (s/s denotes samples/sec):

    1600 s/s x 6 bytes/sample = 9600 bytes/s. 9600 bytes/s x 8 bits/byte = 76800 bits/s, or 76800 bps.

    At the central device, this means that 76800 x 4 = 307200 bps, or 307.2 kbps, is required. This should (in theory) be feasible when a baseband PHY of 2 Mbps or even 1 Mbps is selected.

    Critically - whenever a BLE packet is sent, it should be as full as possible to minimize waste in packet transmission. Peripheral firmware ensures that 40 samples (240 bytes) are sent in each packet.

    Using the sample rate of 1600 s/s, and given that a notification to the central will be sent every 40 samples (for packet fill efficiency), the required notification frequency is 1600 s/s / 40 samples/notification = 40 notifications/s, i.e. a notification period of 1/40 s = 0.025 s or 25 mSec (a notification carrying 240 bytes must be sent at least every 25 mSec).

    Time to transfer 40 6-byte samples using a 1 Mbps PHY: (40 samples x 6 bytes/sample) x 8 bits/byte = 1920 bits; 1920 bits / 1000000 bps = 0.00192 s, or 1.92 mSec. Of course, it would take half this time using a 2 Mbps PHY (0.96 mSec).

    Given the above, selecting a connection interval of less than 25 mSec should ensure that each connection is serviced frequently enough to offload all 40 acquired samples every connection event. A 20 mSec interval has been tested. The connection event time for each of the 4 connections must then be 5 mSec or less to fit within the 20 mSec connection interval. If required, 2 packets can be sent within the connection event time (even with the 1 Mbps PHY). This ignores the time between connection events and subsequent connection intervals, which is assumed to be on the order of microseconds.

    Empirical testing of the above scenario (2 Mbps PHY) with a single peripheral indicates that 5 mSec is not enough time to service each connection event. A 10 mSec connection event time works much better, with the 20 mSec connection interval retained (less than 25 mSec, as required). So notifications from a single peripheral (40 samples or 240 bytes every 25 mSec) with a 2 Mbps PHY, a 10 mSec connection event time, and a 20 mSec connection interval are working; no notifications are lost. Keeping the connection event time at 10 mSec and the connection interval at 20 mSec while dropping the PHY to 1 Mbps results in a packet loss of 1/6000, which we can live with. So this works - but note that only 2 peripherals are supported this way.

    Increasing the connection interval past 25 mSec, or reducing the connection event time below 10 mSec, results in lost notifications.

    We have also tried increasing both the connection event time and the connection interval (e.g. 50 mSec connection event time and 200 mSec connection interval). We buffered samples to support sending more of them less frequently, expecting that the longer connection event time (with data event length extension enabled) would send many packets per connection event. Empirical testing suggests that no more than 1 packet is being sent per connection event. When using a connection event time of 50 mSec and a connection interval of 200 mSec, even more notifications were lost (50%).

    Until we can increase our connection interval to greater than 4 x our connection event time while somehow keeping the connection interval below 25 mSec (or send more than 240 bytes per connection event), we cannot support notifications from 4 peripherals. A connection event time of 10 mSec and a connection interval of 20 mSec supports notifications from only 1 or 2 peripherals.

    Our testing leads us to wonder ... 

    1. Why do connection event times need to be so long, even when a short connection interval is used? We would expect 240 bytes to transfer in 1 or 2 mSec. Why do we have to set this to 10 mSec? Is this to allow for many retries?
    2. Why do connection intervals need to be so short? Our results suggest that only a single 240-byte notification is delivered per connection interval (within a connection event). We send 240 bytes every 25 mSec, and if we use a connection interval longer than this, notifications are lost, suggesting that only 1 notification goes out per connection interval.
    3. How can we confirm that connection event length extension is enabled? sd_ble_opt_set() returns NRF_SUCCESS, but this is the only indication we are aware of. Setting other parameters results in an event that we log as confirmation, but no event results from setting the connection event length extension enable flag.
    4. Does connection event length extension need to be set in just the peripheral, or in both central and peripheral? The throughput example is unclear on this, as it compiles and runs as both peripheral and central.

    If you have any other suggestions on how we can read 240 bytes every 25 mSec using notifications from 4 peripherals, we would appreciate the advice. Of course, 480 bytes every 50 mSec or 720 bytes every 75 mSec, etc., is also a valid option.

    Thanks,

    Mark J
