
Connection Parameter Selection for a Central Device with 10 Simultaneous Peripheral Connections

Hi,

I am developing both central and peripheral devices with the nRF52840. For consistency, both use SDK 15.3.0 and S140 v7.

I am able to connect to peripherals and negotiate an ATT MTU size of 247, and I am able to negotiate the 2 Mbps PHY. Logging on both the peripheral and central devices confirms this is working.
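For reference, the PHY request from the central looks roughly like this (a minimal sketch using the S140 API; the wrapper function name and error handling are illustrative, not from my project):

#include "ble_gap.h"
#include "app_error.h"

// Request the 2 Mbps PHY on an established connection (central side).
// The negotiated result arrives later in the BLE_GAP_EVT_PHY_UPDATE event.
static void phy_2mbps_request(uint16_t conn_handle)
{
    ble_gap_phys_t const phys =
    {
        .tx_phys = BLE_GAP_PHY_2MBPS,
        .rx_phys = BLE_GAP_PHY_2MBPS,
    };
    ret_code_t err_code = sd_ble_gap_phy_update(conn_handle, &phys);
    APP_ERROR_CHECK(err_code);
}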

My challenge is getting notification data from my peripherals to my central. If I set up (for example) 2 peripherals to notify my central device every 500 mSec with approx. 160 bytes of data, all data arrives at the BLE event handler (BLE_GATTC_EVT_HVX) within my central device just fine. If I double the data rate on the peripherals (i.e. 160 bytes every 250 mSec from each of the 2 devices), I only get BLE_GATTC_EVT_HVX events for incoming data from the first peripheral.

I believe that correct connection intervals are not being set up after each peripheral connects.

For a scenario where a central is talking to 10 peripherals and getting notification data from each every 250 mSec, what would be good connection interval and slave latency values? I cannot find a good reference for setting up connection intervals for multiple simultaneous peripheral connections to a central device, where each peripheral notifies the central independently.

Note that my connection event length is 6 x 1.25 mSec, or 7.5 mSec. This should be plenty long enough to transfer a full 247-byte MTU.

I have been using the default min and max connection intervals of 7.5 and 30 mSec respectively, and a slave latency of 0 (for both central and peripheral devices).
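For reference, these values map onto the SDK configuration and GAP structures roughly as follows (a sketch; the supervision timeout is an assumed value, not stated above):

// sdk_config.h: connection event length of 6 x 1.25 mSec = 7.5 mSec
// #define NRF_SDH_BLE_GAP_EVENT_LENGTH 6

#include "ble_gap.h"
#include "app_util.h"  // MSEC_TO_UNITS, UNIT_1_25_MS, UNIT_10_MS

// Default connection parameters described above.
static ble_gap_conn_params_t const m_conn_params =
{
    .min_conn_interval = MSEC_TO_UNITS(7.5, UNIT_1_25_MS),  // 7.5 mSec
    .max_conn_interval = MSEC_TO_UNITS(30, UNIT_1_25_MS),   // 30 mSec
    .slave_latency     = 0,
    .conn_sup_timeout  = MSEC_TO_UNITS(4000, UNIT_10_MS),   // assumed: 4 s
};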

Suggestions for my scenario would be much appreciated.

Thanks in advance,

Mark J

  • Hi,

    We have been working with slower BLE comms since I last replied to you. I have tested with longer connection event times and longer connection intervals, as you suggested, but these changes did not improve comms. If I send notifications from my peripheral(s) too quickly, some notifications never make it to my central device.

    First, some background and detail to re-familiarize you with our issue: we have developed 4 IoT sensors and an IoT gateway using nRF52840 modules. We are using S140 and SDK 15.3 on both the sensors (BLE peripheral role) and the gateway (BLE central role). In case it matters, we are using SES for development. We started with NUS, but dropped it due to overcomplexity and difficulty in getting simple connection handles (the silly BLE_NUS_DEF macro was part of the issue). We are setting up peripheral notifications to the central without issue and we are successful at passing data - until we send notifications too quickly or from too many peripherals at a time.

    To facilitate fast notifications, we have updated the following BLE parameters (a configuration sketch follows the list):

    • ATT MTU size. Selected the max supported 251-byte MTU size within Nordic's sdk_config.h file (#define NRF_SDH_BLE_GATT_MAX_MTU_SIZE 251). Note that the MTU size can be negotiated between central and peripheral, but the simpler approach of setting it on both central and peripheral is used. Logged events (after connection) confirm that this is correctly set and used within central and peripheral.
    • GAP PDU packet size. Selected the max supported 251-byte packet size within Nordic's sdk_config.h file (#define NRF_SDH_BLE_GAP_DATA_LENGTH 251). Note that the packet size can be negotiated between central and peripheral, but the simpler approach of setting it on both central and peripheral is used. Logged events (after connection) confirm that this is correctly set and used within central and peripheral.
    • Event length (#define NRF_SDH_BLE_GAP_EVENT_LENGTH 8, in units of 1.25 mSec, i.e. 10 mSec). This is fixed in both central and peripheral. 5 mSec was also tested. More on this below.
    • Selected the 2 Mbps baseband PHY. This must be negotiated, and must be requested by the central for any PHY other than 1 Mbps (the default). Logged events (after connection) confirm that this is correctly set and used within central and peripheral.
    • Data connection event length extension support is enabled. This is not negotiated and is set directly within both central and peripheral (it may only need to be set in the peripheral, but Nordic's documentation is unclear). It is set during GAP init (sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &ble_opt … with ble_opt.common_opt.conn_evt_ext.enable = 1). We understand that with connection event length extension support in place, multiple packets can be sent within each connection event on each connection - whenever there is more data to send (i.e. more than 247 bytes).
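    Putting the list above together, here is a minimal configuration sketch (the sdk_config.h values are those listed above; the enable function mirrors the sd_ble_opt_set() call from the last bullet, with error handling reduced to APP_ERROR_CHECK):

    #include <string.h>
    #include "ble.h"
    #include "app_error.h"

    // sdk_config.h (values from the list above):
    //   #define NRF_SDH_BLE_GATT_MAX_MTU_SIZE 251
    //   #define NRF_SDH_BLE_GAP_DATA_LENGTH   251
    //   #define NRF_SDH_BLE_GAP_EVENT_LENGTH  8   // 8 x 1.25 mSec = 10 mSec

    // Enable connection event length extension (called during GAP init).
    static void conn_evt_len_ext_enable(void)
    {
        ble_opt_t ble_opt;
        memset(&ble_opt, 0, sizeof(ble_opt));
        ble_opt.common_opt.conn_evt_ext.enable = 1;

        ret_code_t err_code = sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &ble_opt);
        APP_ERROR_CHECK(err_code);
    }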

    Required sample data throughput at each peripheral can be calculated as follows (s/s denotes samples/sec):

    1600 s/s x 6 bytes/sample = 9600 bytes/s. 9600 bytes/s x 8 bits/byte = 76800 bits/s, or 76.8 kbps.

    At the central device, this means that 76800 x 4 = 307200 bps, or 307.2 kbps, is required. This should (in theory) be feasible when a baseband PHY of 2 Mbps or even 1 Mbps is selected.

    Critically, whenever a BLE packet is sent, it should be as full as possible to avoid wasted packet transmissions. Peripheral firmware therefore ensures that 40 samples (240 bytes) are sent in each packet.

    Using the sample rate of 1600 s/s, and given that a notification is sent every 40 samples (for packet fill efficiency), the required notification rate is 1600 s/s / 40 samples/notification = 40 notifications/s, i.e. a notification period of 1/40 s = 25 mSec. In other words, a 240-byte notification must be sent at least every 25 mSec.

    Time to transfer 40 6-byte samples using the 1 Mbps PHY: 40 samples x 6 bytes/sample x 8 bits/byte = 1920 bits; 1920 bits / 1,000,000 bps = 1.92 mSec. Of course, it would take half this time using the 2 Mbps PHY (0.96 mSec).
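    To make these numbers easy to re-check when the sample rate or packet size changes, here is a small stand-alone sketch of the same arithmetic (values from the text; purely illustrative):

    #include <stdio.h>

    /* Reproduces the throughput arithmetic above. */
    int main(void)
    {
        const double sample_rate  = 1600.0;  /* samples/s per peripheral */
        const double sample_bytes = 6.0;     /* bytes per sample */
        const double pkt_samples  = 40.0;    /* samples per notification */
        const int    peripherals  = 4;

        double bytes_per_s = sample_rate * sample_bytes;          /* 9600 */
        double bps         = bytes_per_s * 8.0;                   /* 76800 */
        double central_bps = bps * peripherals;                   /* 307200 */
        double period_ms   = 1000.0 * pkt_samples / sample_rate;  /* 25 */
        double airtime_ms  = pkt_samples * sample_bytes * 8.0 / 1e6 * 1000.0;  /* 1.92 at 1 Mbps */

        printf("per peripheral : %.0f bytes/s (%.0f bps)\n", bytes_per_s, bps);
        printf("central total  : %.1f kbps\n", central_bps / 1000.0);
        printf("notify period  : %.0f mSec\n", period_ms);
        printf("payload airtime: %.2f mSec at 1 Mbps (half at 2 Mbps)\n", airtime_ms);
        return 0;
    }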

    Given the above, selecting a connection interval of less than 25 mSec should ensure that each connection is serviced often enough to offload all 40 acquired samples every connection event. A 20 mSec interval has been tested. The connection event time for each of the 4 connections must then be 5 mSec or less to fit within the 20 mSec connection interval. If required, 2 packets can be sent within the connection event time (even with the 1 Mbps PHY). (Diagram: 4 x 5 mSec connection events within a 20 mSec connection interval; the gaps between connection events and subsequent connection intervals are ignored, as they are assumed to be on the order of microseconds.)
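    To request the tested 20 mSec interval at runtime, something like the following should work (a sketch using the standard S140 call; the supervision timeout is an assumed value):

    #include "ble_gap.h"
    #include "app_util.h"   // MSEC_TO_UNITS, UNIT_1_25_MS, UNIT_10_MS
    #include "app_error.h"

    // Request a fixed 20 mSec connection interval on an existing link.
    static void conn_interval_20ms_request(uint16_t conn_handle)
    {
        ble_gap_conn_params_t const conn_params =
        {
            .min_conn_interval = MSEC_TO_UNITS(20, UNIT_1_25_MS),
            .max_conn_interval = MSEC_TO_UNITS(20, UNIT_1_25_MS),
            .slave_latency     = 0,
            .conn_sup_timeout  = MSEC_TO_UNITS(4000, UNIT_10_MS),  // assumed
        };
        ret_code_t err_code = sd_ble_gap_conn_param_update(conn_handle, &conn_params);
        APP_ERROR_CHECK(err_code);
    }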

    Empirical testing of the above scenario (2 Mbps PHY) with a single peripheral indicates that 5 mSec is not enough time to service each connection event. A 10 mSec connection event time works much better, and the 20 mSec connection interval is retained (less than 25 mSec, as required). So, notifications from a single peripheral (40 samples, or 240 bytes, every 25 mSec) with the 2 Mbps PHY, a connection event time of 10 mSec and a connection interval of 20 mSec are working. No notifications are lost. Keeping the connection event time at 10 mSec and the connection interval at 20 mSec while dropping the PHY to 1 Mbps results in a packet loss of 1/6000 - which we can live with. So, this works:

    (Diagram: note only 2 peripherals supported.)

    Increasing the connection interval past 25 mSec or reducing the connection event time below 10 mSec results in lost notifications.

    We have also tried increasing both the connection event time and the connection interval (e.g. a 50 mSec connection event time and a 200 mSec connection interval). We buffered samples to support sending more of them less frequently, expecting that the longer connection event time (and the data event length extension enable) would allow many samples per connection event. However, empirical testing suggests that no more than 1 packet is being sent per connection event. When using a connection event time of 50 mSec and a connection interval of 200 mSec, even more notifications were lost (50%).
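    For what it's worth, our understanding is that sending several packets per connection event also requires the application to keep the SoftDevice's notification queue topped up. A sketch of that pattern follows (the packet_* buffer helpers are hypothetical, not from our code):

    #include <stdbool.h>
    #include <stdint.h>
    #include "ble_gatts.h"
    #include "sdk_errors.h"

    // Hypothetical buffer helpers:
    extern bool      packet_ready(void);
    extern uint8_t * packet_peek(void);
    extern void      packet_pop(void);

    /* Queue notifications until the SoftDevice buffers fill up
       (NRF_ERROR_RESOURCES), then resume from the
       BLE_GATTS_EVT_HVN_TX_COMPLETE event handler. */
    static void send_buffered_packets(uint16_t conn_handle, uint16_t value_handle)
    {
        ret_code_t err_code = NRF_SUCCESS;

        while (packet_ready() && (err_code == NRF_SUCCESS))
        {
            uint16_t len = 240;  // one full 40-sample packet
            ble_gatts_hvx_params_t hvx_params =
            {
                .handle = value_handle,
                .type   = BLE_GATT_HVX_NOTIFICATION,
                .p_len  = &len,
                .p_data = packet_peek(),
            };

            err_code = sd_ble_gatts_hvx(conn_handle, &hvx_params);
            if (err_code == NRF_SUCCESS)
            {
                packet_pop();
            }
            // NRF_ERROR_RESOURCES means the queue is full: stop here and
            // call this function again on BLE_GATTS_EVT_HVN_TX_COMPLETE.
        }
    }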

    Until we can make our connection interval greater than 4 x our connection event time while keeping it below 25 mSec (or send more than 240 bytes per connection event), we cannot support notifications from 4 peripherals. A connection event time of 10 mSec with a 20 mSec connection interval supports notifications from only 1 or 2 peripherals.

    Our testing leads us to wonder ... 

    1. Why do connection event times need to be so long, even when a short connection interval is used? We would expect 240 bytes to transfer in 1 or 2 mSec. Why do we have to set this to 10 mSec? Is this to allow for many retries?
    2. Why do connection intervals need to be so short? Our results suggest that only a single 240-byte notification is supported per connection interval (within a connection event). We send 240 bytes every 25 mSec, and if we use a connection interval longer than this, notifications are lost - suggesting that only 1 notification goes out per connection interval.
    3. How can we confirm that the connection event length extension is enabled? sd_ble_opt_set() returns NRF_SUCCESS, but this is the only indication we are aware of. Setting other parameters results in an event that we log as confirmation, but no event results from setting the connection event length extension enable flag.
    4. Does the connection event length extension need to be set in just the peripheral, or in both central and peripheral? The throughput example is unclear on this, as it compiles and runs as both peripheral and central.

    If you have any other suggestions on how we can read 240 bytes every 25 mSec using notifications from 4 peripherals, we would appreciate the advice. Of course, 480 bytes every 50 mSec or 720 bytes every 75 mSec, etc., are also valid options.

    Thanks,

    Mark J

  • Hello Mark,

    Try enabling debug logging (NRF_LOG_ENABLED 1 and NRF_LOG_DEFAULT_LEVEL 4 in sdk_config.h) and connect the devices. If you do this in e.g. the ble_app_uart + ble_app_uart_c examples, you will see the logging from the BLE_GAP_EVT_DATA_LENGTH_UPDATE event printed from nrf_ble_gatt.c:

    <debug> nrf_ble_gatt: Data length updated to 251 on connection 0x0.
    <debug> nrf_ble_gatt: max_rx_octets: 251
    <debug> nrf_ble_gatt: max_tx_octets: 251
    <debug> nrf_ble_gatt: max_rx_time: 2120
    <debug> nrf_ble_gatt: max_tx_time: 2120

    The times are given in µs. Are the connection rx time and tx time what you expect them to be?
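    For reference, the two sdk_config.h switches mentioned above:

    #define NRF_LOG_ENABLED       1
    #define NRF_LOG_DEFAULT_LEVEL 4  // 0=Off, 1=Error, 2=Warning, 3=Info, 4=Debug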

    When you say that you are losing packets, do you mean that the packets are queued successfully using sd_ble_gatts_hvx() (it returns NRF_SUCCESS), but the corresponding notifications never arrive?

    BR,

    Edvin

  • Hi,

    I have enabled logging of the data length update on both the peripherals and the central (Segger RTT logging). On both, I am simply handling the BLE event:

    case BLE_GAP_EVT_DATA_LENGTH_UPDATE:
    {
      // Effective data length parameters negotiated for this connection.
      ble_gap_evt_data_length_update_t const * p_datalen_evt =
          &p_ble_evt->evt.gap_evt.params.data_length_update;

      SEGGER_RTT_printf(0, "Data length (PDU) update. Max Tx octets: %d, Max Rx octets %d, Max Tx uSec: %d, Max Rx uSec: %d\n",
        p_datalen_evt->effective_params.max_tx_octets,
        p_datalen_evt->effective_params.max_rx_octets,
        p_datalen_evt->effective_params.max_tx_time_us,
        p_datalen_evt->effective_params.max_rx_time_us);

      break;
    }

    Logger output is the same for peripherals and central:

    Data length (PDU) update. Max Tx octets: 251, Max Rx octets 251, Max Tx uSec: 2120, Max Rx uSec: 2120

    I am setting the connection event length identically for peripheral and central in sdk_config.h:

    #ifndef NRF_SDH_BLE_GAP_EVENT_LENGTH
    #define NRF_SDH_BLE_GAP_EVENT_LENGTH 8  // 8 x 1.25 mSec = 10 mSec
    #endif

    The 8 x 1.25 mSec entered should stipulate tx and rx max times of 10 mSec, not the 2.12 mSec logged, shouldn't it? Maybe there is an issue in how the connection event length is set up (i.e. more is required beyond setting NRF_SDH_BLE_GAP_EVENT_LENGTH to 8).

    To directly answer your last question: yes. I am getting NRF_SUCCESS returned after calling sd_ble_gatts_hvx() on the peripherals to notify the central with data. On the central, I am processing the BLE_GATTC_EVT_HVX events resulting from these notifications. When I continuously write data (the 240 bytes discussed above), I count sd_ble_gatts_hvx() calls on the peripherals and the resulting BLE_GATTC_EVT_HVX events on the central. If I don't send notifications too fast, the counts on the peripherals and the central match. Only when I send data too quickly (as discussed in this thread) do I see count mismatches. The count is higher on the peripherals than on the central, because some notifications are not getting through.

    Is there any suggested code change to get the connection event length set up correctly (assuming that the 2120 uSec is incorrect, given that I set NRF_SDH_BLE_GAP_EVENT_LENGTH to 8)?

    Thanks,

    Mark J

  • ... just to confirm ....

    I am calling nrf_sdh_ble_default_cfg_set() with NRF_SDH_BLE_TOTAL_LINK_COUNT > 0 on both the peripherals and the central. Within the implementation of nrf_sdh_ble_default_cfg_set(), my NRF_SDH_BLE_GAP_EVENT_LENGTH value of 8 is used. So, I think that I am setting the connection event length correctly to 10 mSec.
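    For anyone following along, the relevant fragment of nrf_sdh_ble_default_cfg_set() looks roughly like this (paraphrased from SDK 15.3's nrf_sdh_ble.c, not copied verbatim):

    // Inside nrf_sdh_ble_default_cfg_set(), simplified:
    ble_cfg_t ble_cfg;
    memset(&ble_cfg, 0x00, sizeof(ble_cfg));
    ble_cfg.conn_cfg.conn_cfg_tag                     = conn_cfg_tag;
    ble_cfg.conn_cfg.params.gap_conn_cfg.conn_count   = NRF_SDH_BLE_TOTAL_LINK_COUNT;
    ble_cfg.conn_cfg.params.gap_conn_cfg.event_length = NRF_SDH_BLE_GAP_EVENT_LENGTH;

    ret_code = sd_ble_cfg_set(BLE_CONN_CFG_GAP, &ble_cfg, ram_start);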

    Thanks,

    Mark J

    The event length sets the maximum connection event length. The time printed in the data length update event is the time set aside to send a single 251-octet packet, not the event length (2120 µs is the on-air time of a maximum-length data packet on the 1 Mbps PHY).

    I would like to reproduce this on my side. How do I do that? Preferably, zip and send the projects that you use to replicate this issue. I don't remember if I have asked for this before; it has been a while since we discussed this ticket (before your post 3 days ago). I would like to see what the lost notifications look like.
