
Tricks for keeping a BLE connection robust.

Hello all, I'm currently investigating BLE link stability in a harsh urban (megapolis) environment and at long open-space ranges. During testing I found some interesting behaviour and collected a few questions.
The setup is: a proprietary device based on an nRF51822 + S110 v8 SoftDevice. The NUS service is configured to notify some data each second. The connection parameters were: ConnInterval: 30 ms, SlaveLatency: 34, SupervisionTimeout: 10000 ms.
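For reference, a sketch of how those preferred parameters would be registered with the SoftDevice (nRF51 SDK headers assumed; interval units are 1.25 ms, timeout units are 10 ms, and APP_ERROR_CHECK is the SDK's error-check macro):

```c
#include <stdint.h>
#include "ble_gap.h"
#include "app_error.h"

static void gap_conn_params_init(void)
{
    ble_gap_conn_params_t gap_conn_params = {0};

    gap_conn_params.min_conn_interval = 24;    /* 30 ms / 1.25 ms          */
    gap_conn_params.max_conn_interval = 24;    /* 30 ms / 1.25 ms          */
    gap_conn_params.slave_latency     = 34;    /* may skip up to 34 events */
    gap_conn_params.conn_sup_timeout  = 1000;  /* 10000 ms / 10 ms         */

    /* Preferred values only; the central is free to impose its own. */
    uint32_t err_code = sd_ble_gap_ppcp_set(&gap_conn_params);
    APP_ERROR_CHECK(err_code);
}
```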

The sniffer capture showed:

  1. The peripheral sends an empty PDU without taking into account that a notification PDU was recently sent to the central. So the slave latency counter in S110 does not track other outgoing packets? Why? It drains the battery; what is the gain?

  2. At long distances the slave latency counter seems to be disabled and the S110 peripheral sends an empty PDU at every connection event. What RSSI threshold (in dBm) does the SoftDevice use to start this link reinforcement?

  3. Is there any RSSI averaging algorithm in the SoftDevice before the value is passed up through the BLE_GAP_EVT_RSSI_CHANGED event?

A brief manual on how to hold a BLE link as long as possible by means of Nordic's SoCs would be appreciated.

Edit: Sniffer_cap_link_test.pcapng

  • Hi Valer,

    1. Please provide the sniffer trace. Please be aware that setting the preferred connection parameters to a slave latency of 34 doesn't mean you actually get 34; it's the central that decides which parameters are used in the connection.

    2. What do you mean by "start link reinforcement"?

    3. RSSI is calculated on every received packet. You can configure some parameters with sd_ble_gap_rssi_start(), such as the threshold_dbm and the skip_count.
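A sketch of that RSSI configuration, assuming the S110 v8 API (the threshold and skip values here are illustrative, and handle_rssi stands in for whatever your application does with the value):

```c
#include "ble.h"
#include "ble_gap.h"
#include "app_error.h"

/* Illustrative values: report RSSI only when it moves by >= 5 dBm,
 * after at least 3 qualifying packets. */
#define RSSI_THRESHOLD_DBM  5
#define RSSI_SKIP_COUNT     3

static void rssi_monitoring_start(uint16_t conn_handle)
{
    uint32_t err_code = sd_ble_gap_rssi_start(conn_handle,
                                              RSSI_THRESHOLD_DBM,
                                              RSSI_SKIP_COUNT);
    APP_ERROR_CHECK(err_code);
}

/* Hypothetical application callback: the SoftDevice delivers raw
 * per-packet values, so any averaging is up to the application. */
static void handle_rssi(int8_t rssi)
{
    (void)rssi;
}

/* In the application's BLE event dispatcher: */
static void on_ble_evt(ble_evt_t *p_ble_evt)
{
    switch (p_ble_evt->header.evt_id) {
        case BLE_GAP_EVT_RSSI_CHANGED:
            handle_rssi(p_ble_evt->evt.gap_evt.params.rssi_changed.rssi);
            break;
        default:
            break;
    }
}
```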

    "How to hold the BLE link as long as possible"? The more frequently you transfer packets (with or without data), the better the link can be kept. A longer supervision timeout also helps avoid the link being terminated. The trade-offs: a short interval costs more power, a large supervision timeout means a dead link is detected more slowly, and we waste more power listening for the central when the link has already terminated unexpectedly.

  • From packet 831 in the trace, you can see that slave latency is enforced.

    The way the SoftDevice works is that it sleeps through up to 34 connection events if there is no data to send. If the application or the stack itself requests a transmission, the packet is sent at the very next connection event, and after that the counter starts from 0 again.
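That counting behaviour can be modelled with a small host-side sketch (a simplified model for illustration only, not SoftDevice code; the names are made up):

```c
#include <stdbool.h>

#define SLAVE_LATENCY 34   /* events the slave may skip, from the post */

static unsigned skipped;   /* events skipped since the last active one */

/* Returns true if the slave wakes up for this connection event.
 * Pending TX data forces an immediate wake; otherwise the slave may
 * sleep through up to SLAVE_LATENCY events. Every active event resets
 * the counter to 0. */
static bool slave_listens(bool tx_pending)
{
    if (tx_pending || skipped >= SLAVE_LATENCY) {
        skipped = 0;
        return true;
    }
    skipped++;
    return false;
}
```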

    The Core 4.2 spec states: "If the slave does not receive a packet from the master after applying slave latency, it should listen at each anchor point and not apply slave latency until it receives a packet from the master." => This applies at the moment slave latency is first enabled: the slave needs an anchor point before it can sleep, so it has to stay awake at every connection event until it receives one packet from the central, and only then may it start skipping events.

    I said "The more frequently you transfer packets (with or without data) the better the link can be kept" because keeping the link alive means minimizing the time during which we receive no packets from the central. For example, imagine the "blackout time" caused by interference is 1 second and the supervision timeout is 1.5 seconds. With a connection interval of, say, 500 ms, we risk losing the connection: at that interval we can miss 3 connection events in a row and reach the 1.5-second timeout.

    If the connection interval is only 100 ms, the maximum period without packets from the central is 1.1 s, well inside the 1.5 s timeout.
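The arithmetic above can be captured in a one-line worst-case estimate (my own simplification: the blackout starts just after a received packet, so the slave may also wait up to one extra interval for the next anchor point):

```c
/* Maximum silent period ~= interference blackout + one connection interval. */
static unsigned worst_case_gap_ms(unsigned blackout_ms, unsigned conn_interval_ms)
{
    return blackout_ms + conn_interval_ms;
}
```

With the numbers from the example, a 500 ms interval gives 1000 + 500 = 1500 ms, right at the 1.5 s timeout, while a 100 ms interval gives 1100 ms, safely inside it.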

