
Tricks for keeping a BLE connection robust.

Hello all, I am currently investigating BLE link stability in a harsh urban environment and at long open-space ranges. During testing I found some information and collected some questions.
The setup is: a proprietary device based on the nRF51822 + S110 v8 SoftDevice. The NUS service is configured to notify some data each second. The connection parameters were: ConnInterval: 30 ms, SlaveLatency: 34, SupervisionTimeout: 10000 ms.
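
For reference, a minimal sketch of these preferred connection parameters as they might be written against the nRF51 SDK / S110 v8 headers; the unit-conversion macros and the variable name are illustrative, not SDK names:

    #include "ble_gap.h"

    /* BLE units: connection interval in 1.25 ms steps, timeout in 10 ms steps */
    #define INTERVAL_MS_TO_UNITS(ms)  ((uint16_t)((ms) * 4 / 5))
    #define TIMEOUT_MS_TO_UNITS(ms)   ((uint16_t)((ms) / 10))

    static const ble_gap_conn_params_t m_preferred_conn_params =
    {
        .min_conn_interval = INTERVAL_MS_TO_UNITS(30),   /* 30 ms -> 24 units  */
        .max_conn_interval = INTERVAL_MS_TO_UNITS(30),   /* 30 ms -> 24 units  */
        .slave_latency     = 34,                         /* may skip 34 events */
        .conn_sup_timeout  = TIMEOUT_MS_TO_UNITS(10000), /* 10 s -> 1000 units */
    };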

The sniffer capture showed:

  1. The peripheral sends an empty PDU packet without taking into account that a notification PDU was recently sent to the central. So the slave latency counter in S110 does not count other outgoing packets? Why? It drains the battery; what is the gain?

  2. At long distances the slave latency counter seems to be disabled and the S110 peripheral sends an empty PDU at each connection event. What RSSI threshold (in dBm) is set in the SoftDevice to start this link reinforcement?

  3. Is there any RSSI averaging algorithm in the SoftDevice before the value is passed through the BLE_GAP_EVT_RSSI_CHANGED event?

A brief manual on how to hold a BLE link as long as possible by means of Nordic's SoCs would be appreciated.

Edit: Sniffer_cap_link_test.pcapng

  • Hi Valer,

    1. Please provide the sniffer trace. Please be aware that setting the preferred connection parameters to a slave latency of 34 doesn't mean you actually get 34. It's the central that decides what will be used in the connection.

    2. What do you mean by "start link reinforcement"?

    3. RSSI is calculated on every packet received. You can configure some parameters with sd_ble_gap_rssi_start(), such as the threshold and the skip_count (a sketch follows below).
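
    A minimal sketch of that call and the matching event, assuming the S110 v8 API; the threshold and skip_count values and the function names here are only illustrative:

        #include "ble.h"
        #include "ble_gap.h"
        #include "app_error.h"

        /* Start RSSI sampling on a live connection: report a change only
         * when the RSSI moves by at least 5 dBm, skipping up to 3 packets
         * between reports. */
        static void rssi_monitoring_start(uint16_t conn_handle)
        {
            uint32_t err_code = sd_ble_gap_rssi_start(conn_handle,
                                                      5 /* threshold_dbm */,
                                                      3 /* skip_count */);
            APP_ERROR_CHECK(err_code);
        }

        /* The new value arrives in the application BLE event handler. */
        static void on_ble_evt(ble_evt_t * p_ble_evt)
        {
            if (p_ble_evt->header.evt_id == BLE_GAP_EVT_RSSI_CHANGED)
            {
                int8_t rssi = p_ble_evt->evt.gap_evt.params.rssi_changed.rssi;
                (void)rssi; /* use the dBm value here, e.g. feed a filter */
            }
        }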

    "How to hold the BLE link as long as possible" ? The more frequently you transfer packets (with or without data) the better the link can be kept. Also longer connection timeout would help avoid the link being terminated. The trade off is more power consumption because of short interval and lower response time when the connection timeout is large, also we waste more power listen to the central when the link is terminated unexpectedly.

  • Hi Hung Bui, I attached the sniffer capture to the initial post.

    1. I regularly check the connection parameters in the firmware and request the demanded ones if needed (a sketch of this pattern follows after this list); most central devices accept my slave latency value. But it is up to the peripheral whether it sends an empty PDU every 34th packet (in my case) or more frequently. So the actual implementation of the slave latency counter in the SoftDevice is what interests me (see Q #2).

    2. As far as I understand, switching off the slave latency counter when the link becomes lossy improves the RSSI measurements. Core 4.2 states: "If the slave does not receive a packet from the master after applying slave latency, it should listen at each anchor point and not apply slave latency until it receives a packet from the master." (Vol. 6, Part B, p. 77). You say "The more frequently you transfer packets (with or without data) the better the link can be kept", but why? Shall we say that if a packet is lost, the Bluetooth link layer retransmits it and in this way makes the transfer more frequent?
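
    A sketch of the check-and-request pattern mentioned in point 1, assuming the S110 v8 API; the desired values mirror the setup from the first post, the function name is illustrative, and the central remains free to reject or modify the request:

        #include "ble_gap.h"
        #include "app_error.h"

        static const ble_gap_conn_params_t m_desired_conn_params =
        {
            .min_conn_interval = 24,   /* 30 ms in 1.25 ms units */
            .max_conn_interval = 24,   /* 30 ms                  */
            .slave_latency     = 34,
            .conn_sup_timeout  = 1000, /* 10 s in 10 ms units    */
        };

        /* Call with the parameters the central actually granted, e.g. from
         * BLE_GAP_EVT_CONNECTED or BLE_GAP_EVT_CONN_PARAM_UPDATE. */
        static void conn_params_check(uint16_t                      conn_handle,
                                      ble_gap_conn_params_t const * p_actual)
        {
            if (p_actual->slave_latency != m_desired_conn_params.slave_latency)
            {
                /* Ask the central to update; it decides what is finally used. */
                uint32_t err_code =
                    sd_ble_gap_conn_param_update(conn_handle,
                                                 &m_desired_conn_params);
                APP_ERROR_CHECK(err_code);
            }
        }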

  • And one more question based on your answer "...also we waste more power listen to the central when the link is terminated unexpectedly." We listen to the central during time frames spaced by the connection interval (as usual, even when the link is stable). So why more power? Or have I misunderstood you, sorry?

  • From packet 831 in the trace, you can see the slave latency is enforced.

    The way the SoftDevice works is that it sleeps for 34 connection events if there is no data to send. If there is a request to send data, from the application or from the stack itself, it sends immediately on the next connection event. After that it starts counting from 0 again. A toy model of this counter is sketched below.
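
    To be clear, this is not the actual SoftDevice source, just an illustrative model of the counter behaviour described above:

        #include <stdbool.h>
        #include <stdint.h>

        #define SLAVE_LATENCY 34

        static uint32_t m_latency_counter;

        /* Evaluated once per connection event; returns true if the slave
         * should wake up and exchange packets on this event. */
        static bool connection_event_should_wake(bool tx_pending)
        {
            if (tx_pending)
            {
                /* Data queued by the application or the stack: send on the
                 * very next event and restart the count from 0. */
                m_latency_counter = 0;
                return true;
            }
            if (++m_latency_counter > SLAVE_LATENCY)
            {
                /* Slept through 34 events; wake for this one (empty PDU). */
                m_latency_counter = 0;
                return true;
            }
            return false; /* skip this event, radio stays off */
        }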

    Core 4.2 states: "If the slave does not receive a packet from the master after applying slave latency, it should listen at each anchor point and not apply slave latency until it receives a packet from the master." => This applies at the moment slave latency has just been applied: the slave needs an anchor point before it can sleep, so it has to stay awake at every connection event until it receives one packet from the central, and only then can it sleep.

    I said "The more frequently you transfer packets (with or without data) the better the link can be kept" was because, the better the link is kept is when we can minimum the time we don't receive packet from the central. For example, imagine the "blackout time" because of interference is 1 seconds, we have a time out of 1.5 seconds. If we have connection interval of says 500ms, we have a risk of losing the connection because with that interval we can get to the point we lost 3 connection events, and reach timeout 1.5 seconds.

    If we have only a 100 ms connection interval, the maximum period during which we don't get packets from the central is 1.1 s, well inside the 1.5 s timeout. The worked numbers below illustrate this.
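
    The arithmetic as a small standalone check; the 1 s blackout and 1.5 s timeout are the example numbers above, with slave latency assumed to be 0:

        #include <stdio.h>

        int main(void)
        {
            const double t_blackout  = 1.0;        /* s, interference burst  */
            const double t_timeout   = 1.5;        /* s, supervision timeout */
            const double intervals[] = {0.5, 0.1}; /* s, 500 ms vs 100 ms    */

            for (int i = 0; i < 2; i++)
            {
                /* Worst case: the blackout ends just after a connection
                 * event, so we stay silent for one extra interval. */
                double worst_silence = t_blackout + intervals[i];
                printf("interval %.1f s -> worst silence %.1f s -> %s\n",
                       intervals[i], worst_silence,
                       (worst_silence < t_timeout) ? "link survives"
                                                   : "supervision timeout risk");
            }
            return 0;
        }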

  • OK, the Core spec point is clear, and the "The more frequently..." point is also clear. Now for the sniffer capture:

    1. My initial question #1: please see packet #840. The peripheral sends a notification, and it was quite clear to me that the SoftDevice would reset the slave latency counter to 0. But then we see an empty PDU in packet #842. Why was it sent by the peripheral?

    2. Packet #15154 and the following ones: we can see many empty PDU packets from the slave. At that time the link was very weak. Are they sent by the link layer as retransmissions, or is the slave latency counter disabled by some "weak link trigger"? If so, what trigger?
