Effect of Slave Latency

Paddy_B

asked 2017-11-14 16:12:05 +0100

updated 2017-11-14 16:12:42 +0100

This question is similar to this thread: https://devzone.nordicsemi.com/questi... However, it was framed differently and didn't really clear anything up for me.

Say my application requires a slave to send very low-latency, but potentially very infrequent, messages.

I would want to:

Minimise Connection Interval: 7.5ms

Maximise Slave Latency: 499

So the slave would only "need" to be active every 3.75s

The slave "must" receive from the master every 500th interval, but what if it doesn't? Am I right in thinking the link would not be lost until Supervision Timeout occurs? If so, and Supervision Timeout >> 500*Connection Interval (eg 32s) what is the effect of Slave Latency?
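As a sanity check on these numbers, here is a small Python sketch (the helper names are my own; the validity condition is my reading of the Core spec rule that the supervision timeout must exceed (1 + slaveLatency) × connInterval × 2):

```python
# Sanity-check sketch (helper names are mine, not from any SDK).

def effective_wake_period_ms(conn_interval_ms, slave_latency):
    # The slave must listen at least every (slaveLatency + 1) intervals.
    return conn_interval_ms * (slave_latency + 1)

def supervision_timeout_ok(conn_interval_ms, slave_latency, timeout_ms):
    # My reading of the Core spec constraint: the supervision timeout
    # must be larger than (1 + slaveLatency) * connInterval * 2.
    return timeout_ms > (1 + slave_latency) * conn_interval_ms * 2

print(effective_wake_period_ms(7.5, 499))       # 3750.0 ms = 3.75 s
print(supervision_timeout_ok(7.5, 499, 32000))  # True (32000 > 7500)
```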


1 answer

endnode

answered 2017-11-14 18:15:37 +0100

It's clearly written in the BT SIG spec:

  • Unless the Master applies Master Latency, it must transmit every connection interval.
  • If Slave Latency > 0 is agreed on the link, the Slave may skip replying (and therefore skip listening, saving a significant amount of power) when it has nothing to do.
  • If the Slave has something to send, it can indeed wake sooner than after the entire Slave Latency period. So your requirement for low latency (when the Slave has something to say) is not in contradiction with using Slave Latency.
  • However, if you also require low latency from the Master side, then you cannot set Slave Latency to a high value, because the Slave may go to sleep while the Master has something to say in the next interval, and the Master must wait until the Slave starts listening again.
  • A drawback of Slave Latency is that your Slave must have a good clock source. Normally BLE kind-of calibrates itself by timing everything from the very last radio event. Once you skip a lot of events, you risk that the drift of the two clocks (on Master and Slave) grows so large that they won't meet once the Slave Latency period is over (everything in BLE is driven by timers, and all Tx/Rx events are limited to a few milliseconds, or rather microseconds in certain cases).
  • Finally, the overall connection link is guarded by the Supervision Timeout on both sides. Until it expires (= no PDU from the other side since the last correct PDU exchange), the device can and should try to hit every connection interval. If the Slave comes back from its Slave Latency sleep window and doesn't receive a PDU from the Master on the expected channel at the expected time, it should move to the next channel and time in the hopping sequence and try again. So normal interference-based packet loss should be recovered. If the devices are out of range, or the clocks have drifted so far apart that they no longer exercise their Tx/Rx windows on the same channel at the same time, the connection will break due to Supervision Timeout.

So practically, for your example: setting a 7.5 ms connection interval, a Slave Latency of 499 (an effective slave wake period of ~3.75 s) and a Supervision Timeout of 2-32 s makes perfect sense (if you are fine with the Slave transmitting very often during activity, and with the Master doing Tx/Rx every 7.5 ms = a relatively high power demand).
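To make that power asymmetry concrete, a rough back-of-the-envelope in plain Python (my own arithmetic; a crude model that ignores connection-event length and retransmissions):

```python
# Radio-event rate with the parameters above: one event per connection
# interval for the master, one per (latency + 1) intervals for an idle
# slave. Ignores event duration and retries.
conn_interval_s = 0.0075   # 7.5 ms
slave_latency = 499

master_events_per_s = 1 / conn_interval_s
slave_events_per_s = 1 / (conn_interval_s * (slave_latency + 1))

print(round(master_events_per_s, 1))  # ~133.3 events/s for the master
print(round(slave_events_per_s, 3))   # ~0.267 events/s for an idle slave
```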



Thanks for your reply, just to clarify:

From the perspective of the master, there is no change in behaviour between the expiry of Slave Latency and Supervision Timeout.

From the perspective of the slave, the time between the two should be spent attempting to receive a master transmission by listening for transmissions (on multiple channels if necessary).

So it would be technically possible to have a slave sleep for 30 s at a time (by not immediately attempting to receive after Slave Latency expires) without losing the connection; it's just very unlikely to work in practice due to clock drift.

Is my understanding correct?

Paddy ( 2017-11-20 15:48:51 +0100 )

Well, more or less; it's just symmetrical on both sides:

  • If the Master or Slave enters an inactive mode because it exercises Master/Slave Latency as per the connection parameters (= it has nothing to send and nothing is being transferred in), then from the other side's point of view it is more or less equivalent to packet loss. The opposite side either repeats the packet (if it is the Master) or listens (if it is the Slave) until the Link Layer continues the PDU exchange or until the Supervision Timeout expires.
  • Once the Supervision Timeout expires (regardless of whether Latency was exercised by either side or not), the controller should disconnect immediately and go back to its initial GAP state (Observer/Broadcaster/Inactive).

However, if both sides are able to keep clock drift under control for a longer time, you can play with the Supervision Timeout and have "gaps" in communication (= save power) of up to 32 s (if I recall the specification correctly).

endnode ( 2017-11-20 16:53:02 +0100 )
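To put a rough number on the clock-drift point: the receiver has to widen its Rx window by the combined sleep-clock inaccuracy of both devices accumulated over the silent period. A small sketch (my own arithmetic; the 250 ppm figures are example sleep-clock accuracies, and if I remember right the spec allows up to 500 ppm per side):

```python
# Worst-case drift between two free-running sleep clocks after a silent
# period: the devices can drift apart by the sum of their ppm errors.
def window_widening_ms(silent_time_s, master_ppm, slave_ppm):
    # ppm error * seconds -> microseconds; /1000 converts to ms
    return silent_time_s * (master_ppm + slave_ppm) / 1000.0

print(window_widening_ms(3.75, 250, 250))  # 1.875 ms after one latency period
print(window_widening_ms(30.0, 250, 250))  # 15.0 ms after a 30 s gap
```

With a 30 s gap the window has to open milliseconds early and stay open milliseconds late, which is why long sleeps eat into the power savings and risk missing the anchor point entirely.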

