
Effect of Slave Latency

This question is similar to this thread: devzone.nordicsemi.com/.../ However, that one was framed differently and didn't really clear anything up for me.

Say my application requires a slave to send very low-latency, potentially very infrequent, messages.

I would want to:

Minimise Connection Interval: 7.5ms

Maximise Slave Latency: 499

So the slave would only "need" to be active every 3.75s

The slave "must" receive from the master every 500th interval, but what if it doesn't? Am I right in thinking the link would not be lost until Supervision Timeout occurs? If so, and Supervision Timeout >> 500*Connection Interval (eg 32s) what is the effect of Slave Latency?

  • It's clearly written in the BT SIG spec:

    • Unless the Master itself applies some latency, it must transmit in every connection interval.
    • If Slave Latency > 0 is agreed on the link, the Slave may skip replying (and therefore also skip listening, which saves a significant amount of power) whenever it has nothing to send.
    • If the Slave does have something to send, it can indeed wake sooner than the full Slave Latency period. So your requirement for low latency (when the Slave has something to say) is not in contradiction with using Slave Latency.
    • However, if you also require low latency from the Master's side, you cannot set Slave Latency to a high value: the Slave may be asleep when the Master has something to say in the next interval, and the Master must then wait until the Slave starts listening again.
    • A drawback of Slave Latency is that your Slave must have a good clock source. Normally BLE effectively calibrates itself by timing everything from the very last radio event. Once you skip a lot of events, you risk that the combined drift of the two clocks (on Master and Slave) becomes so large that they no longer meet once the Slave Latency period is over (everything in BLE is driven by timers, and all Tx/Rx events are limited to a few milliseconds, or rather microseconds in certain cases); see the drift sketch after this list.
    • Finally, the overall connection is guarded by the Supervision Timeout on both sides. Until it expires (= no PDU received from the other side since the last correct PDU exchange), the device can and should try to hit every connection interval. If the Slave comes back from its Slave Latency sleep window and does not receive a PDU from the Master on the expected channel at the expected time, it should move to the next channel and time in the hopping sequence and try again. So ordinary interference-based packet loss should be recoverable. If the devices are out of range, or the clocks have drifted so far that they no longer exercise their Tx/Rx windows on the same channel at the same time, the connection will break on Supervision Timeout.
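
    To put a number on the drift point: the Slave's receive window has to be widened by the worst-case combined drift of both sleep clocks accumulated since the last anchor point. A back-of-the-envelope sketch follows; the 250 ppm accuracy figures are illustrative assumptions (the spec allows up to 500 ppm per device), not values from this thread:

        #include <stdio.h>

        /* Worst-case widening of the Slave's receive window after sleeping
         * through `latency` connection events:
         *   widening = elapsed_time * (master_sca_ppm + slave_sca_ppm) / 1e6 */
        int main(void)
        {
            const double interval_ms    = 7.5;
            const unsigned latency      = 499;     /* skipped events */
            const double master_sca_ppm = 250.0;   /* assumed sleep clock accuracy */
            const double slave_sca_ppm  = 250.0;   /* assumed sleep clock accuracy */

            double elapsed_ms  = interval_ms * (latency + 1);   /* 3750 ms */
            double widening_us = elapsed_ms * 1000.0
                                 * (master_sca_ppm + slave_sca_ppm) / 1e6;

            /* Prints: Sleep 3750.0 ms -> window widening +/- 1875 us */
            printf("Sleep %.1f ms -> window widening +/- %.0f us\n",
                   elapsed_ms, widening_us);
            return 0;
        }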

    So practically for your example: a 7.5ms connection interval, Slave Latency of 499 (a latency period of ~3.75s) and a Supervision Timeout of 2-32s makes perfect sense (provided you accept that during activity the Slave will transmit every interval and the Master will do Tx/Rx every 7.5ms = relatively high power demand).
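
    Since these values sit right at the spec limits, it is worth checking them against the Core spec rule that the Supervision Timeout must exceed (1 + Slave Latency) * Connection Interval * 2. A quick sanity-check sketch; the helper name is made up for illustration:

        #include <stdbool.h>
        #include <stdio.h>

        /* True if the triple satisfies the Bluetooth Core spec limits:
         * 7.5 ms <= interval <= 4 s, latency <= 499, 100 ms <= timeout <= 32 s,
         * and timeout > (1 + latency) * interval * 2. */
        static bool conn_params_valid(double interval_ms, unsigned latency,
                                      double sup_timeout_ms)
        {
            if (interval_ms < 7.5 || interval_ms > 4000.0)           return false;
            if (latency > 499)                                       return false;
            if (sup_timeout_ms < 100.0 || sup_timeout_ms > 32000.0)  return false;
            return sup_timeout_ms > (1 + latency) * interval_ms * 2.0;
        }

        int main(void)
        {
            /* (1 + 499) * 7.5 ms * 2 = 7.5 s, so a 32 s timeout passes. */
            printf("valid: %d\n", conn_params_valid(7.5, 499, 32000.0));
            return 0;
        }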

  • Thanks for your reply, just to clarify:

    From the perspective of the master, there is no change in behaviour between the expiry of Slave Latency and Supervision Timeout.

    From the perspective of the slave, the time between the two should be spent attempting to receive a transmission from the master, listening on successive channels in the hopping sequence if necessary.

    So it would be technically possible to have a slave sleep for 30s at a time (by not immediately attempting to receive after Slave Latency expires) without losing the connection; it's just very unlikely to work in practice because of clock drift.

    Is my understanding correct?

