This question is similar to this thread:
However, it was framed differently and didn't really clear anything up for me.
Say my application requires a slave to send messages with very low latency, but potentially very infrequently.
I would want to:
Minimise Connection Interval: 7.5ms
Maximise Slave Latency: 499
So the slave would only "need" to be active every 3.75s
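To make that arithmetic explicit, here is a minimal C sketch using the numbers above; the variable names are purely illustrative and not any BLE stack's API:

```c
#include <stdio.h>

int main(void)
{
    /* Parameters from the question above (illustrative values only) */
    double conn_interval_ms = 7.5;  /* minimum connection interval    */
    int    slave_latency    = 499;  /* events the slave may skip      */

    /* The slave must successfully listen at least once every
     * (slave_latency + 1) connection events.                          */
    double wake_period_ms = (slave_latency + 1) * conn_interval_ms;

    printf("Slave must listen every %.2f s\n", wake_period_ms / 1000.0);
    return 0;
}
```

This prints 3.75 s, i.e. (499 + 1) × 7.5 ms.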
The slave "must" receive from the master every 500th interval, but what if it doesn't?
Am I right in thinking the link would not be lost until Supervision Timeout occurs?
If so, and Supervision Timeout >> 500 × Connection Interval (e.g. 32 s), what is the effect of Slave Latency?
It's clearly written in the BT SIG spec:
So practically, for your example: a 7.5 ms connection interval, a Slave Latency of 499 connection events, and a Supervision Timeout comfortably above the resulting 3.75 s latency period (up to the 32 s maximum) makes perfect sense (as long as you accept that during active periods the slave will transmit at every connection event and the master will do Tx/Rx every 7.5 ms, i.e. a relatively high power demand).
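As a rough sanity check of that combination, here is a sketch assuming the usual HCI rule that the supervision timeout must exceed (1 + slave latency) × connection interval × 2, plus the 32 s spec maximum; the function and parameter names are illustrative, not a particular stack's API:

```c
#include <stdbool.h>
#include <stdio.h>

/* Sanity check: the supervision timeout must exceed
 * (1 + slave_latency) * conn_interval * 2 and may not exceed 32 s. */
static bool params_plausible(double conn_interval_ms, int slave_latency,
                             double supervision_timeout_ms)
{
    double latency_period_ms = (1 + slave_latency) * conn_interval_ms;
    return supervision_timeout_ms > 2.0 * latency_period_ms &&
           supervision_timeout_ms <= 32000.0;
}

int main(void)
{
    /* 7.5 ms interval, latency 499, 32 s timeout:
     * 2 * 3.75 s = 7.5 s < 32 s, so this combination passes.        */
    printf("plausible: %s\n",
           params_plausible(7.5, 499, 32000.0) ? "yes" : "no");
    return 0;
}
```

With these numbers the effective latency period is 3.75 s, so anything above roughly 7.5 s (up to 32 s) passes the check.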
Thanks for your reply, just to clarify:
From the perspective of the master, there is no change in behaviour between the expiry of Slave Latency and Supervision Timeout.
From the perspective of the slave, the time between the two should be spent listening for a transmission from the master (on multiple channels if necessary).
So it would be technically possible to have a slave sleep for 30 s at a time (by not immediately attempting to receive after Slave Latency expires) without losing the connection; it is just very unlikely to be achieved in practice because of clock drift.
Is my understanding correct?
Well, more or less; it's just symmetrical on both sides:
However, if both sides are able to keep clock drift under control for a longer time, you can play with the Supervision Timeout and have "gaps" in communication (= save power) of up to 32 s (if I recall the specification correctly).
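For a sense of how large those gaps can be, a small sketch (again with illustrative values, ignoring the clock-drift caveat above) of how many connection events the link can survive without a successful reception before the supervision timer on either side expires:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers: how many connection events can pass without a
     * successfully received packet before either side's supervision timer
     * expires and the link is declared lost?                              */
    double conn_interval_ms       = 7.5;
    double supervision_timeout_ms = 32000.0;  /* spec maximum */

    int missable_events = (int)(supervision_timeout_ms / conn_interval_ms);

    printf("Up to %d consecutive events (~%.1f s) may be missed\n",
           missable_events, missable_events * conn_interval_ms / 1000.0);
    return 0;
}
```

In practice the usable gap is shorter, since both sides have to re-synchronise before the timer actually expires.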