Disconnect latency after calling bt_conn_disconnect()

Hi,

Our application is running in the central role, connected to a peripheral.  The app disconnects from the peripheral by calling the bt_conn_disconnect() function.  It appears to take 1 to 2 seconds before the "disconnected" callback handler is called.  There is no data being pushed out over the link prior to calling the disconnect function.  The connection interval is 1 second with 0 peripheral latency and a 4-second supervision timeout.  I would expect the worst-case latency to be 1 second.  Is this expected?

nRF Connect SDK v2.4.4
nRF5340

Thank you

  • "The extra time to the callback is mostly host side handling delays."

    I find it odd that it would take one extra connection interval, after the acknowledgment has been received, just to process the disconnect event. The CPU is fast and there is no reason for it to take that long.

  • Emil, I agree.  I understand that the time from calling bt_conn_disconnect() to when the LL_TERMINATE_IND packet gets transmitted may be up to one connection interval.  However, once the ACK from the peripheral arrives, I would think the BLE stack on the central would immediately invoke the "disconnected" callback.  As I mentioned previously, there is no other traffic on the connection prior to the LL_TERMINATE_IND packet being transmitted.  So I believe the maximum latency between calling bt_conn_disconnect() and processing the disconnect callback should be one connection interval, plus the time required to transmit the LL_TERMINATE_IND packet, plus the time required to receive the ACK from the peripheral.

    Does this make sense?

  • Hi

    Susheel is out until next week, I'm afraid, so I will need to discuss the callback behavior with him when he's back. But as he says, the only way to guarantee a faster disconnection is to use a shorter connection interval.

    Best regards,

    Simon

  • Thanks.  Maybe he can quantify the worst-case latency for the following, in terms of connection-interval time:

    1) Latency between calling the bt_conn_disconnect() function and the LL_TERMINATE_IND packet being transmitted.
    2) Latency between the peripheral receiving the LL_TERMINATE_IND packet and the disconnect callback being invoked.

  • Kurt, 

    Thanks for your patience in waiting for me.

    kpreiss said:
    1) Latency between calling the bt_conn_disconnect() function and the LL_TERMINATE_IND packet being transmitted.

    I think you are trying to understand why I said the worst case here is ~2 CI (connection intervals), since you already seem to understand the best case (immediate) and the typical case (< 1 CI). If the request reaches the controller after the controller’s scheduling cutoff for the immediately upcoming event, for example because the event width is too small and there are already packets scheduled for transmission in that event, the PDU slips one more event, giving < 2 CI. It is not hard to imagine scenarios in which the LL scheduler decides that the outgoing control PDU will not fit in the next event. I don't think this violates the BLE specification. So while the typical delay is ~1 CI, in those corner cases the worst case can be about ~2 CI.

    kpreiss said:
    2) Latency between the peripheral receiving the LL_TERMINATE_IND packet and the disconnect callback being invoked.

    There is no valid CI-based worst-case bound for the callback. Earlier I may have indirectly implied that the callback timing follows the connection interval; that is not correct. Once the controller has terminated the link, the callback timing is dominated by host-side scheduling. The application disconnect callback is invoked only after the controller reports the "Disconnection Complete" HCI event to the host and the Zephyr Bluetooth RX context gets CPU time to process it. Since callbacks are executed from that RX context, the time to the callback is not governed by the connection interval and has no CI-based worst-case bound.
