Peak in 'Failed to Encrypt' Errors

Hi!
We have tags based on the nRF52840 and nRF52833 in the field, running against a central with an nrf840X dongle.

Recently we have seen an inexplicable peak in 'Failed to Encrypt' errors.

Overall connectivity (i.e., tags that have connected before) has not been affected; however, it now takes longer than usual for tags to connect.

Is there a known reason for something like this?

Thanks!

Roi

  • Hello Roi,
    I don't know your application's layout or firmware specifics, but in general, once more peripherals are connected at the same time in the same vicinity, they are all placed on a schedule and can overlap and create noise for each other from the central's point of view. So when a new peripheral tries to connect, the central may have only a limited RX window in which to receive the advertising and complete the handshake with it, and it therefore takes longer to find and establish another connection.
    Do these "Failed to Encrypt" errors happen only when there are many active connections at the central? If you switch off some of the peripherals, do the error rate and the connection time for new peripherals improve?
    What is the range between the various units? Is it stable, or is it a variable parameter as well?
    Best regards
    Asbjørn 
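The scheduling effect described above can be illustrated with a toy back-of-the-envelope model (an editor's sketch, not Nordic SDK code; the interval and per-connection event times are hypothetical): the more links the central services per connection interval, the smaller its free RX (scan) window, so more advertising packets are missed before a new peripheral gets through.

```python
# Illustrative model only: with N active connections, the central spends
# N * EVENT_MS of each connection interval servicing existing links, and the
# remainder is its effective RX window for new advertisers. A smaller window
# means a lower chance of catching any given advertising packet, so more
# attempts (and more time) are needed on average to connect a new peripheral.

CONN_INTERVAL_MS = 100   # hypothetical connection interval
EVENT_MS = 10            # hypothetical radio time per connected peripheral

def scan_duty_cycle(n_connections):
    """Fraction of each interval left free for scanning."""
    free = max(0, CONN_INTERVAL_MS - n_connections * EVENT_MS)
    return free / CONN_INTERVAL_MS

def expected_attempts(n_connections):
    """Average number of advertising packets needed before one is caught."""
    duty = scan_duty_cycle(n_connections)
    return float("inf") if duty == 0 else 1 / duty

for n in (0, 4, 8):
    print(f"{n} connections: scan duty {scan_duty_cycle(n):.0%}, "
          f"~{expected_attempts(n):.1f} adv packets per new connection")
```

The model is crude (real schedulers interleave events and scanning more cleverly), but it shows why connection-establishment time grows with the number of already-connected peripherals even when link quality is unchanged.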
  • Hi,
    I think we found the issue shortly after opening the ticket. We are currently testing whether it fixes the problem and will respond after a few days of testing.

    Regarding your question: the peripherals do not move much, and the main difference we see after the fix is that a central with 4x more peripherals, advertising 6x more often, had the highest rate of the error.

    Thanks!

    Roi

  • Hello Roi,
    Thank you for the feedback; I hope you have found a good way forward. Best of luck, and please update us if you can on what you find, in case it turns out to be a configuration issue.
    Best regards
    Asbjørn
  • Hi!
    As we suspected, the error is a high-level error on our end: we had added a heavy IO process during scanning.
    This slowed the time between scan and connect, which disrupted the scan --> connect --> disconnect loop and caused a higher rate of disconnects during the scanning process.

    Thanks!

    Roi
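The failure mode described above can be sketched with a small simulation (an editor's illustration, not the actual firmware; all timing constants are hypothetical): a blocking IO step inserted between scan and connect delays the connection request, and once the total delay exceeds the window in which the peripheral is still connectable, the attempt is effectively lost.

```python
import random

# Illustrative simulation only: a scan -> connect -> disconnect loop where a
# blocking IO step between scan and connect delays the connection request.
# If scan time + IO delay + jitter + handshake time exceeds the hypothetical
# budget during which the peripheral remains connectable, the attempt fails.

ADV_BUDGET_MS = 150   # hypothetical window the peripheral stays connectable
SCAN_MS = 20          # time to see the advertisement
CONNECT_MS = 40       # time to complete the connection handshake

def run_loop(io_delay_ms, attempts=1000, seed=0):
    """Return the fraction of connection attempts that miss the budget."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(attempts):
        jitter = rng.uniform(0, 30)          # radio/scheduling jitter
        elapsed = SCAN_MS + io_delay_ms + jitter
        if elapsed + CONNECT_MS > ADV_BUDGET_MS:
            failures += 1                    # peripheral gave up waiting
    return failures / attempts

print(f"no IO delay  : {run_loop(0):.0%} of attempts fail")
print(f"100 ms IO    : {run_loop(100):.0%} of attempts fail")
```

The point of the sketch is that the failure is not in the radio link itself: the same loop with the IO step removed (or moved off the scan path) stays comfortably within the budget.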

  • Hi Roi,

    It sounds like you have found the way forward. Thank you for the feedback, and let us know if you have any further questions.

    BR

    Asbjørn
