
Android (mostly 7) is misbehaving on BLE connect

Hi,

our BLE device has a hard current limit of around 1 mA. This is no problem as long as the BLE central behaves as "intended", e.g. iPhones mostly use connection intervals of 30 ms, which respects the 1 mA current limit. Most Android devices also follow the peripheral's connection parameter suggestions, although they use odd numbers. So no problem there.
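
For reference, our suggestion on the nRF side looks roughly like this (a sketch against the nRF5 SDK / SoftDevice API; the concrete values here are placeholders, not our production settings):

    #include "ble_gap.h"
    #include "app_util.h"
    #include "app_error.h"

    // Suggest relaxed connection parameters via the Peripheral Preferred
    // Connection Parameters (PPCP) characteristic. The central reads these
    // as a hint only and is free to ignore them.
    static void conn_params_suggest(void)
    {
        ble_gap_conn_params_t params;

        params.min_conn_interval = MSEC_TO_UNITS(30, UNIT_1_25_MS);  // 30 ms
        params.max_conn_interval = MSEC_TO_UNITS(75, UNIT_1_25_MS);  // 75 ms
        params.slave_latency     = 4;               // may skip up to 4 events
        params.conn_sup_timeout  = MSEC_TO_UNITS(4000, UNIT_10_MS);  // 4 s

        APP_ERROR_CHECK(sd_ble_gap_ppcp_set(&params));
    }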

But other Android (7?) devices behave differently: they connect with harmless parameters, but after a while (1 s or so), out of the blue they switch to a fast connection for around 500 ms. The connection interval in this case is always 7.5 ms. Those 500 ms depend on the Android device and also seem to depend on its mood. Needless to say, this breaks our current limit.
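
By the way, this is how the switch shows up on the nRF side (a sketch of our BLE event dispatch, shortened to the relevant case):

    #include "ble.h"
    #include "nrf_log.h"

    // The SoftDevice reports the central's change as
    // BLE_GAP_EVT_CONN_PARAM_UPDATE; intervals are in 1.25 ms units,
    // so a value of 6 means the dreaded 7.5 ms.
    static void on_ble_evt(ble_evt_t * p_ble_evt)
    {
        switch (p_ble_evt->header.evt_id)
        {
            case BLE_GAP_EVT_CONN_PARAM_UPDATE:
            {
                ble_gap_conn_params_t const * p_params =
                    &p_ble_evt->evt.gap_evt.params.conn_param_update.conn_params;
                NRF_LOG_INFO("interval: %d * 1.25 ms, latency: %d\r\n",
                             p_params->max_conn_interval,
                             p_params->slave_latency);
            } break;

            default:
                break;
        }
    }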

I've already experimented with local slave latency, tuned the TX power down, checked here, checked there. To no avail... :-(
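
The TX power tweak was basically just this (assuming an older S13x SoftDevice; newer SoftDevices use a different signature that also takes a role and a handle):

    #include "ble_gap.h"
    #include "app_error.h"

    // Lower the radiated power; the set of valid values is SoftDevice
    // dependent.
    static void tx_power_lower(void)
    {
        APP_ERROR_CHECK(sd_ble_gap_tx_power_set(-8));  // -8 dBm instead of 0 dBm
    }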

Does anybody have any suggestions on how to reduce the actual power consumption of the nRF5x peripheral device? It would be great if (local) slave latency worked, or if there were an API to set the minimum allowed connection interval.
Or is there a way to use the Timeslot API to suppress BLE connection events?

Thanks for any help

Hardy

PS: Googling around turned up this: https://stackoverflow.com/questions/47491493/android-ble-requestconnectionpriority-not-working, especially "Android temporarily changes connection interval to 7.5 ms during the GATT service discovery". I don't know if this is correct, but if it is, how can the nRF side be slowed down?

  • Hi,

    This sounds like your Android device is trying to balance speed and power consumption. It does this by using faster connection intervals when it knows that a lot of data is about to be transmitted so that the transfer completes faster. Then, after the intensive data transfer is completed, the connection intervals are relaxed again to save power. A typical scenario is that the Android device enforces fast intervals during the connection process and service discovery, and then throttles down afterwards (as far as I know Apple devices don't use this mechanism). 

    In BLE the connection parameters are ultimately enforced by the central (your Android device). The min and max parameters you set in your peripheral application are just your preferred values, and the central may choose to reject them. Furthermore, different Android devices ship with different chipsets and BLE stacks, which may behave and perform differently.
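
    To illustrate: from the peripheral side you can only *request* new parameters, roughly like this (an nRF5 SDK sketch with placeholder values); the central then accepts or rejects the request:

        #include "ble_gap.h"
        #include "app_util.h"
        #include "app_error.h"

        // Ask the central for slower parameters. This ends up as an L2CAP
        // Connection Parameter Update Request; a rejection by the central
        // is perfectly spec-compliant.
        static void request_relaxed_connection(uint16_t conn_handle)
        {
            ble_gap_conn_params_t params;

            params.min_conn_interval = MSEC_TO_UNITS(30, UNIT_1_25_MS);
            params.max_conn_interval = MSEC_TO_UNITS(50, UNIT_1_25_MS);
            params.slave_latency     = 0;
            params.conn_sup_timeout  = MSEC_TO_UNITS(4000, UNIT_10_MS);

            APP_ERROR_CHECK(sd_ble_gap_conn_param_update(conn_handle, &params));
        }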

    There is nothing you can do with the TX power or the Timeslot API to change this. The change needs to be made on the Android side.

    Here are two relevant threads:
    High BLE activity for 15 seconds after connection
    Questions about connection interval negotiation

    May I ask what it is that limits your current consumption to just 1mA?

  • Concerning the 1 mA limitation: our device is an industrial field device (see my profile ;-)) with a really limited power budget (3.6 mA @ 10 V or so). The main part goes to the sensor electronics, and some part is available for the BLE connectivity.

    There are also limitations on buffering capacitors due to space requirements and Ex (explosion protection) certification.

    H.

  • I looked into this some more today, and it seems like this behaviour has been embedded in Android's BLE stack since Android 7 (I think).

    rgrr2 said:
    the Android device is switching to "high" speed during GATT info exchange - that's my current guess.

    Yes that is correct. 

    rgrr2 said:
    Does anybody know of a way to prevent the Android device from switching speed?

    Since the behaviour is defined by Android's BLE stack, this seems to be impossible.

    rgrr2 said:
    Would it be possible to set up the Android-internal GATT database so that Android sees no need to access the GATT data from the device?

    Android caches the GATT database after the initial connection procedure and only updates the database if anything changes. This might be why you experience different behaviour based on the device's "mood". If you bond your devices you will also ensure that the database, as well as some security parameters, is stored across connections. However, I can't see any way around that initial "high-speed" exchange of data, and it sounds like one "high-speed" transfer is one too many in your case.
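
    If you go the bonding route, it may also help to expose the Service Changed characteristic so that a bonded central keeps trusting its cached database across connections. A rough sketch for an S13x-era SDK (the details vary between SDK versions):

        #include <string.h>
        #include "app_error.h"
        #include "softdevice_handler.h"

        // Enable the Service Changed characteristic when enabling the
        // SoftDevice (after SOFTDEVICE_HANDLER_INIT()). A bonded client may
        // then reuse its cached GATT database until we indicate a change.
        static void ble_stack_enable(void)
        {
            ble_enable_params_t ble_enable_params;
            memset(&ble_enable_params, 0, sizeof(ble_enable_params));
            // ... other enable parameters (attribute table size, ...) ...
            ble_enable_params.gatts_enable_params.service_changed = 1;
            APP_ERROR_CHECK(softdevice_enable(&ble_enable_params));
        }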

    This is one of the difficult parts of BLE. Although BLE is meticulously described in a 3000-page specification, there is still some room for interpretation and "unique" mechanisms. Different vendors, chipsets, and stacks might behave differently even though they all (usually) comply with the specification. So the bottom line is that developing an application that expects the exact same behaviour from all peer devices is simply not possible.

  • Ok... thanks for the explanation. Nevertheless, your conclusion is a little bit too pessimistic for me.

    I do not think that our use case is that exotic: connecting an Android (>=7) smartphone to a device with a current limitation.

    Why isn't it possible to catch such situations on the peripheral side? The peripheral would still follow the specification if it simply ignored connection events and left RX and TX completely off. If I understand the mechanisms correctly, the central would retry in that case.

    This would be close to "local slave latency", or better, "forced local slave latency", because even internally queued packets wouldn't be transmitted during those ignored events.

    Does something in this direction already exist in the SoftDevice? If not, I will be your first beta tester for such a feature.

    H.
