I am developing a BLE peripheral application on nRF52832 and SDK 14.3.
I have been power profiling my application and I'm quite happy with the results, except when the peripheral first connects. Power usage goes crazy for about 15 seconds before calming down again - see the screenshot below of the output from the Power Profiler:
What is the device doing when this happens? The peripheral is connecting to an iPhone and is bonded.
This is clearly BLE activity but I'm pretty sure it has nothing to do with my application!
Those currents seem pretty crazy indeed. 60mA is a lot, and is probably wrong or caused by something other than the nRF52 alone. What are you measuring on - a development kit or a custom board?
Is it possible to try connecting using the Master Control Panel on the PC side, i.e. just connect, bond, and discover services, and see if you get the same 60mA current drain on your end?
I am profiling a custom design with an on-board power amplifier (PA), so the 60mA peaks during radio activity are not a concern. The graph makes it look worse than it is - even during the "crazy" period the average current over the 15 seconds is only around 500uA, so not catastrophic. I'm still keen to know whether this activity is necessary, though, as my application will require relatively frequent connect/disconnect cycles.
The screenshots above show a connection from nRF Connect on iOS. I did try running nRF Connect on the desktop to see if it changed anything, but wasn't able to get it to run at the same time as the power profiler.
I also spun up a sniffer today to see if that gave any clues. I can see a few scan requests/responses going back and forth while this activity is happening, but I wouldn't have thought that service discovery would take this long to complete?
Could it be that the devices are doing service discovery, but with an unfortunate bandwidth configuration? What is the period between the spikes in the crazy period? What are the connection parameters in your application?
I think you may be on the right track there - here are my connection parameters:
#define MIN_CONN_INTERVAL ((SECOND_1_25_MS_UNITS / 100) * 30) /* 300 ms */
#define MAX_CONN_INTERVAL ((SECOND_1_25_MS_UNITS / 100) * 39) /* 390 ms */
#define SLAVE_LATENCY     4
#define CONN_SUP_TIMEOUT  (6 * SECOND_10_MS_UNITS)            /* 6 s */
These connection intervals are set to the maximum allowable for iOS - our application is not time sensitive but is range sensitive, so we have upped the connection interval and added the PA to our design to try to maximise range while keeping a handle on power consumption.
I'm away from the lab at the moment so can't check but my guess is the spikes will line up with the connection interval.
Given that the services and characteristics in our application never change, is there some way I can instruct the central not to rediscover them every time it connects?