We have recently ported our application from SDK 12.2 to SDK 14.2. The application is very power-sensitive, so it uses a long connection interval plus slave latency to minimise power consumption when idle:
#define MIN_CONN_INTERVAL ((SECOND_1_25_MS_UNITS / 100) * 30)  /* 300 ms */
#define MAX_CONN_INTERVAL ((SECOND_1_25_MS_UNITS / 100) * 39)  /* 390 ms */
#define SLAVE_LATENCY     4
#define CONN_SUP_TIMEOUT  (6 * SECOND_10_MS_UNITS)              /* 6 s */
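For reference, these values are fed to the SoftDevice at init in the usual way. This is a minimal sketch of our gap_params_init(), assuming the standard SDK peripheral template pattern (nothing unusual here):

#include <string.h>
#include "ble_gap.h"
#include "app_error.h"

static void gap_params_init(void)
{
    ble_gap_conn_params_t gap_conn_params;

    memset(&gap_conn_params, 0, sizeof(gap_conn_params));

    gap_conn_params.min_conn_interval = MIN_CONN_INTERVAL;  /* 300 ms */
    gap_conn_params.max_conn_interval = MAX_CONN_INTERVAL;  /* 390 ms */
    gap_conn_params.slave_latency     = SLAVE_LATENCY;      /* 4 */
    gap_conn_params.conn_sup_timeout  = CONN_SUP_TIMEOUT;   /* 6 s */

    /* These are only the peripheral's preferred parameters; the central
     * (the iPhone) decides what is actually granted. */
    APP_ERROR_CHECK(sd_ble_gap_ppcp_set(&gap_conn_params));
}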
The application is a datalogger and occasionally needs to send a few thousand bytes of data in the peripheral -> central direction.
In SDK 12.2 we were able to send a significant number of packets per connection interval during these transfers - we would see upwards of 30 packets transmitted per 300 ms interval. This rate was maintained consistently no matter how much data was sent (we are careful to keep the TX buffer full so there is always more data to send, roughly as sketched below).
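The transfer loop is essentially a standard notification pump (a simplified sketch, not our exact code: the m_* variables and the characteristic handle are placeholders, and MIN comes from nordic_common.h). It keeps calling sd_ble_gatts_hvx() until the SoftDevice reports its TX queue is full, then resumes on the TX-complete event:

#include <string.h>
#include "ble_gatts.h"
#include "nordic_common.h"
#include "app_error.h"

static void send_next_packets(void)
{
    while (m_bytes_remaining > 0)
    {
        uint16_t len = MIN(m_bytes_remaining, 20);  /* 20-byte payload at the default ATT_MTU of 23 */

        ble_gatts_hvx_params_t hvx_params;
        memset(&hvx_params, 0, sizeof(hvx_params));
        hvx_params.handle = m_log_char_handles.value_handle;  /* placeholder characteristic */
        hvx_params.type   = BLE_GATT_HVX_NOTIFICATION;
        hvx_params.p_len  = &len;
        hvx_params.p_data = &m_log_buffer[m_read_index];

        uint32_t err_code = sd_ble_gatts_hvx(m_conn_handle, &hvx_params);

        if (err_code == NRF_SUCCESS)
        {
            m_read_index      += len;
            m_bytes_remaining -= len;
        }
        else if (err_code == BLE_ERROR_NO_TX_PACKETS)  /* NRF_ERROR_RESOURCES in SDK 14.2 */
        {
            /* TX queue full - wait for a TX-complete event before queuing more. */
            break;
        }
        else
        {
            APP_ERROR_CHECK(err_code);
        }
    }
}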
However, in SDK 14.2 we see a similar number of packets going out during the very first connection interval, but then the throughput drops off markedly: from the second connection interval onwards only four packets are transmitted per interval. Essentially the link now sits idle for nearly 80% of the time, even though the TX buffer is kept full with data to send.
The only thing that has changed is the SDK - the peripheral hardware is the same and the software on the central (an iPhone) is unchanged. Similarly, the software running on the peripheral is unchanged apart from the API tweaks required by the new SDK (an example of the kind of change is shown below).
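Those tweaks were mechanical renames, for example the TX-complete event and the queue-full error code that changed between SoftDevice API v3 (SDK 12.2) and v5 (SDK 14.2). A before/after sketch of our BLE event handling (handler names and send_next_packets() are placeholders, matching the pump above):

/* SDK 12.2 (SoftDevice API v3): TX-complete is a common BLE event. */
static void on_ble_evt(ble_evt_t * p_ble_evt)
{
    switch (p_ble_evt->header.evt_id)
    {
        case BLE_EVT_TX_COMPLETE:
            send_next_packets();   /* refill the SoftDevice TX queue */
            break;

        default:
            break;
    }
}

/* SDK 14.2 (SoftDevice API v5): notification TX-complete is now a GATTS event,
 * and sd_ble_gatts_hvx() returns NRF_ERROR_RESOURCES instead of
 * BLE_ERROR_NO_TX_PACKETS when the queue is full. */
static void ble_evt_handler(ble_evt_t const * p_ble_evt, void * p_context)
{
    switch (p_ble_evt->header.evt_id)
    {
        case BLE_GATTS_EVT_HVN_TX_COMPLETE:
            send_next_packets();   /* refill the SoftDevice TX queue */
            break;

        default:
            break;
    }
}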
What could have caused this change in behaviour between SDK versions? Is there some additional config we need to set in SDK 14.2 to ensure we are not limited to transmitting only 4 packets per connection interval?
Any help gratefully received!