
Connection failure when sending and receiving data simultaneously with SoftDevice 6.0 and SDK 15

I’m experiencing random connection failures when transferring data (both ways at the same time) between my peripheral (Slave) and central (Master) devices.
The problem appeared right after upgrading to SoftDevice S132 version 6.0 and SDK 15.
There were no such issues with the previous SoftDevice (5.0) and SDK (14).

The problem occurs when the two devices (Master and Slave) start to stream data bi-directionally at about 8 kB/second each.
After anywhere from a few seconds to a few minutes the connection fails and both devices start to report NRF_ERROR_RESOURCES at the same time.
Furthermore, once this happens, the connection on the Master device appears to be completely dead.
The device can no longer send or receive notifications (even on a different characteristic) and is not “aware” of any connection events.
For example, the Slave device can be powered off and the Master device never receives a “disconnected” event.

There are no issues with connecting, pairing or bonding.
Sending notifications, indications or small amounts of data from one device to the other works fine as well.
The problem starts when the devices go into a fast “streaming mode” and larger amounts of data are exchanged.

Both devices are based on the nRF52832 and use the latest SoftDevice 6.0 / SDK 15.
The Slave device uses the interrupt dispatch model (NRF_SDH_DISPATCH_MODEL_INTERRUPT).
The Master device uses an RTOS and the polling dispatch model (NRF_SDH_DISPATCH_MODEL_POLLING).

Both devices use custom services which are very similar to ble_nus and ble_nus_c from the SDK.
The functions used for sending data are sd_ble_gatts_hvx and sd_ble_gattc_write.
Connection parameters (including negotiated ones) are as follows:
Data length 251 bytes
ATT MTU 247 bytes
PHY set to 2 Mbps
MIN_CONNECTION_INTERVAL 10 ms
MAX_CONNECTION_INTERVAL 20 ms
SLAVE_LATENCY 0
SUPERVISION_TIMEOUT 4000 ms
NRF_SDH_BLE_GATT_MAX_MTU_SIZE 247
NRF_SDH_BLE_GATTS_ATTR_TAB_SIZE 1408
NRF_SDH_BLE_GAP_EVENT_LENGTH 400
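
For reference, here is a minimal sketch of how these values are requested on the peripheral side with SDK 15 (the m_gatt instance, the function names and the call sites are placeholders from my project, not verified code):

    #include "nrf_ble_gatt.h"
    #include "ble_gap.h"
    #include "app_error.h"

    NRF_BLE_GATT_DEF(m_gatt);

    static void gatt_init(void)
    {
        /* Desired ATT MTU (247) is registered before any connection exists. */
        APP_ERROR_CHECK(nrf_ble_gatt_init(&m_gatt, NULL));
        APP_ERROR_CHECK(nrf_ble_gatt_att_mtu_periph_set(&m_gatt, 247));
    }

    static void link_params_request(uint16_t conn_handle)
    {
        /* Request LL Data Length 251 on this link. */
        APP_ERROR_CHECK(nrf_ble_gatt_data_length_set(&m_gatt, conn_handle, 251));

        /* Request 2 Mbps PHY in both directions. */
        ble_gap_phys_t const phys =
        {
            .tx_phys = BLE_GAP_PHY_2MBPS,
            .rx_phys = BLE_GAP_PHY_2MBPS,
        };
        APP_ERROR_CHECK(sd_ble_gap_phy_update(conn_handle, &phys));
    }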

An observation has been made (but not 100% confirmed):
When sending data in packets of 244 bytes (247 - 3), the connection seems to be stable.
Occasionally NRF_ERROR_RESOURCES appears, which is expected (I know I need to wait for the BLE_GATTS_EVT_HVN_TX_COMPLETE / BLE_GATTC_EVT_WRITE_CMD_TX_COMPLETE events), but the connection stays alive for a long time.
When data is sent in smaller packets (160 bytes), the connection usually fails after a few seconds.
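
For reference, the flow control on the notification path looks roughly like this (a sketch only; the handles and the commented-out stream_resume() hook are placeholders, and the sd_ble_gattc_write path on the Master side is resumed analogously on BLE_GATTC_EVT_WRITE_CMD_TX_COMPLETE):

    #include <stdbool.h>
    #include "ble.h"
    #include "app_error.h"

    /* Returns false when the SoftDevice TX queue is full; the caller then
     * waits for BLE_GATTS_EVT_HVN_TX_COMPLETE before pushing more data. */
    static bool notify_chunk(uint16_t conn_handle, uint16_t value_handle,
                             uint8_t const * p_data, uint16_t len)
    {
        ble_gatts_hvx_params_t hvx = {0};
        hvx.handle = value_handle;
        hvx.type   = BLE_GATT_HVX_NOTIFICATION;
        hvx.p_data = p_data;
        hvx.p_len  = &len;

        ret_code_t err = sd_ble_gatts_hvx(conn_handle, &hvx);
        if (err == NRF_ERROR_RESOURCES)
        {
            return false;   /* queue full: resume on HVN_TX_COMPLETE */
        }
        APP_ERROR_CHECK(err);
        return true;
    }

    static void ble_evt_handler(ble_evt_t const * p_ble_evt, void * p_context)
    {
        if (p_ble_evt->header.evt_id == BLE_GATTS_EVT_HVN_TX_COMPLETE)
        {
            /* Queue space freed: push the next chunk(s) of the stream. */
            /* stream_resume(); */
        }
    }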

I’ve tried to use the nRF Sniffer to catch the moment when the connection fails.
It wasn’t easy, as the tool has not been kept up to date and has many limitations, but I managed to take a few screenshots.
The first picture shows the moment when the Master device stops responding (position no. 33071).


The other pictures show the very last packets that were sent over.

I’ve spent several days investigating the problem: BLE parameters, memory leaks, RTOS tasks and priorities, stack sizes and many other areas.
Please give me a hint towards a solution.

PS: Downgrading to SoftDevice S132 (version 5.0) and SDK 14 is not a solution, as those versions have other pairing/bonding issues with the latest Android devices.

  • Hi JRRSoftware,

    I was able to run some tests, and it was very clear that your timer daemon task at priority 2 was starving your dummy task and the SoftDevice task, which run at that same priority.

    #define configTIMER_TASK_PRIORITY ( 2 )

    Remember that in FreeRTOS, configuring the kernel to suit your needs is very important. Since you have many tasks in the "runnable" state at the same time with the same priority, the FreeRTOS scheduler will always choose one of them to run and starve the rest until that task suspends itself. The reason is that you have set time slicing between equal-priority tasks to 0. Your configuration for this is as below:

    #define configUSE_TIME_SLICING 0

    Quoting the FreeRTOS documentation:

    configUSE_TIME_SLICING

    By default (if configUSE_TIME_SLICING is not defined, or if configUSE_TIME_SLICING is defined as 1) FreeRTOS uses prioritised preemptive scheduling with time slicing. That means the RTOS scheduler will always run the highest priority task that is in the Ready state, and will switch between tasks of equal priority on every RTOS tick interrupt. If configUSE_TIME_SLICING is set to 0 then the RTOS scheduler will still run the highest priority task that is in the Ready state, but will not switch between tasks of equal priority just because a tick interrupt has occurred.

    So if you set time slicing to 1 and leave preemption at 1, then you should not see this problem:

    #define configUSE_PREEMPTION   1
    #define configUSE_TIME_SLICING 1

    I guess some SoftDevice timing related to the notifications has shifted by a few microseconds, which is enough to trigger this corner case. Nevertheless, please choose your task priorities very wisely; they are a crucial part of your application design.
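
    To make the priority point concrete, here is a minimal sketch of an arrangement in which the BLE/SoftDevice task can never be starved by equal-priority tasks (task names, stack sizes and priority values are placeholders, not your project's actual ones):

    #include "FreeRTOS.h"
    #include "task.h"

    /* Placeholder thread functions standing in for the real ones. */
    static void ble_stack_thread(void * p_context);
    static void dummy_thread(void * p_context);

    static void tasks_create(void)
    {
        /* BLE task above the timer daemon (configTIMER_TASK_PRIORITY = 2),
         * application task below it, so the BLE task always gets the CPU first. */
        xTaskCreate(ble_stack_thread, "BLE",   512, NULL, 3, NULL);
        xTaskCreate(dummy_thread,     "DUMMY", 256, NULL, 1, NULL);
    }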

     

  • Hello Aryan,

    We are experiencing a very similar problem.

    In our case our task priorities are correct (the softdevice task is the highest priority).  Our failure is caused by an ISR that is starving the softdevice task.  That ISR is consuming more than 50% of available CPU cycles.

    We are in process of refactoring the design to correct this.  In doing so, questions have come up that we hope you can answer.

    I'll explain...

    We have one prototype ISR implementation that consumes very close to 0% of available CPU cycles.  That version seems to resolve the problem described in this forum.  Alas, this version is not optimal for our application.  We need to do more work in the ISR.

    We have made another prototype ISR implementation that is better for our application.  That version consumes about 5% of the CPU. Alas, it appears that the connection problem is back with this version.

    So here are the questions...

    Is the problem caused by the percentage of CPU time stolen from the softdevice task, or by the duration? E.g. our ISR implementation that consumes 50% of the CPU only keeps the CPU for about 5 µs, but does so every 10 µs. The implementation that consumes 5% of the CPU keeps the CPU for about 50 µs, but only does so about once every 1 ms.

    My guess is that both are bad.  We will refactor once again and put most of the 50 µs processing in a task that is lower priority than the softdevice task.  My guess is this will fix things.

    The question, though, is how long is "too long" for a user ISR to preempt the softdevice task?  It appears that 50 µs is too long.

    Comments?

    Thanks!

    Bruce

  • Hello Aryan,

    I've been away on a very welcome vacation.  I'm now back to work on this issue.

    Our ISR is running at priority 6.

    This is lower than all the softdevice interrupts.  But, by definition, it is higher priority than the ble task.  It preempts the ble task, thus causing the same symptoms as described by JRRSoftware.  In his case it was a FreeRTOS task that was preempting the ble task. In our case it is this ISR.

    Our plan is to reimplement our ISR so that its time-consuming work is done in a task that is lower priority than the ble task.  All the ISR will do is wake that bottom-half task, roughly as in the sketch below.  I predict that this will fix the problem.  I'll let you know what I find.
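
    Something along the lines of this classic FreeRTOS deferred-interrupt sketch (the IRQ handler name and process_sensor_data() are placeholders for our actual driver code):

    #include "FreeRTOS.h"
    #include "task.h"

    static TaskHandle_t m_bottom_half_task;   /* created below the ble task's priority */

    extern void process_sensor_data(void);    /* the ~50 µs of work */

    /* ISR: do the bare minimum, then wake the worker task. */
    void SENSOR_IRQHandler(void)
    {
        BaseType_t yield = pdFALSE;
        vTaskNotifyGiveFromISR(m_bottom_half_task, &yield);
        portYIELD_FROM_ISR(yield);
    }

    /* Bottom-half task: does the heavy work without preempting the ble task. */
    static void bottom_half_thread(void * p_context)
    {
        for (;;)
        {
            ulTaskNotifyTake(pdTRUE, portMAX_DELAY);   /* block until the ISR fires */
            process_sensor_data();
        }
    }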

    Thanks!

    Bruce

  • Here is an update on our end.

    It turns out we didn't need to reimplement our ISR.  Our mobile app developer found a bug in his code that explained our remaining symptoms.

    I guess it's OK for an ISR to run for 50 µs or so.

    Thanks for the help!

  • I think the subject shouldn't be closed yet. I'm happy that Bruce found a solution for his problem, but mine still isn't solved.
    Did you check the sample code I provided some time ago that replicates the issue?

    If not, here is the link:
    https://drive.google.com/file/d/1cXsduRdlnS_2xqVA3PXpB3KtY_KcU9nZ/view?usp=sharing

    It clearly shows that the assertion isn't thrown when it should be.

    Jack
