
nRF MCP DFU freezes when »Number of packets« is set greater than about 356 packets

Hey there,

today I experimented with the DFU service.

  • Peripheral is nRF51822 running the SDK 5.2.0 bootloader example (SoftDevice 6)
  • Central is Android 4.3 running nRF MCP 1.9.1
  • Test file (nRF51822 application) is the HRS example (about 43 KB binary size)

In the settings of MCP I found some DFU options and determined that the »Number of packets« value affects the update data throughput.

My Android device connects with a 48.75 ms connection interval and transfers about 4 packets per connection interval. So the theoretical maximum throughput is

4 × 20 bytes / 48.75 ms ≈ 1641 bytes/s
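As a sanity check, the arithmetic above can be reproduced in a few lines of Python (the packet count, payload size, and interval are the values observed above):

```python
# Theoretical throughput with 4 packets of 20 bytes of application
# payload per 48.75 ms connection interval (values observed above).
packets_per_interval = 4
payload_bytes = 20      # ATT payload per packet (default MTU)
interval_s = 0.04875    # 48.75 ms connection interval

throughput = packets_per_interval * payload_bytes / interval_s
print(f"{throughput:.0f} bytes/s")  # → 1641 bytes/s
```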

But when measuring the DFU throughput, I got only about 1000 bytes/s. Well, after changing »Number of packets« from 20 to 300, I got about 1600 bytes/s - nice.

But then, when I set »Number of packets« to greater values (e.g. 400), the DFU upload freezes at 16 % - and the same happens when I switch off the »Packets receipt notification procedure« option.

For »Number of packets«, I determined a threshold value of 356. This means that DFU works with values up to 356, but for greater values the DFU freezes at 16 %.

OK, I don't expect higher throughput with higher »Number of packets« values. I just wanted to report the issue.

Btw: Is this the right place for this kind of topic, or should I rather file it in an issue tracker?

  • IMHO it's an issue of the S110 SoftDevice.

    Within my own service, where I transfer lots of data from central to peripheral using »write without response«, I hit the same problem. After about 7000 bytes (350 packets of 20 bytes each) the transfer gets stuck. My workaround is to insert a read after every 300 »write without response« packets.

    I tested this with a couple of Android devices/versions, all with the same result. So my conclusion is that this might be a problem in the SoftDevice.
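    The pacing workaround can be sketched like this (a minimal simulation of the pacing logic only; `ble_write_no_response` and `ble_read` are hypothetical stand-ins for the platform's BLE calls):

    ```python
    READ_EVERY = 300   # acknowledged read after every 300 unacknowledged writes
    CHUNK = 20         # bytes per »write without response« packet

    def send_with_pacing(data, ble_write_no_response, ble_read):
        """Send `data` in CHUNK-byte packets, reading every READ_EVERY packets."""
        sent = 0
        for offset in range(0, len(data), CHUNK):
            ble_write_no_response(data[offset:offset + CHUNK])
            sent += 1
            if sent % READ_EVERY == 0:
                ble_read()  # round trip gives the peripheral time to drain its queue
        return sent

    # 7000 bytes = 350 packets: one read is inserted after packet 300,
    # staying below the ~350-packet point where the transfer got stuck.
    writes, reads = [], []
    sent = send_with_pacing(bytes(7000), writes.append, lambda: reads.append(1))
    print(sent, len(reads))  # → 350 1
    ```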

  • Hey Joe, thanks for your quick reply. If it really is a SoftDevice issue, it might be fixed in SoftDevice 7, right?

  • The number of packets gives the number of packets between each packet receipt notification. Setting this to a very low number will indeed hurt throughput a bit, as the central stops sending data until this notification is received. With the same setup as you (except Android 4.4.3) I was unable to reproduce your findings, but if you believe this is a problem with the SoftDevice, you should report it as such.

  • Hi John and Joe,

    The root cause of the issues most probably lies in the application using the app_scheduler module of the SDK. The application is not fast enough to pull all the events out of the scheduler queue, because too many BLE events are generated by the SoftDevice (due to too many BLE packets being received in a short time).

    In John's case, it's the bootloader that is not fast enough (each time it receives a firmware data packet, it needs time to process and store it). That is the very reason the 'Packet Receipt Notification' was introduced in the DFU application - to allow the application to tell the central that it is ready to receive the next set of firmware packets. And it looks like, with John's connection interval, 300 is the maximum value for 'Number of packets' (which indicates the number of packets after which the Packet Receipt Notification is sent). Side note: I turned off the Packet Receipt Notification in my Android app and found that my DFU always gets stuck at 76 %.

    In Joe's case, the read operation sent from the central introduces a delay that allows his application to pull all the events out of the scheduler queue (this can be considered similar to the 'Packet Receipt Notification' of the DFU example).

    This is good feedback for the Android/iOS app development team. The app could warn the user about setting too high a value for the 'Number of packets' field.

    Also, the SDK team is aware of this and will explore the possibility of addressing it in the scheduler module in upcoming releases.

    Cheers, Balaji
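    The flow control described above can be illustrated with a toy model (the queue size, packet counts, and class names are illustrative assumptions, not the real DFU protocol):

    ```python
    from collections import deque

    class Peripheral:
        """Models a slow bootloader with a bounded scheduler event queue."""
        def __init__(self, queue_size=356):
            self.queue = deque()
            self.queue_size = queue_size
            self.stuck = False

        def receive(self, packet):
            if len(self.queue) >= self.queue_size:
                self.stuck = True    # queue overflows: the transfer freezes
            else:
                self.queue.append(packet)

        def drain_and_notify(self):
            self.queue.clear()       # process everything, then send the PRN

    def dfu_transfer(packets, peripheral, prn):
        """Central side: after every `prn` packets, wait for the notification."""
        for i, pkt in enumerate(packets, start=1):
            peripheral.receive(pkt)
            if peripheral.stuck:
                return i - 1         # packets delivered before the freeze
            if prn and i % prn == 0:
                peripheral.drain_and_notify()
        return len(packets)

    image = range(2150)  # ~43 KB image = 2150 packets of 20 bytes
    print(dfu_transfer(image, Peripheral(), prn=300))  # → 2150 (completes)
    print(dfu_transfer(image, Peripheral(), prn=400))  # → 356 (freezes)
    print(dfu_transfer(image, Peripheral(), prn=0))    # → 356 (PRN off: freezes)
    ```

    Any PRN value below the peripheral's effective queue depth lets the transfer complete; anything above it (or disabling PRN) overflows the queue, matching the ~356-packet threshold observed above.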

  • Hi, this is the mobile team writing. Indeed, setting a high Packet Receipt Notification (PRN) value causes the application to freeze. This, however, in my opinion, has more to do with the Android/iOS BLE stack than with the peripheral's software. There appears to be a fixed-size queue for outgoing packets. When I call writeCharacteristic(..) I get an onCharacteristicWrite(..) callback, but the callback appears to be invoked when the data is written into the queue, not when it has been sent to and acknowledged by the peripheral. Therefore, when I disable notifications, the progress goes very, very quickly at first and then stalls - faster than the connection interval would allow. In my opinion it stalls when the queue is full. This error is not reported by the Android or iOS API; the transfer just stops, and that's all. But this is just my assumption.

    On different phones this happens at different moments. E.g. on a Nexus 5 with Android 4.4.4 I was able to send the whole HRM application (~18 KB) in about 4.3 s with PRN disabled completely. The same application got stuck at about 20 % on a Nexus 4 with Android 4.4.4. With the new Android L on a Nexus 5 the transmission also stalls at ~40 %, as far as I remember. That's why we decided to make the PRN number configurable in the app, so that our customers can set the one that works in their case. The default value of 10 works for sure.

    We knew about this problem, and it has been noted in the Google Play and (perhaps) App Store application details.

    Best Regards, Aleksander

    Edit: I've tried DFU with SoftDevice 7.0.0 on a Nexus 4 (Android 4.4.4) and it got stuck at 50 %. But, as I've said, it reached 50 % in about half a second after sending ~8 KB of data.

    Edit 2: I've added information to nRF Master Control Panel that is shown when PRN is disabled or set to a large number. Thank you for the feedback. It should be available soon in version 1.10.0.
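    The queued-vs-delivered behaviour described above can be modelled in a few lines (the queue capacity, drain rate, and class name are made-up illustrative values, not Android internals):

    ```python
    class OutgoingQueue:
        """Toy model of a fixed-size outgoing BLE packet queue."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pending = 0

        def write_characteristic(self, _packet):
            """Reports success when the packet is *queued*, not delivered."""
            if self.pending >= self.capacity:
                return False         # queue full: further writes just stop
            self.pending += 1
            return True              # the write callback fires here

        def radio_tick(self, packets=4):
            """The radio drains a few packets per connection interval."""
            self.pending = max(0, self.pending - packets)

    # With PRN disabled nothing ever pauses the sender, so the queue
    # fills long before the radio can drain it:
    q = OutgoingQueue(capacity=300)
    accepted = sum(q.write_characteristic(b"x" * 20) for _ in range(2150))
    print(accepted)  # → 300: progress raced ahead, then the transfer stalls
    ```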
