
sending lots of data from peripheral to central, queued writes

Hi. I have a peripheral application that spends most of its time doing work outside of a BLE connection. To debug it, I have it use nrf_log with a backend that simply keeps the log in RAM. When a BLE connection is established, I want to read that whole log back from RAM to the central, which will then push it to the cloud so I can check for errors, warnings and any unexpected behaviour. The log could grow to 36 kB while the device is disconnected.
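For concreteness, here is a minimal sketch of the storage side of such a RAM backend. The names (`ram_log_put` and friends) are hypothetical; a real backend would be registered with the nrf_log backend API and receive formatted log messages, but the store reduces to appending bytes into a flat buffer, which is what matters for reading it back later:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: the flat RAM buffer an nrf_log backend's
 * put handler appends into.  36 kB matches the worst case from
 * the description above. */

#define RAM_LOG_CAPACITY (36u * 1024u)

static uint8_t  m_ram_log[RAM_LOG_CAPACITY];
static uint32_t m_ram_log_len;

/* Append len bytes; truncates if the buffer is full.
 * Returns the number of bytes actually stored. */
static uint32_t ram_log_put(const uint8_t *p_data, uint32_t len)
{
    uint32_t space = RAM_LOG_CAPACITY - m_ram_log_len;
    uint32_t n = (len < space) ? len : space;
    memcpy(&m_ram_log[m_ram_log_len], p_data, n);
    m_ram_log_len += n;
    return n;
}

static uint32_t ram_log_len(void) { return m_ram_log_len; }
static const uint8_t *ram_log_data(void) { return m_ram_log; }
```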

I use a notify-only characteristic for the log. The central enables notifications on it soon after connecting. I can increase the ATT MTU to send bigger chunks, but I still need to iterate over many chunks of the log: at 36 kB that's, say, 72 notifications of 512 bytes or 144 of 256 bytes at most.
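The chunking I have in mind looks roughly like this. It's a sketch only: the send is stubbed out so the arithmetic can be checked on its own, but in the real application each chunk would go out via `sd_ble_gatts_hvx()`, stopping when the SoftDevice's TX queue fills (`NRF_ERROR_RESOURCES`) and resuming from the `BLE_GATTS_EVT_HVN_TX_COMPLETE` event:

```c
#include <stdint.h>

#define LOG_CAPACITY (36u * 1024u)

static uint8_t  m_log[LOG_CAPACITY];
static uint32_t m_log_len;      /* bytes currently in the log */
static uint32_t m_read_offset;  /* bytes already notified     */

/* Number of notifications needed to carry total_len bytes. */
static uint32_t chunk_count(uint32_t total_len, uint16_t chunk_len)
{
    return (total_len + chunk_len - 1u) / chunk_len;
}

/* Queue up to max_in_flight chunks; returns how many were "sent".
 * Stopping at max_in_flight models sd_ble_gatts_hvx() returning
 * NRF_ERROR_RESOURCES when the SoftDevice TX queue is full; the
 * caller would resume on BLE_GATTS_EVT_HVN_TX_COMPLETE. */
static uint32_t drain_log(uint16_t chunk_len, uint32_t max_in_flight)
{
    uint32_t sent = 0;
    while (m_read_offset < m_log_len && sent < max_in_flight) {
        uint32_t remaining = m_log_len - m_read_offset;
        uint16_t len = (remaining < chunk_len) ? (uint16_t)remaining
                                               : chunk_len;
        /* real code: sd_ble_gatts_hvx(conn_handle, &hvx_params)
         * with p_data = &m_log[m_read_offset], p_len = &len */
        m_read_offset += len;
        sent++;
    }
    return sent;
}
```

Driving `drain_log()` from the TX-complete event rather than a busy loop is what would keep the scheduler and other peripherals running while the transfer is in progress.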

What's the best way to do this? The integrity of the data isn't critical, but it is important that the application can still handle events, use peripherals and maybe even run the scheduler while the log is being read back.

At first glance, the Queued Writes module looked useful for this, but I've found the example application that uses it confusing, and there's little documentation beyond the sequence diagrams in the SoftDevice specification. In ble_app_bms, a write event is simply delegated to the BMS (Bond Management Service), which operates on bonds. I can't see where anything is actually written to a characteristic, nor where a series of writes is queued. Perhaps it's not the right tool for the job.
