
Long Write once more

Hello,

I need to send packets larger than 20 bytes to one characteristic with a variable length. When I set the maximum length to 100 bytes, the attribute table size grows. As far as I can see from the message sequence charts (is there really no comprehensive documentation describing the write and read procedures in a good textual form?), the application has to handle a BLE_USER_MEM_REQUEST event to provide memory for the operation. My question is: why do I need to provide additional memory? Isn't the write performed on the attribute table? Or is this memory needed because the peer device might cancel the operation? I don't really need the attribute content to be consistent, because I only use the attribute to transfer data. Can I reuse this space?

Thanks, Marius

  • The underlying procedure for long/prepared writes allows a lot of freedom in how data is written: not necessarily to the same handle, not necessarily in order, and possibly more than one value, each of which may need to be committed separately. The higher-level specs restrict this somewhat, although I don't find them particularly clear. So a proper implementation of a prepared write must present an ordered list of the handle/offset/length/data written for each piece at commit time, and it's up to the implementation to decide what that data stream means in terms of which parts of which characteristics were written. There is also a spec requirement that if a write is cancelled, the characteristic data must not be altered. That is why you need extra memory: to queue up the data, plus the handle/offset information, until it is processed at commit time.
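
    As a rough illustration of that memory-grant path on a Nordic SoftDevice (S13x-style API): the buffer size is made up, and the entry framing and the BLE_GATT_HANDLE_INVALID terminator are my reading of the docs, so verify them against the spec for your SoftDevice version.

    ```c
    #include <string.h>
    #include "ble.h"
    #include "ble_gatts.h"

    /* Hypothetical buffer; size it for your longest queued write. */
    static uint8_t m_user_mem[128];

    static void on_ble_evt(ble_evt_t const * p_evt)
    {
        if (p_evt->header.evt_id == BLE_EVT_USER_MEM_REQUEST)
        {
            /* Grant the SoftDevice a block to queue the prepared writes in. */
            ble_user_mem_block_t block = { .p_mem = m_user_mem, .len = sizeof(m_user_mem) };
            sd_ble_user_mem_reply(p_evt->evt.common_evt.conn_handle, &block);
        }
        else if (p_evt->header.evt_id == BLE_GATTS_EVT_WRITE &&
                 p_evt->evt.gatts_evt.params.write.op == BLE_GATTS_OP_EXEC_WRITE_REQ_NOW)
        {
            /* At commit time the block holds the ordered pieces, each framed as
             * handle (uint16), offset (uint16), length (uint16), then the data;
             * the list appears to end at BLE_GATT_HANDLE_INVALID. */
            uint8_t const * p = m_user_mem;
            while (p + 6 <= m_user_mem + sizeof(m_user_mem))
            {
                uint16_t handle, offset, len;
                memcpy(&handle, p + 0, 2);
                memcpy(&offset, p + 2, 2);
                memcpy(&len,    p + 4, 2);
                if (handle == BLE_GATT_HANDLE_INVALID) break;
                if (p + 6 + len > m_user_mem + sizeof(m_user_mem)) break;
                /* p + 6 .. p + 6 + len is the data written at (handle, offset). */
                p += 6 + len;
            }
        }
    }
    ```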

    However, you have options. If you return NULL for the BLE_USER_MEM_REQUEST, you will get the individual pieces of the write yourself. If you want to put them straight into a buffer and then write that buffer out at the end, you can do that; it saves memory because you're only storing the data as it comes in, not the entire list of commits. You could also write the pieces directly to the characteristic, though that may not be entirely compliant, since you would be changing the characteristic before the write is committed. If the characteristic can only be written, not read, that may not really be an issue.
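
    A minimal sketch of that do-it-yourself path, again assuming an S13x-style API: the buffer name and size are made up, and whether you must echo the data back in the prepare-write reply varies by SoftDevice version, so check the queued-writes MSC for yours.

    ```c
    #include <string.h>
    #include "ble.h"
    #include "ble_gatts.h"

    /* Hypothetical reassembly buffer for one 100-byte characteristic. */
    static uint8_t  m_long_buf[100];
    static uint16_t m_long_len;

    static void on_ble_evt(ble_evt_t const * p_evt)
    {
        if (p_evt->header.evt_id == BLE_EVT_USER_MEM_REQUEST)
        {
            /* Decline: the pieces are then delivered as authorize requests. */
            sd_ble_user_mem_reply(p_evt->evt.common_evt.conn_handle, NULL);
            return;
        }
        if (p_evt->header.evt_id != BLE_GATTS_EVT_RW_AUTHORIZE_REQUEST) return;

        ble_gatts_evt_rw_authorize_request_t const * p_auth =
            &p_evt->evt.gatts_evt.params.authorize_request;
        if (p_auth->type != BLE_GATTS_AUTHORIZE_TYPE_WRITE) return;
        ble_gatts_evt_write_t const * p_wr = &p_auth->request.write;

        ble_gatts_rw_authorize_reply_params_t reply = {
            .type = BLE_GATTS_AUTHORIZE_TYPE_WRITE,
            .params.write.gatt_status = BLE_GATT_STATUS_SUCCESS,
        };

        switch (p_wr->op)
        {
            case BLE_GATTS_OP_PREP_WRITE_REQ:
                if ((uint32_t)p_wr->offset + p_wr->len <= sizeof(m_long_buf))
                {
                    /* Stash the piece at its offset; track the high-water mark. */
                    memcpy(m_long_buf + p_wr->offset, p_wr->data, p_wr->len);
                    if (p_wr->offset + p_wr->len > m_long_len)
                    {
                        m_long_len = p_wr->offset + p_wr->len;
                    }
                }
                else
                {
                    reply.params.write.gatt_status = BLE_GATT_STATUS_ATTERR_PREPARE_QUEUE_FULL;
                }
                /* Some SoftDevice versions expect the piece echoed back in the
                 * reply (.update = 1, .offset/.len/.p_data as received). */
                break;

            case BLE_GATTS_OP_EXEC_WRITE_REQ_NOW:
                /* Commit: m_long_buf[0..m_long_len) is the reassembled value. */
                break;

            case BLE_GATTS_OP_EXEC_WRITE_REQ_CANCEL:
                m_long_len = 0;  /* A cancel must leave the value unchanged. */
                break;

            default:
                break;
        }

        sd_ble_gatts_rw_authorize_reply(p_evt->evt.gatts_evt.conn_handle, &reply);
    }
    ```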

    Finally, if you're using such long characteristics, keeping them in stack memory isn't necessarily a good idea: keep the value in your own RAM and use BLE_GATTS_VLOC_USER. I actually use this flag almost exclusively, as I inevitably find that I need a copy of the data in my user structure anyway, so there's no point wasting stack memory on having two of them.
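
    Setting that up looks roughly like this; the UUID and the write-only properties are placeholders for your own characteristic.

    ```c
    #include "ble_gatts.h"

    /* The characteristic value lives in application RAM, not the SoftDevice's. */
    static uint8_t m_char_value[100];

    static void add_characteristic(uint16_t service_handle)
    {
        ble_uuid_t uuid = { .type = BLE_UUID_TYPE_BLE, .uuid = 0x1234 }; /* hypothetical */

        ble_gatts_attr_md_t attr_md = {
            .vloc = BLE_GATTS_VLOC_USER,   /* value stored in m_char_value */
            .vlen = 1,                     /* variable length */
        };
        BLE_GAP_CONN_SEC_MODE_SET_OPEN(&attr_md.read_perm);
        BLE_GAP_CONN_SEC_MODE_SET_OPEN(&attr_md.write_perm);

        ble_gatts_char_md_t char_md = {
            .char_props.write = 1,         /* write-only transfer characteristic */
        };

        ble_gatts_attr_t attr = {
            .p_uuid    = &uuid,
            .p_attr_md = &attr_md,
            .init_len  = 0,
            .max_len   = sizeof(m_char_value),
            .p_value   = m_char_value,
        };

        ble_gatts_char_handles_t handles;
        sd_ble_gatts_characteristic_add(service_handle, &char_md, &attr, &handles);
    }
    ```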

    There is a message sequence chart showing what happens if you return NULL for the memory request and which messages/calls you'll get afterwards. One slightly annoying issue is that at the point of the memory request you aren't told which characteristic is being written; again, that's because the underlying procedure technically allows more than one characteristic to be written during a long write, so there is no way of knowing. This means that if you handle the memory for any long write, you have to handle it for ALL long writes. Note that it's quite valid for a short characteristic to be written with a long write, although in practice I've never seen it happen; clients always use a normal write for a <20-byte characteristic. I'll warn you that coding your own prepared-write handling is pretty tedious; it took me a day to get it working fully consistently, and every time I look at the code I cringe.
