
Theoretical throughput doesn't match experiments done via ble_app_att_mtu_throughput example

Hi guys,

Right now, I'm trying to do some throughput measurements with the experimental ble_app_att_mtu_throughput example, using SDK 14.2 and the S132 SoftDevice on two nRF52832 DKs. Alongside that, I'm trying to match these experiments against a theoretical calculation of the throughput in several cases, based on information cited here and here. The information on the two sites doesn't agree on everything, but I tried to take the best out of both.

At first, the experiments didn't seem to give the results I was getting in theory. After some searching on the forum, I ran into this question and this question. The first confirmed for me that turning DLE on or off, even with the max. MTU being only 23, can indeed influence the throughput, because the SoftDevice checks whether it can send and receive a full packet (i.e. with the max. L2CAP length configured for DLE) before it actually schedules a real packet, even if that real packet is much smaller. It also points out some bugs in the code.

Now the experiments are giving me 193.41 kbps with DLE off and 85.69 kbps with DLE on. The other parameters are the same in both cases: a connection interval of 7.5 ms, connection event length extension ON, a max. MTU of 23, and the 1 Mbps PHY.
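For reference, this is roughly how I understand those parameters map onto S132 calls. The example actually sets them through its command-line interface, so treat this as a sketch of the configuration, not the example's own code (set_test_params() is just a name I made up):

    #include <string.h>
    #include "ble.h"
    #include "ble_gap.h"
    #include "app_error.h"

    static void set_test_params(void)
    {
        /* 7.5 ms min/max connection interval (units of 1.25 ms => 6). */
        ble_gap_conn_params_t conn_params = {
            .min_conn_interval = 6,
            .max_conn_interval = 6,
            .slave_latency     = 0,
            .conn_sup_timeout  = 400, /* 4 s, in units of 10 ms */
        };
        APP_ERROR_CHECK(sd_ble_gap_ppcp_set(&conn_params));

        /* Connection event length extension ON. */
        ble_opt_t opt;
        memset(&opt, 0, sizeof(opt));
        opt.common_opt.conn_evt_ext.enable = 1;
        APP_ERROR_CHECK(sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &opt));

        /* MTU 23 and the 1 Mbps PHY are the defaults, so nothing to set there. */
    }

As far as I can tell, the example then toggles DLE at runtime via sd_ble_gap_data_length_update().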

My theoretical calculations now look like this:

For DLE OFF

Sending and receiving of a full packet (DLE OFF, so L2CAP length of 27 bytes):

Rx Full (27 bytes L2CAP) + IFS + Tx Full (27 bytes L2CAP) + IFS

(Of course the real communication will use a much smaller Rx packet, but this is what I believe the SoftDevice checks first before scheduling another packet into the same connection event.)

  • Rx Full time = ( (1 (Preamble) + 4 (Access Address) + 2 (LL Header) + 27 (L2CAP) + 4 (MIC, because the example appears to use encryption as well) + 3 (CRC) bytes) * 8 ) / 1 Mbps = 328 µs
  • Tx Full time = identical breakdown = 328 µs
  • IFS = 150 µs

So in total, one Rx + Tx pair takes 956 µs. The maximum number of packet pairs the SoftDevice fits into one full connection event is 7.5 ms / 956 µs = 7.84 => 7 packets. This gives a theoretical throughput of (7 packets * 20 bytes * 8) / 7.5 ms = 149.33 kbps. This is clearly not the same as the experimental throughput: to get to around 193.41 kbps, I would need 9 packets per connection event instead of 7.
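To double-check my arithmetic, here is a small stand-alone C program that reproduces this calculation. The overhead, IFS, and interval values are just the assumptions listed above, not anything read back from the SoftDevice:

    #include <stdint.h>
    #include <stdio.h>

    /* Fixed on-air overhead per LL packet in bytes (encrypted link):
       1 (preamble) + 4 (access address) + 2 (LL header) + 4 (MIC) + 3 (CRC). */
    #define PACKET_OVERHEAD_BYTES 14u
    #define IFS_US                150u
    #define CONN_INTERVAL_US      7500u

    /* Air time of one packet carrying l2cap_len payload bytes:
       one byte takes 8 us on the 1 Mbps PHY. */
    static uint32_t packet_time_us(uint32_t l2cap_len)
    {
        return (PACKET_OVERHEAD_BYTES + l2cap_len) * 8u;
    }

    /* Throughput in kbps, assuming every Tx is paired with a full-size Rx
       plus two IFS gaps, and att_payload useful bytes per packet pair. */
    static double throughput_kbps(uint32_t l2cap_len, uint32_t att_payload)
    {
        uint32_t pair_us = 2u * (packet_time_us(l2cap_len) + IFS_US);
        uint32_t packets = CONN_INTERVAL_US / pair_us; /* floor */
        return (packets * att_payload * 8u) / (CONN_INTERVAL_US / 1000.0);
    }

    int main(void)
    {
        /* DLE OFF: L2CAP length 27, ATT payload 20 (MTU 23 minus 3-byte ATT header). */
        printf("DLE OFF: %.2f kbps\n", throughput_kbps(27u, 20u)); /* prints 149.33 */
        return 0;
    }

Running it prints 149.33 kbps, so the arithmetic above at least matches itself.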

For DLE ON

Sending and receiving of a full packet (DLE ON, so an L2CAP length of 251 bytes, as configured by the example when DLE is turned ON):

Rx Full (251 bytes L2CAP) + IFS + Tx Full (251 bytes L2CAP) + IFS

  • Rx Full time = ( (1 (Preamble) + 4 (Access Address) + 2 (LL Header) + 251 (L2CAP) + 4 (MIC, because the example appears to use encryption as well) + 3 (CRC) bytes) * 8 ) / 1 Mbps = 2120 µs
  • Tx Full time = identical breakdown = 2120 µs
  • IFS = 150 µs

So in total, one Rx + Tx pair takes 4540 µs. The maximum number of packet pairs the SoftDevice fits into one full connection event is 7.5 ms / 4540 µs = 1.65 => 1 packet. This gives a theoretical throughput of (1 packet * 20 bytes * 8) / 7.5 ms = 21.33 kbps. This is clearly not the same as the experimental throughput: to get to around 85.69 kbps, I would need 4 packets per connection event instead of 1.
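Using the same throughput_kbps() helper from the sketch above, the DLE ON case is the identical calculation with a different L2CAP length:

    /* DLE ON: L2CAP length 251 as configured by the example, but the ATT
       payload is still only 20 bytes because the MTU stays at 23. */
    printf("DLE ON: %.2f kbps\n", throughput_kbps(251u, 20u)); /* prints 21.33 */

It prints 21.33 kbps, matching the hand calculation, so my arithmetic is consistent; it must be the model itself that is off.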

So, ultimately, my question is: what am I doing or understanding wrong? Am I running the experiments incorrectly, or am I misunderstanding some part of the BLE theory, the SoftDevice's scheduling, or something else?

Any help would be greatly appreciated.

Thanks in advance.

Kind regards,

Mathias
