
Theoretical throughput doesn't match experiments done via ble_app_att_mtu_throughput example

Hi guys,

Right now, I'm trying to do some throughput measurements using the experimental ble_app_att_mtu_throughput example, with SDK 14.2 and the S132 SoftDevice on two nRF52832 DKs. Alongside that, I'm trying to match these experiments to a theoretical calculation of the throughput in several cases, using the information cited here and here. The information on the two sites doesn't agree on everything, but I tried to take the best out of both.

At first, the experiments didn't seem to give the results I was getting in theory. After some searching on the forum, I ran into this question and this question. The first question confirmed for me that turning DLE on or off, even with a max. MTU of only 23, can indeed influence the throughput, because the SoftDevice checks whether it can send and receive a full packet (i.e. with the max. L2CAP length configured via DLE) before it actually sends a real packet, even if that real packet is much smaller. The second question also clears up some bugs in the code.

Now the experiments are giving me 193.41 kbps for DLE off and 85.69 kbps for DLE on. The other parameters are the same in both cases: a connection interval of 7.5 ms, connection event length extension ON, a max MTU of 23, and the 1 Mbps PHY.

My theoretical calculations now look like this:

For DLE OFF

Sending and receiving of a full packet (DLE OFF, so L2CAP length of 27 bytes):

Rx Full (27 bytes L2CAP) + IFS + Tx Full (27 bytes L2CAP) + IFS

(Of course the real communication will use a much smaller RX packet, but this is the check I believe the SoftDevice performs before scheduling another packet into the same connection event.)

  • Rx Full time = ( (1 (Preamble) + 4 (Access Address) + 2 (LL Header) + 27 (L2CAP) + 4 (MIC, because encryption seems to be done in the example as well) + 3 (CRC) bytes) * 8 ) / 1 Mbps = 328 µs
  • TX Full time = ( (1 (Preamble) + 4 (Access Address) + 2 (LL Header) + 27 (L2CAP) + 4 (MIC, because encryption seems to be done in the example as well) + 3 (CRC) bytes) * 8 ) / 1 Mbps = 328 µs
  • IFS = 150 µs

So in total, one RX plus TX takes 956 µs. The maximum number of packets the SoftDevice puts into one full connection event is then 7.5 ms / 956 µs = 7.84 => 7 packets. This gives a theoretical throughput of (7 packets * 20 bytes * 8) / 7.5 ms = 149.33 kbps. This clearly doesn't match the experimental throughput: to reach around 193.41 kbps, I would need 9 packets per connection event instead of 7.
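To make the arithmetic above easy to check, here is a small Python sketch of the same DLE OFF estimate. The constants (1 Mbps PHY, 4-byte MIC for the encrypted link, 150 µs IFS) are taken from the calculation above; the function name is just illustrative.

```python
# Sketch of the DLE OFF estimate above.
# Assumes 1 Mbps PHY (1 byte = 8 us on air) and an encrypted
# link, so every non-empty data PDU carries a 4-byte MIC.

PHY_US_PER_BYTE = 8       # 1 Mbps PHY
IFS_US = 150              # T_IFS between packets
CONN_INTERVAL_US = 7500   # 7.5 ms connection interval
ATT_PAYLOAD = 20          # MTU 23 minus 3 bytes ATT header

def packet_time_us(l2cap_len, encrypted=True):
    # preamble + access address + LL header + CRC (+ MIC if encrypted)
    overhead = 1 + 4 + 2 + 3 + (4 if encrypted else 0)
    return (overhead + l2cap_len) * PHY_US_PER_BYTE

# Full RX + IFS + full TX + IFS, with a 27-byte L2CAP payload
exchange_us = 2 * packet_time_us(27) + 2 * IFS_US
packets = CONN_INTERVAL_US // exchange_us
throughput_kbps = packets * ATT_PAYLOAD * 8 / (CONN_INTERVAL_US / 1000)

print(exchange_us, packets, round(throughput_kbps, 2))  # 956 7 149.33
```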

For DLE ON

Sending and receiving of a full packet (DLE ON, so L2CAP length of 251 bytes, as configured by example when DLE is put ON):

Rx Full (251 bytes L2CAP) + IFS + Tx Full (251 bytes L2CAP) + IFS

  • Rx Full time = ( (1 (Preamble) + 4 (Access Address) + 2 (LL Header) + 251 (L2CAP) + 4 (MIC, because encryption seems to be done in the example as well) + 3 (CRC) bytes) * 8 ) / 1 Mbps = 2120 µs
  • TX Full time = ( (1 (Preamble) + 4 (Access Address) + 2 (LL Header) + 251 (L2CAP) + 4 (MIC, because encryption seems to be done in the example as well) + 3 (CRC) bytes) * 8 ) / 1 Mbps = 2120 µs
  • IFS = 150 µs

So in total, one RX plus TX takes 4540 µs. The maximum number of packets the SoftDevice puts into one full connection event is then 7.5 ms / 4540 µs = 1.65 => 1 packet. This gives a theoretical throughput of (1 packet * 20 bytes * 8) / 7.5 ms = 21.33 kbps. This clearly doesn't match the experimental throughput: to reach around 85.69 kbps, I would need 4 packets per connection event instead of 1.
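The same sketch for the DLE ON case, where the check assumes a full 251-byte L2CAP packet in each direction (the example's configuration when DLE is on). Again the constants come from the calculation above.

```python
# Sketch of the DLE ON estimate above.
# Assumes 1 Mbps PHY and an encrypted link (4-byte MIC).

PHY_US_PER_BYTE = 8       # 1 Mbps PHY
IFS_US = 150              # T_IFS between packets
CONN_INTERVAL_US = 7500   # 7.5 ms connection interval
ATT_PAYLOAD = 20          # MTU is still 23, so 20 bytes of ATT payload

def packet_time_us(l2cap_len, encrypted=True):
    # preamble + access address + LL header + CRC (+ MIC if encrypted)
    overhead = 1 + 4 + 2 + 3 + (4 if encrypted else 0)
    return (overhead + l2cap_len) * PHY_US_PER_BYTE

# Full RX + IFS + full TX + IFS, with a 251-byte L2CAP payload
exchange_us = 2 * packet_time_us(251) + 2 * IFS_US
packets = CONN_INTERVAL_US // exchange_us
throughput_kbps = packets * ATT_PAYLOAD * 8 / (CONN_INTERVAL_US / 1000)

print(exchange_us, packets, round(throughput_kbps, 2))  # 4540 1 21.33
```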

So, ultimately, my question is: what am I doing/understanding wrong? Am I running the experiments wrong? Do I understand some theory about BLE or the SoftDevice or ... wrong?

Any help would be greatly appreciated.

Thanks in advance.

Kind regards,

Mathias

  • Yeah, I wasn't sure whether the empty packet is encrypted or not. So that's 80 µs for an empty packet under the given conditions, instead of 112 µs.

    Aha, I think I see where I'm wrong now. The check is done after each exchange, and only for the next exchange. That's what I forgot, I think. So:

    For DLE OFF

    With 80 µs for the empty packet, one real exchange (full one way, empty the other) takes 708 µs, and the full both-ways check exchange takes 956 µs. 708 µs fits 10 times into 7.5 ms. If I subtract it 9 times from 7.5 ms, I'm left with 1128 µs; 956 µs still fits there, so a 10th packet would theoretically still be possible. But as you say, there is only around 150 µs of slack, so in a non-ideal situation this becomes 9, as we get in the experiment. Second comment follows below.

  • Here I indeed forgot that DLE ON only influences the SoftDevice check, not the real communication, because the MTU is still only 23.

    For DLE ON

    With 80 µs for the empty packet, and DLE only influencing the full-packet check, one real exchange (full one way, empty the other) takes 708 µs and the full both-ways check exchange takes 4540 µs. Repeatedly subtracting 708 µs from 7.5 ms while 4540 µs still fits is possible 5 times, but the last subtraction starts from only 4668 µs, which is already close to 4540 µs. So in a non-ideal situation with other factors at play, 4 packets per connection event is more realistic.

    Do I understand it correctly now? Thanks for all your help already, Petter, as always!

    Kind regards

  • Correct! No problem :) I didn't get it correct the first time I looked at it either ;)
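The corrected reasoning worked out in the comments above can be sketched as a small Python simulation. The timings (708 µs for a real exchange, 956 µs and 4540 µs for the full both-ways check with DLE off/on) are taken from the thread; the function name is just illustrative.

```python
# Sketch of the corrected model from the comments: each real
# exchange costs 708 us (full 27-byte packet one way, empty packet
# back), but before scheduling the next exchange the SoftDevice
# checks whether a full both-ways exchange still fits in the
# remaining connection event time.
# Assumes 1 Mbps PHY, encrypted data packets (4-byte MIC), and an
# empty packet of 10 bytes on air (80 us), as discussed above.

def scheduled_packets(conn_interval_us, exchange_us, check_us):
    """Count packets per connection event under the look-ahead check."""
    remaining = conn_interval_us
    packets = 0
    while remaining >= check_us:  # SoftDevice's full-packet check
        packets += 1
        remaining -= exchange_us  # time actually spent on air
    return packets

REAL_EXCHANGE_US = 328 + 150 + 80 + 150  # full TX + IFS + empty RX + IFS

# DLE OFF: check against a full 27-byte exchange both ways (956 us)
print(scheduled_packets(7500, REAL_EXCHANGE_US, 956))   # ideal: 10
# DLE ON: check against a full 251-byte exchange both ways (4540 us)
print(scheduled_packets(7500, REAL_EXCHANGE_US, 4540))  # ideal: 5
```

With only ~150 µs of slack on the last packet, the practical counts drop to 9 and 4 packets per event, which lines up with the measured 193.41 kbps and 85.69 kbps.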
