
ble_app_att_mtu_throughput: throughput calculation, packets per interval, Data Length Extension

I've just stumbled upon a few questions that are hard for me to answer at the moment.

I wanted to do some throughput measurements with the example mentioned above, taken from SDK 14.2.0, on two nRF52840 DKs running SoftDevice S140. My first plan was to calculate the theoretical throughput as (1000 ms / <Connection Interval>) * 20 bytes * 8 bits/byte * <Packets per Connection Interval> and compare it with the measurement.
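As a sanity check, the simple estimate above can be written as a short Python sketch (the function name and defaults are mine, not from the SDK example):

```python
# Simple theoretical-throughput estimate from the post:
# (1000 ms / connection interval) * payload bytes * 8 bits/byte * packets per interval.

def simple_throughput_kbps(conn_interval_ms, packets_per_interval,
                           payload_bytes=20):
    """Estimate GATT throughput in kbit/s for a fixed ATT payload size."""
    events_per_second = 1000.0 / conn_interval_ms
    bits_per_event = payload_bytes * 8 * packets_per_interval
    return events_per_second * bits_per_event / 1000.0

# Example: 7.5 ms interval, one 20-byte notification per connection event
print(simple_throughput_kbps(7.5, 1))  # -> 21.33... kbps
```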

Another possibility is the approach seen here. The author calculates the time to transmit a data packet (296 bits / 1 Mbps), the time for an empty packet, and from these the time for a complete packet exchange (<time for data packet> + <T_IFS> + <time for empty packet> + <T_IFS>). Dividing the Connection Interval by this time then yields the technical maximum number of transfer "cycles" in one connection interval, which is higher than the numbers stated in the SoftDevice Specification. I got confused by his first example (att_mtu: 23, conn_interval: 7.5, dle: on, phy: 1M): the result was about 232 kbps, and his calculated value was 234 kbps under the assumption of 11 packets per Connection Interval. With the same values, I am getting a throughput of 86 kbps (first run) and 128 kbps (following runs). These values seem to fit my first attempt to calculate the throughput: on the first try 4 packets and on the second try 6 packets are transmitted per interval, as the measured throughputs are multiples of (1000 ms / 7.5 ms) * 20 bytes * 8 bits/byte = 21.3 kbps.
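The timing-based estimate can be sketched the same way, assuming a 1M PHY, a full 27-byte Link Layer payload (296 on-air bits including preamble, access address, header, and CRC), an 80-bit empty acknowledgement packet, and T_IFS = 150 us between packets; the function names are illustrative:

```python
# Timing-based maximum packets per connection event, assuming 1M PHY.
DATA_PACKET_US = 296 / 1.0   # 296 on-air bits at 1 Mbps
EMPTY_PACKET_US = 80 / 1.0   # empty ACK packet: 80 bits at 1 Mbps
T_IFS_US = 150.0             # inter-frame spacing

def max_packets_per_interval(conn_interval_ms):
    """Technical maximum of packet exchanges that fit in one connection event."""
    pair_us = DATA_PACKET_US + T_IFS_US + EMPTY_PACKET_US + T_IFS_US
    return int(conn_interval_ms * 1000 // pair_us)

def timing_throughput_kbps(conn_interval_ms, att_payload_bytes=20):
    packets = max_packets_per_interval(conn_interval_ms)
    return packets * att_payload_bytes * 8 / conn_interval_ms  # bits/ms == kbps

print(max_packets_per_interval(7.5))  # -> 11 packets
print(timing_throughput_kbps(7.5))    # -> 234.66... kbps
```

This reproduces the blog's figure of 11 packets and roughly 234 kbps for a 7.5 ms interval.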

So, my questions are:

  1. The SoftDevice Specification for S132 (v5.1, p.56, Table 27) gives the maximum number of packets per Connection Interval. Is there equivalent information for S140, or are the values the same in this regard?
  2. Why is the first test run different from the subsequent ones? Is the maximum number of packets per interval negotiated during the first run? However, for some settings the results are not always different.
  3. I wanted to "simulate" the throughput of v4.0/v4.1, v4.2, and v5.0, so I disabled Data Length Extension (DLE) for v4.1. A "definition" is given in this blog post:

Data length extension (DLE): This will set the on-air packet size. The maximum on-air packet size is 255 bytes. If we take away L2CAP header (4 bytes) and Link Layer header (4 bytes) we are left with an ATT packet of 247 bytes. For simplicity in this demo, you can turn DLE either on or off. If DLE is on, the size will be set to the ATT packet size plus the header bytes. This will avoid fragmentation of the ATT packet into several on-air data packets, to increase data throughput. DLE will affect the throughput greatly as larger packets will lead to more time sending actual data.
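The header arithmetic in the quoted definition boils down to the following (values as quoted above; constant names are mine):

```python
# Header arithmetic from the quoted DLE definition.
MAX_ON_AIR_BYTES = 255    # maximum on-air packet size with DLE, as quoted
LL_HEADER_BYTES = 4       # Link Layer header
L2CAP_HEADER_BYTES = 4    # L2CAP header

max_att_packet = MAX_ON_AIR_BYTES - LL_HEADER_BYTES - L2CAP_HEADER_BYTES
print(max_att_packet)  # -> 247-byte ATT packet
```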

I am getting a throughput of ~128 kbps (att_mtu: 23, conn_interval: 7.5, dle: on, phy: 1M) and ~238 kbps (att_mtu: 23, conn_interval: 7.5, dle: off, phy: 1M). Shouldn't both of these measurements be the same? Why does disabling DLE have an influence with an ATT MTU of 23 bytes?

  4. Am I right that the Connection Event Length Extension effectively "levers out" the maximum number of packets per connection interval once the interval gets longer than a few milliseconds? If I understood it correctly, with CLE enabled the device will continue sending packets until the next interval/event is scheduled, to save overhead.

Thanks and regards, Björn

  • My statement in 3) was incorrect; DLE might affect throughput. This is because the SoftDevice will always check whether it is possible to receive and transmit a full packet before allowing a new packet pair to be exchanged. This is probably why you are seeing 64.5 kbps with DLE on. Do you get 192 kbps with DLE off?

  • With att_mtu = 23, phy = 1M, conn_interval = 7.5 and dle = off I am getting a throughput of 171.91 kbps. As mentioned before, I have added debug output in amts_evt_handler() on NRF_BLE_AMTS_EVT_TRANSFER_1KB that counts radio notification events. The number of radio events after 1020 or 1040 transmitted bytes is about 6 or 7 (independent of the bytes sent), which gives around 145.7 to 173.3 bytes per radio event; I actually expected a multiple of 20 bytes (with att_mtu = 23 and dle = off). Nevertheless, when I disable the radio notifications, the throughput is 192.4 kbps, as you mentioned in 1). Of course, this was the reason for the low throughput of 64.5 kbps. So, with radio notifications disabled, the measurements with dle off and on are:

    att_mtu, conn_interval, dle, cle, phy, throughput
    23, 6, on, on, 1, 85.93
    23, 6, off, on, 1, 192.4
    

    From that I conclude that disabling the data length extension increases the number of packets transmitted from around 4 to 9, which also matches your statement in 1). However, it is not clear to me where the number of 4 packets comes from: in Table 27 of the S132 SDS v5.1, at tndist = 800 and conn_interval = 7.5, the packet transfers are stated as 6.

    edit: I've corrected the mistakes I made by twisting the values of my measurements as I wrote the comment…
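The bytes-per-radio-event arithmetic in the comment above can be reproduced with a small sketch (the helper name is mine):

```python
# Bytes transmitted per radio notification event, as estimated in the comment:
# 1020 or 1040 transmitted bytes spread over roughly 6 or 7 radio events.

def bytes_per_event(total_bytes, events):
    return round(total_bytes / events, 1)

print(bytes_per_event(1020, 7))  # -> 145.7
print(bytes_per_event(1040, 6))  # -> 173.3
```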

  • I don't understand how you arrived at that conclusion. Without DLE you should get 192 kbps; with it, you will get less.

  • You will get less with DLE enabled because the SoftDevice will always check if it is possible to receive and transmit a full packet before allowing a new packet pair to be exchanged. With DLE enabled the full packet is longer. I'm not sure if I understand what isn't clear, sorry.
