
ble_app_att_mtu_throughput: throughput calculation, packets per interval, Data Length Extension

I have just stumbled upon a few questions which are hard for me to answer at the moment.

I wanted to do some throughput measurements with the example mentioned above, taken from SDK 14.2.0, on two nRF52840 DKs running SoftDevice S140. My first plan was to calculate the theoretical throughput as (1000 ms / <Connection Interval>) * 20 bytes * 8 bits/byte * <Packets per Connection Interval> and compare it with the measurement.
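
For reference, a minimal sketch of that first calculation in C (the 20-byte payload corresponds to an ATT MTU of 23 minus the 3-byte ATT notification header; the parameter values are just examples from this post):

    #include <stdio.h>

    /* Theoretical throughput: intervals per second * payload bytes per packet
     * * 8 bits/byte * packets per connection interval. The 20-byte payload is
     * ATT_MTU 23 minus the 3-byte ATT notification header. */
    static double theoretical_throughput_kbps(double conn_interval_ms,
                                              unsigned payload_bytes,
                                              unsigned packets_per_interval)
    {
        double intervals_per_second = 1000.0 / conn_interval_ms;
        return intervals_per_second * payload_bytes * 8 * packets_per_interval / 1000.0;
    }

    int main(void)
    {
        /* 7.5 ms interval, 20-byte payload, 6 packets per interval -> ~128 kbps */
        printf("%.1f kbps\n", theoretical_throughput_kbps(7.5, 20, 6));
        return 0;
    }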

Another possibility is the approach seen here. He calculates the time to transmit a data packet (296 bits / 1 Mbps), the time for an empty packet, and the time for one complete packet exchange (<time for data packet> + <T_IFS> + <time for empty packet> + <T_IFS>). After that, you can divide the Connection Interval by this exchange time, which gives the technical maximum number of transfer "cycles" in one connection interval; this is higher than the numbers stated in the SoftDevice Specification. I got confused by his first example, where he chose (att_mtu: 23, conn_interval: 7.5, dle: on, phy: 1M). The measured result was about 232 kbps and his calculated value 234 kbps, assuming 11 packets per Connection Interval. With the same settings, I am getting a throughput of 86 kbps (first run) and 128 kbps (following runs). These values seem to fit my first attempt to calculate the throughput: on the first run 4 packets and on the following runs 6 packets are transmitted per interval, as both results are multiples of (1000 ms / 7.5 ms) * 20 bytes * 8 bits/byte = 21.3 kbps.
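
A minimal sketch of that timing-based estimate for the 1M PHY case (the 296-bit data packet corresponds to an unextended 27-byte Link Layer payload, the empty acknowledgement packet is 80 bits, and T_IFS is the 150 µs inter-frame space; counting T_IFS on both sides of the exchange is what reproduces the 11-packets-per-interval figure):

    #include <stdio.h>

    int main(void)
    {
        /* On-air times at 1 Mbps: 1 bit takes 1 us. */
        double t_data_us  = 296.0;   /* data packet: 296 bits (27-byte LL payload)   */
        double t_empty_us = 80.0;    /* empty acknowledgement packet: 80 bits        */
        double t_ifs_us   = 150.0;   /* inter-frame space between two packets        */

        /* One complete exchange: data packet, T_IFS, empty packet, T_IFS. */
        double t_cycle_us = t_data_us + t_ifs_us + t_empty_us + t_ifs_us;   /* 676 us */

        double conn_interval_ms  = 7.5;
        int packets_per_interval = (int)(conn_interval_ms * 1000.0 / t_cycle_us);   /* 11 */

        /* 20 bytes of ATT payload per packet (ATT_MTU 23 minus 3-byte header). */
        double throughput_kbps =
            (1000.0 / conn_interval_ms) * 20 * 8 * packets_per_interval / 1000.0;

        printf("%d packets per interval, %.1f kbps\n", packets_per_interval, throughput_kbps);
        return 0;
    }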

So, my questions are:

  1. The SoftDevice Specification for S132 (v5.1, p. 56, Table 27) gives the maximum number of packets per Connection Interval. Is there equivalent information for S140, or are the two SoftDevices the same in this regard?
  2. Why is the first test run different from the subsequent ones? Is the maximum number of packets per interval negotiated during the first run? However, for some settings the results do not differ between runs.
  3. I wanted to "simulate" the throughput of v4.0/v4.1, v4.2 and v5.0, so to represent v4.0/v4.1 I disabled Data Length Extension (DLE). A "definition" is given in this blog post:

Data length extension (DLE): This will set the on-air packet size. The maximum on-air packet size is 255 bytes. If we take away L2CAP header (4 bytes) and Link Layer header (4 bytes) we are left with an ATT packet of 247 bytes. For simplicity in this demo, you can turn DLE either on or off. If DLE is on, the size will be set to the ATT packet size plus the header bytes. This will avoid fragmentation of the ATT packet into several on-air data packets, to increase data throughput. DLE will affect the throughput greatly as larger packets will lead to more time sending actual data.

I am getting a throughput of ~128 kbps (att_mtu: 23, conn_interval: 7.5, dle: on, phy: 1M) and ~238 kbps (att_mtu: 23, conn_interval: 7.5, dle: off, phy: 1M). Shouldn't both of these measurements be the same? Why does disabling DLE have any influence when the ATT MTU is only 23 bytes?

  4. Am I right that the Connection Event Length Extension effectively overrides the maximum number of packets per connection interval once the interval gets longer than a few milliseconds? If I understood it correctly, with CLE enabled the device will keep sending packets until the next interval/event is scheduled, to save overhead.
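
For context, the att_mtu_throughput example enables CLE through a SoftDevice option (BLE_COMMON_OPT_CONN_EVT_EXT); a rough sketch of how such a call looks (the wrapper function here is illustrative, only the option itself is taken from the SoftDevice API):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include "ble.h"
    #include "app_error.h"

    /* Enable or disable Connection Event Length Extension in the SoftDevice.
     * With CLE enabled, the SoftDevice may keep a connection event running
     * (and keep exchanging packets) until the next scheduled radio activity. */
    static void conn_evt_len_ext_set(bool enable)
    {
        ble_opt_t opt;
        memset(&opt, 0x00, sizeof(opt));
        opt.common_opt.conn_evt_ext.enable = enable ? 1 : 0;

        uint32_t err_code = sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &opt);
        APP_ERROR_CHECK(err_code);
    }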

Thanks and regards, Björn

    1. It does, when radio notification is enabled. With radio notification enabled, the SoftDevice will reduce the maximum number of packets exchanged:

    The SoftDevice will limit the length of a Radio Event (t_radio), thereby reducing the maximum number of packets exchanged, to accommodate the selected t_ndist.

    When radio notification is disabled, the maximum throughput is listed here. These are measured values; 192 kbps means that 9 packets are transferred every connection interval (9 packets * 20 bytes * 8 bits / 7.5 ms ≈ 192 kbps).

    2. I don't really know why you get that result; I think you will have to provide some more information on what exactly you are doing.

    3. As long as you have an ATT MTU of 23, DLE shouldn't matter. Again, I need some more information on what exactly you are doing to answer why you get 238 kbps.

    4. Correct.

  • Regarding 1): I need to dig a little deeper into Radio Notifications. Maybe that will answer some of my questions; if not, I will come back to this thread.

    Regarding 2) and 3): I used the ble_app_att_mtu_throughput example from SDK 14.2.0. I have written a Python "interface" for automated test runs; at the moment I am just sending UART strings to the device to set the parameters and start the test runs. Just a few minutes ago I tried the example from the SDK again. I am getting the same results, but for a strange reason.

    1. Setting the parameters to att_mtu = 23, conn_interval = 7.5, dle = on, cle = on, phy = 1M gives ~86 kbps on the first run after powering the devices.

    2. In a second run right after the first one, I am getting ~128 kbps. But now, doing it manually via a terminal emulator, I could see that the PHY is automatically updated to 2M after the test starts. That's why I am getting different results on successive runs. But why is it changing the PHY by itself? I expected to get at least 128 kbps with these settings, but with dle set to "off" instead of "on" (see my calculation in the first post). If I start a test run with dle = off, I only get

      [00000003] app: Preparing the test.
      [00000004] app: Starting advertising.
      [00000008] app: Starting scan.
      [00000656] app: Device "Nordic_ATT_MTU" found, sending a connection request.
      [00002085] app: Connected as a central.
      [00002085] app: Discovering GATT database...
      [00002145] app: ATT MTU exchange completed. MTU set to 23 bytes.
      [00002629] app: Data length updated to 27 bytes.
      [00002287] app: AMT service discovered at peer.
      [00003021] app: Notifications enabled.

    and nothing happens.

  • There is something fishy going on. Looking into it. Will post here when I have something.

  • Seems to be a couple of bugs. What if you comment out:

    scan_start();
    

    on line 1010 in main.c

    and

    m_test_params.phys.tx_phys = BLE_GAP_PHY_2MBPS;
    

    on line 1022 in main.c

  • Thanks for your reply. I totally overlooked this line. Of course, every time the test is started, the PHY parameter is overwritten. Now the behaviour is consistent, but the throughput has dropped to ~64.5 kbps with att_mtu = 23, conn_interval = 7.5, dle = on, cle = on, phy = 1M.

    Some other measurements are:

    att_mtu (bytes), conn_interval (ms), dle, cle, phy (Mbps), time (s), sent (bytes), received (bytes), throughput (kbps)
    23, 6, on, on, 1, 130.203, 1048580, 1048580, 64.42
    247, 6, on, on, 1, 32.23, 1048712, 1048712, 261.98
    247, 6, on, on, 2, 10.688, 1048712, 1048712, 784.96
    23, 320, on, on, 1, 36.438, 1048580, 1048580, 230.21
    247, 320, on, on, 1, 11.533, 1048712, 1048712, 727.45
    247, 320, on, on, 2, 6.743, 1048712, 1048712, 1244.20
    
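    As a sanity check, the throughput column follows directly from the sent byte count and the elapsed time; a minimal sketch using the first row above:

    #include <stdio.h>

    int main(void)
    {
        /* First row of the table above: 1048580 bytes sent in 130.203 s. */
        double sent_bytes = 1048580.0;
        double time_s     = 130.203;

        double throughput_kbps = sent_bytes * 8.0 / time_s / 1000.0;
        printf("%.2f kbps\n", throughput_kbps);   /* ~64.4 kbps */
        return 0;
    }
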

    Switching CLE off with:

    static test_params_t m_test_params =
    {
        [...]
        .conn_evt_len_ext_enabled = false,
        [...]
    };
    

    also has no effect.

    I also added the SoftDevice Radio Notification feature to this example to count the number of packets transmitted. From that I can calculate the number of bytes per radio event (244 bytes for att_mtu = 247), but I never get something like 20 bytes for att_mtu = 23. I would guess this is a result of the Connection Event Length Extension, right?
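
    For reference, a rough sketch of that kind of counting, assuming the SDK's ble_radio_notification helper module (the handler wiring and the byte bookkeeping are illustrative; check ble_radio_notification.h in your SDK version for the exact API):

    #include <stdbool.h>
    #include <stdint.h>
    #include "ble_radio_notification.h"
    #include "app_error.h"
    #include "app_util_platform.h"

    static volatile uint32_t m_radio_event_count = 0;   /* completed radio events            */
    static volatile uint32_t m_bytes_sent        = 0;   /* increment wherever data is queued */

    /* Called by the SoftDevice shortly before and after every radio event. */
    static void radio_notification_handler(bool radio_active)
    {
        if (!radio_active)
        {
            /* A radio event has just ended. */
            m_radio_event_count++;
        }
    }

    static void radio_notification_count_init(void)
    {
        uint32_t err_code = ble_radio_notification_init(APP_IRQ_PRIORITY_LOW,
                                                        NRF_RADIO_NOTIFICATION_DISTANCE_800US,
                                                        radio_notification_handler);
        APP_ERROR_CHECK(err_code);
    }

    /* After a test run: average number of payload bytes per radio event. */
    static uint32_t bytes_per_radio_event(void)
    {
        return (m_radio_event_count != 0) ? (m_bytes_sent / m_radio_event_count) : 0;
    }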
