
Difference between my calculated data rate and the data rate observed with the att_mtu_throughput example. Why?

Let me first show what parameters I am choosing for calculating the data rates:

The connection interval is 7.5 ms, and connection event length extension is off, meaning the events won't be extended beyond the 7.5 ms interval. After I 'run' the test I get:

HOWEVER,

If you theoretically calculate the throughput of continuously streaming data with a 7.5 ms connection interval, 1M PHY, 27-byte data length (i.e. a 41-byte packet on air) and an ATT MTU of 185 bytes, you should get ~194 Kbps. Is there some overhead that the firmware is not accounting for, such as the cumulative time taken to print all of that to the terminal? But even then the difference shouldn't be around 60 Kbps. Can you please explain what's wrong here?

PS: I can provide the calculation if required. 

  • Hi Manish

     

    Manish Kaul said:

    Have you read this somewhere? If so, do you have a link?

    I just remember seeing the flag value as 0x00 using BLE Sniffer. But thanks for that information.

    That makes sense. They probably just use the same GUI regardless of the encryption status, and show the MIC as 0 when it is not present. 

    Manish Kaul said:
    Which one should be correct?

    The throughput calculation should only cover attribute throughput, which is why in my calculations I simply divide 185 bytes by the seven packets needed to transmit it, and use that as a basis for the throughput speed.

    The link layer header is not a part of this. 

    Also keep in mind that the link layer header is included for every single packet; your formula seems to imply that it's only included on the first packet in the 7-packet series. 

    For the throughput calculation this doesn't matter, as it only covers attribute data as I mentioned, but for calculating the on-air time of the packets this is obviously important. 

    Best regards
    Torbjørn

  • The throughput calculation should only cover attribute throughput

    While you say that, you also include the 3-byte header in the 185-byte ATT_MTU. If you're considering that, what makes you not consider the L2CAP header as well?
    And if what you mean is that only essential data is counted and not headers, shouldn't the right calculation be:

    So, the simple formula is (effective data / total time). Total time = 7500 µs. Effective data = (27*7 - 7) + (27*2 - 7) = 229 bytes = 1832 bits (7 bytes for the ATT + L2CAP headers). Data rate = 1832 bits / 7500 µs ≈ 244.3 Kbps.

    your formula seems to imply that it's only included on the first packet

    Yes, my bad.

    for calculating the on-air time of the packets this is obviously important.

    By on-air time, do you mean the total time, which is in turn used to calculate throughput as the number of packets per connection interval?

  • Hi Manish

    Manish Kaul said:
    While you say that, you also include the 3-byte header in the 185-byte ATT_MTU. If you're considering that, what makes you not consider the L2CAP header as well?

    I double checked this in the code, and it seems the 3 byte ATT header is not included after all. This makes more sense, as it only includes actual data bytes in the throughput calculation, and not any of the headers. 

    Manish Kaul said:
    So, the simple formula is (effective data / total time). Total time = 7500 µs. Effective data = (27*7 - 7) + (27*2 - 7) = 229 bytes = 1832 bits (7 bytes for the ATT + L2CAP headers). Data rate = 1832 bits / 7500 µs ≈ 244.3 Kbps.

    The problem with this formula is that it assumes you send two of the 'initial' packets (the first packet in the series of 7) in every connection event, which is not the case. 

    If my math is correct you will send 2 initial packets only on 2 out of 7 connection events; on the remaining 5 out of 7 you will only send 1. 

    That is why I simplify the formula by just dividing the number of data bytes by 7. Because the test is running over thousands of connection events this should be equivalent. 

    So my final formula would be (182/7) * 9 * 8 * (1000 / 7.5) = 249.60 kbps

    This is about 0.7% lower than the measured number, which I can't explain at this point. 
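
    To put numbers on this, here is a minimal sketch of that averaged calculation in Python (purely illustrative, not code from the att_mtu_throughput example):

    ```python
    # 185-byte ATT_MTU notifications carry 182 bytes of attribute data
    # (the 3-byte ATT header is excluded from the throughput count).
    # With a 27-byte LL data length each notification is split over 7 packets,
    # and 9 packets fit into each 7.5 ms connection event.

    ATT_DATA_BYTES = 185 - 3          # attribute data per notification
    PACKETS_PER_NOTIFICATION = 7      # LL packets per notification at 27-byte data length
    PACKETS_PER_EVENT = 9             # packets per connection event
    CONN_INTERVAL_MS = 7.5

    # Averaged over many connection events, each packet carries 182/7 data bytes.
    data_bytes_per_event = (ATT_DATA_BYTES / PACKETS_PER_NOTIFICATION) * PACKETS_PER_EVENT
    events_per_second = 1000 / CONN_INTERVAL_MS
    throughput_kbps = data_bytes_per_event * 8 * events_per_second / 1000

    print(f"{throughput_kbps:.2f} kbps")   # -> 249.60 kbps
    ```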

    Manish Kaul said:
    By on-air time, do you mean the total time, which is in turn used to calculate throughput as the number of packets per connection interval?

    Yes. My point is that the number of link layer header bytes is not relevant for the throughput calculation, since these bytes don't add to the data throughput, but they are relevant for how much time it takes to send the packet. 

    Best regards
    Torbjørn 

  • The problem with this formula is that it assumes you send two of the 'initial' packets (the first packet in the series of 7) in every connection event, which is not the case.
    If my math is correct you will send 2 initial packets only on 2 out of 7 connection events; on the remaining 5 out of 7 you will only send 1. 

    I didn't understand what you meant?

    Okay, and can you also please explain how things change, or how you would calculate the data throughput, keeping every parameter the same except that the data size at the Link Layer is increased to 251 bytes?

  • Hi Manish 

    Manish Kaul said:
    I didn't understand what you meant?

    The continuous stream of 185-byte attribute updates is split into 7-packet sequences by the link layer, and 9 packets will fit into each connection event. 

    That means that on the first con event you will send an entire sequence of 7, plus 2 packets from the next sequence (so 2 initial packets in one con event). 

    On the next con event you will send the remaining 5 packets from the second sequence, plus 4 packets from the third sequence (so only one initial packet). 

    And so on, until the pattern repeats after 7 connection events (assuming no packet loss).  
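
    A small sketch that walks through this pattern (the packet indices are counted by hand here, not taken from a sniffer trace):

    ```python
    # 9 packets are sent per connection event, and every 185-byte notification
    # occupies a sequence of 7 packets. Count how many 'initial' packets
    # (the first packet of a sequence) land in each connection event.

    PACKETS_PER_EVENT = 9
    PACKETS_PER_SEQUENCE = 7
    CYCLE_EVENTS = 7      # lcm(7, 9) = 63 packets, so the pattern repeats after 7 events

    initials_per_event = []
    for event in range(CYCLE_EVENTS):
        first = event * PACKETS_PER_EVENT
        packets = range(first, first + PACKETS_PER_EVENT)
        initials_per_event.append(sum(1 for p in packets if p % PACKETS_PER_SEQUENCE == 0))

    print(initials_per_event)   # -> [2, 1, 1, 2, 1, 1, 1]: 2 initials in 2 of 7 events
    ```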

    Manish Kaul said:
    Okay, and can you also please explain how things change, or how you would calculate the data throughput, keeping every parameter the same except that the data size at the Link Layer is increased to 251 bytes?

    With a data length larger than the attribute MTU you will get the entire attribute MTU into one packet, making the formula quite a bit simpler. 

    The total time of sending one 185-byte payload (including 10 bytes of LL overhead), receiving the ACK and so forth, should be 1870 µs. Then the theoretical number of packets per con event is 4, but I expect the real number to be 2 or 3 depending on stack overhead.

    And the throughput will then be 182*8*2*(1000/7.5) ≈ 388 kbps, or 582 kbps if 3 packets can fit (I need to test it to see which one it is). 
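
    The same numbers in a quick sketch (just reproducing the arithmetic above, not the example firmware):

    ```python
    # With a 251-byte LL data length the whole 185-byte ATT_MTU fits in one packet,
    # and one packet exchange (sending the packet, receiving the ACK, etc.)
    # takes roughly 1870 us.

    ATT_DATA_BYTES = 185 - 3
    PACKET_EXCHANGE_US = 1870
    CONN_INTERVAL_MS = 7.5

    events_per_second = 1000 / CONN_INTERVAL_MS
    theoretical_packets = int(CONN_INTERVAL_MS * 1000 // PACKET_EXCHANGE_US)
    print(theoretical_packets)          # -> 4 packets per event in theory

    for packets_per_event in (2, 3):    # realistic range given stack overhead
        kbps = ATT_DATA_BYTES * 8 * packets_per_event * events_per_second / 1000
        print(packets_per_event, round(kbps))   # -> 2: 388 kbps, 3: 582 kbps
    ```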

    Because of the overhead at the end of every connection event it makes sense to use a longer connection interval to maximize throughput, or to use the connection event length extension feature. In 1M mode you can get close to 700 kbps throughput by using the optimal connection parameters. 
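
    As a rough illustration of why a longer interval helps, here is a sketch with an assumed per-event overhead (the 1000 µs figure is a placeholder, not a number from this thread):

    ```python
    # A fixed chunk of dead time at the end of each connection event is amortized
    # over more 1870 us packet exchanges when the interval is longer.

    ATT_DATA_BYTES = 185 - 3
    PACKET_EXCHANGE_US = 1870
    EVENT_OVERHEAD_US = 1000   # assumed per-event overhead (placeholder value)

    for interval_ms in (7.5, 15, 30, 50):
        packets = int((interval_ms * 1000 - EVENT_OVERHEAD_US) // PACKET_EXCHANGE_US)
        kbps = ATT_DATA_BYTES * 8 * packets * (1000 / interval_ms) / 1000
        print(interval_ms, packets, round(kbps))

    # Prints throughput rising from 582 kbps toward ~757 kbps at long intervals;
    # the real stack tops out closer to the ~700 kbps mentioned above.
    ```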

    Best regards
    Torbjørn 
