Let me first show the parameters I am choosing for calculating the data rates:
The connection interval is 7.5 ms, and connection event length extension is off, meaning the connection event won't extend beyond the 7.5 ms interval. After I run the test I get
If you theoretically calculate the throughput of continuously streaming data with a 7.5 ms connection interval, 1M PHY, 27-byte data length (meaning 41 bytes on air per LL packet) and an ATT MTU of 185 bytes, you should get ~194 Kbps. Is there some overhead that the firmware is not accounting for, like the cumulative time taken for printing all of that out to the terminal? But even that shouldn't amount to around 60 Kbps. Can you please explain what's wrong here?
PS: I can provide the calculation if required.
The SDK for the att_mtu_throughput example is 16.0.0, using PCA10040 nRF52 Development Kits as the tester and dummy boards.
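The ~194 Kbps figure comes out of a quick back-of-the-envelope script like the one below (my assumption here, not taken from the example code: exactly one full ATT write completes per interval, and the 185-byte ATT MTU carries 185 - 3 = 182 bytes of attribute data after the 1-byte opcode and 2-byte handle):

```python
# Back-of-the-envelope check of the ~194 Kbps figure.
# Assumption: one full ATT write per connection interval, and the
# 185-byte ATT MTU carries 182 bytes of attribute data
# (3 bytes go to the ATT opcode + attribute handle).

ATT_MTU = 185              # bytes, negotiated ATT MTU
ATT_HEADER = 3             # bytes, opcode (1) + attribute handle (2)
CONN_INTERVAL_S = 7.5e-3   # 7.5 ms connection interval

data_per_interval = ATT_MTU - ATT_HEADER            # 182 bytes
throughput_kbps = data_per_interval * 8 / CONN_INTERVAL_S / 1000

print(f"{throughput_kbps:.1f} Kbps")                # ~194.1 Kbps
```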
How many packets are you calculating per connection interval?
When you have two nRF52 devices talking to each other there is really no upper limit on the number of packets you can send in one connection interval, other than the total time it takes to send them. In other words, you should be able to send as many as 11-12 packets per connection interval, depending on the length of each packet.
With this in mind, my calculations point to something around 250 kbps, assuming you don't have any packet loss.
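A rough sketch of the 11-12 packet estimate (the overhead figures here are my assumptions, not from the SoftDevice documentation: an unencrypted 1M PHY link with 10 bytes of preamble/access address/header/CRC per packet, 150 us TIFS each way, and an empty 80 us packet from the peer per exchange):

```python
# Rough estimate of how many 27-byte data packets fit in one
# 7.5 ms connection event. Assumptions (mine): unencrypted 1M PHY,
# 10 bytes of on-air overhead per packet, 150 us TIFS each way,
# and an empty (10-byte, 80 us) packet back from the peer.

US_PER_BYTE = 8            # 1M PHY: 1 byte takes 8 us on air
OVERHEAD_BYTES = 10        # preamble(1) + access addr(4) + header(2) + CRC(3)
TIFS_US = 150
EMPTY_PACKET_US = OVERHEAD_BYTES * US_PER_BYTE   # 80 us
CONN_INTERVAL_US = 7500
LL_PAYLOAD = 27

# One exchange: data packet + TIFS + empty packet + TIFS
packet_pair_us = ((LL_PAYLOAD + OVERHEAD_BYTES) * US_PER_BYTE
                  + TIFS_US + EMPTY_PACKET_US + TIFS_US)   # 676 us

packets_per_event = CONN_INTERVAL_US // packet_pair_us
print(packet_pair_us, packets_per_event)   # 676 us per exchange, 11 packets
```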
The packets per interval are at the maximum, as it's a continuous stream. If I've set a connection interval of 7.5 ms and, in this example, have also set connection event length extension OFF, I would assume it won't extend the connection event beyond 7.5 ms. So, depending on the ATT MTU and the LL data length, I believe it would send the maximum number of packets that can transfer in one interval. With the above-mentioned input parameters (MTU = 185 bytes, data length = 27, conn interval = 7.5 ms, conn_len_ext = off), the total time it takes for 1 entire data packet to transfer is 4956 us (including IFS and response packet timings), and hence only 1 such packet fits in a 7.5 ms (7500 us) connection interval, which corresponds to ~194 Kbps of actual throughput.
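The one-full-transfer-per-interval reasoning can be written out like this (the 4956 us figure is the per-transfer time derived in this thread; the 182 data bytes per transfer is my assumption of a 185-byte ATT MTU minus the 3-byte ATT header):

```python
# Sketch of the reasoning above: with event length extension off,
# how many *complete* 185-byte ATT transfers fit in one interval?
# 4956 us per transfer is the figure used in this thread;
# 182 data bytes per transfer (185 - 3 ATT header) is my assumption.

TRANSFER_US = 4956
CONN_INTERVAL_US = 7500
ATT_DATA_BYTES = 182

transfers_per_interval = CONN_INTERVAL_US // TRANSFER_US   # 1 (2 would need 9912 us)
throughput_kbps = (transfers_per_interval * ATT_DATA_BYTES * 8
                   / (CONN_INTERVAL_US * 1e-6) / 1000)

print(transfers_per_interval, f"{throughput_kbps:.1f} Kbps")   # 1, ~194.1 Kbps
```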
Manish Kaul said: I would assume it won't extend the length of the connection beyond 7.5ms.
Manish Kaul said: total time it takes for 1 entire data packet to transfer is 4956us
When you say '1 entire packet', I assume you mean the entire 185 byte attribute MTU?
If my math is correct, each 185-byte attribute MTU would have to be split into 8 on-air packets, with a data length of only 27 bytes.
The first packet would contain 20 bytes of attribute data plus the attribute and L2CAP headers. The following 6 packets would each contain 27 bytes of attribute data, while the last packet would contain the remaining 3 bytes of attribute data.
The total time to send this, including 2x TIFS and the empty packet from the peer, should be 5216us if my math is correct.
A single 185-byte payload per 7.5 ms interval is close to the 194 Kbps throughput you mention. However, since there are still 2284 us free after sending the 8 packets I mentioned earlier, the stack should be able to start on the next 8-packet sequence before the connection event is over, which is why the actual throughput will be higher.
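For reference, the 5216 us and 2284 us figures can be reconstructed as follows (the overhead breakdown is my assumption: an unencrypted 1M PHY link with 10 bytes of on-air overhead per packet, 150 us TIFS each way, and an 80 us empty packet from the peer per exchange):

```python
# Reconstruction of the 8-packet / 5216 us figure above.
# Assumptions (mine): unencrypted 1M PHY, 10 bytes of on-air overhead
# per packet, 150 us TIFS each way, 80 us empty packet from the peer.

US_PER_BYTE = 8
OVERHEAD_BYTES = 10
TIFS_US = 150
EMPTY_US = 80
CONN_INTERVAL_US = 7500

# LL payload sizes: 7 full 27-byte packets plus a final 3-byte packet
# (20 + 6*27 + 3 = 185 bytes of attribute data, plus 7 header bytes).
ll_payloads = [27] * 7 + [3]

def exchange_us(payload):
    """Data packet + TIFS + empty packet from peer + TIFS."""
    return (payload + OVERHEAD_BYTES) * US_PER_BYTE + TIFS_US + EMPTY_US + TIFS_US

total_us = sum(exchange_us(p) for p in ll_payloads)
print(total_us, CONN_INTERVAL_US - total_us)   # 5216 us used, 2284 us left
```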
ovrebekk said: When you say '1 entire packet', I assume you mean the entire 185 byte attribute MTU?
Yes, that's what I mean.
ovrebekk said: split in 8 on air packets,
It should be 7. At the L2CAP layer, we'd have (185 + 4) bytes of data to be fragmented into 27-byte packets down at the LL layer; hence 189/27 = 7.
The time taken for the entire journey of ONE 41-byte (14 + 27) packet at the LL layer is 708 us (41*8 + 2*150 + 80), and hence for 7 packets it is 708*7 us = 4956 us. Now, regarding the remaining 2544 us (or 2284 us according to your calculation): as per the theory, the device shouldn't transmit again until the beginning of the next connection interval, so that a 'whole' packet is transmitted 'within' a connection interval.
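My per-packet timing, spelled out (the split of the 14 overhead bytes is my own labelling; the 41*8 + 2*150 + 80 formula is the one I used above):

```python
# The 708 us per-packet figure, spelled out. 41 bytes on air =
# 27-byte LL data length + 14 bytes of assumed per-packet overhead.

US_PER_BYTE = 8        # 1M PHY: 1 byte takes 8 us on air
ON_AIR_BYTES = 41      # 27-byte data length + 14 bytes overhead
TIFS_US = 150          # inter-frame spacing, applied twice per exchange
EMPTY_PACKET_US = 80   # empty response packet from the peer
CONN_INTERVAL_US = 7500

packet_us = ON_AIR_BYTES * US_PER_BYTE + 2 * TIFS_US + EMPTY_PACKET_US  # 708
total_us = 7 * packet_us                                                # 4956
remaining_us = CONN_INTERVAL_US - total_us                              # 2544

print(packet_us, total_us, remaining_us)
```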
Are you saying that the connection interval in a way extends by the remaining amount of time (2544 us / 2284 us), or that only the data that can be transmitted in the remaining time of the interval is transmitted, with the rest sent at the onset of the next interval? Because this makes me question the whole concept of the connection interval.
What exactly is implemented in this stack, and is it a standard practice or way of measuring throughput?