throughput output

Hi,

I get the following output when using the throughput example:

[local] sent 660264 bytes (644 KB) in 4540 ms at 1190912 bps (145 kBps)
[peer] received 660264 bytes (644 KB) in 2706 GATT writes at 1135105 bps (138 kBps)

I altered the output format a bit so that it prints the same information for local and peer.

But I was wondering why there is a significant difference between transmission by local and reception by peer. Shouldn't these be about the same, rather than 7 kBps apart? Whatever is transmitted is received on the other end, so the throughput should be the same, right?

Parents
  • Hi,

     

    You're right, the timing anchor is stored before receiving the first "payload" packet:

    https://github.com/nrfconnect/sdk-nrf/blob/v1.8.0/subsys/bluetooth/services/throughput.c#L61-L67
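
    For reference, that region looks roughly like this (a paraphrased sketch, not a verbatim copy of the linked file): "delta" is computed near the top of the write callback from the stored anchor, and the anchor itself is refreshed in the reset branch, i.e. when the 1-byte reset write arrives and before any payload has been received:

    	/* sketch of the unmodified callback, paraphrased */
    	uint64_t delta = k_cyc_to_ns_floor64(k_cycle_get_32() - clock_cycles);

    	if (len == 1) {
    		/* reset metrics; the timing anchor is taken here, before
    		 * the first payload packet has arrived
    		 */
    		kb = 0;
    		met_data->write_count = 0;
    		met_data->write_len = 0;
    		met_data->write_rate = 0;
    		clock_cycles = k_cycle_get_32();
    	} else {
    		met_data->write_count++;
    		met_data->write_len += len;
    		met_data->write_rate =
    		    ((uint64_t)met_data->write_len << 3) * 1000000000 / delta;
    	}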

     

    The connection interval is a factor as well. I tested quickly by updating "clock_cycles" in the first entry of the "else" and lowering the connection interval; this showed a more equal result, but the numbers still differed a bit (1327 kbit/s vs. 1313 kbit/s at my end). A longer interval shows a higher difference.

     

    Kind regards,

    Håkon

  • I played a bit with this and I do not really understand what is going on. I also set 'clock_cycles' in the first entry of the else:

    	if (len == 1) {
    		/* reset metrics; the timing anchor is cleared here instead
    		 * of being taken, as it is in the original code
    		 */
    		kb = 0;
    		met_data->write_count = 0;
    		met_data->write_len = 0;
    		met_data->write_rate = 0;
    		clock_cycles = 0;
    	} else {
    		if (met_data->write_count == 0) {
    			/* take the timing anchor on the first payload write */
    			clock_cycles = k_cycle_get_32();
    		}
    		met_data->write_count++;
    		met_data->write_len += len;
    		/* delta is derived from clock_cycles earlier in the callback */
    		met_data->write_rate =
    		    ((uint64_t)met_data->write_len << 3) * 1000000000 / delta;
    	}

    I also set the connection interval to 3200 units, and I do not understand this result. Why would the reception rate be higher than the transmission rate?

    [local] sent 660264 bytes (644 KB) in 7254 ms at 745472 bps (91 kBps)
    [peer] received 660264 bytes (644 KB) in 2706 GATT writes at 1401786 bps (171 kBps)

    When I set the connection interval to 6 units, I get the result below, which is more in line with what I expected, but still not the same.

    [local] sent 660264 bytes (644 KB) in 5170 ms at 1045504 bps (127 kBps)
    [peer] received 660264 bytes (644 KB) in 2706 GATT writes at 1018568 bps (124 kBps)
    

Children
  • It looks like your first result is shifted by up to one connection interval:

    Martijn Jonkers said:
    3200 units

    3200 * 1.25 ms = 4 seconds.

    7.254 s - (644 kB / 171 kBps) ≈ 3.5 s.

     

    In your updated approach, "delta" will now be incorrectly calculated from 0, and this is likely the reason for the strange timing result.
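
    Traced through your modified callback, the first payload write after a reset goes roughly like this (a sketch of the order of operations, not the actual file):

    	delta = k_cyc_to_ns_floor64(k_cycle_get_32() - clock_cycles);
    			/* clock_cycles is still 0 here, so delta is "time
    			 * since boot", not "time since the transfer started" */

    	if (met_data->write_count == 0) {
    		clock_cycles = k_cycle_get_32();	/* anchor is set only now,
    							 * after delta was computed */
    	}
    	met_data->write_count++;
    	met_data->write_len += len;
    	met_data->write_rate =
    	    ((uint64_t)met_data->write_len << 3) * 1000000000 / delta;
    			/* this first write_rate uses the bogus delta; later
    			 * writes overwrite it using the fresh anchor */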

     

    Kind regards,

    Håkon

     

  • Ah, OK! Now I understand the lower reception throughput.

    I noticed it too; as you say, 'delta' is incorrect the first time with my implementation. But delta is only used for calculating 'met_data->write_rate', and that value is overwritten each time data comes in, so the final write_rate should be as expected, right? So I suspect the reception rate is correct. It corresponds to 1.368 Mbps, which is almost the maximum described here: https://infocenter.nordicsemi.com/index.jsp?topic=%2Fsds_s140%2FSDS%2Fs1xx%2Fble_data_throughput%2Fble_data_throughput.html , and it is the transfer rate I expected to be possible.

    Because clock_cycles is set on the first reception with my implementation, this skips the 500 ms delay on the master and the 4 s connection interval. So the data rate is measured from the first packet to the last, without any connection interval delays, correct?

    I did notice that in the original implementation the timing is started on the peer when the metrics are reset, but the central adds a 500 ms delay after resetting the metrics to let any BLE procedures complete. This 500 ms is therefore also added to the time it takes to receive the data, while on the master it is skipped, because the start time is taken after the delay. This would always result in a lower reception rate compared to the transmission rate, right? The reception time is always 500 ms longer than the transfer time.
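
    For reference, the central-side flow being described is roughly this (a sketch with illustrative names, not the sample's actual code):

    	#include <zephyr.h>
    	#include <bluetooth/services/throughput.h>

    	static void send_test_data(struct bt_throughput *throughput,
    				   const uint8_t *data, uint16_t chunk_len,
    				   size_t data_len)
    	{
    		uint8_t reset_byte = 0;
    		size_t prog = 0;
    		int64_t stamp, delta;

    		/* 1-byte write: the peer resets its metrics and, in the
    		 * stock code, also takes its timing anchor here
    		 */
    		bt_throughput_write(throughput, &reset_byte, 1);

    		/* let outstanding BLE procedures settle; the peer's clock
    		 * is already running during this delay
    		 */
    		k_sleep(K_MSEC(500));

    		/* the central starts its own timer only after the delay */
    		stamp = k_uptime_get();

    		while (prog < data_len) {
    			bt_throughput_write(throughput, data, chunk_len);
    			prog += chunk_len;
    		}

    		delta = k_uptime_get() - stamp;
    		printk("[local] sent %u bytes in %lld ms\n",
    		       (unsigned int)prog, delta);
    	}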

  • Another question: I notice the overall throughput goes down when I set a connection interval of 6 units. Is there overhead associated with each connection interval? It looks as if more data is transferred, even between data packets.

  • Hi,

     

    Martijn Jonkers said:
    Another question: I notice the overall throughput goes down when I set a connection interval of 6 units. Is there overhead associated with each connection interval? It looks as if more data is transferred, even between data packets.

    With DLE, you want a larger connection interval, as this feature basically streams data within the same connection interval. You can see this in the table in the link you sent as well: throughput is better with a larger connection interval.
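
    As a rough illustration (assumed reasoning, not measured numbers), with a 1.25 ms unit:

    6 units * 1.25 ms = 7.5 ms per connection event, i.e. about 133 events/s
    3200 units * 1.25 ms = 4000 ms per connection event, i.e. 0.25 events/s

    Whatever fixed cost each event carries (scheduling margin at the start and end of the event, airtime lost when the next packet no longer fits before the event has to close), at 6 units that cost is paid roughly 133 times per second, versus once every 4 seconds at 3200 units.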

    Martijn Jonkers said:
    I noticed it too; as you say, 'delta' is incorrect the first time with my implementation. But delta is only used for calculating 'met_data->write_rate', and that value is overwritten each time data comes in, so the final write_rate should be as expected, right?

    If we're talking about the "[peer]" print, this depends on the timing between receiving the "len == 1" reset write and when the data comes in.

    Martijn Jonkers said:

    Because clock_cycles is set on the first reception with my implementation, this skips the 500 ms delay on the master and the 4 s connection interval. So the data rate is measured from the first packet to the last, without any connection interval delays, correct?

    I did notice that in the original implementation the timing is started on the peer when the metrics are reset, but the central adds a 500 ms delay after resetting the metrics to let any BLE procedures complete. This 500 ms is therefore also added to the time it takes to receive the data, while on the master it is skipped, because the start time is taken after the delay. This would always result in a lower reception rate compared to the transmission rate, right? The reception time is always 500 ms longer than the transfer time.

    For the "[local]" print, that is correct. On the peer side, it will by default, add that time between the events, due to the timing anchor.

    If you revert your changes, are you able to get the correct timing on the [local] side?

     

    Kind regards,

    Håkon

  • With DLE, you want a larger connection interval, as this feature basically streams data within the same connection interval. You can see this in the table in the link you sent as well: throughput is better with a larger connection interval.

    Okay, I learned something new here. I thought the connection interval was only relevant between actual data transmissions, to maintain the connection when nothing is communicated.

    For the "[local]" print, that is correct. On the peer side, it will by default, add that time between the events, due to the timing anchor.

    If you revert your changes, are you able to get the correct timing on the [local] side?

    I have a '[local]' print on both the peer and the master. I assume the '[local]' output on the peer shows the same thing as the '[peer]' output on the master?

    I reverted my changes. From what I understand, I will not get correct timing on the peer. This is due to the 500 ms delay introduced on the master; the peer has no knowledge of this delay after the metrics reset. Is this a correct observation?
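
    For completeness, one way to make the peer measure over roughly the same window as the central (a sketch building on the modification discussed above, not the sample's stock code) is to take the anchor on the first payload write and skip the rate calculation for that packet, so nothing is ever divided by the bogus time-since-boot delta:

    	if (len == 1) {
    		/* reset metrics */
    		kb = 0;
    		met_data->write_count = 0;
    		met_data->write_len = 0;
    		met_data->write_rate = 0;
    		clock_cycles = 0;
    	} else {
    		if (met_data->write_count == 0) {
    			/* anchor on the first payload packet */
    			clock_cycles = k_cycle_get_32();
    			delta = 0;
    		}
    		met_data->write_count++;
    		met_data->write_len += len;
    		if (delta > 0) {
    			/* skip the very first packet, whose delta would
    			 * otherwise be "time since boot"
    			 */
    			met_data->write_rate =
    			    ((uint64_t)met_data->write_len << 3) *
    			    1000000000 / delta;
    		}
    	}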
