9 packets in a connection interval, but then master waited 61 msec

Hi,

I used one nRF52840 dongle and one DK, and placed a sniffer in between to capture the packets.

The connection interval was set to 7.5 msec, the GAP event length is 6 (i.e. 7.5 msec), the MTU is 23, the data length is 27, and the SoftDevice is S140 7.2.0.

The sniffer capture below shows:

- 9 packets in one connection interval

- the highlighted slave response with MD=1 (more data), but the master didn't reply; instead it waited 61 msec

I have read a few Nordic articles that describe the maximum number of packets per connection interval as 6, but there are 9 in this case. Also, why did the master ignore the slave's MD=1 packet and instead wait 61 msec?

Could you help me understand this?

Also, we are trying to achieve the highest number of packets per second, so in theory, if we set the MTU small enough and use a 7.5 msec connection interval, can we achieve 6 packets per connection interval and reach 800 packets per second between two devices?

1000 / 7.5 * 6 = 800 PPS
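As a sanity check, that arithmetic can be written out (a quick sketch; the 6-packets-per-event figure is the limit from the Nordic articles mentioned above, not a guaranteed number):

```python
# Rough upper bound on application packets per second, assuming a fixed
# number of packets per connection event (6, per the articles mentioned).
conn_interval_ms = 7.5
packets_per_event = 6

events_per_second = 1000 / conn_interval_ms   # ~133.3 connection events/s
pps = events_per_second * packets_per_event
print(pps)
```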

  • Hello,

    It looks like your central drops out of the connection for a while (but the devices don't disconnect).

    What sort of application is your central running? Is there a chance that there are some flash operations that may occupy the central for a while?

    You mention that you want a certain number of packets per second. While that is possible, it is usually not the recommended way to increase your throughput.

    I strongly suggest that you look into merging more of your packets, so that you can send longer packets and, thus, send them less frequently.

    This way, you can get up to 700 kbps or 1.3 Mbps, depending on whether you are using the 1 Mbps or 2 Mbps PHY.

    The reason I mention this is that longer packets mean a lot less header overhead relative to the payload.
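    To make the header-overhead point concrete, here is a rough sketch comparing on-air payload efficiency for a small and a large ATT MTU. The overhead figures used are the standard BLE LE data-channel numbers (1-byte preamble on the 1M PHY, 4-byte access address, 2-byte LL header, 3-byte CRC, 4-byte L2CAP header, 3-byte ATT notification header); inter-frame spacing and the empty acknowledgement packet are ignored, so real throughput is lower:

```python
# Toy on-air efficiency estimate for a BLE notification (rough sketch).
def att_payload_ratio(att_mtu, preamble=1):
    att_payload = att_mtu - 3                   # minus ATT opcode + handle
    overhead = preamble + 4 + 2 + 4 + 3 + 3     # AA, LL hdr, L2CAP, ATT, CRC
    return att_payload / (att_payload + overhead)

print(round(att_payload_ratio(23), 2))    # MTU 23: roughly half the air time is overhead
print(round(att_payload_ratio(247), 2))   # MTU 247: overhead mostly amortized
```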

    But is your central doing any flash operations while they are connected?

    Best regards,

    Edvin

  • Hi Edvin,

    Thank you very much for your quick help.

    The central is running examples\ble_central\ble_app_multilink_central with small changes. The only thing I can think of is that after it connects to a peripheral it starts scanning again, because the original example was designed to support 8 peripheral devices. I also added a repeated 1 sec timer to calculate the PPS (packets per second). I don't think there are any flash operations involved in the original code.

    Thank you for the advice on data throughput, it's very helpful. We are trying to test for minimum latency first, then throughput, so we start with small packets and a small connection interval, hoping to get packets through as fast as possible. My expectation is that we might end up with a packet size in between, such as an 80-byte payload, to achieve both the latency and throughput goals.
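    A quick back-of-the-envelope comparison of payload size versus application throughput, assuming (purely for illustration) a fixed 6 packets per 7.5 msec connection event and ignoring air-time limits, which cap the larger sizes in practice:

```python
# Hypothetical throughput for a given notification payload size,
# assuming a fixed 6 packets per 7.5 ms connection event.
def throughput_kbps(payload_bytes, packets_per_event=6, interval_ms=7.5):
    return payload_bytes * packets_per_event * 8 / interval_ms  # bits/ms == kbps

for size in (20, 80):
    print(size, round(throughput_kbps(size)))   # 20 B -> 128 kbps, 80 B -> 512 kbps
```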

    Looking at the captured file, it forms a pattern: it sends 9 packets, then waits between 16 msec and 60 msec, sends another 9 packets, and waits for another long window.

    I attached the captured Wireshark file here.

    By the way, is 9 packets per connection interval expected, or is it out of spec?

    ble_test_mtu23.pcapng

  • jshen said:
    the only thing I can think of is that after it connects to a peripheral it starts scanning again, because the original example was designed to support 8 peripheral devices

    Ok, that makes sense. The nRF52 only has one radio, so the SoftDevice has to choose what to do at any given point in time. If you want it to maintain a connection and at the same time scan for SCAN_WINDOW every SCAN_INTERVAL, that is not always possible, so the SoftDevice needs to choose whether to scan or to maintain the connection. You can read about the SoftDevice's scheduler priorities here.

    The SoftDevice scheduler is dynamic, meaning that if it detects a collision ahead, it needs to choose whether to maintain the connection (the connection events) or to scan. It can only schedule full scan windows, so if you have a scan window of 50 ms, it needs to either schedule the entire scan window or skip it completely. It will not schedule small scan windows in between the connection events.

    Looking at packet 9444, I think that since this is a fairly long break (61 ms), the SoftDevice needed to scan. So I guess you have SCAN_WINDOW = 50 ms and a connection interval of 7.5 ms, so the 61 ms is one scan window plus syncing up to the next connection event.

    Now, regarding packets per connection interval and latency.

    The latency is always limited by the connection interval, as you can't queue more packets after the connection event has started; at that point, the SoftDevice will push through all the packets it can. In fact, if you queue a packet too close to the connection event, it may not even be scheduled for that event but for the next one (the CPU needs some time to prepare the data for sending). Once the event has started, the CPU doesn't have time to look at the received data or queue new data until the event is done. It is also at this point that your application is notified of incoming messages. So I don't think striving for "most packets per connection event" is the way to go. I suggest you look into increasing the MTU.
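    A toy model of the queueing behaviour described above, assuming (hypothetically) a 1 ms processing margin, meaning a packet queued closer than that to the next connection event slips to the one after. The real margin depends on CPU load and SoftDevice version; this only illustrates why worst-case latency is roughly one connection interval:

```python
import math

# Toy model: connection events at t = 0, interval, 2*interval, ...
# A packet queued less than `margin_ms` before an event misses that event.
def tx_latency_ms(queue_time_ms, interval_ms=7.5, margin_ms=1.0):
    event = math.ceil(queue_time_ms / interval_ms) * interval_ms
    if event - queue_time_ms < margin_ms:   # queued too late for this event
        event += interval_ms                # slips to the next one
    return event - queue_time_ms

print(tx_latency_ms(0.2))   # queued well ahead: waits for the next event
print(tx_latency_ms(7.0))   # queued just before an event: slips one interval
```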

    As for the connection interval, MTU size, and so on, I suggest you check out the ble_app_att_mtu_throughput example. It is not a very good example to use as a template, but it is good for experimenting with connection parameters. You can read about it here.

    jshen said:
    By the way, is 9 packets per connection interval expected, or is it out of spec?

    No, it is not out of spec, but I am not sure whether this is the path you want to go down. Try looking into a higher MTU and longer connection event lengths.

    BR,
    Edvin
