Connection Event Length problem with Coded PHY

I am currently working on a project involving Nordic nRF52840-DK development boards (1 Central communicating with 11 Peripherals), using LE Coded PHY, S=8. In this project, I have encountered challenges related to the connection interval and packet transmission, and I am seeking your guidance to resolve these issues.


  1. Connection Event:
    • On some occasions the observed packet duration is over 12 ms.
    • The maximum connection event length is set to 10 ms (CONFIG_BT_CTLR_SDC_MAX_CONN_EVENT_LEN_DEFAULT=10000) and the connection interval is 60 ms.
    • Based on calculations, if the packet duration is 12 ms and all eleven Peripherals are exchanging packets with the Central, the total duration would be 132 ms, which exceeds the 60 ms connection interval.

In this regard, I request your assistance to clarify the following aspects: a. What factors could cause the observed packet duration to exceed the configured connection event length? b. Which Zephyr configuration options should I modify to achieve an optimal connection event length in this specific use case?
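The arithmetic above can be sanity-checked with a quick back-of-envelope script, assuming the controller has to serialize the eleven connection events (variable names are illustrative, not Zephyr APIs):

```python
# Back-of-envelope scheduling check for 11 peripherals on one Central.
# Assumes the controller serializes connection events on a single radio.

n_peripherals = 11
conn_interval_ms = 60.0
max_event_len_ms = 10.0   # CONFIG_BT_CTLR_SDC_MAX_CONN_EVENT_LEN_DEFAULT=10000 (us)
observed_pkt_ms = 12.0    # worst case reported above

# If every link uses its full event budget, the interval is oversubscribed:
budget_ms = n_peripherals * max_event_len_ms
print(budget_ms, "ms needed vs", conn_interval_ms, "ms interval")  # 110.0 > 60.0

# With the observed 12 ms duration it is even worse:
print(n_peripherals * observed_pkt_ms, "ms")  # 132.0 ms, as calculated above
```

Either way the total exceeds the 60 ms interval, so not every link can get its full event budget each interval.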


  2. Optimizing Packet Transmission:

  • I have already reviewed the payload size and data rate in my code to optimize packet transmission efficiency.
  • However, I would appreciate your guidance on the following aspects related to Zephyr and the Nordic nRF52840-DK boards: a. Are there any specific BLE parameters or Zephyr configuration options that can be adjusted to further optimize packet transmission? b. What are the recommended settings and values for these parameters to achieve efficient packet transmission in this context?

To provide more context, here are the relevant details of my configuration and environment:

  • Nordic nRF52840-DK development boards with LE PHY Coded S=8.
  • Central Specific Config:
    • CONFIG_BT_MAX_CONN=12
    • CONFIG_BT_CTLR_SDC_MAX_CONN_EVENT_LEN_DEFAULT=10000
    • CONFIG_BT_L2CAP_TX_MTU=255
    • CONFIG_BT_BUF_ACL_TX_SIZE=83
    • CONFIG_BT_BUF_ACL_RX_SIZE=83
    • CONFIG_BT_CTLR_DATA_LENGTH_MAX=39
    • CONFIG_BT_CONN_TX_MAX=100
  • Hi Simon

    As you can see I can't get any meaningful answer from your colleague.

    I suppose people at Nordic ran the throughput demo many times and information about expected throughput should be easily available. But maybe I'm wrong.

  • Hi again

    I'm sorry about the delay on this ticket. I have not been able to take a proper look into this yet. I'll try to set up this test on my end this week and see if I can reproduce it. The main difference between the 1 Mbps PHY and Coded PHY is that you're restricted to 27-byte packets instead of the 247-byte MTU.

    The throughput sample has been tested on our side, and we've seen throughput similar to what the Novelbits blog post reports, but I'll run it on my end to make sure that's still the case on NCS 3.1.1.

    Best regards,

    Simon

  • Hi

    I have tested and confirmed this behavior (777 kbps, ~77 % on 1 Mbps PHY, and 55 kbps, ~44 % on Coded PHY) with the throughput sample yesterday, and it took discussing the results with a colleague to realize why the Coded PHY performs noticeably "worse" than the 1 Mbps PHY.

    It's rooted in how Coded PHY is built up. Essentially, with S=8 coding every bit of data is transmitted eight times, and the header and preamble overhead is coded the same way, which reduces the total throughput. Throughput was never the goal of Coded PHY; it is designed for range and link robustness over throughput performance.

    The math including all overhead etc. would be something like:

    • 1 M PHY after headers: 1 920 bits / 2.468 ms = 0.777 Mb/s (~78 % efficiency). 
    • S=8 PHY after headers: 1 920 bits / 17.644 ms = 108.8 kb/s (~87 % efficiency). But each coded packet already occupies 17.6 ms on air; with typical connection intervals (e.g. 30 ms) you can only fit one coded packet per event, so the rest of the interval is idle. That reduces user throughput to 1 920 bits / 30 ms = 64 kb/s (~51 % efficiency).
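The bullet-point arithmetic above can be reproduced directly (numbers taken from the post; the efficiency percentages appear to be relative to the 1 Mbps and 125 kbps raw rates, which is my reading of the figures):

```python
bits = 1920  # payload bits per packet, after headers (figure from the post)

# 1 Mbps PHY: 1920 bits take 2.468 ms on air including overhead
r1 = bits / 2.468e-3        # ~778,000 b/s
print(f"1M PHY: {r1/1e6:.3f} Mb/s, {r1/1e6*100:.0f} % of 1 Mb/s raw")

# Coded S=8 PHY: the same 1920 bits take 17.644 ms on air
r8 = bits / 17.644e-3       # ~108,800 b/s
print(f"S=8:    {r8/1e3:.1f} kb/s, {r8/125e3*100:.0f} % of 125 kb/s raw")

# With a 30 ms connection interval and only one packet per event:
r_eff = bits / 30e-3        # 64,000 b/s
print(f"S=8 @ 30 ms interval: {r_eff/1e3:.0f} kb/s, {r_eff/125e3*100:.0f} %")
```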

    Best regards,

    Simon

  • Hi

    Thank you very much for testing and analysis.

    I knew about the repetition, but it wasn't the whole answer. Yes, the problem is in large part the connection event being idle. However, I don't know how 17.6 ms was calculated. In my opinion it should be 2.3 ms for a packet, 0.6 ms for the ACK, and 2 × 0.3 ms for the inter-frame space. My calculator app is here. Anyway, the most important point seems to be the Link Layer PDU being limited by Nordic to 27 bytes. This makes it impossible to use the connection event more efficiently. I know it is done intentionally and it's not a bug; it was just hard to find. For uncoded PHY there is no such limitation and much better efficiency can be achieved.
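For reference, the S=8 on-air time can be reconstructed from the Coded PHY packet structure in the Bluetooth Core Specification (80 µs preamble; access address, CI and TERM1 always S=8-coded; one S=8 symbol lasts 8 µs). This is my own sketch, not the poster's calculator, and it assumes an unencrypted PDU (no MIC) and the spec's 150 µs T_IFS, so it lands near, but not exactly on, the 2.3 ms / 0.6 ms / 0.3 ms figures quoted above:

```python
def coded_s8_air_time_us(payload_bytes: int) -> int:
    """On-air duration of one LE Coded PHY S=8 packet, in microseconds."""
    # FEC block 1 (always coded S=8): access address 32 bits,
    # CI 2 bits, TERM1 3 bits; preceded by an 80 us preamble.
    fec1 = 80 + (32 + 2 + 3) * 8
    # FEC block 2 (S=8): 2-byte PDU header + payload, 24-bit CRC, 3-bit TERM2.
    pdu_bits = (2 + payload_bytes) * 8
    fec2 = (pdu_bits + 24 + 3) * 8
    return fec1 + fec2

data = coded_s8_air_time_us(27)   # 27-byte LL payload -> 2448 us (~2.45 ms)
ack = coded_s8_air_time_us(0)     # empty PDU -> 720 us (~0.72 ms)
t_ifs = 150                       # inter-frame space per the spec, in us
print(data, ack, data + t_ifs + ack + t_ifs)  # full exchange: 3468 us
```

So one 27-byte exchange costs roughly 3.5 ms of air time, which is consistent with only a couple of exchanges fitting into a short connection event on Coded PHY.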

    The efficiency for S=2 is worse than expected because the header is still coded at S=8 (repeated 8 times), just as in S=8 mode. This is why S=2 is not 4 times faster than S=8 mode.

  • Hi

    Thank you for your thorough feedback/conclusion Grzegorz. I'll take this up internally to see if we can make this clearer in our documentation somewhere.

    Best regards,

    Simon
