
BLE Transmission Latency

Hello, 

I have two nRF52832 (BL652) modules set up and connected through an RF cable. I am feeding the peripheral radio two 174-byte packets every 5 ms. On the central, I am monitoring the reception of these packets using the RTC and tracking the elapsed time between the two packets from the peripheral. I have noticed that the latency often climbs to 15 ms, although most of the time it is around 5 ms. Is there a reason for this? I would expect to see some latency due to retransmission, but not when I am connected through an RF cable.

I am using HVX notification transmissions from the peripheral, and occasionally sending packets from the central using a WRITE_CMD. My connection interval is set to 7.5 ms, my CONN_SUP_TIMEOUT to 1 second, and my SLAVE_LATENCY to 0.
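
For reference, those parameters look roughly like this in code; this is only a minimal sketch using the SDK's MSEC_TO_UNITS helper, and the macro names are just for illustration:

    #include "app_util.h"   /* MSEC_TO_UNITS, UNIT_1_25_MS, UNIT_10_MS */
    #include "ble_gap.h"    /* ble_gap_conn_params_t */

    #define MIN_CONN_INTERVAL  MSEC_TO_UNITS(7.5, UNIT_1_25_MS)   /* 7.5 ms */
    #define MAX_CONN_INTERVAL  MSEC_TO_UNITS(7.5, UNIT_1_25_MS)   /* 7.5 ms */
    #define SLAVE_LATENCY      0                                  /* no skipped connection events */
    #define CONN_SUP_TIMEOUT   MSEC_TO_UNITS(1000, UNIT_10_MS)    /* 1 second */

    static const ble_gap_conn_params_t m_conn_params =
    {
        .min_conn_interval = MIN_CONN_INTERVAL,
        .max_conn_interval = MAX_CONN_INTERVAL,
        .slave_latency     = SLAVE_LATENCY,
        .conn_sup_timeout  = CONN_SUP_TIMEOUT,
    };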

I modified the NUS example so that the main routine calls a new HVX transmission if the previous one ended with NRF_ERROR_RESOURCES. Should I move that to an APP_TIMER to give it higher priority?

I am confused as to why the latency would drift that far. Do I need to have all of the HVX packets loaded with the outgoing data before the beginning of the connection interval? Am I not able to load the HVX buffers with new data after the connection interval is underway? I was hoping that every other CI would have 4 packets of 174 bytes (10 ms) and therefore maintain the rate. Should I increase my CI to 15 ms?

As an odd side note, it seemed like I needed to increase NRF_SDH_BLE_GAP_EVENT_LENGTH to a large value (300) instead of 6, which is what the CI would suggest. That is the only way to get better latency; otherwise it was around ~30 ms.
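
(For context: NRF_SDH_BLE_GAP_EVENT_LENGTH lives in sdk_config.h and is expressed in 1.25 ms units, so the two values above translate roughly as shown below; 300 is simply the value from this test, not a recommendation.)

    // sdk_config.h -- the GAP event length is in units of 1.25 ms
    #define NRF_SDH_BLE_GAP_EVENT_LENGTH 6      // 6   * 1.25 ms =  7.5 ms (one full CI)
    //#define NRF_SDH_BLE_GAP_EVENT_LENGTH 300  // 300 * 1.25 ms =  375 ms (value used here)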

I would really appreciate some guidance.

Thanks,

Chris

Parents
  • "I modified the NUS example to where the main routine calls a new HVX transmission if the previous one ended with NRF_ERROR_RESOURCES. Should I move that to an APP_TIMER to give higher priority?"
    "Do I need to have all of the HVX packets loaded with the outgoing data before the beginning of the connection interval?"
    "Am I not able to load the HVX buffers with new data after the connection interval is underway?"

     - You need to load the HVX packets before the CI; otherwise, the SoftDevice will probably not schedule them during the ongoing connection interval. The SoftDevice team is away on vacation until the start of August, so we must wait for some more guidance.

    "I was hoping that every other CI would have 4 packets of 174 bytes (10ms), and therefore maintain the rate. Should I increase my CI to 15 ms?" 
    - That might help, maybe 10ms will work as well.

    "As an odd side note, it seemed like I needed to increase NRF_SDH_BLE_GAP_EVENT_LENGTH to a large value (300) as opposed to using 6 like the CI would require. That is the only way to get better latency, otherwise it was around ~30ms."
     - Do you have a sniffer trace of the initial connection establishment? I'd like to see what parameters are negotiated for connection interval, MTU size, DLE, and event length extensions. 

    FYI, there's an update coming to the BLE spec that will allow ~1 ms connection intervals for latency-critical use cases. I don't know when, though; I'm guessing by next year.
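
    Something like this in the application's BLE event handler could confirm what actually gets negotiated. This is only a sketch; the event and field names follow the S132/S140 SoftDevice headers, so adjust to your SDK version. The connection interval is reported in 1.25 ms units, and the initial parameters also arrive with BLE_GAP_EVT_CONNECTED.

        static void log_negotiated_params(ble_evt_t const * p_ble_evt)
        {
            switch (p_ble_evt->header.evt_id)
            {
                case BLE_GAP_EVT_CONN_PARAM_UPDATE:
                {
                    ble_gap_conn_params_t const * p =
                        &p_ble_evt->evt.gap_evt.params.conn_param_update.conn_params;
                    NRF_LOG_INFO("CI %d x 1.25 ms, latency %d, timeout %d x 10 ms",
                                 p->max_conn_interval, p->slave_latency, p->conn_sup_timeout);
                } break;

                case BLE_GAP_EVT_DATA_LENGTH_UPDATE:
                {
                    ble_gap_data_length_params_t const * p =
                        &p_ble_evt->evt.gap_evt.params.data_length_update.effective_params;
                    NRF_LOG_INFO("DLE: %d octets / %d us per TX PDU",
                                 p->max_tx_octets, p->max_tx_time_us);
                } break;

                case BLE_GATTS_EVT_EXCHANGE_MTU_REQUEST:
                    NRF_LOG_INFO("Client RX MTU: %d",
                                 p_ble_evt->evt.gatts_evt.params.exchange_mtu_request.client_rx_mtu);
                    break;

                default:
                    break;
            }
        }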

Children
  • I tried changing the CI to 10 ms and 15 ms, but it made the issue worse. I even tried to buffer up 6 packets before calling sd_ble_gatts_hvx, but that seemed to make matters worse as well.

    I finally increased my tx_queue to about 20, and that seemed to stabilize the peripheral side of things (see the configuration sketch at the end of this post). The transmissions coming from my peripheral are, on average, right around 5 ms. That side looks correct. I believe this also tells me that I am receiving the link-layer ACKs from the central on each transmission.

    On the central, it seems like my interrupt (BLE_NUS_C_EVT_NUS_TX_EVT) is arriving very sporadically. I am seeing those same 15 ms latency spikes every once in a while. I don't understand why this would be the case if my peripheral is working properly.

    The connection parameters are correct after negotiation as I described above.
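
    For reference, if the tx_queue here is the SoftDevice's HVN TX queue, increasing it happens at stack init roughly like this (just a sketch; APP_BLE_CONN_CFG_TAG and ram_start as in the SDK examples):

        ble_cfg_t ble_cfg;
        memset(&ble_cfg, 0, sizeof(ble_cfg));
        ble_cfg.conn_cfg.conn_cfg_tag                            = APP_BLE_CONN_CFG_TAG;
        ble_cfg.conn_cfg.params.gatts_conn_cfg.hvn_tx_queue_size = 20;   /* default is 1 */
        err_code = sd_ble_cfg_set(BLE_CONN_CFG_GATTS, &ble_cfg, ram_start);
        APP_ERROR_CHECK(err_code);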

  • Hi Kellac,

    A sniffer trace would really help in understanding the communication. The sniffer only requires an nRF52 DK.

    Could you explain how you send your data? What exactly is your tx_queue? Our suggestion is to always queue as much as possible (by calling sd_ble_gatts_hvx()) until you receive a NO_MEM error, and then try again after you receive the BLE_GATTS_EVT_HVN_TX_COMPLETE event (see the sketch at the end of this reply).

    It's not possible to have an actual latency of 5 ms when your minimum connection interval is 7.5 ms, but you can send more than one packet in one connection interval.

    I would suggest having a look at the ble_app_att_mtu_throughput example for high-throughput applications.
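
    A minimal sketch of that pattern, assuming a hypothetical application queue (app_peek_next_packet()/app_pop_packet()) and a notification characteristic whose value handle is m_tx_handle; error handling is trimmed:

        static void send_pending_packets(uint16_t conn_handle)
        {
            uint8_t  data[174];
            uint16_t len;

            /* Keep handing packets to the SoftDevice until its queue is full. */
            while (app_peek_next_packet(data, &len))          /* hypothetical application queue */
            {
                ble_gatts_hvx_params_t hvx = {0};
                hvx.handle = m_tx_handle;
                hvx.type   = BLE_GATT_HVX_NOTIFICATION;
                hvx.p_data = data;
                hvx.p_len  = &len;

                uint32_t err_code = sd_ble_gatts_hvx(conn_handle, &hvx);
                if (err_code == NRF_ERROR_RESOURCES)
                {
                    /* SoftDevice buffers are full: stop here and resume from the
                     * BLE_GATTS_EVT_HVN_TX_COMPLETE event. */
                    break;
                }
                APP_ERROR_CHECK(err_code);
                app_pop_packet();                             /* hypothetical: packet consumed */
            }
        }

        /* In the BLE event handler:
         *   case BLE_GATTS_EVT_HVN_TX_COMPLETE:
         *       send_pending_packets(p_ble_evt->evt.gatts_evt.conn_handle);
         *       break;
         */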

  • I have attempted to use the sniffer with the nRF52 DK, but it does not seem to keep up with my data rate. I am running a bonded link, and it seems to miss a lot of the packets. It didn't tell me much.

    My tx_queue is set to 20. The way I am currently transmitting is to send packets continuously until I get NRF_ERROR_RESOURCES from sd_ble_gatts_hvx(). Once this happens I return, and then try again in about 100 us (running on a timer). I continue this process forever. Just before I call sd_ble_gatts_hvx() I check whether an nrf_queue has packets waiting to go out. If the nrf_queue is empty, I return and try again in 100 us.

    A separate SPI interface is dropping packets into the nrf_queue at a rate of 2 packets / 5 ms (this is very consistent).

    Some questions:

    1) Do retransmissions continue to the next connection interval?

    2) I am operating at a range of about 6 meters; is that much latency typical? I am seeing occasional 30 ms latencies.

    Thanks,

  • I don't think retrying every 100 us should be needed. You should retry when you get the BLE_GATTS_EVT_HVN_TX_COMPLETE event.

    1) The retransmission is performed in the next connection interval, correct. Note that a long packet has the benefit of higher throughput, but it also has a higher risk of being corrupted by interference (probably not your case, since you have a cable), and it takes longer to retransmit. If you have a high-throughput requirement, it can be better to send multiple small packets (a rough time-on-air estimate is sketched at the end of this reply).

    2) You mean the RF cable is 6 meters long? I don't have much experience with RF cables and testing, but I think that is a little long. Still, I don't think 6 m can add any latency; we are talking about the speed of light here.
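
    As a rough illustration of the packet-length trade-off, here is a back-of-the-envelope time-on-air estimate for one 174-byte notification on the 1M PHY. It assumes DLE and an ATT MTU large enough to avoid fragmentation, plus an encrypted link (4-byte MIC); the numbers are estimates, not measurements:

        #include <stdio.h>

        int main(void)
        {
            const int ll_payload = 174 + 3 + 4;                 /* app data + ATT + L2CAP headers          */
            const int framing    = 1 + 4 + 2 + 4 + 3;           /* preamble, access addr, header, MIC, CRC */
            const int t_data_us  = (ll_payload + framing) * 8;  /* 1 Mbps -> 8 us per byte                 */
            const int t_ack_us   = 10 * 8;                      /* empty ACK PDU from the central          */
            const int t_ifs_us   = 2 * 150;                     /* inter-frame spacing, both directions    */

            printf("~%d us per notification, ~%d us for two\n",
                   t_data_us + t_ack_us + t_ifs_us,
                   2 * (t_data_us + t_ack_us + t_ifs_us));
            return 0;
        }

    That works out to roughly 1.9 ms per notification, so two of your packets should still fit comfortably inside a 7.5 ms connection event if the event length allows it.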

  • I was waiting on BLE_GATTS_EVT_HVN_TX_COMPLETE, but what I discovered was that sometimes I would get that event and not have any data ready to send. I would then get into a state where I could call sd_ble_gatts_hvx() twice. The timing was a little difficult to work out, but checking every 100 us seems to work well.

    My apologies, the 6 meters was using antennas. My initial question was why I was seeing latencies over an RF cable. I had latencies of around 15 ms on an RF cable, but I think I have since discovered that they are due to loading data every 5 ms while using a connection interval of 7.5 ms. Eventually, I would have a connection interval with no data ready for transmission.

    You mentioned that it would be better for me to lower the size of the payloads, and that might help with latency? I am trying to transmit two 174-byte packets every 7.5 ms. Should I break that up? Does that improve the latency? What would you recommend as an optimal transmission size?

    So if a packet is interfered with, the retransmission does not happen until the next connection interval? Does that also mean that all other packets behind it are held until the next connection interval?
