
Why a BLE read takes significantly longer than the Connection Interval

Hi All,

I have developed a device using the nRF51822, and the connection interval is set to ~1000ms.

When reading a characteristic from a Raspberry Pi, we time the duration of the read, and it seems to take around 2 seconds, give or take a few hundred milliseconds.

I would have expected it to take ~1000ms. Am I wrong in assuming this?

When I use the nRF Connect app I can't time it as accurately, but it looks like each read also takes about 2 seconds.

If multiple reads of different characteristics are attempted, each one happens sequentially, 2 seconds apart, meaning it takes about 10 seconds to read 5 characteristics.

Additionally, writing to one of the characteristics takes ~5 seconds.

All of these characteristics are only transferring ~4 bytes of data.

When I set the characteristics to Notify, they all happily update at ~1 second intervals, as expected.

The Slave Latency is set to 0.

If anyone can offer any explanation around why this is occurring, it would be greatly appreciated.

Cheers, -Steve

  • Hi Emil

    While it would be technically possible to prepare the response in time, the problem here is that the request has to go from the link layer in the controller up to the GATT module in the host and back again before the response can be sent.

    Bluetooth is designed so that the controller handles the time-critical operations while the host does not have to. Communication between the controller and the host is therefore not assumed to be very quick.

    In many cases controller-to-host communication actually crosses a physical interface (such as UART or USB), which limits the communication speed. In the SoftDevice the communication happens internally and is much quicker, but the host processing runs at a lower interrupt priority and can be delayed by application interrupts, which has the same effect.

    For this reason, typically only packets that were already buffered before the connection event will be sent during that event.
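To put rough numbers on this mechanism (a back-of-envelope sketch with a helper name of my own, not anything guaranteed by the stack):

```python
def att_read_round_trip_ms(conn_interval_ms, num_reads=1):
    """Rough timing model for GATT reads.

    The Read Request goes out in one connection event, but the response
    has to travel from the link layer up to GATT in the host and back,
    so it misses that event and is sent in the next one: roughly two
    connection intervals per read. ATT transactions are sequential, so
    multiple reads do not pipeline.
    """
    return 2 * conn_interval_ms * num_reads

print(att_read_round_trip_ms(1000))     # one read: ~2000 ms
print(att_read_round_trip_ms(1000, 5))  # five reads: ~10000 ms
```

With a 1000 ms connection interval this gives ~2 seconds per read and ~10 seconds for 5 characteristics, matching the timings reported in the question.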

    It is theoretically possible for the response to be sent in the same event if you already have two or more packets in the buffer, since that would give the host time to process the request before the last packet in the queue is reached, but this hasn't been tested.

    If lower latency is a requirement I would recommend using notifications, or reducing the connection interval.
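As a rough comparison of those two options (same back-of-envelope reasoning as above: a read costs about two connection intervals, while a notification is queued by the peripheral before the event and goes out on the next one; the helper names are my own):

```python
def read_latency_ms(conn_interval_ms):
    # Request in one event, response in the next: roughly two intervals.
    return 2 * conn_interval_ms

def notify_latency_ms(conn_interval_ms):
    # The notification is queued before the connection event, so it is
    # transmitted on the next event: roughly one connection interval.
    return conn_interval_ms

print(read_latency_ms(1000))    # reads at the current interval: ~2000 ms
print(notify_latency_ms(1000))  # notifications at the same interval: ~1000 ms
print(read_latency_ms(100))     # reads with a 100 ms interval: ~200 ms
```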

    Best regards
    Torbjørn

  • Hi Emil

    I agree this would be a nice feature. Assuming you use bonding, service discovery doesn't happen very often, but reducing connection time is always good.

    One of the stack engineers tried some optimizations to see if there was a straightforward way to achieve this, but unfortunately couldn't find one. Currently, the various housekeeping and processing tasks that occur after the packet is received mean the host doesn't have time to prepare the response in time.

    In other words, we would have to make more radical changes to the scheduling of tasks in the link layer to support this, and that is not trivial. Changing the link layer timing could have unintended side effects on the higher layers, which makes it a risky and potentially resource-demanding thing to do.

    So I am not very optimistic that this is something we will try to improve, but we are of course supportive if people like yourself want to make alternative stacks with different feature sets :)

    Best regards
    Torbjørn

