
Android-nRF-UART app connection interval/bandwidth configuration

Hi,

I hope this is not a duplicate post. I am doing a throughput evaluation with my custom board and an app based on Nordic's Android UART app over the NUS service, and I wanted to make sure this app can handle the requirements for fast data transfer: connection interval, bandwidth, and connection event extension. Please let me know if there are other parameters that matter.

Vala

Edit: I guess the bandwidth, i.e. the number of packets per connection interval, is essentially determined and restricted by the Android OS. Am I right?

  • Hi Vala,

    On the nRF5 side, besides the connection interval, bandwidth, and connection event extension, how quickly you queue your data is also important for throughput. You should send as many packets as possible in a single connection event. The usual approach is to queue packets until the buffer is full and queue more once a packet has been sent (TX_COMPLETE event); see the sketch at the end of this reply. It is also better to send packets with a full 20-byte payload than several packets with small payloads.

    On the Android side, you can also try using requestConnectionPriority() to request high priority for the connection. The number of packets per connection event is limited by the hardware on the device; what is important is to make use of as much of that limit as possible.
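
    A rough sketch of that queuing pattern on the nRF5 side, assuming the SDK 11/12-era ble_nus_string_send() API and the BLE_EVT_TX_COMPLETE event (newer SDKs and SoftDevices use different names), here with a dummy test payload:

    ```c
    #include <string.h>
    #include "ble_nus.h"
    #include "app_error.h"

    #define PAYLOAD_SIZE 20                 /* full ATT payload with the default 23-byte MTU */

    extern ble_nus_t m_nus;                 /* NUS instance defined elsewhere in the application */

    /* Keep queuing 20-byte test packets until the SoftDevice TX buffers are full. */
    static void fill_tx_buffers(void)
    {
        static uint8_t counter = 0;
        uint8_t        packet[PAYLOAD_SIZE];

        for (;;)
        {
            memset(packet, counter, PAYLOAD_SIZE);       /* dummy payload for the throughput test */

            uint32_t err_code = ble_nus_string_send(&m_nus, packet, PAYLOAD_SIZE);
            if (err_code == BLE_ERROR_NO_TX_PACKETS)     /* TX buffers full: stop and wait */
            {
                break;
            }
            APP_ERROR_CHECK(err_code);
            counter++;
        }
    }

    /* Called from the application's BLE event dispatcher. */
    static void on_ble_evt(ble_evt_t * p_ble_evt)
    {
        if (p_ble_evt->header.evt_id == BLE_EVT_TX_COMPLETE)
        {
            fill_tx_buffers();                           /* buffers freed: immediately queue more */
        }
    }
    ```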

  • Hi Hung,

    Thanks for your answer. requestConnectionPriority() is used to update the connection parameters, right? I forced the peripheral to send a connection parameter request at the start of the connection, so the connection is already at its upper limits (interval, latency). Will requestConnectionPriority() bring any further improvement?

    I was able to send a reasonable amount of data with almost no packet loss. I reached 330 packets per second with an Android device (Acer) as the peer. From a sniffer trace I observed that 3 packets are transmitted in most connection intervals, and at higher speeds I even saw 4 packets per connection interval. I had to use a ring buffer (sketched below) to avoid dropping packets when the next packet arrives before the previous one has been sent (or placed in the TX buffer).
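
    For reference, such a ring buffer can be as simple as the following sketch (plain C, illustrative only, one slot per 20-byte packet):

    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define PACKET_SIZE  20
    #define RING_PACKETS 32               /* power of two so the indices wrap cheaply */

    typedef struct
    {
        uint8_t           data[RING_PACKETS][PACKET_SIZE];
        volatile uint32_t head;           /* advanced by the producer (data source)   */
        volatile uint32_t tail;           /* advanced by the consumer (BLE TX path)   */
    } packet_ring_t;

    static packet_ring_t m_ring;

    /* Producer: store a newly generated packet; returns false if the ring is full. */
    static bool ring_put(const uint8_t * p_packet)
    {
        if ((m_ring.head - m_ring.tail) >= RING_PACKETS)
        {
            return false;                               /* full: the packet would be dropped */
        }
        memcpy(m_ring.data[m_ring.head % RING_PACKETS], p_packet, PACKET_SIZE);
        m_ring.head++;
        return true;
    }

    /* Consumer: fetch the oldest packet to hand to the TX buffer; false if empty. */
    static bool ring_get(uint8_t * p_packet)
    {
        if (m_ring.head == m_ring.tail)
        {
            return false;
        }
        memcpy(p_packet, m_ring.data[m_ring.tail % RING_PACKETS], PACKET_SIZE);
        m_ring.tail++;
        return true;
    }
    ```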

  • One question: is "bandwidth" in this context a SIG-defined term, or is it related to the SoftDevice definitions? I want to know whether, for example, Android devices (or other non-Nordic devices) also have a term called "bandwidth" that means "the allowed number of packets per connection interval".

  • Hi Vala,

    Correct, requestConnectionPriority() is used to update the connection parameters, the connection interval in particular.

    You can force the peripheral to send a connection parameter update request at the start of the connection, but whether that request is accepted is up to the central, and different phones behave differently here (a sketch of such a request is at the end of this reply). So requestConnectionPriority() should still be called, just to make sure the phone gives the connection more priority.

    Yes, you should use a buffer, and you should also implement a packet receipt notification the same way we did in our DFU OTA protocol: after sending a number of packets, say 20, the central stops and waits for a notification from the peripheral saying it is ready to receive more (see the flow control sketch below).

    I assume the "bandwidth" you mention here is the "low"/"mid"/"high" bandwidth we set with the opt API (see the bandwidth option sketch below)? If so, it is a Nordic term that only relates to configuration in the SoftDevice.

    I don't think there is an equivalent "bandwidth" setting on Android, apart from the "priority" in requestConnectionPriority(). But I am not 100% sure whether requestConnectionPriority() only sets the connection interval or also affects the number of packets per connection event.
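
    For reference, the peripheral-side connection parameter request mentioned above usually looks roughly like this (a sketch using sd_ble_gap_conn_param_update(); the values are illustrative and the central may still reject or modify them):

    ```c
    #include "ble_gap.h"
    #include "app_util.h"
    #include "app_error.h"

    /* Request a short connection interval; the values below are only examples. */
    static void request_fast_connection(uint16_t conn_handle)
    {
        ble_gap_conn_params_t conn_params;

        conn_params.min_conn_interval = MSEC_TO_UNITS(7.5, UNIT_1_25_MS);  /* 7.5 ms */
        conn_params.max_conn_interval = MSEC_TO_UNITS(15, UNIT_1_25_MS);   /* 15 ms  */
        conn_params.slave_latency     = 0;
        conn_params.conn_sup_timeout  = MSEC_TO_UNITS(4000, UNIT_10_MS);   /* 4 s    */

        /* The central decides whether to accept these parameters. */
        uint32_t err_code = sd_ble_gap_conn_param_update(conn_handle, &conn_params);
        APP_ERROR_CHECK(err_code);
    }
    ```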
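
    And a sketch of the DFU-style packet receipt notification on the receiving (here: peripheral) side, assuming the SDK 11/12-era NUS data handler; the batch size and the 1-byte token are illustrative, and in the reversed scenario (peripheral streaming to the phone) the roles simply swap, with the phone writing the acknowledgement:

    ```c
    #include "ble_nus.h"
    #include "app_error.h"

    #define PKT_RECEIPT_INTERVAL 20      /* illustrative: acknowledge every 20 packets */

    /* NUS data handler on the receiving side: count packets and acknowledge in
     * batches so the sender knows when it may transmit the next batch. */
    static void nus_data_handler(ble_nus_t * p_nus, uint8_t * p_data, uint16_t length)
    {
        static uint32_t packets_received = 0;

        /* ... consume p_data here ... */

        packets_received++;
        if ((packets_received % PKT_RECEIPT_INTERVAL) == 0)
        {
            uint8_t  token    = 0x01;    /* 1-byte "ready for more" token */
            uint32_t err_code = ble_nus_string_send(p_nus, &token, sizeof(token));
            APP_ERROR_CHECK(err_code);
        }
    }
    ```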
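
    Finally, the "low"/"mid"/"high" bandwidth option is set with sd_ble_opt_set(). A sketch, assuming an S130 v2.x / S132 v2.x-3.x SoftDevice (newer SoftDevices configure the connection event length with sd_ble_cfg_set() instead); note that the matching bandwidth counts also have to be reserved in ble_enable_params_t when calling sd_ble_enable():

    ```c
    #include <string.h>
    #include "ble.h"
    #include "app_error.h"

    /* Request the "high" bandwidth configuration for links where we are the peripheral.
     * Must be called before the connection is established (e.g. before advertising). */
    static void conn_bw_high_set(void)
    {
        ble_opt_t opt;

        memset(&opt, 0, sizeof(opt));
        opt.common_opt.conn_bw.role               = BLE_GAP_ROLE_PERIPH;
        opt.common_opt.conn_bw.conn_bw.conn_bw_rx = BLE_CONN_BW_HIGH;
        opt.common_opt.conn_bw.conn_bw.conn_bw_tx = BLE_CONN_BW_HIGH;

        uint32_t err_code = sd_ble_opt_set(BLE_COMMON_OPT_CONN_BW, &opt);
        APP_ERROR_CHECK(err_code);
    }
    ```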

  • That was very useful information Hung, thanks. And yes, I meant the bandwidth as "low", "mid" and "high". A question about this packet receipt notification: my scenario is actually the other way around. The peripheral sends the information and the central receives it (so I guess the peripheral is the server and the central a client). What benefit can this notification bring to my system? I mean, can there be a situation in which the central is no longer ready to receive but the connection is still alive? If not, then I can detect that situation just by detecting the disconnection, without compromising the packet rate with a wait-for-notification from the central to the peripheral. I know that in all protocols some kind of application-layer handshaking brings reliability to the system, but here, I guess, the cost is a lower packet rate. Am I right?
