Our device has a current limit in the range of 1 mA. We have examined several ways to constrain the connection parameters to keep current consumption low, but you will always find central devices that do not behave the "expected" way.
So the current idea is to cap current peaks by skipping connection events via slave latency.
Unfortunately our communication is bidirectional, so the mouse example does not really fit.
I am now wondering whether the following setup is legal, because the documentation is not very clear on this, and tests suggest it might work:
"allow communication" is done by "sd_ble_opt_set( BLE_GAP_OPT_LOCAL_CONN_LATENCY, latency=0 )",
"delay communication" is done by "sd_ble_opt_set( BLE_GAP_OPT_LOCAL_CONN_LATENCY, latency=3 )".
Comments / suggestions are highly welcome!
If you are able to work around this by using the local connection latency feature until the connection parameter update is finished, you can do that. But be aware that there is a risk of losing the connection if you use the local connection latency feature. Then again, if you run out of power, I guess you end up in the same situation...
Unfortunately the local slave latency does not seem to work... Is there another way to make the SoftDevice ignore connection events?
EDIT: I'm sorry, there is no other way to force the SoftDevice to ignore connection events. What do you mean when you say it does not work? What is returned by the SD call? And are you following the guidelines in the documentation?
What do you mean by "not unless you use the standard slave latency"?
The standard slave latency is 0; the SD call returns NRF_SUCCESS, and p_actual_latency shows the requested value.
But unfortunately I cannot see any effect, i.e. radio on/off events still occur at every connection interval.
Can you also confirm that your application does not try to send any data at the time it is supposed to ignore the connection event?
The picture below shows connection establishment done with LightBlue Explorer. With our application the graph is quite similar, except that the actual communication starts on the right side.
But as already said, our problem is connection establishment: Android (>= 7) switches to a high-speed connection interval during GATT queries, and there is nothing we can do about how it sets the connection interval or how the SoftDevice responds.
For the record: for the high-speed interval in the center and right parts of the image, the local slave latency was set to 4.
The image shows the module current measured across a 200 Ω series resistor. There are some capacitors on the module, so the trace does not show the actual current consumption but the recharging current.
Another note: this connection attempt succeeds, but the 1 mA limit is close, and there is really no margin left for CPU activity.
My idea would be to solve this with a forced local slave latency that skips some connection events even if there are packets pending in the queue.
Any suggestions are welcome.
PS: the peak on the left shows the BLE connect event; in the center of the image the connection parameters are (6, 6, 0, 2000), and on the right side they are roughly (32, 32, 0, 2000).
I understand. It seems the local slave latency is overridden during the service discovery procedure, so it will not help your application. Unfortunately there is no option to enforce local slave latency. I think the last possible solution would be to use the Timeslot API in an effort to block BLE communication. Could you try setting up a timeslot event with high priority and see if this helps to limit BLE traffic?
See sd_radio_request and sd_radio_session_open.