"Time-boxing"/limiting active portion of BLE connection interval?

Assume a BLE connection has been established between a master and slave.  The device in question could be either the master or the slave (if it matters).

I have an application with tight timing requirements that cannot allow the active/transmitting portion (including T_IFS periods) of a BLE connection interval to bleed into other periodic processing and radio usage.  That is, I need to be able to reliably shut off all BLE activity and resource usage (e.g. the RADIO peripheral) outside of a small "duty cycle" at the beginning of the BLE connection interval.  I guess I am trying to design the application to be more deterministic and round-robin, rather than RTOS-style/pre-emptive.

As I currently understand it, the BLE stack (which is opaque in the case of the Nordic SoftDevice and iOS) is required to perform retransmits in the case of lost packets.  As far as I am aware, the BLE spec does not limit how many retransmit attempts may occur within a connection interval.  On the other hand, I am not aware of any adjustable parameters in the SoftDevice or smartphone BLE implementations that allow the application to forcefully end the connection event or specify the maximum number of frames/packets per connection interval.  Is this possible?  (Here is a similar question: https://devzone.nordicsemi.com/f/nordic-q-a/2541/number-of-ble-packets-per-connection-interval)

I am also aware of the timeslot API, but it's not clear if the timeslot is reliably "given", or if the BLE could take precedence in the case of a large number of re-transmits.  Can the timeslot API provide a guaranteed periodic window (of <100ms) without any BLE processor/radio usage?  If so, does that require the radio to shut off even earlier than the timeslot period to perform necessary "post-processing", or would I have to empirically measure the "on time" of the SoftDevice and make the timeslot duration reasonably accommodating based on that?

I have read that the SoftDevice and the smartphone stacks have built-in limits on the number of packets/frames that can be sent/received per connection event due to buffer size (e.g. "6" in the case of recent SoftDevice libraries).  But does that limit re-transmits?  In any case, simply hoping that my current SoftDevice version or the current smartphone OS version keeps the used portion of the connection interval acceptably small does not feel like a strong enough guarantee.

----

Edit: So I went through a SoftDevice spec (S132_SDS_v6.0), and saw the following in section 10.4: "By default, connections are set to have an event length of 3.75 ms".  Therefore, I'm hoping all SoftDevices have a similar "hard limit" on event length even in the case of no successful transmits (assuming event length extension is disabled)?
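For reference, here is roughly how I plan to pin the event length and keep the extension disabled (a sketch based on my reading of the S132 v6 headers; the connection configuration tag and ram_start are placeholders from my own project):

```c
#include <string.h>
#include "ble.h"
#include "app_error.h"

#define APP_CONN_CFG_TAG 1   /* placeholder connection configuration tag */

/* Call after sd_softdevice_enable() but before sd_ble_enable(). */
static void conn_event_length_cfg(uint32_t ram_start)
{
    ble_cfg_t ble_cfg;
    memset(&ble_cfg, 0, sizeof(ble_cfg));

    /* Event length is in 1.25 ms units; 3 => 3.75 ms (the documented default). */
    ble_cfg.conn_cfg.conn_cfg_tag                     = APP_CONN_CFG_TAG;
    ble_cfg.conn_cfg.params.gap_conn_cfg.conn_count   = 1;
    ble_cfg.conn_cfg.params.gap_conn_cfg.event_length = 3;
    APP_ERROR_CHECK(sd_ble_cfg_set(BLE_CONN_CFG_GAP, &ble_cfg, ram_start));
}

/* Call after sd_ble_enable(): keep connection event length extension off so
 * the SoftDevice cannot stretch an event beyond the configured length. */
static void conn_event_extension_disable(void)
{
    ble_opt_t opt;
    memset(&opt, 0, sizeof(opt));
    opt.common_opt.conn_evt_ext.enable = 0;
    APP_ERROR_CHECK(sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &opt));
}
```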

I also saw in Table 30 that all SoftDevice BLE connection activity occurs at the same or higher priority than timeslot API events.  Therefore, I must "trust" that the SoftDevice will hand over control after its event length period (+ post-processing time?).  Section 15.9 says timeslot API timeslots can be taken over by the SoftDevice if necessary, which is what I'm concerned about given its higher priority.

Also, I'm assuming that to avoid peer clock drift shifting the BLE connection event in time, the device running the application in question should be configured in central/master mode.  If this is not required (i.e. there is a way to still regularly space out timeslot API events after possibly drifting peripheral/slave connection events), that would be good to know.

Thanks

  • Hello,

     

    As I currently understand it, the BLE stack (which is opaque in the case of the Nordic SoftDevice and iOS) is required to perform retransmits in the case of lost packets.  As far as I am aware, the BLE spec does not limit how many retransmit attempts may occur within a connection interval.

    Yes, it is required to perform retransmits. In fact, it is not possible (by the BLE specification) to set a limit on how many retransmits occur. If you have sent a packet and it is not ACKed, it will be retransmitted until it is ACKed, or "die" trying. The only way a packet is dismissed is if the connection is lost.

     

    I am also aware of the timeslot API, but it's not clear if the timeslot is reliably "given"

    You can use the Timeslot API to request timeslots, but it is the softdevice that decides whether or not you will be granted those timeslots. The softdevice always has first priority, and it hands out timeslots only when it is not using the radio itself. So if you are granted a timeslot (<100ms) you will get it, because the softdevice didn't plan to use the radio then, but there is no guarantee that you will be granted a timeslot at a specific time if the softdevice plans to use the radio at that point in time.
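    For illustration, a minimal periodic timeslot request could look roughly like the sketch below. The slot length and distance are placeholder values, and handling of the NRF_EVT_RADIO_BLOCKED / NRF_EVT_RADIO_CANCELED SoC events (which you receive when the softdevice could not grant a slot) is left out here:

    ```c
    #include <stdint.h>
    #include "nrf.h"
    #include "nrf_soc.h"
    #include "app_error.h"

    /* Placeholder values: ask for a 2 ms slot every 10 ms. */
    #define SLOT_LENGTH_US    2000
    #define SLOT_DISTANCE_US  10000

    nrf_radio_request_t                             m_timeslot_request;
    static nrf_radio_signal_callback_return_param_t m_return_param;

    static nrf_radio_signal_callback_return_param_t * radio_signal_callback(uint8_t signal_type)
    {
        switch (signal_type)
        {
            case NRF_RADIO_CALLBACK_SIGNAL_TYPE_START:
                /* Slot granted: arm TIMER0 to fire shortly before the slot ends,
                 * then do the time-critical (non-BLE) work from here. */
                NRF_TIMER0->TASKS_STOP        = 1;
                NRF_TIMER0->TASKS_CLEAR       = 1;
                NRF_TIMER0->MODE              = TIMER_MODE_MODE_Timer << TIMER_MODE_MODE_Pos;
                NRF_TIMER0->BITMODE           = TIMER_BITMODE_BITMODE_24Bit << TIMER_BITMODE_BITMODE_Pos;
                NRF_TIMER0->PRESCALER         = 4;   /* 1 us per tick */
                NRF_TIMER0->EVENTS_COMPARE[0] = 0;
                NRF_TIMER0->CC[0]             = SLOT_LENGTH_US - 200;
                NRF_TIMER0->INTENSET          = TIMER_INTENSET_COMPARE0_Msk;
                NVIC_EnableIRQ(TIMER0_IRQn);         /* delivered as the TIMER0 signal */
                NRF_TIMER0->TASKS_START       = 1;
                m_return_param.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_NONE;
                break;

            case NRF_RADIO_CALLBACK_SIGNAL_TYPE_TIMER0:
                /* Slot is about to expire: request the next one SLOT_DISTANCE_US
                 * after the start of this one, and end the current slot. */
                NRF_TIMER0->EVENTS_COMPARE[0] = 0;
                m_timeslot_request.request_type              = NRF_RADIO_REQ_TYPE_NORMAL;
                m_timeslot_request.params.normal.hfclk       = NRF_RADIO_HFCLK_CFG_XTAL_GUARANTEED;
                m_timeslot_request.params.normal.priority    = NRF_RADIO_PRIORITY_NORMAL;
                m_timeslot_request.params.normal.distance_us = SLOT_DISTANCE_US;
                m_timeslot_request.params.normal.length_us   = SLOT_LENGTH_US;
                m_return_param.params.request.p_next = &m_timeslot_request;
                m_return_param.callback_action       = NRF_RADIO_SIGNAL_CALLBACK_ACTION_REQUEST_AND_END;
                break;

            default:
                m_return_param.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_NONE;
                break;
        }
        return &m_return_param;
    }

    static void timeslots_start(void)
    {
        APP_ERROR_CHECK(sd_radio_session_open(radio_signal_callback));

        /* The very first request just asks for the earliest available slot. */
        m_timeslot_request.request_type               = NRF_RADIO_REQ_TYPE_EARLIEST;
        m_timeslot_request.params.earliest.hfclk      = NRF_RADIO_HFCLK_CFG_XTAL_GUARANTEED;
        m_timeslot_request.params.earliest.priority   = NRF_RADIO_PRIORITY_NORMAL;
        m_timeslot_request.params.earliest.length_us  = SLOT_LENGTH_US;
        m_timeslot_request.params.earliest.timeout_us = 100000;   /* give up after 100 ms */
        APP_ERROR_CHECK(sd_radio_request(&m_timeslot_request));
    }
    ```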

     

    I have read that the SoftDevice and the smartphone stacks have built-in limits on the number of packets/frames that can be sent/received per connection event due to buffer size (e.g. "6" in the case of recent SoftDevice libraries).  But does that limit re-transmits?

    It is correct that there is a maximum amount of data that can be transmitted in each connection event. However, the amount of new data gets smaller when there are a lot of retransmissions. In fact, the softdevice works a bit like this:

    - The application decides to send a packet over BLE. It queues this packet using some softdevice calls (typically sd_ble_gatts_hvx() if you use notifications).

    - The application may want to send more data even though the previous data hasn't actually been transferred yet. It can do so using sd_ble_gatts_hvx(). This call returns 0 (NRF_SUCCESS) if there is room in the softdevice queue, in which case the packet is added to the queue of packets that the softdevice will send. The softdevice will then try to send the queued packets on each connection event until they are ACKed, or "die" trying.
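    Roughly, that queueing step could look like this in the application (the handles and data are placeholders; on recent softdevices a full queue is reported as NRF_ERROR_RESOURCES, and you retry after the BLE_GATTS_EVT_HVN_TX_COMPLETE event):

    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include "ble.h"
    #include "app_error.h"

    /* Hypothetical helper: try to queue one notification.  Returns true if the
     * softdevice accepted it, false if its TX buffers are currently full. */
    static bool notification_try_send(uint16_t conn_handle, uint16_t value_handle,
                                      uint8_t const * p_data, uint16_t length)
    {
        ble_gatts_hvx_params_t hvx_params;
        memset(&hvx_params, 0, sizeof(hvx_params));

        hvx_params.handle = value_handle;
        hvx_params.type   = BLE_GATT_HVX_NOTIFICATION;
        hvx_params.p_len  = &length;
        hvx_params.p_data = p_data;

        uint32_t err_code = sd_ble_gatts_hvx(conn_handle, &hvx_params);
        if (err_code == NRF_SUCCESS)
        {
            return true;    /* queued; the softdevice (re)transmits until ACKed */
        }
        if (err_code == NRF_ERROR_RESOURCES)
        {
            return false;   /* queue full; retry after BLE_GATTS_EVT_HVN_TX_COMPLETE */
        }
        APP_ERROR_CHECK(err_code);
        return false;
    }
    ```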

    I don't think these are the answers you wanted to hear, based on your question, but it is the way that our softdevice works. 

    Best regards,

    Edvin

  • Thanks for the response, Edvin.

    Just to clarify, I don't mind the BLE retransmitting in general (as I understand this is required by BLE), but I would like to limit the amount of time allowed to do any transmits/retransmits per connection interval.

    Regarding the edit in the OP, could you verify whether the event length will stay within the "event length" setting (e.g. default of 3.75ms for modern SD versions) with event length extension disabled, or whether continually failing transmits (i.e. retransmits required) would potentially increase the total event length?  I'm hoping that if I configure the device to run in master mode (i.e. no local anchor point drift), and request periodic timeslots for all times except for the connection events + some buffer time for BLE/SD post-processing, I will never be preempted by the SD.

    ----

    As an extreme example, I assume if I run application code at priority level 0 (contrary to what the SDS requires) at times when the SD "should be" inactive (e.g. based on event length time such as 3.75ms and periodic timeslot request), then the application code would nullify the BT certification of the SD (or the SD might crash)?

    The other alternative I'm considering (though I would strongly prefer not to) would be to run the BLE stack on a separate chip with a separate antenna, though I'm not sure what sort of RF interference issues might arise with two independent antennas both transmitting in the 2.4 GHz ISM band.

  • Just to answer some of the other questions I had:

    "Also, I'm assuming that to avoid peer clock drift shifting the BLE connection event in time, the device running the application in question should be configured in central/master mode.  If this is not required (i.e. there is a way to still regularly space out timeslot API events after possibly drifting peripheral/slave connection events), that would be good to know."

    You can use the ACTIVE/nACTIVE radio notifications (available at least on SD 7.0.1) to account for master drift at the slave.  However, note that the master may still dictate undesirable connection interval values if you do not have control of/visibility into the master stack components (as is the case with iOS/Android), so to be safe you should configure your device in master mode.

    "If so, does that require the radio to shut off even earlier than the timeslot period to perform necessary "post-processing", or would I have to empirically measure the "on time" of the SoftDevice and make the timeslot duration reasonably accommodating based on that?"

    The event length includes "processing overhead" beyond the radio event ("radio on") time; see Table 22 of SDS 7.1.  According to Figure 11 of SDS 7.1, the processing overhead in "t_event" also includes t_prep (maximum of 1542 us).  Therefore, if the event length value is respected, all you would need to do is start your application code/radio usage one event length after an ACTIVE radio notification, with t_ndist = t_prep,max = 1542 us.
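    In case it helps anyone else, this is roughly how I plan to hook up that notification, using the ble_radio_notification helper module from the nRF5 SDK. Since 1542 us is not one of the available t_ndist values, I round up to 1740 us:

    ```c
    #include <stdbool.h>
    #include "ble_radio_notification.h"
    #include "app_util_platform.h"
    #include "app_error.h"

    /* Runs in interrupt context.  radio_active == true is the ACTIVE signal,
     * delivered t_ndist before the SoftDevice's radio event; false is nACTIVE. */
    static void radio_notification_handler(bool radio_active)
    {
        if (radio_active)
        {
            /* The SD radio event starts in ~1.74 ms.  Plan the application's
             * own processing window to begin one event length after that. */
        }
    }

    static void radio_notification_setup(void)
    {
        /* 1542 us is not an available distance, so round up to 1740 us. */
        APP_ERROR_CHECK(ble_radio_notification_init(APP_IRQ_PRIORITY_LOW,
                                                    NRF_RADIO_NOTIFICATION_DISTANCE_1740US,
                                                    radio_notification_handler));
    }
    ```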

  • abc said:
    Regarding the edit in the OP, could you verify whether the event length will stay within the "event length" setting

    Yes. It will stay within the event length setting that the two devices (master and slave) have agreed upon. If one device "doesn't support" DLE (data length extension), then the other device will not dictate it. So no, retransmits will not increase the event length.

     

    abc said:
    As an extreme example, I assume if I run application code at priority level 0 (contrary to what the SDS requires) at times when the SD "should be" inactive (e.g. based on event length time such as 3.75ms and periodic timeslot request), then the application code would nullify the BT certification of the SD (or the SD might crash)?

    I understand what you mean here, but don't do that. If for some reason the softdevice requires this timeslot, maybe because of drift in the connected device's clock or something, then the softdevice will assert, and the application will reset. That is probably not desirable in any situation.

    Instead, you can use the timeslot API to request timeslots, and use radio notifications to be told whenever the radio is about to be used, so that you can place your own slots in between the SD events.
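    As a rough sketch (assuming an nrf_sdh-based project, and with m_timeslot_request being the same request structure as in the timeslot example earlier in this thread), a blocked or cancelled slot can simply be re-requested from the SoC event handler:

    ```c
    #include <stdint.h>
    #include "nrf_soc.h"
    #include "nrf_sdh_soc.h"
    #include "app_error.h"

    extern nrf_radio_request_t m_timeslot_request;   /* the application's timeslot request */

    /* If a requested timeslot was blocked or cancelled (e.g. because a BLE
     * connection event needed the radio), simply request it again. */
    static void timeslot_soc_evt_handler(uint32_t evt_id, void * p_context)
    {
        switch (evt_id)
        {
            case NRF_EVT_RADIO_BLOCKED:
            case NRF_EVT_RADIO_CANCELED:
                APP_ERROR_CHECK(sd_radio_request(&m_timeslot_request));
                break;

            default:
                break;
        }
    }

    NRF_SDH_SOC_OBSERVER(m_timeslot_soc_obs, 0, timeslot_soc_evt_handler, NULL);  /* priority 0 is a placeholder */
    ```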

     

    abc said:
    "Also, I'm assuming that to avoid peer clock drift shifting the BLE connection event in time, the device running the application in question should be configured in central/master mode.

    Yes, that is correct. The master is the one that controls the connection timing and initiates the first TX of each connection interval. However, there may still be some drift overall. When you start the application you are not yet connected: you will start scanning and then connect to an advertising device. When that connection is established, the connection parameters are set and the clock starts running. Whether that happens 1 second or 1.02 seconds after the application starts is not up to the master to control; it depends on the advertisements of the peripheral.

    Are you sure you need to keep these strict timing requirements in your application? What is it that you need to do that is so time sensitive? Have you looked into other ways of doing these time-critical operations? If it is a sensor you need to read every X ms, perhaps you can set up a timer and use PPI to trigger that read?
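    For example, a bare-register sketch of that idea (TIMER1, PPI channel 0 and the SAADC are only placeholders for whatever timer and peripheral your sensor read actually uses; the softdevice reserves TIMER0 and the upper PPI channels, so these should be free):

    ```c
    #include <stdint.h>
    #include "nrf.h"

    /* TIMER1 generates a compare event every 10 ms, and PPI channel 0 routes
     * it directly to the SAADC SAMPLE task, so the sample is triggered on time
     * regardless of what the CPU or the softdevice is doing. */
    static void periodic_sample_setup(void)
    {
        NRF_TIMER1->MODE      = TIMER_MODE_MODE_Timer << TIMER_MODE_MODE_Pos;
        NRF_TIMER1->BITMODE   = TIMER_BITMODE_BITMODE_32Bit << TIMER_BITMODE_BITMODE_Pos;
        NRF_TIMER1->PRESCALER = 4;                                /* 16 MHz / 2^4 = 1 MHz, 1 us ticks */
        NRF_TIMER1->CC[0]     = 10000;                            /* 10 ms period */
        NRF_TIMER1->SHORTS    = TIMER_SHORTS_COMPARE0_CLEAR_Msk;  /* restart the timer on compare */

        NRF_PPI->CH[0].EEP = (uint32_t)&NRF_TIMER1->EVENTS_COMPARE[0];
        NRF_PPI->CH[0].TEP = (uint32_t)&NRF_SAADC->TASKS_SAMPLE;
        NRF_PPI->CHENSET   = PPI_CHENSET_CH0_Msk;

        NRF_TIMER1->TASKS_START = 1;
    }
    ```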

  • I need about 6-7ms of every 10ms to read data over the air (non-BLE), process the data, and update control of an actuator.  If I occasionally miss one of those updates every 1s or so, it's not the end of the world, but if it regularly happens more frequently than that (e.g. every second or third actuation update missed due to the BLE stack not respecting the specified event length time), the user may be very unhappy.

    Therefore, it's preferable to have a hard guarantee of what the SoftDevice CPU/radio utilization will be so I can plan around it.

    I will proceed with the assumption that if I tell the SoftDevice, configured in master mode, to use an event length (the "GAP event length"?) of x and disable event length extension, then all SD setup, radio usage, and post-processing will occur within a periodic duration of x (relative to the local oscillator).

    Thanks for your help
