
"Time-boxing"/limiting active portion of BLE connection interval?

Assume a BLE connection has been established between a master and slave.  The device in question could be either the master or the slave (if it matters).

I have an application with tight timing requirements that cannot allow the active/transmitting portion (including T_IFS periods) of a BLE connection interval to bleed into other periodic processing and radio usage.  That is, I need to be able to reliably shut off all BLE activity/resources (e.g. the RADIO peripheral) except for a small "duty cycle" at the beginning of the BLE connection interval.  In other words, I am trying to design the application to be deterministic and round-robin rather than RTOS-style/pre-emptive.

As I currently understand it, the BLE stack (which is opaque in the case of the Nordic SoftDevice and iOS) is required to perform retransmits in the case of lost packets.  As far as I am aware, the BLE spec does not place any limit on how many retransmit attempts may occur within a connection interval.  On the other hand, I am not aware of any adjustable parameters in the SoftDevice or smartphone BLE implementations that allow the application to forcefully end the connection event or to specify the maximum number of frames/packets per connection event.  Is this possible?  (Here is a similar question: https://devzone.nordicsemi.com/f/nordic-q-a/2541/number-of-ble-packets-per-connection-interval)

I am also aware of the timeslot API, but it's not clear whether a timeslot is reliably granted, or whether BLE activity could take precedence in the case of a large number of retransmits.  Can the timeslot API provide a guaranteed periodic window (of <100 ms) with no BLE processor/radio usage?  If so, does the radio have to shut off even earlier than the timeslot period to perform the necessary "post-processing", or would I have to empirically measure the "on time" of the SoftDevice and size the timeslot duration to accommodate it?
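
For reference, this is roughly how I would expect to request such a window with the timeslot API, purely as a sketch based on the nrf_soc.h interface: the 6 ms length and 10 ms period are example figures of mine, and error handling is stripped out.

```c
#include <string.h>
#include "nrf_soc.h"

/* The first request of a session must be of type EARLIEST; later slots are
   chained from the signal callback with a NORMAL request. */
static nrf_radio_request_t m_first_request;
static nrf_radio_request_t m_next_request;

static nrf_radio_signal_callback_return_param_t * timeslot_cb(uint8_t signal_type)
{
    static nrf_radio_signal_callback_return_param_t ret;

    if (signal_type == NRF_RADIO_CALLBACK_SIGNAL_TYPE_START)
    {
        /* Timeslot granted: the RADIO and TIMER0 are ours for length_us.
           Do the time-critical non-BLE work here, then chain the next slot. */
        ret.callback_action       = NRF_RADIO_SIGNAL_CALLBACK_ACTION_REQUEST_AND_END;
        ret.params.request.p_next = &m_next_request;
    }
    else
    {
        ret.callback_action = NRF_RADIO_SIGNAL_CALLBACK_ACTION_NONE;
    }
    return &ret;
}

void timeslots_start(void)
{
    memset(&m_next_request, 0, sizeof(m_next_request));
    m_next_request.request_type              = NRF_RADIO_REQ_TYPE_NORMAL;
    m_next_request.params.normal.hfclk       = NRF_RADIO_HFCLK_CFG_XTAL_GUARANTEED;
    m_next_request.params.normal.priority    = NRF_RADIO_PRIORITY_NORMAL;
    m_next_request.params.normal.distance_us = 10000;  /* 10 ms between slot starts */
    m_next_request.params.normal.length_us   = 6000;   /* ~6 ms window per slot     */

    memset(&m_first_request, 0, sizeof(m_first_request));
    m_first_request.request_type               = NRF_RADIO_REQ_TYPE_EARLIEST;
    m_first_request.params.earliest.hfclk      = NRF_RADIO_HFCLK_CFG_XTAL_GUARANTEED;
    m_first_request.params.earliest.priority   = NRF_RADIO_PRIORITY_NORMAL;
    m_first_request.params.earliest.length_us  = 6000;
    m_first_request.params.earliest.timeout_us = 100000;  /* give up after 100 ms */

    (void) sd_radio_session_open(timeslot_cb);
    (void) sd_radio_request(&m_first_request);
}
```

My understanding is that a NORMAL-priority request like this can still be blocked or cancelled by SoftDevice activity, which is exactly the part I would like confirmed.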

I have read that the SoftDevice and the smartphone stacks have built-in limits on the number of packets/frames that can be sent/received per connection event due to buffer size (e.g. "6" in the case of recent SoftDevice libraries).  But does that limit retransmits?  In any case, hoping that my current SoftDevice version or the current smartphone OS version happens to limit the used portion of the connection interval to an acceptable amount does not feel like a strong enough guarantee.
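
For my own sanity-checking, here is a rough air-time budget, assuming the 1M PHY, no Data Length Extension, and unencrypted packets; these are my own back-of-the-envelope numbers, not figures from the spec tables.

```c
/* Rough air-time per master<->slave packet pair, 1M PHY, no DLE, unencrypted. */
#define PKT_OVERHEAD_BYTES   10u   /* preamble + access address + header + CRC */
#define PKT_PAYLOAD_BYTES    27u   /* max LL payload without DLE               */
#define US_PER_BYTE_1M        8u   /* 1 Mbit/s => 8 us per byte                */
#define T_IFS_US            150u

#define PKT_AIR_US  ((PKT_OVERHEAD_BYTES + PKT_PAYLOAD_BYTES) * US_PER_BYTE_1M)  /* 296 us */
#define PAIR_US     (2u * (PKT_AIR_US + T_IFS_US))                               /* 892 us */

/* 3750 / 892 = 4: only ~4 full-size packet pairs (retransmits included) fit
   in the default 3.75 ms event, so an event-length cap would bind sooner
   than a 6-packet buffer limit. */
```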

----

Edit: I went through a SoftDevice spec (S132_SDS_v6.0) and saw the following in section 10.4: "By default, connections are set to have an event length of 3.75 ms".  Therefore, I'm hoping all SoftDevices have a similar "hard limit" on event length even in the case of no successful transmits (assuming event length extension is disabled)?
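
If it helps anyone else, this is roughly how I plan to pin that down against the S132 v6 headers; it is only a sketch, the tag value is arbitrary, and error handling is omitted.

```c
#include <string.h>
#include "ble.h"

#define APP_CONN_CFG_TAG 1  /* arbitrary tag, later passed to sd_ble_gap_connect() etc. */

/* Call after sd_softdevice_enable() but before sd_ble_enable(). */
static void conn_event_length_cfg(uint32_t app_ram_base)
{
    ble_cfg_t cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.conn_cfg.conn_cfg_tag                     = APP_CONN_CFG_TAG;
    cfg.conn_cfg.params.gap_conn_cfg.conn_count   = 1;
    cfg.conn_cfg.params.gap_conn_cfg.event_length = 3;   /* units of 1.25 ms => 3.75 ms */
    (void) sd_ble_cfg_set(BLE_CONN_CFG_GAP, &cfg, app_ram_base);
}

/* Call after sd_ble_enable(): make sure connection event length extension
   stays off (it defaults to off, but be explicit). */
static void conn_event_extension_disable(void)
{
    ble_opt_t opt;
    memset(&opt, 0, sizeof(opt));
    opt.common_opt.conn_evt_ext.enable = 0;
    (void) sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &opt);
}
```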

I also saw in Table 30 that all SoftDevice BLE connection activity runs at the same or higher priority than timeslot API events.  Therefore, I must "trust" that the SoftDevice will hand over control after its event length period (plus post-processing time?).  Section 15.9 says that timeslot API timeslots can be taken over by the SoftDevice if necessary, and that is what I'm concerned about given its higher priority.

Also, I'm assuming that to avoid peer clock drift shifting the BLE connection event in time, the device running the application in question should be configured in central/master mode.  If this is not required (i.e. there is a way to still regularly space out timeslot API events after possibly drifting peripheral/slave connection events), that would be good to know.

Thanks

  • Just to answer some of the other questions I had:

    "Also, I'm assuming that to avoid peer clock drift shifting the BLE connection event in time, the device running the application in question should be configured in central/master mode.  If this is not required (i.e. there is a way to still regularly space out timeslot API events after possibly drifting peripheral/slave connection events), that would be good to know."

    You can use the ACTIVE/nACTIVE radio notifications (available at least on SD 7.0.1) to account for master drift at the slave.  However, note that the master may still dictate undesirable connection interval values if you do not have control/visibility into the master stack (as is the case with iOS/Android), so to be safe you should configure your device in master mode.

    "If so, does that require the radio to shut off even earlier than the timeslot period to perform necessary "post-processing", or would I have to empirically measure the "on time" of the SoftDevice and make the timeslot duration reasonably accommodating based on that?"

    The event length includes "processing overhead" beyond the radio event ("radio on") time; see Table 22 of SDS 7.1.  According to Figure 11 of SDS 7.1, the processing overhead within "t_event" also includes t_prep (maximum of 1542 us).  Therefore, if the event length value is respected, all you would need to do is sync your application code/radio usage starting at (event length) after an ACTIVE radio notification, with t_ndist = t_prep,max = 1542 us.
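
    A sketch of how I plan to arm those notifications: note the API only accepts coarse distance steps, so I round 1542 us up to 1740 us, and the SWI1 IRQ/handler names below are the nRF52 ones (they may differ by device/SDK).

    ```c
    #include "nrf_soc.h"
    #include "nrf_nvic.h"

    void radio_notification_init(void)
    {
        /* Interrupt both before (ACTIVE) and after (nACTIVE) every SoftDevice
           radio event.  t_ndist only comes in fixed steps, so use 1740 us, the
           smallest step >= t_prep,max = 1542 us. */
        (void) sd_radio_notification_cfg_set(NRF_RADIO_NOTIFICATION_TYPE_INT_ON_BOTH,
                                             NRF_RADIO_NOTIFICATION_DISTANCE_1740US);

        /* Radio notifications arrive on SWI1 (SWI1_EGU1 on nRF52). */
        (void) sd_nvic_SetPriority(SWI1_EGU1_IRQn, 3);  /* an application-level priority */
        (void) sd_nvic_EnableIRQ(SWI1_EGU1_IRQn);
    }

    void SWI1_EGU1_IRQHandler(void)
    {
        /* The interrupt alternates between ACTIVE and nACTIVE; keep a flag to
           tell them apart, and start/stop the application's own window here. */
    }
    ```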

  • abc said:
    Regarding the edit in the OP, could you verify whether the event length will stay within the "event length" setting

     Yes. It will stay within the event length setting that the two devices (master and slave) have agreed upon. If one device "doesn't support" DLE, then the other device will not dictate it. So no, it will not potentially increase the event length.

     

    abc said:
    As an extreme example, I assume if I run application code at priority level 0 (contrary to what the SDS requires) at times when the SD "should be" inactive (e.g. based on event length time such as 3.75ms and periodic timeslot request), then the application code would nullify the BT certification of the SD (or the SD might crash)?

     I understand what you mean here, but don't do that. If for some reason the SoftDevice requires that timeslot, for example because of drift in the connected device's clock, then the SoftDevice will assert and the application will reset. That is probably not desirable in any situation.

    Instead, you can use the timeslot API to request timeslots, and use radio notifications to be notified whenever the radio is about to be used, so you can time your own slots in between the SD events.

     

    abc said:
    "Also, I'm assuming that to avoid peer clock drift shifting the BLE connection event in time, the device running the application in question should be configured in central/master mode.

     Yes. That is correct. The master is the one that controls the timing and initiates the first TX of each connection interval. However, there may be some drift overall. Note that when you start the application you are not connected; you will start scanning and connect to an advertising device. When that connection is established, the connection parameters are set and the clock starts running. Whether that happens 1 second or 1.02 seconds after the application starts is not up to the master to control; it is based on the advertisements of the peripheral.

    Are you sure you need to keep these strict time requirements in your application? What is it that you need to do that is so time-sensitive? Have you looked into other ways of doing these time-strict operations? If it is a sensor you need to read every X ms, perhaps you can set up a timer and use PPI to trigger that read?
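
    For example, something along these lines lets the hardware fire a SAADC sample every 10 ms with no CPU or SoftDevice involvement. It is only a sketch: the SAADC task is just an example endpoint, and it assumes the SAADC itself is configured elsewhere.

    ```c
    #include "nrf.h"

    /* Fire NRF_SAADC->TASKS_SAMPLE every 10 ms straight from TIMER1 via PPI.
       TIMER0 and the upper PPI channels are reserved by the SoftDevice;
       TIMER1 and PPI channel 0 are free for the application. */
    void ppi_timer_sample_init(void)
    {
        /* TIMER1: 1 MHz timebase, clear and re-fire on COMPARE[0] every 10000 us. */
        NRF_TIMER1->MODE      = TIMER_MODE_MODE_Timer;
        NRF_TIMER1->BITMODE   = TIMER_BITMODE_BITMODE_32Bit;
        NRF_TIMER1->PRESCALER = 4;                        /* 16 MHz / 2^4 = 1 MHz */
        NRF_TIMER1->CC[0]     = 10000;
        NRF_TIMER1->SHORTS    = TIMER_SHORTS_COMPARE0_CLEAR_Msk;

        /* PPI channel 0: COMPARE[0] event -> SAADC SAMPLE task. */
        NRF_PPI->CH[0].EEP = (uint32_t)&NRF_TIMER1->EVENTS_COMPARE[0];
        NRF_PPI->CH[0].TEP = (uint32_t)&NRF_SAADC->TASKS_SAMPLE;
        NRF_PPI->CHENSET   = PPI_CHENSET_CH0_Msk;

        NRF_TIMER1->TASKS_START = 1;
    }
    ```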

  • I need about 6-7ms of every 10ms to read data over the air (non-BLE), process the data, and update control of an actuator.  If I occasionally miss one of those updates every 1s or so, it's not the end of the world, but if it regularly happens more frequently than that (e.g. every second or third actuation update missed due to the BLE stack not respecting the specified event length time), the user may be very unhappy.

    Therefore, it's preferable to have a hard guarantee of what the SoftDevice CPU/radio utilization will be so I can plan around it.

    I will proceed with the assumption that if I tell the SoftDevice, configured in master mode, to use an event length (the "GAP event length"?) of x, and disable event length extension, then all SD setup, radio usage, and post-processing will occur within a periodic duration of x (relative to the local oscillator).

    Thanks for your help

  • Also, as far as I'm aware, there is no negotiation between the master and slave regarding event length (unless you meant the connection interval?).  It seems the master has discretion over whether to continue the connection event if either side has the MD (More Data) bit of the Link Layer header set to 1.  Therefore, I'm hoping the SD implementation does something to the effect of starting a timer at the beginning of t_prep, and then, when deciding whether to continue the connection event after a first packet exchange (regardless of whether either packet got through successfully), only doing so if it can fit a second packet exchange plus any necessary post-processing in the event length time remaining, taking the worst-case slave packet size into account (a rough sketch of that check follows the note below).

    • Also, just looking through the BT 5.0 spec again, I see in Vol 6 part B section 4.5.10 there are some bookkeeping values that could be used to perform the calculations above (e.g. connEffectiveMaxTxTime and connEffectiveMaxRxTime), though these do not seem to be "negotiated" but rather based on values stated by each peer.
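
      Purely illustrative (this is the check I am hoping the stack performs, not anything from Nordic's documentation), using those effective max times as the worst case:

      ```c
      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical "keep the connection event open?" check -- my own guess at
         the scheduling rule, NOT SoftDevice internals.  All times in us. */
      typedef struct {
          uint32_t conn_effective_max_tx_time;  /* connEffectiveMaxTxTime, BT 5.0 Vol 6 Part B 4.5.10 */
          uint32_t conn_effective_max_rx_time;  /* connEffectiveMaxRxTime */
      } ll_times_t;

      #define T_IFS_US       150u
      #define POST_PROC_US  1000u   /* placeholder for SoftDevice post-processing */

      static bool another_pair_fits(uint32_t elapsed_since_t_prep_us,
                                    uint32_t event_length_us,
                                    const ll_times_t *t)
      {
          uint32_t worst_case_pair = t->conn_effective_max_tx_time + T_IFS_US
                                   + t->conn_effective_max_rx_time + T_IFS_US;
          return elapsed_since_t_prep_us + worst_case_pair + POST_PROC_US
                 <= event_length_us;
      }
      ```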