I need to understand how a multi-link configuration can achieve real-time transmission - it's all about latency.
In my nRF52840 application I have several body-worn Inertial Measurement Units (IMUs) - the Peripherals - each taking its turn in round-robin fashion to send a data sample in real time to a PC Dongle - the Central. For this app the sample rate is 180sps, so the sample period is 5.56ms, and ideally every IMU transmits one 26-byte sample to the Dongle within this period - Data Length Extension has been enabled so that all 26 bytes fit into the ATT payload of one frame. To sustain this 180sps sample rate and allow the full capacity of up to 20 IMUs (Peripherals) I am using the 2Mbps PHY.
Because the minimum Connection Interval is 7.5ms, the 5.56ms sample period cannot consistently be met in real time. However, the application can tolerate a few sample periods of latency - say 27.5ms (22 units of 1.25ms) - so I am setting the Connection Interval to 27.5ms and packing five 26-byte samples into each transmission. This gives an average sample period of 5.5ms but introduces latency: the most recently collected sample is sent by the IMU "on time", the one collected before it has a 5.5ms delay, ... and the 5th sample in the frame arrives at the Central 27.5ms "late". That is OK for this app, but no more latency than that please! Of course my MTU is set so that one frame accommodates all 5 samples in a single transmission.
The multi-link Application Question
If I have say 20 IMUs (Peripherals) transmitting one-by-one (#1 sends its frame, then #2, then #3..., then #20, repeat - #1, then #2, then #3, ... and so on) to the Dongle (Central), how do the 20 links share the Connection Interval? Either (1) the links are scheduled serially, so a full Connection Interval passes before the next Peripheral gets to transmit, or (2) every Peripheral gets a transmit opportunity within each Connection Interval.
From my understanding, this question of how multi-link works with Connection Intervals determines whether real-time data transmission is feasible: if (1) applies then it is not; if (2) applies then it generally is. Can somebody please shed light on this? Does Nordic Semi have any written documentation that answers it? Does the SDK support multiple Peripherals transmitting within each Connection Interval?
Many thanks in advance !!!
From my understanding, there are two concepts that need to be distinguished: connection interval and "connection event length".
Connection interval: how often one device can transmit data
"Connection event length": the time it takes for each device to transmit data (I'm not sure if "connection event length" is the correct name)
Depending on the connection interval used, there can be multiple "connection events" (for different devices) within one connection interval.
To get a better understanding, I would recommend reading the Scheduling chapter in the SoftDevice Specification.
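For reference, in the nRF5 SDK the number of links and the per-connection event length are set through the GAP connection configuration before the SoftDevice is enabled. A minimal sketch (assuming `APP_BLE_CONN_CFG_TAG` and `ram_start` are defined elsewhere in the application, as in the SDK examples):

```c
/* Sketch: GAP connection configuration for S140 (nRF5 SDK style).
 * APP_BLE_CONN_CFG_TAG and ram_start are assumed to exist elsewhere
 * in the application. event_length is in units of 1.25 ms, so 2
 * corresponds to the 2.5 ms minimum event length discussed here.  */
ble_cfg_t ble_cfg;
memset(&ble_cfg, 0, sizeof(ble_cfg));

ble_cfg.conn_cfg.conn_cfg_tag                     = APP_BLE_CONN_CFG_TAG;
ble_cfg.conn_cfg.params.gap_conn_cfg.conn_count   = 20; /* up to 20 links */
ble_cfg.conn_cfg.params.gap_conn_cfg.event_length = 2;  /* 2 x 1.25 ms    */

uint32_t err_code = sd_ble_cfg_set(BLE_CONN_CFG_GAP, &ble_cfg, ram_start);
APP_ERROR_CHECK(err_code);
```

With 20 links at a 2.5 ms event length each, one scheduling round takes 50 ms, which is why the event length directly bounds how many Peripherals fit within a given latency budget.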
I read that section and it clarifies a lot - yes, the event length is the key concept. But based on what I read, I still think the maximum number of connected Peripherals is very small.
To take a simple example: imagine my streaming sample interval was 5ms and my maximum sample latency tolerance was 4 intervals (20ms). Considering the minimum event length of 2.5ms, I can only fit 8 Peripheral devices into my multilink scenario: #1 transmits a 4-sample packet at time 0, then #2 at 2.5ms, then #3 at 5.0ms, then... #8 at 17.5ms, and at 20ms #1 sends its next 4-sample packet. The average sample rate is still one per 5ms and the maximum sample latency is 20ms (4 sample intervals). So if I cannot tolerate more latency, I can't fit more than 8 devices into the system - nowhere near the 20-device limit of the SoftDevice S140. Have I missed something???
Yes, your understanding is correct: to allow more devices to connect to your Central, you will either need to increase the latency or decrease the amount of data.
An option could be to use multiple Centrals, i.e. multiple chips. Those Centrals could then communicate with each other over a serial interface.
Hi @neonotion, have you considered buffering the sensor data? This simplifies the data transmission a lot and relaxes the real-time requirements. Ideally the sensor sampling rate and the communication protocol timing should not depend directly on each other; the only question then is whether the final consumer of the data can tolerate a delay of some ms in receiving it - for a wireless application this is often the case. As I am working on a similar problem, although with only two Peripheral devices, you may want to check the topic on my current issue here https://devzone.nordicsemi.com/f/nordic-q-a/35689/multilink-central---data-always-received-from-the-last-device
Hi kont40, yes, buffering is already used. Because the transmitted samples get fused with real-time video frames on the receiver side, the buffer cannot be too long (the frames can't wait long for samples and must be fused ASAP). It's the real-time aspect of this project that means sample latency can only be tolerated up to a small number of sample intervals - 4 sample intervals as a guideline - so that is the maximum allowed buffer size. That is the reason for the dependency: it's not the sample rate, it's the latency.