LLPM connection interval and the frequency of data transmission

Hello,

I'm using the LLPM sample code on the nRF52840.

I'm confused about the connection interval.

From this post, I understood the connection interval to be the shortest time between data transmissions; that is, after the first data packet is transmitted, the second packet can only be sent once the connection interval has elapsed.

Also, when I use non-LLPM BLE code, I can change the minimum interval between transmissions by modifying CONFIG_BT_PERIPHERAL_PREF_MIN_INT, whose description also refers to the connection interval.
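For reference, a prj.conf sketch of the kind of change I mean (my example values; the unit is 1.25 ms, so 6 corresponds to the 7.5 ms BLE minimum):

# Prefer the lowest connection interval plain BLE allows (6 * 1.25 ms = 7.5 ms)
CONFIG_BT_PERIPHERAL_PREF_MIN_INT=6
CONFIG_BT_PERIPHERAL_PREF_MAX_INT=6

Note that these are only the peripheral's preferences; the central decides the actual connection parameters.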

But I modified the following part of the LLPM code to log when each packet is received, and found that even when the connection interval is 1 ms, the shortest interval between received packets is still 7.5 ms. (A small variation that prints the interval directly is sketched after the listing.)

static void test_run(void)
{
    ...
    /* Start sending the timestamp to its peer */
    while (default_conn) {
        ...
        /* k_sleep(K_MSEC(200)); */  /* Don't wait between transmissions */
        ...
    }
}

/* Called on the peripheral for every received latency request;
 * logs the arrival time so the interval between packets can be read off. */
void latency_request(const void *buf, uint16_t len)
{
    uint32_t time = k_cycle_get_32();
    uint8_t value[len];

    memcpy(value, buf, len);  /* copy out the payload (not used further here) */
    printk("Received data time: %u ms\n",
           k_cyc_to_ns_near32(time) / 1000000);  /* the 32-bit ns value wraps after ~4.3 s */
}

int main(void)
{
    int err;
    ...
    /* Callbacks for the latency service; latency_request is invoked
     * for every received latency request. */
    static const struct bt_latency_cb data_callbacks = {
        .latency_request = latency_request,
    };

    /* was: err = bt_latency_init(&latency, NULL); */
    err = bt_latency_init(&latency, &data_callbacks);
    if (err) {
        printk("Latency service initialization failed (err %d)\n", err);
        return 0;
    }
    ...
}
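To print the interval itself instead of absolute timestamps, a small variation of the callback would work (a sketch; prev_ms is a name I introduce here):

static uint32_t prev_ms;

void latency_request(const void *buf, uint16_t len)
{
    uint32_t now_ms = k_cyc_to_ms_near32(k_cycle_get_32());

    /* First print shows time since boot; later prints show the packet interval. */
    printk("Interval since previous packet: %u ms\n", now_ms - prev_ms);
    prev_ms = now_ms;
}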

Now I am confused about what the connection interval actually means. Is it related to the interval between data transmissions?

  • I use k_cycle_get_32 to obtain the reception time directly:

    void latency_request(const void *buf, uint16_t len)
    {
        uint32_t time = k_cycle_get_32();
        uint8_t value[len];

        memcpy(value, buf, len);
        printk("Received data time: %u ms\n", k_cyc_to_ns_near32(time) / 1000000);
    }

    I think a timestamp obtained this way should not be affected by the printing itself.

  • LandyWang said:
    k_cycle_get_32

    You're right, k_cycle_get_32 should give the RTC's cycle count at the time the function is executed. It still relies on the RTC, though, which runs off the LF clock, so if you want nanosecond accuracy, k_cyc_to_ns_near32 probably won't be sufficient.

    If you want higher-precision timing, getting it from the RTOS is not a good option; instead, initialize a timer instance of your own running off HFCLK, for example along these lines:
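    A minimal sketch of what I mean, driving the TIMER1 registers directly (TIMER1 is just an example instance, so make sure nothing else in your application uses it):

    #include <nrfx.h>  /* nRF MDK register definitions */

    /* Run TIMER1 from the 16 MHz HF clock at 1 MHz (16 MHz / 2^4),
     * 32-bit wide, so one tick corresponds to 1 us. */
    void timestamp_timer_init(void)
    {
        NRF_TIMER1->MODE        = TIMER_MODE_MODE_Timer;
        NRF_TIMER1->BITMODE     = TIMER_BITMODE_BITMODE_32Bit;
        NRF_TIMER1->PRESCALER   = 4;
        NRF_TIMER1->TASKS_CLEAR = 1;
        NRF_TIMER1->TASKS_START = 1;
    }

    /* Latch the current count into CC[0] and read it back. */
    uint32_t timestamp_us(void)
    {
        NRF_TIMER1->TASKS_CAPTURE[0] = 1;
        return NRF_TIMER1->CC[0];
    }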

    Best regards,

    Simon

  • That's okay, I don't need more precise timing. I just want timestamps that are unaffected by printing delays or by delays between the nRF device and the COM port. The method currently in use seems to meet that.

    I still want to focus on the connection interval of LLPM.

    According to my expectation, as shown in this picture:

    LLPM supports a connection interval of 1 ms, so I would expect the peripheral log to show a 1 ms interval between received packets.

    But in fact, when the connection interval is set to 1 ms, the interval at which the peripheral receives data is still 7.5 ms.

    That's why I wonder whether my understanding of LLPM's connection interval is wrong.

  • Hi

    Okay, thank you for explaining. Can you try running the LLPM sample as is on your end to see whether the printed transmission latency can get down to 1 ms? Also, what kind of environment are you testing in, and exactly how far apart are the two DKs?

    Best regards,

    Simon

  • The distance between the two DKs is about 3 cm.

    When running the LLPM sample without any modification, the printed transmission latency is about 1 ms, which is consistent with the sample output in README.rst.

    But according to the description in README.rst, the transmission latency is the Central -> Peripheral -> Central round-trip time divided by 2.

    According to the output printed by the LLPM sample, when the connection interval switches to 1 ms, the transmission latency is about 1 ms; when it switches to 7.5 ms, the transmission latency is about 7.5 ms.

    So does that mean connection interval = transmission latency = Central -> Peripheral -> Central round-trip time divided by 2? (See the sketch at the end of this post for how I understand that calculation.)

    However, in my understanding, the connection interval is the shortest interval between transmitting one data packet and transmitting the next.

    This is the conclusion I reached by adjusting CONFIG_BT_PERIPHERAL_PREF_MIN_INT in non-LLPM BLE code.

    I'm sorry I didn't describe it clearly before. What I am confused about is which definition of the connection interval is correct.
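    For reference, this is how I understand the calculation behind the printed latency (a sketch of the idea, not the exact sample code; sent_timestamp_cycles stands for the cycle count the central sent and the peripheral echoed back):

    /* On the central, when the echoed timestamp comes back: */
    uint32_t round_trip_cycles = k_cycle_get_32() - sent_timestamp_cycles;
    uint32_t latency_us = k_cyc_to_us_near32(round_trip_cycles) / 2;

    printk("Transmission Latency: %u us\n", latency_us);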
