Unreliable Ranging Behavior and Inconsistent Frequency on NORAB126 (NCS v2.9.0)

Hello,

I'm working with Distance Measurement (nrf_dm) using the NORAB126 module on nRF Connect SDK v2.9.0 and nRF Connect SDK v3.0.1.

I'm facing an issue where, occasionally and randomly, the devices stop performing ranging between them. Once this happens, the only way to resume distance measurements is by restarting one or both devices. I've tried adjusting the following configuration parameters, but the problem persists:

  • CONFIG_DM_INITIATOR_DELAY_US

  • CONFIG_DM_MIN_TIME_BETWEEN_TIMESLOTS_US

  • CONFIG_DM_TIMESLOT_QUEUE_LENGTH

  • CONFIG_DM_TIMESLOT_QUEUE_COUNT_SAME_PEER

  • CONFIG_DM_RANGING_OFFSET_US

Despite these changes, the behavior remains the same — ranging between peers stops unexpectedly and only resumes after a reset.
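
For reference, a minimal prj.conf sketch of the options listed above (the values are placeholders only, not recommendations):

    # nrf_dm scheduling-related options (placeholder values, not recommendations)
    CONFIG_DM_INITIATOR_DELAY_US=1000
    CONFIG_DM_MIN_TIME_BETWEEN_TIMESLOTS_US=5000
    CONFIG_DM_TIMESLOT_QUEUE_LENGTH=20
    CONFIG_DM_TIMESLOT_QUEUE_COUNT_SAME_PEER=2
    CONFIG_DM_RANGING_OFFSET_US=1300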

Additionally, I noticed that distance measurements occur at roughly 4 per second when the devices are close. However, the measurement rate appears to vary randomly.

My questions are:

  1. Is there a known issue that could explain why ranging suddenly stops and only recovers after restarting the device(s)?

  2. Is there a way to set or control the number of distance measurements per second (i.e., define a fixed ranging frequency)?

Any insights or recommendations would be greatly appreciated.

Parents
  • Hi Luis,

    Is there a known issue that could explain why ranging suddenly stops and only recovers after restarting the device(s)?

    Are you seeing this with the default sample as well?

    Are you seeing anything in the logs as to why this is happening? If not, could you try increasing the logging level?

    Is there a way to set or control the number of distance measurements per second (i.e., define a fixed ranging frequency)?

    I do not think there is an easy way to change the frequency unless you implement it yourself. You can, for example, call dm_request_add() at a fixed rate and control the scheduling with the configurations you mentioned (see the sketch below). I might be missing something here, though let us start with the first problem.

    To increase the number of samples, you can queue more requests.
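
    For example, something along these lines could pace the requests (a rough, untested sketch; the request contents, such as the peer address and rng_seed, must be prepared as in the Distance Measurement sample):

      /* Rough sketch: queue a DM request at a roughly fixed rate.
       * Assumes the dm.h API from the nrf_dm library; dm_req must be
       * filled in (peer address, role, ranging mode, rng_seed) as in
       * the Distance Measurement sample before pacing is started.
       */
      #include <zephyr/kernel.h>
      #include <zephyr/sys/printk.h>
      #include <dm.h>

      #define RANGING_INTERVAL_MS 250 /* roughly 4 requests per second */

      static struct dm_request dm_req; /* prepared elsewhere, e.g. from scan data */
      static struct k_work_delayable ranging_work;

      static void ranging_work_handler(struct k_work *work)
      {
          int err = dm_request_add(&dm_req);

          if (err) {
              printk("dm_request_add failed (err %d)\n", err);
          }

          /* Re-arm so requests keep being queued at a fixed interval. */
          k_work_schedule(&ranging_work, K_MSEC(RANGING_INTERVAL_MS));
      }

      void ranging_pacing_start(void)
      {
          k_work_init_delayable(&ranging_work, ranging_work_handler);
          k_work_schedule(&ranging_work, K_MSEC(RANGING_INTERVAL_MS));
      }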

    Regards,

    Elfving

Children
  • Hi Elfving,

    Thanks for your reply.

    I’d like to clarify the issue I’m seeing when running the default DM ranging sample on a custom board using the NORAB126.


     SDK v3.0.1 (Stable)

    With nRF Connect SDK 3.0.1, the ranging works reliably under all tested conditions:

    • LFCLK source: both CONFIG_CLOCK_CONTROL_NRF_K32SRC_XTAL and CONFIG_CLOCK_CONTROL_NRF_K32SRC_SYNTH

    • Device conditions: close or far, static or in motion

    • No lock-ups or unexpected behavior

    • Currently testing 1:1 ranging.

    • I plan to test 2:2 and 3:3 topologies on Monday to see if the issue scales with more devices.

    This version seems stable regardless of LFCLK configuration.


    SDK v2.9.0 (Unstable)

    With nRF Connect SDK 2.9.0, ranging eventually stops working silently:

    • No output on RTT logs

    • System appears to continue running, but ranging messages stop entirely

    • A power reset on one or both devices is required to restore ranging

    Behavior based on LFCLK source:

    • CONFIG_CLOCK_CONTROL_NRF_K32SRC_SYNTH:
      Issue can happen under any condition (moving/static, close/far).

    • CONFIG_CLOCK_CONTROL_NRF_K32SRC_XTAL:
      Issue seems to occur mainly while the devices are in motion (at close range, not far apart).


    Additional Info

    • I attempted to set CONFIG_LOG_DEFAULT_LEVEL=4, but it resulted in an execution error (a stack overflow, shown in the log below). I also tried increasing the stack size, but the error still persisted. A sketch of the stack-related options in question follows the log.

    • SEGGER J-Link V8.34 - Real time terminal output
      SEGGER J-Link (unknown) V1.0, SN=1050200307
      Process: JLinkExe
      [00:00:00.000,488] <dbg> os: setup_thread_stack: stack 0x2000edf0 for thread 0x20006990: obj_size=4096 buf_start=0x2000edf0  buf_size 4096 stack_ptr=0x2000fdf0
      [00:00:00.015,869] <dbg> os: setup_thread_stack: stack 0x20008ee0 for thread 0x20005d80: obj_size=1304 buf_start=0x20008ee0  buf_size 1304 stack_ptr=0x200093f8
      [00:00:00.031,097] <dbg> nrf_dm: serialization_init: Init begin
      [00:00:00.037,719] <dbg> NRF_RPC: nrf_rpc_init: Initializing nRF RPC module
      [00:00:00.045,410] <dbg> NRF_RPC: nrf_rpc_init: Group 'dm_rpc_grp' has id 0
      [00:00:00.053,192] <dbg> os: setup_thread_stack: stack 0x2000b4b0 for thread 0x20006330: obj_size=1024 buf_start=0x2000b4b0  buf_size 1024 stack_ptr=0x2000b8b0
      [00:00:00.068,603] <dbg> os: setup_thread_stack: stack 0x2000b8b0 for thread 0x20006440: obj_size=1024 buf_start=0x2000b8b0  buf_size 1024 stack_ptr=0x2000bcb0
      [00:00:00.084,014] <dbg> os: setup_thread_stack: stack 0x2000bcb0 for thread 0x20006550: obj_size=10[00:00:00.388,305] <dbg> ipc_service: ipc_service_register_endpoint: Register endpoint dm_ept
      [00:00:00.397,644] <dbg> os: z_impl_k_mutex_lock: 0x20006880 took mutex 0x20005708, count: 1, orig prio: 0
      [00:00:00.408,203] <dbg> os: z_impl_k_mutex_unlock: mutex 0x20005708 lock_count: 1
      [00:00:00.416,595] <dbg> os: z_impl_k_mutex_unlock: new owner of mutex 0x20005708: 0 (prio: -1000)
      [00:00:00.426,452] <dbg> os: z_impl_k_mutex_lock: 0x20005818 took mutex 0x20005708, count: 1, orig prio: 0
      [00:00:00.437,011] <dbg> os: z_impl_k_mutex_unlock: mutex 0x20005708 lock_count: 1
      [00:00:00.445,373] <dbg> os: z_impl_k_mutex_unlock: new owner of mutex 0x20005708: 0 (prio: -1000)
      *** Booting nRF Connect SDK v2.9.0-7787b2649840 ***
      *** Using Zephyr OS v3.7.99-1f8f3dc29142 ***
      [00:00:00.464,233] <dbg> os: setup_thread_stack: stack 0x20007d60 for thread 0x20005420: obj_size=2048 buf_start=0x20007d60  buf_size 2048 stack_ptr=0x20008560
      [00:00:00.479,644] <dbg> os: k_sched_unlock: scheduler unlocked (0x20006880:0)
      [00:00:00.487,640] <inf> main: Starting Distance Measurement example
      
      [00:00:00.494,903] <dbg> NRF_RPC: cmd_ctx_alloc: Command context 0 allocated
      [00:00:00.502,716] <dbg> NRF_RPC: nrf_rpc_cmd_common: Sending command 0x00 from group 0x00
      [00:00:00.511,779] <dbg> nrf_rpc_ipc: send: Sending 6 bytes
      [00:00:00.518,035] <dbg> nrf_rpc_ipc: send: Data: 
                                            80 00 ff 00 00 f6                                |......           
      [00:00:00.533,355] <dbg> nrf_rpc_ipc: send: Sent 6 bytes
      [00:00:00.539,367] <dbg> NRF_RPC: wait_for_response: Waiting for a response
      [00:00:00.547,180] <dbg> os: z_impl_k_mutex_lock: 0x20005c40 took mutex 0x20005b30, count: 1, orig prio: 0
      [00:00:00.557,739] <dbg> os: z_impl_k_mutex_unlock: mutex 0x20005b30 lock_count: 1
      [00:00:00.566,131] <dbg> os: z_impl_k_mutex_unlock: new owner of mutex 0x20005b30: 0 (prio: -1000)
      [00:00:00.575,958] <dbg> nrf_rpc_ipc: ept_received: Received
                                            01 ff 00 00 00 00 f6                             |.......          
      [00:00:00.592,193] <dbg> NRF_RPC: receive_handler: Received 7 bytes packet from 255 to 0, type 0x01, cmd/evt/cnt 0xFF, grp 0 (dm_rpc_grp)
      [00:00:00.605,560] <dbg> os: k_sched_unlock: scheduler unlocked (0x20005c40:0)
      [00:00:00.613,616] <dbg> os: k_sched_unlock: scheduler unlocked (0x20006880:0)
      [00:00:00.621,612] <dbg> bt_ddfs: bt_ddfs_init: DDFS initialization successful
      [00:00:00.629,669] <dbg> os: setup_thread_stack: stack 0x200093f8 for thread 0x20005eb0: obj_size=8192 buf_start=0x200093f8  buf_size 8192 stack_ptr=0x2000b3f8
      [00:00:00.645,080] <dbg> bt_hci_driver: bt_ipc_open: 
      [00:00:00.650,787] <dbg> ipc_service: ipc_service_register_endpoint: Register endpoint nrf_bt_hci
      [00:00:00.660,491] <dbg> os: z_impl_k_mutex_lock: 0x20006880 took mutex 0x20005b30, count: 1, orig prio: 0
      [00:00:00.671,051] <dbg> os: z_impl_k_mutex_unlock: mutex 0x20005b30 lock_count: 1
      [00:00:00.679,412] <dbg> os: z_impl_k_mutex_unlock: new owner of mutex 0x20005b30: 0 (prio: -1000)
      [00:00:00.689,300] <dbg> bt_hci_core: bt_hci_cmd_create: opcode 0x0c03 param_len 0
      [00:00:00.697,662] <dbg> bt_hci_core: bt_hci_cmd_create: buf 0x200128e8
      [00:00:00.705,047] <dbg> bt_hci_core: bt_hci_cmd_send_sync: buf 0x200128e8 opcode 0x0c03 len 3
      [00:00:00.714,477] <dbg> bt_hci_core: bt_tx_irq_raise: kick TX
      [00:00:00.721,038] <dbg> bt_hci_core: tx_processor: TX process start
      [00:00:00.728,118] <dbg> bt_hci_core: hci_core_send_cmd: fetch cmd
      [00:00:00.735,015] <dbg> bt_hci_core: hci_core_send_cmd: Sending command 0x0c03 (buf 0x200128e8) to driver
      [00:00:00.745,544] <dbg> bt_hci_core: bt_send: buf 0x200128e8 len 3 type 0
      [00:00:00.753,173] <dbg> bt_hci_driver: bt_ipc_send: buf 0x200128e8 type 0 len 3
      [00:00:00.761,352] <dbg> bt_hci_driver: bt_ipc_send: Final HCI buffer:
                                              01 03 0c 00                                      |....             
      [00:00:00.778,625] <dbg> bt_hci_core: bt_tx_irq_raise: kick TX
      [00:00:00.785,186] <dbg> bt_hci_core: tx_processor: TX process start
      [00:00:00.792,236] <dbg> bt_conn: bt_conn_tx_processor: start
      [00:00:00.798,675] <dbg> bt_conn: bt_conn_tx_processor: no connection wants to do stuff
      [00:00:00.807,556] <dbg> bt_hci_driver: bt_ipc_rx: ipc data:
                                              04 0e 04 01 03 0c 00                             |.......          
      [00:00:00.823,944] <dbg> bt_hci_driver: bt_ipc_evt_recv: len 4
      [00:00:00.830,474] <dbg> bt_hci_driver: bt_ipc_rx: Calling bt_recv(0x20011eac)
      [00:00:00.838,470] <dbg> bt_hci_core: bt_recv_unsafe: buf 0x20011eac len 6
      [00:00:00.846,130] <err> os: ***** USAGE FAULT *****
      [00:00:00.852,020] <err> os:   Stack overflow (context area not valid)
      [00:00:00.859,558] <err> os: r0/a1:  0x00000001  r1/a2:  0x00026857  r2/a3:  0x00000001
      [00:00:00.868,652] <err> os: r3/a4:  0x00026857 r12/ip:  0x000090d1 r14/lr:  0x00000000
      [00:00:00.877,716] <err> os:  xpsr:  0x00000000
      [00:00:00.883,178] <err> os: s[ 0]:  0x0000005b  s[ 1]:  0x0002565f  s[ 2]:  0x00000001  s[ 3]:  0x5b028f31
      [00:00:00.894,134] <err> os: s[ 4]:  0x00000000  s[ 5]:  0x0002664f  s[ 6]:  0x00026647  s[ 7]:  0x000227a7
      [00:00:00.905,090] <err> os: s[ 8]:  0x0001944d  s[ 9]:  0x00026647  s[10]:  0x00000001  s[11]:  0x00026857
      [00:00:00.916,046] <err> os: s[12]:  0x00000001  s[13]:  0x0002d02d  s[14]:  0x0002d02c  s[15]:  0x200087c8
      [00:00:00.926,940] <err> os: fpscr:  0x00000001
      [00:00:00.932,403] <err> os: Faulting instruction address (r15/pc): 0x00000001
      [00:00:00.940,643] <err> os: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0
      [00:00:00.948,913] <err> os: Current thread: 0x20005c40 (mbox_wq #0)
      [00:00:00.956,268] <err> os: Halting system
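
    For reference, a minimal sketch of the stack and log-buffer options that could be increased when raising the log level (placeholder values, not a verified fix; the fault above reports the mbox_wq thread, which appears to belong to the IPC service backend and is sized by that backend's own workqueue stack option rather than by the options below):

      # Illustrative only; not a verified fix for the stack overflow above
      CONFIG_LOG_DEFAULT_LEVEL=4
      CONFIG_MAIN_STACK_SIZE=4096
      CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE=4096
      CONFIG_BT_RX_STACK_SIZE=4096
      CONFIG_LOG_BUFFER_SIZE=4096
      # If the RPMsg IPC backend is in use (assumption), its RX workqueue stack:
      # CONFIG_IPC_SERVICE_BACKEND_RPMSG_WQ_STACK_SIZE=2048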
      

     Questions

    1. Is it reliable to use the LFCLK synthesized from the HFCLK (CONFIG_CLOCK_CONTROL_NRF_K32SRC_SYNTH) for BLE and Distance Measurement (DM)? I’d prefer to avoid placing external oscillators on my custom board unless it’s absolutely necessary.

    2. What are the advantages and disadvantages of using SYNTH instead of an external 32 kHz crystal (XTAL)? Does SYNTH significantly increase power consumption or affect timing accuracy?
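
    For context, this is how I switch between the two sources in prj.conf (a minimal sketch; the accuracy option is only illustrative and has to match the actual source and board):

      # LFCLK source selection (Kconfig choice; enable exactly one)
      CONFIG_CLOCK_CONTROL_NRF_K32SRC_XTAL=y      # external 32.768 kHz crystal
      # CONFIG_CLOCK_CONTROL_NRF_K32SRC_SYNTH=y   # synthesized from the HFCLK

      # Sleep clock accuracy reported to the BLE controller (illustrative only)
      # CONFIG_CLOCK_CONTROL_NRF_K32SRC_20PPM=y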


    I'll follow your recommendations regarding the ranging frequency and will get back to you with the results.

    Thanks for your support.
