nRF54L15 - Reflector Crash (BUS FAULT) in Channel Sounding Duration Test (SDK 3.0.2)

We are running a duration test with 10 reflectors and 10 initiators to evaluate the reliability of our system, using the nRF54L15 and a project based on the Channel Sounding sample. The original reflector sample provided by Nordic is configured to reboot after a disconnect event. For our application, however, we require the device not to reboot after a disconnect, as this increases power consumption unnecessarily. Instead, we want the reflector to return to a clean state and be ready for the next connection without a full system reset.
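To illustrate, the clean-up we have in mind on disconnect looks roughly like this. It is a host-testable sketch: struct bt_conn and bt_conn_unref() are stubbed stand-ins for the real Zephyr objects, and s_connection / s_cs_procedure_active are our own application-level names, not SDK symbols.

```c
#include <stdint.h>
#include <stddef.h>

struct bt_conn { int refs; };                /* stub for the Zephyr type */

static struct bt_conn *s_connection;         /* app-level connection handle */
static int s_cs_procedure_active;            /* app-level CS state flag */

static void bt_conn_unref(struct bt_conn *conn)  /* stub for the Zephyr call */
{
    conn->refs--;
}

/* Instead of rebooting, the disconnected callback releases the reference,
 * clears all per-connection state, and (in the real code) would restart
 * advertising so the next initiator can connect. */
static void disconnected_cb(struct bt_conn *conn, uint8_t reason)
{
    (void)reason;

    if (s_connection == conn) {
        bt_conn_unref(s_connection);
        s_connection = NULL;
    }
    s_cs_procedure_active = 0;
    /* on target, bt_le_adv_start(...) would go here */
}
```

On target the same callback would of course use the real bt_conn API; the point is only that every piece of per-connection state is dropped before the next connection arrives.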

Environment

  • nRF Connect SDK v3.0.2. We are aware that newer versions are available, but we do not want to upgrade for now because other parts of the software have already been qualified against this version.
  • Based on sample channel_sounding_ras_reflector with reboot removed

Scenario

Each “gate” (initiator) and each “person” (reflector) performs channel sounding, with reflectors advertising frequently for presence detection. In our duration testing with 10 initiators and 10 reflectors, all in range of each other, we observe that eventually a reflector crashes and reboots. Uptime to failure ranges from 1 hour to 3 days.

Log Snippet

[Mon Jan 12 09:44:49.398 2026] [00:27:35.711,901] <inf> App_Reflector: CS config creation complete. ID: 0
[Mon Jan 12 09:44:49.398 2026] [00:27:35.911,878] <inf> App_Reflector: CS security enabled.
[Mon Jan 12 09:44:49.398 2026] [00:27:36.161,967] <inf> App_Reflector: CS procedures enabled.
[Mon Jan 12 09:44:55.006 2026] [00:27:41.162,431] <wrn> bt_conn: conn 0x20003f30: not connected
[Mon Jan 12 09:44:55.006 2026] [00:27:41.162,668] <wrn> ras_rrsp: Peer is not subscribed to RAS-CP.
[Mon Jan 12 09:44:55.006 2026] [00:27:41.162,675] <wrn> ras_rrsp: Failed to send indication: -22
[Mon Jan 12 09:44:55.006 2026] [00:27:41.162,715] <inf> App_Reflector: Disconnected (reason 0x13)
                               [00:27:44.222,546] <inf> App_Reflector: Connected to F4:6C:46:3F:D2:03 (random) (err 0x00)
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,900] <err> os: ***** BUS FAULT *****
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,908] <err> os:   Precise data bus error
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,914] <err> os:   BFAR Address: 0x4
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,926] <err> os: r0/a1:  0x20003ba8  r1/a2:  0x00000000  r2/a3:  0x200011dc
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,933] <err> os: r3/a4:  0x00000000 r12/ip:  0xb3199073 r14/lr:  0x00048b83
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,939] <err> os:  xpsr:  0x810002f4
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,944] <err> os: Faulting instruction address (r15/pc): 0x000488dc
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,965] <err> os: >>> ZEPHYR FATAL ERROR 25: Unknown error on CPU 0
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,970] <err> os: Fault during interrupt handling
[Mon Jan 12 09:44:57.233 2026] 
[Mon Jan 12 09:44:57.233 2026] ASSERTION FAIL @ WEST_TOPDIR/zephyr/include/zephyr/spinlock.h:136
[Mon Jan 12 09:44:57.233 2026] [00:27:44.224,995] <err> os: ***** HARD FAULT *****
[Mon Jan 12 09:44:57.233 2026] [00:27:44.225,001] <err> os:   Fault escalation (see below)
[Mon Jan 12 09:44:57.233 2026] [00:27:44.225,007] <err> os: ARCH_EXCEPT with reason 4
[Mon Jan 12 09:44:57.251 2026] 
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,015] <err> os: r0/a1:  0x00000004  r1/a2:  0x00000088  r2/a3:  0x0000000b
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,022] <err> os: r3/a4:  0x00000004 r12/ip:  0x00000036 r14/lr:  0x00048a6d
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,028] <err> os:  xpsr:  0x21000005
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,033] <err> os: Faulting instruction address (r15/pc): 0x0004df00
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,052] <err> os: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,057] <err> os: Fault during interrupt handling
[Mon Jan 12 09:44:57.251 2026] 
[Mon Jan 12 09:44:57.251 2026] [00:27:44.225,073] <err> os: Current thread: 0x20006018 (unknown)
[Mon Jan 12 09:44:57.251 2026] [00:27:44.371,403] <err> fatal_error: Resetting system
[Mon Jan 12 09:44:57.251 2026] *** Booting MCUboot v2.1.0-dev-ae1ee57f3906 ***
[Mon Jan 12 09:44:57.251 2026] *** Using nRF Connect SDK v3.0.2-89ba1294ac9b ***
[Mon Jan 12 09:44:57.251 2026] *** Using Zephyr OS v4.0.99-f791c49f492c ***
[Mon Jan 12 09:44:57.251 2026] I: Starting bootloader

Observations

  • The crash always follows a connection event when the previous connection cycle ended prematurely.
  • The log shows warnings about the connection state and about the peer not being subscribed to RAS-CP, likely because the link had already been disconnected.
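In case it helps with triaging the next occurrence: the dump shows the current thread as (unknown), and the device resets immediately after the fault. We are considering the following Zephyr debug options to keep the dump readable (option names taken from mainline Zephyr; please verify they exist as-is in SDK 3.0.2):

```
# prj.conf additions for fault triage
CONFIG_THREAD_NAME=y            # print thread names instead of "(unknown)"
CONFIG_EXTRA_EXCEPTION_INFO=y   # extra callee-saved registers in the fault dump
CONFIG_RESET_ON_FATAL_ERROR=n   # halt instead of rebooting so the log survives
```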

Questions

  • Has anyone observed stability issues with the reflector application under high-frequency connection/disconnection cycles, specifically with SDK 3.0.2 on the nRF54L15?
  • What steps are necessary to guarantee the reflector’s state is correctly reset on disconnect, without requiring a full system reboot?

Previous post for extra context: https://devzone.nordicsemi.com/f/nordic-q-a/125500/nrf54l15---perform-channel-sounding-while-reflector-advertises

Kind regards,

Arne

  • Hi again Arne

    A colleague and I took another look at the connected and disconnected callbacks today. It seems the cleanup isn't done correctly, and the error handling could be improved.

    For example, App_Reflector_ConnectionPoll uses s_connection without first checking whether it is still connected. In connected_cb() you don't return early on errors, so we are probably missing something in the logging. It should look something like the following (pseudo code, so take it with a pinch of salt):

        static void connected_cb(struct bt_conn *conn, uint8_t err)
        {
            if (err) {
                /* Failed connection: make sure no stale handle survives. */
                bt_conn_unref(conn);
                connection = NULL;
                return;
            }

            connection = bt_conn_ref(conn);
            k_timer_start(&App_Reflector_ConnectionTimeoutTimer,
                          K_SECONDS(BLE_CONNECTION_TIMEOUT_SEC), K_NO_WAIT);
            k_sem_give(&sem_connected);
        }
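    And for the poll itself, something like the guard below. This is again a host-testable mock: the 'connected' flag stands in for what would be bt_conn_get_info() reporting BT_CONN_STATE_CONNECTED on target, and all names here are ours, not from your code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Mock of the guard App_Reflector_ConnectionPoll should run before
 * touching the connection. */
struct conn_mock {
    bool connected;   /* stands in for bt_conn_get_info() state on target */
};

static struct conn_mock *s_connection_mock;

static bool connection_is_usable(void)
{
    return (s_connection_mock != NULL) && s_connection_mock->connected;
}

/* Poll body: bail out instead of dereferencing a NULL or dead handle. */
static bool connection_poll(void)
{
    if (!connection_is_usable()) {
        return false;   /* nothing to do; avoids the use-after-disconnect */
    }
    /* ... actual RAS-CP / CS work on the live connection goes here ... */
    return true;
}
```

    The important part is that the disconnected callback clears the handle, so the poll sees NULL instead of a stale pointer.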

    Best regards,

    Simon
