Upgrading through DFU Target results in MPSL Assertion (112, 2142)

Using NCS 2.1.2 on the nRF52840, with the SoftDevice Controller, Bluetooth Mesh, and Bluetooth GATT coexisting in one application.

I have been having an issue attempting to perform DFU using the dfu_target interface on our nRF52840: an MPSL assertion is raised during the process. It is raised more consistently the larger I make the buffer passed to dfu_target_mcuboot_set_buf(). I've enabled the maximum (debug) logging from all the MPSL sources in Kconfig, but I have not seen any additional output. CONFIG_SOC_FLASH_NRF_RADIO_SYNC_MPSL is enabled, so my understanding is that this should handle any flash/radio synchronization required. A log is shown below, and a sketch of our write path is at the end of this post.

[00:01:26.549,102] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0007e000
[00:01:27.718,017] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0007f000
[00:01:28.861,694] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00080000
[00:01:29.995,269] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00081000
[00:01:31.164,489] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00082000
[00:01:32.339,294] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00083000
[00:01:33.468,994] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00084000
[00:01:34.672,882] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00085000
[00:01:35.801,696] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00086000
[00:01:36.970,489] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00087000
[00:01:38.137,115] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00088000
[00:01:39.272,125] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00089000
[00:01:40.406,616] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0008a000
[00:01:41.575,500] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0008b000
[00:01:42.738,739] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0008c000
[00:01:43.869,415] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0008d000
[00:01:45.040,130] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0008e000
[00:01:46.189,117] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0008f000
[00:01:47.317,932] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00090000
[00:01:48.473,114] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00091000
[00:01:49.641,021] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00092000
[00:01:50.785,522] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00093000
[00:01:51.945,953] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00094000
[00:01:53.111,816] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00095000
[00:01:54.249,938] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00096000
[00:01:55.416,595] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00097000
[00:01:56.581,695] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00098000
[00:01:57.776,275] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x00099000
[00:01:58.952,484] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0009a000
[00:02:00.088,195] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0009b000
[00:02:01.219,482] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0009c000
[00:02:02.381,866] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0009d000
[00:02:03.550,964] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0009e000
[00:02:04.718,780] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x0009f000
[00:02:05.891,052] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a0000
[00:02:07.029,693] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a1000
[00:02:08.200,744] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a2000
[00:02:09.368,774] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a3000
[00:02:10.502,593] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a4000
[00:02:11.673,065] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a5000
[00:02:12.861,755] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a6000
[00:02:13.996,246] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a7000
[00:02:15.163,269] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a8000
[00:02:16.329,406] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000a9000
[00:02:17.466,461] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000aa000
[00:02:18.637,481] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ab000
[00:02:19.793,518] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ac000
[00:02:20.917,968] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ad000
[00:02:22.090,728] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ae000
[00:02:23.252,349] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000af000
[00:02:24.394,104] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b0000
[00:02:25.563,507] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b1000
[00:02:26.725,860] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b2000
[00:02:27.947,998] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b3000
[00:02:29.115,051] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b4000
[00:02:30.250,396] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b5000
[00:02:31.415,008] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b6000
[00:02:32.586,853] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b7000
[00:02:33.717,224] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b8000
[00:02:34.886,779] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000b9000
[00:02:36.052,154] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ba000
[00:02:37.188,446] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000bb000
[00:02:38.356,781] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000bc000
[00:02:39.524,169] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000bd000
[00:02:40.659,301] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000be000
[00:02:41.822,753] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000bf000
[00:02:43.003,967] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c0000
[00:02:44.174,316] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c1000
[00:02:45.343,017] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c2000
[00:02:46.472,106] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c3000
[00:02:47.609,558] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c4000
[00:02:48.774,597] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c5000
[00:02:49.943,969] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c6000
[00:02:51.076,110] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c7000
[00:02:52.244,079] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c8000
[00:02:53.388,488] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000c9000
[00:02:54.521,545] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ca000
[00:02:55.689,453] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000cb000
[00:02:56.856,262] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000cc000
[00:02:57.995,483] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000cd000
[00:02:59.164,947] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000ce000
[00:03:00.331,756] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000cf000
[00:03:01.470,123] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d0000
[00:03:02.640,655] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d1000
[00:03:03.795,196] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d2000
[00:03:04.929,748] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d3000
[00:03:06.175,964] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d4000
[00:03:07.311,157] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d5000
[00:03:08.481,323] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d6000
[00:03:09.646,759] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d7000
[00:03:10.788,269] <dbg> STREAM_FLASH: stream_flash_erase_page: Erasing page at offset 0x000d8000
[00:03:10.908,721] <err> mpsl_init: MPSL ASSERT: 112, 2142
ASSERTION FAIL [z_spin_lock_valid(l)] @ WEST_TOPDIR/zephyr/include/zephyr/spinlock.h:142
        Recursive spinlock 0x2000a7f0
[00:03:10.908,813] <err> os: ***** HARD FAULT *****
[00:03:10.908,813] <err> os:   Fault escalation (see below)
[00:03:10.908,843] <err> os: ARCH_EXCEPT with reason 4

[00:03:10.908,843] <err> os: r0/a1:  0x00000004  r1/a2:  0x0000008e  r2/a3:  0x00000003
[00:03:10.908,874] <err> os: r3/a4:  0x0000008e r12/ip:  0x00000000 r14/lr:  0x0005d773
[00:03:10.908,874] <err> os:  xpsr:  0x61000018
[00:03:10.908,905] <err> os: Faulting instruction address (r15/pc): 0x000637dc
[00:03:10.908,935] <err> os: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
[00:03:10.908,935] <err> os: Fault during interrupt handling

[00:03:10.908,966] <err> os: Current thread: 0x20003c88 (MPSL Work)
[00:03:11.283,386] <err> fatal_error: Resetting system

Since the MPSL is completely opaque, I really have no idea what's going on here. My best guess is that some time-critical behavior is being violated, but I am not sure what, and I'm also not sure what to do about it, since the flash erase and write times are essentially out of my control.
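
For reference, here is roughly how we drive the dfu_target API (a trimmed-down sketch of our code, not a verbatim excerpt; the buffer size, function names, and image number are placeholders, and error handling is reduced):

#include <zephyr/kernel.h>
#include <dfu/dfu_target.h>
#include <dfu/dfu_target_mcuboot.h>

/* The assert fires more consistently as this buffer grows. */
static uint8_t mcuboot_buf[1024];

static void dfu_evt_cb(enum dfu_target_evt_id evt)
{
    ARG_UNUSED(evt);
}

int dfu_begin(size_t file_size)
{
    int err = dfu_target_mcuboot_set_buf(mcuboot_buf, sizeof(mcuboot_buf));

    if (err) {
        return err;
    }

    return dfu_target_init(DFU_TARGET_IMAGE_TYPE_MCUBOOT, 0, file_size,
                           dfu_evt_cb);
}

/* Called once per received packet of the update image. */
int dfu_feed(const uint8_t *data, size_t len)
{
    return dfu_target_write(data, len); /* stream_flash erase/write happen here */
}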

  • Hieu,

    Of course. We started with the Mesh and Peripheral Coexistence sample. We replaced the light button service with our own service (fairly similar to the NUS), replaced the mesh model with our own model, added a serial connection to the host processor in our system, and enabled CONFIG_DFU_TARGET_MCUBOOT=y. We then use serial commands to write packets of the update image to flash. These are handled by an application thread that reads from a ring buffer populated by the async serial API, roughly as sketched below.
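
    A trimmed sketch of that thread (names, sizes, and the priority are placeholders; error handling is reduced):

    #include <zephyr/kernel.h>
    #include <zephyr/sys/ring_buffer.h>
    #include <dfu/dfu_target.h>

    RING_BUF_DECLARE(rx_rb, 2048);
    K_SEM_DEFINE(rx_sem, 0, 1);

    static void dfu_rx_thread(void *p1, void *p2, void *p3)
    {
        uint8_t chunk[256];

        for (;;) {
            k_sem_take(&rx_sem, K_FOREVER); /* given by the UART callback */

            uint32_t n;

            while ((n = ring_buf_get(&rx_rb, chunk, sizeof(chunk))) > 0) {
                /* each call may trigger a stream_flash erase/write */
                if (dfu_target_write(chunk, n) != 0) {
                    dfu_target_done(false); /* abort the transfer */
                    break;
                }
            }
        }
    }

    K_THREAD_DEFINE(dfu_rx, 2048, dfu_rx_thread, NULL, NULL, NULL, 7, 0, 0);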

    I want to add that we also seem to be having issues with delayable work/timeouts in the async serial UARTE driver and the Bluetooth Mesh extended advertiser. Essentially, it looks like a timeout can take an entire RTC counter wrap-around to fire. I'm going to try using the SysTick as the system timer source to see whether the behavior lies within the RTC timer driver. It may be related, but I don't see the MPSL assertion when we are not writing to flash.

  • Peter,

    I plan to replicate your issue here and try some debugging. It seems I will make do with the Mesh and Peripheral Coexistence sample as-is, then.

    Could you please give some more details on the "added a serial connection" part? Which peripheral (1/2/3) did you use, and what Kconfig and DTS config did you set it up with?

    If replicating your issue turns out to be too complicated, please let me know if you would like to send us your project instead.

  • Sorry for such an avalanche of data. I'll try to break down the situation more clearly below.

    SysTick as Kernel Timer

    This uses the following Kconfig fragment to configure the SysTick as the system timer at 10 kHz:

    CONFIG_NRF_RTC_TIMER=n
    CONFIG_CORTEX_M_SYSTICK=y
    CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000
    CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC=64000000

    Bug Description on Proprietary Hardware

    On power-on startup with our hardware, I observed that kernel ticks/time advance much more slowly than expected (~7-8x) when using the SysTick as the timer source. This was measured using a heartbeat transmitted by the nRF52840; the heartbeat includes a 64-bit timestamp populated by calling k_uptime_get(), and the timestamp also reflected the slowed clock. This behavior stopped once the RTT console was connected. It did not change when I disabled the RTT driver entirely with CONFIG_USE_SEGGER_RTT=n. As long as RTT is connected, I have encountered no issues using the SysTick as the kernel timer source.
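
    For clarity, the heartbeat timestamping is nothing more than this (hb_send() stands in for our serial transport):

    #include <zephyr/kernel.h>

    extern void hb_send(const void *data, size_t len); /* our transport */

    struct heartbeat {
        uint64_t uptime_ms; /* 64-bit kernel uptime at send time */
    };

    static void send_heartbeat(void)
    {
        struct heartbeat hb = { .uptime_ms = k_uptime_get() };

        hb_send(&hb, sizeof(hb));
    }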

    Using u-blox BMD-345-Eval

    When we mocked our host-side serial connection using the u-blox BMD-345-Eval, we did not see this behavior. Timing worked correctly out of the box on the u-blox board, and we did not have to connect RTT.

    I have since been able to reproduce this issue there as well, and it seems to be even more pronounced on the u-blox evaluation kit that I'm testing on: the intervals I'm seeing between heartbeats over serial are about 250 seconds now. I'm going to leave the kit running overnight.

    Hypothesis

    I initially thought we may have connected one of the JTAG pins incorrectly, leaving it floating without a required pulldown and causing a flood of debug interrupts that interfere with the SysTick.

    Honestly, I have no idea at this point. It still seems like it may be a debug-interrupt issue, since with the right priorities set very few IRQs should be able to interfere with the SysTick IRQ.

    nRF RTC as Kernel Timer

    This uses the default configuration of NCS 2.2.0, where the following Kconfig options are set by the defaults for the nRF SoC:

    CONFIG_NRF_RTC_TIMER=y
    CONFIG_SYS_CLOCK_TICKS_PER_SEC=32768
    CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC=32768

    Bug Description on Proprietary Hardware

    Longer Kernel Timeouts Affected

    Using the same serial heartbeat, I observe long gaps between transmissions that begin at some point after startup; at startup, the interval is as expected. On our hardware, the device never seems to recover for very long after a failure. See the Saleae logic analyzer trace from my previous message. This extended interval seems to hover around 630-640 seconds, and I'm not sure why it differs from what I've observed with shorter timeout intervals.

    Shorter Kernel Timeouts Affected

    The first place I may have seen this issue is the bus idle timeouts in the UARTE async serial driver. Occasionally, the bus idle timeout would stop working, so the UART_RX_RDY event was never issued to the serial callback, and the radio stopped responding to messages on the bus. After several minutes (anecdotally about 8.5 minutes, suspiciously close to the 512 seconds observed below), the device would begin responding again, only to fail a short time later. I worked around this early on by decreasing the RX buffer size substantially and having the host side send a keep-alive to fill the buffer regularly; this works because UART_RX_RDY is issued when a buffer fills as well as when the bus idle timeout expires (see the callback sketch below). This issue went away when I switched to the SysTick during testing.
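
    A simplified version of our receive callback, to show why that workaround helps (reusing rx_rb/rx_sem from my earlier sketch):

    #include <zephyr/kernel.h>
    #include <zephyr/drivers/uart.h>
    #include <zephyr/sys/ring_buffer.h>

    extern struct ring_buf rx_rb; /* defined in the earlier sketch */
    extern struct k_sem rx_sem;

    void uart_cb(const struct device *dev, struct uart_event *evt,
                 void *user_data)
    {
        switch (evt->type) {
        case UART_RX_RDY:
            /* Fires when the buffer fills OR when the idle timeout
             * expires. With a small buffer plus host keep-alives, the
             * buffer-full path keeps firing even while the timeout
             * path is stuck. */
            ring_buf_put(&rx_rb, evt->data.rx.buf + evt->data.rx.offset,
                         evt->data.rx.len);
            k_sem_give(&rx_sem);
            break;
        default:
            break;
        }
    }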

    This also seems to intermittently affect the Bluetooth Mesh extended advertiser's delayable work. In the debug log captured and attached above, the work item is rescheduled ~40 ms into the future for the next transmission interval, but it isn't actually serviced until just under 512 seconds later, exactly one full count of the 24-bit counter used by the nRF RTC timer (2^24 ticks / 32768 Hz = 512 s). It then continues to work correctly. This behavior also went away when I switched to the SysTick.

    The final shorter timeout that seems to be affected is related to the DFU writes. I wasn't really able to discern from the code how a kernel/system clock issue could cause the MPSL assertion that started this ticket, but this bug also went away when I switched to the SysTick.

    Using u-blox BMD-345-Eval

    I was able to reproduce the heartbeat problem using the u-blox evaluation kit connected to our mocked host over serial. This is reflected in the first JSON snippet I posted, where there are two intervals which are excessively long and in that 630-640 second range.

    I have not reproduced the Bluetooth Mesh extended advertiser, serial idle timeout, or DFU/MPSL assertion bugs on this evaluation hardware.

    Hypothesis

    I suspect there are two potential issues with the current nRF RTC timer driver. My understanding of this driver is limited, so I may have only a piece of the answer for each.

    CC Setting Issues

    set_absolute_alarm() is clearly designed with cases in mind where the CC value ends up erroneously set either in the past or insufficiently far in the future. Due to the high tick rate and the potential for other IRQ sources to preempt the timer, it is possible for the CC value to land in this condition. While this function checks that things are set correctly, the value is then passed back up to compare_set_nolocks() and used as an argument to counter_sub(). If another IRQ is handled at that moment, I think it's possible for target_time to already be in the past when it is written to cc_data[chan].target_time. I have not yet captured this case in action. I don't know what IRQ priorities the MPSL and SD set; as far as I can tell, the RTC IRQ priority is set to 1, which is the default provided in nrf_common.dtsi, and it certainly has a lot of competition from other driver IRQs at that priority.

    High Tick Rate Issues

    The other impression I got from the comments in the latest nrf_rtc_timer.c on Zephyr's main branch is that set_absolute_alarm() can take a large amount of time, to the point of needing several ticks to leave its while loop successfully (the comment indicates up to 700 us). While I appreciate the desire for a high tick rate to provide increased resolution, if in practice the driver regularly has to compensate for missing the next tick by tolerating several missed CC settings, the increased resolution is not actually delivered.

    A specific case I am concerned about is what happens over time. Multiple timers that start on the same tick but end up scheduled on a later tick will ultimately be serviced slightly apart from one another. Depending on how the next interval is calculated, either rounding/truncation from the mismatch between a milliseconds-based timeout and the 32768 Hz clock, or a lazy implementation like adding a k_sleep() at the end of a thread's main loop (illustrated below), can cause timeouts that should have landed on the same tick to be spread across multiple adjacent ticks. In these cases, combined with the apparent potential for long execution times when setting the next alarm, there may be a bug where tasks are not serviced correctly. I have not looked into how the kernel and timer code handle a situation like this, but it's definitely the next place I would look.
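
    To illustrate the lazy pattern I mean (do_work() is a placeholder):

    #include <zephyr/kernel.h>

    static void do_work(void) { /* variable-duration work */ }

    /* Relative sleep: the real period is 100 ms plus do_work()'s runtime,
     * so timers that started on the same tick slowly spread apart. */
    static void worker_relative(void)
    {
        for (;;) {
            do_work();
            k_sleep(K_MSEC(100));
        }
    }

    /* Absolute deadline: the period stays exact regardless of runtime. */
    static void worker_absolute(void)
    {
        int64_t next = k_uptime_ticks();

        for (;;) {
            do_work();
            next += k_ms_to_ticks_ceil64(100);
            k_sleep(K_TIMEOUT_ABS_TICKS(next));
        }
    }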

    A Note on the time_behavior Test Suite

    After re-reading my previous message, I think I took an accusatory tone regarding the change Nordic has made to the test suite's behavior in light of the current nRF RTC timer defaults. I certainly don't agree with the change, but I don't need to be rude about it or about anyone's involvement in it. I'm just frustrated with the situation I'm in.

    Fundamentally, my understanding of the timer_behavior test suite's timer tick train is that it ensures there is sufficient time between adjacent ticks for the rest of the application to run without issue. The default setting of CONFIG_SYS_CLOCK_TICKS_PER_SEC=32768 for the nRF SoCs leaves ~1953 core clock cycles between ticks (assuming a 64 MHz core clock). That's enough for a well-written IRQ handler, but if the handler ultimately calls a potentially expensive function like sys_clock_announce(), it starts to feel like a tight budget to me.

    Now you might argue that the timer tick train is unrealistic because it expects a significant amount of work to be accomplished between every tick, every time, while most of the time we can only expect timeouts to fall on adjacent ticks occasionally. However, the test suite fairly well represents a worst-case scenario in which timeouts fall across several adjacent ticks and are preempted by a higher-priority IRQ from a binary blob like the MPSL or SD, producing a very high peak workload across multiple adjacent ticks. If the driver and kernel are written with this sort of peak workload in mind and correctly handle all of the timeouts, this isn't a problem. However, given the way the next CC value is calculated and set, I am worried that the combination of the kernel timeout code and the nRF RTC timer driver does not address this correctly. This test isn't perfect for identifying that sort of thing, but it's a start.

    Additionally, the change to the test suite lowers the bar substantially across all platforms. As Zephyr works toward things like safety certification and low-power devices, tests like the timer tick train, which specifically exercise the inter-tick time available to the system, should accurately reflect that the availability is sufficient. It bothers me that, as of this commit, the test would pass on one of our CAN prototypes even if more than 50% of CPU time were being consumed by the system clock; that would be absolutely unacceptable on that platform. The test clearly needs improvement, but I don't agree that this is the right way to go. Perhaps this is an issue to raise on the Zephyr repo rather than here.

    Workaround

    I am currently working around this issue by slowing the system clock tick rate, in my case by setting CONFIG_SYS_CLOCK_TICKS_PER_SEC=2048 (fragment below). I have not seen any of the nRF RTC timer bugs listed above with this setting on either our proprietary hardware or the u-blox evaluation kit with a mocked host processor. This is reflected in the second JSON snippet I posted of the heartbeat intervals on the evaluation kit left running overnight. I don't have a screen capture, but the Saleae trace I took on our hardware also showed no issues. This is the path I'm taking right now to keep development of our MVP features going, but I am still worried there are edge cases not being addressed by the driver (or hardware issues that need to be mitigated).
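
    For completeness, the whole workaround fragment is just the following (CONFIG_NRF_RTC_TIMER=y is already the SoC default, and CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC stays at 32768):

    CONFIG_NRF_RTC_TIMER=y
    CONFIG_SYS_CLOCK_TICKS_PER_SEC=2048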

  • I had a realization that might help illuminate why my application is struggling so much with whatever this bug is.

    The UART_RX_RDY event callback is managed by a k_timeout in combination with a hardware timer. I set this timeout equal to 4 bytes of idle time. At 115200 baud this comes out to 312 us, but at the 1 Mbaud rate we are using it ends up being 28 us, which happens to be almost exactly one tick at 32768 Hz (30.5 us). Since that means the timeout should be handled on the next tick, the code in sys_clock_announce() meant to check that condition will always run for a timeout of this length. Maybe I'm accidentally exercising the driver the same way the timer tick train does?
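
    In code, the receive setup looks roughly like this (uart0, the buffer size, and serial_start() are placeholders; the timeout argument of uart_rx_enable() is in microseconds on this Zephyr version):

    #include <zephyr/drivers/uart.h>

    #define RX_IDLE_TIMEOUT_US 28 /* 4 byte-times at 1 Mbaud, ~1 tick at 32768 Hz */

    static const struct device *uart_dev = DEVICE_DT_GET(DT_NODELABEL(uart0));
    static uint8_t rx_buf[64];

    /* uart_cb as in my earlier callback sketch */
    void uart_cb(const struct device *dev, struct uart_event *evt,
                 void *user_data);

    int serial_start(void)
    {
        int err = uart_callback_set(uart_dev, uart_cb, NULL);

        if (err) {
            return err;
        }

        return uart_rx_enable(uart_dev, rx_buf, sizeof(rx_buf),
                              RX_IDLE_TIMEOUT_US);
    }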

  • I did some further testing and was able to reproduce the SysTick issue on the u-blox evaluation kit. If you'd like to recreate the behavior I'm seeing, consider building the following:

    west build --pristine -b nrf52840dk_nrf52840 tests/kernel/timer/timer_behavior -- -DCONFIG_NRF_RTC_TIMER=n -DCONFIG_CORTEX_M_SYSTICK=y -DCONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC=64000000 -DCONFIG_SYS_CLOCK_TICKS_PER_SEC=10000

    For this to build successfully, you will need to either apply a devicetree overlay containing the following or modify dts/arm/nordic/nrf_common.dtsi to enable the systick node.

    &systick {
        status = "okay";
    };

    If you view the serial output, you will notice that the tests take an extremely long time to run. However, if you connect to RTT using the J-Link RTT Viewer, the test runs in real time (and, for whatever reason, it then also fails). To reproduce this reliably, perform the following procedure:

    1. Build and flash the test.
    2. Power cycle the board.
    3. Connect your favorite serial viewer to VCOM0 to monitor test output.
    4. Whenever you're satisfied that the test is taking far too long to complete, connect to the RTT. I've used the nRF Terminal plugin for VS Code and the J-Link RTT Viewer.

    I have not tried other samples, but I imagine the experience is similar. It may be possible to cobble together a timer-based hello_world to use for examining this issue.

  • Hi Peter,

    Thank you very much for the details. My plan was to understand the issue well enough to work with R&D to get some support. Thanks to your comprehensive explanation, I believe I have reached that point now.

    However, I have also concluded that I will not be able to continue efficiently with you and R&D due to my lack of knowledge in this area. Thus, instead of going to R&D now, I will find another support engineer with the right expertise to take over the case.

    My sincere apologies that it has taken two whole weeks with only this much progress.

    Hieu

  • Hi Peter,

    There is an ongoing PR for the nrf_rtc_timer that fixes a bug whose description is very similar to yours. Would you like to take a look at it and see whether it helps with the problems you described above?

    Here is the link to it: github.com/.../54780

    Hieu
