Application crash at higher optimization levels

Hi

The application I'm currently working on has three debug settings:

  1. DEBUG preprocessor symbol
  2. Optimization level 0
  3. NRF_LOG at debug level (using RTT)

...and works fine.
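For reference, these three settings correspond roughly to the following build flags and sdk_config.h entries in the nRF5 SDK (the exact macro names can differ between SDK versions):

    /* Compiler flags (Makefile / project settings): -DDEBUG -O0 */

    /* sdk_config.h, nRF_Log module */
    #define NRF_LOG_ENABLED             1
    #define NRF_LOG_DEFAULT_LEVEL       4   /* 4 = Debug, 3 = Info */
    #define NRF_LOG_BACKEND_RTT_ENABLED 1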

But when I increase the optimization level (to any level from 1 to 3) AND change NRF_LOG to info, the application crashes with 'ERROR 3 [NRF_ERROR_INTERNAL]' in the APP_ERROR_CHECK() call right after 'ble_conn_params_init()'.

This error happens at every NRF_LOG level except debug, and disappears when I disable NRF_LOG entirely.
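For context, the call in question follows the usual SDK template, roughly like this (handler names and delay constants are the ones from the SDK examples, not necessarily my exact code):

    static void conn_params_init(void)
    {
        ret_code_t             err_code;
        ble_conn_params_init_t cp_init;

        memset(&cp_init, 0, sizeof(cp_init));

        cp_init.p_conn_params                  = NULL;
        cp_init.first_conn_params_update_delay = FIRST_CONN_PARAMS_UPDATE_DELAY;
        cp_init.next_conn_params_update_delay  = NEXT_CONN_PARAMS_UPDATE_DELAY;
        cp_init.max_conn_params_update_count   = MAX_CONN_PARAMS_UPDATE_COUNT;
        cp_init.start_on_notify_cccd_handle    = BLE_GATT_HANDLE_INVALID;
        cp_init.disconnect_on_fail             = false;
        cp_init.evt_handler                    = on_conn_params_evt;
        cp_init.error_handler                  = conn_params_error_handler;

        err_code = ble_conn_params_init(&cp_init);
        APP_ERROR_CHECK(err_code);   /* 'ERROR 3 [NRF_ERROR_INTERNAL]' is reported here */
    }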

Any idea what to look at?

Thanks,

Sebastien

  • Hi,

    Does it also happen on those other log levels when the optimization level is kept at 0?

    We have not experienced any such behavior with the SDK or any examples within it, so it is most likely something that you do in your particular application.

    Errors that arise when you enable optimization usually mean that some crucial code has been optimized away because it looked unnecessary to the compiler.

    A common mistake that can give behavior like this is a loop that waits for a variable which is changed in an interrupt routine. In such cases you should declare the variable volatile, to tell the compiler that it may change at any time. Otherwise the check may be optimized away entirely, or the variable may be kept in a register and not written to or read from memory each time it is used. Then, when the interrupt routine fires and changes the variable, the change is never detected by the waiting code. In your case it may of course be something completely different, but that is a good first thing to check.
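
    As an illustration, here is a generic sketch of that pattern (not code from your application; the flag and handler names are made up):

        #include <stdbool.h>

        /* 'volatile' tells the compiler the flag can change outside normal
         * program flow. Without it, -O1 and above may cache the flag in a
         * register or remove the wait loop entirely. */
        static volatile bool m_evt_received = false;

        /* Hypothetical interrupt handler that signals the main context. */
        void SOME_PERIPHERAL_IRQHandler(void)
        {
            /* ...clear the interrupt source... */
            m_evt_received = true;
        }

        void wait_for_event(void)
        {
            while (!m_evt_received)
            {
                /* Busy-wait; a real application would typically sleep here,
                 * e.g. with __WFE() or sd_app_evt_wait(). */
            }
            m_evt_received = false;
        }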

    If the above tips do not lead to any discoveries, then I am afraid you will have to share the code for us to give any pointers on what might be wrong.

    Regards,
    Terje
