Note: This post relates to an SDK in maintenance mode; consider the nRF Connect SDK for new designs.
Note: This post is older than two years and might no longer be relevant; consider searching for newer posts.

Logging "Backends flushed" while dropping logs is counterproductive.

This issue exists in SDK 15.2 and earlier.

In nrf_log_frontend_dequeue(), when a memobj cannot be allocated, the backend flush API is used to free memory. However, this freed memory is immediately consumed by a WARNING log ("Backends flushed"), which seems counterproductive. It would be better to set a flag and report the flush at a later time, such as when the buffer is empty.

This is the problematic code:

        //Could not allocate memobj - backends are not freeing them on time.
        nrf_log_backend_t const * p_backend = m_log_data.p_backend_head;
        //Flush all backends
        while (p_backend)
        {
            nrf_log_backend_flush(p_backend);
            p_backend = p_backend->p_cb->p_next;
        }
        //logging while dropping logs is counter-productive.
        NRF_LOG_WARNING("Backends flushed");
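The suggested alternative above can be sketched as follows. This is a minimal illustration, not SDK code: the function names are hypothetical, C11 `stdatomic` stands in for the SDK's `nrf_atomic` helpers, and `printf` stands in for `NRF_LOG_WARNING`.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical flag set where the SDK currently logs "Backends flushed". */
static atomic_bool m_flush_pending;

/* Would be called from nrf_log_frontend_dequeue() in place of the
 * immediate NRF_LOG_WARNING(), consuming no log buffer memory. */
static void note_backends_flushed(void)
{
    atomic_store(&m_flush_pending, true);
}

/* Would be called once the log buffer has drained, when a new log
 * entry can no longer displace a message we just freed space for. */
static void report_pending_flush(void)
{
    if (atomic_exchange(&m_flush_pending, false))
    {
        printf("WARNING: Backends flushed earlier\n"); /* stands in for NRF_LOG_WARNING */
    }
}
```

The key point is that the warning is deferred until there is room for it, rather than being written into the space the flush just reclaimed.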

Replies
  • Hi,

    Thank you for your feedback. Personally, I think the current SDK implementation makes a lot of sense, as it can be very useful to see clearly where in the history logs were flushed. A flag does not do quite the same, as it will not show you where logs are missing. In any case, since the warning is written after the logs are flushed, there should still be plenty of room for this short string (if not, you could either shorten the string further or use a larger buffer if you have memory available).

  • Backend flush drops a single log message. This is not the same as NRF_LOG_FLUSH(). Dropping a single log message and then generating a new log message is counterproductive.

    For my code base I added an atomic counter for backend_flush() calls and report the number of flushes when the log buffer reaches empty.
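    The counter approach described above could look like this. A minimal sketch, assuming hypothetical hook names; C11 `stdatomic` stands in for the SDK's `nrf_atomic` API, and `printf` stands in for `NRF_LOG_WARNING`.

    ```c
    #include <stdatomic.h>
    #include <stdio.h>

    /* Hypothetical counter incremented on every backend flush. */
    static atomic_uint m_flush_count;

    /* Would be called wherever nrf_log_backend_flush() drops a message. */
    static void on_backend_flush(void)
    {
        atomic_fetch_add(&m_flush_count, 1);
    }

    /* Would be called when the log buffer reaches empty: emit one summary
     * message covering all drops since the last report, then reset. */
    static void on_log_buffer_empty(void)
    {
        unsigned int n = atomic_exchange(&m_flush_count, 0);
        if (n > 0)
        {
            printf("WARNING: %u log message(s) dropped by backend flush\n", n);
        }
    }
    ```

    Unlike the single flag, the counter also tells you how many messages were lost, at the cost of no longer showing where in the history each drop occurred.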
