nRF9151 + NCS 2.9.0: Flash Overflow When Enabling Secondary Partition for OTA (Need Guidance to Reduce Flash Size)

Hi,

I’m working on nRF9151 with nRF Connect SDK 2.9.0, and I’m facing a FLASH overflow issue when enabling dual-image OTA (primary + secondary partitions).
Our project is quite large because we interface multiple peripherals and also use AWS IoT MQTT over cellular.

Hardware & Peripherals Used

  • MAX30001 ECG – SPI1

  • MT29F4G01ABAFDWB NAND Flash – SPI3

  • RTC, LIS2DW12 accelerometer, NPM1300 PMIC – I²C2

  • SiWG917Y Wi-Fi module – UART

  • Cellular modem – LTE-M/NB-IoT

  • AWS IoT (TLS + MQTT)

Flash Usage Summary (from build)

  • Total flash used: ~410 KB

  • Application code: ~120 KB

  • mbedTLS: ~100 KB (very large)

  • Remaining: Zephyr kernel, drivers, libraries, etc.

The Problem

We want to enable two partitions for OTA:

  1. Primary application

  2. Secondary application

But as soon as we enable secondary slot, we get flash overflow at build time.

  • If we disable 2–3 peripherals → flash becomes smaller → build succeeds

  • If we remove secondary partition → build succeeds

  • With all peripherals + secondary slot → build fails due to flash overflow

Extra Notes

  • Using board target: nrf9151 / nrf9151ns

  • Debug option is enabled (increasing size)

  • We are using external NAND flash but not as secondary slot directly (only file system + OTA download buffer)

  • Only primary partition is currently in partitions.yml because secondary can't fit

Our Goal

We need to reduce flash usage by ~30–35 KB so that:

  • Our full application remains intact

  • Secondary MCUboot partition fits in flash

  • Dual-image OTA works properly

What We Need Help With

  1. Recommended methods to reduce flash usage in NCS 2.9

    • Any Kconfig options to shrink Zephyr + mbedTLS

    • Reducing modem/AT libraries

    • Disabling unused kernel subsystems

    • Reducing logging overhead

    • Reducing MQTT/TLS config size?

  2. MCUboot / partitioning suggestions

    • Can we change padding, swap mode, or signature scheme to save flash?

    • Is it possible to use external NAND flash as a secondary slot for single-image OTA?
      (We have a working NAND driver)


      Attachments (for reference)

      I’m attaching:

      • prj.conf

        # log
        CONFIG_LOG=y
        CONFIG_LOG_MODE_DEFERRED=y # for non-bloking mode
        CONFIG_LOG_BUFFER_SIZE=2048
        CONFIG_LOG_PROCESS_THREAD_SLEEP_MS=10
        CONFIG_LOG_PROCESS_THREAD_STACK_SIZE=2048
        CONFIG_CBPRINTF_FP_SUPPORT=y
        
        # LOG ON RTT NOT UART0
        CONFIG_CONSOLE=n #for wifi uart keep this n
        CONFIG_USE_SEGGER_RTT=y
        CONFIG_RTT_CONSOLE=y
        CONFIG_LOG_BACKEND_RTT=y
        CONFIG_LOG_BACKEND_UART=n
        CONFIG_UART_CONSOLE=n
        
        # stack_size
        CONFIG_MAIN_STACK_SIZE=16384
        # CONFIG_MAIN_STACK_SIZE=6144
        # by default 2048
        CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE=6144
        CONFIG_HEAP_MEM_POOL_SIZE=5120
        
        # clock
        CONFIG_CLOCK_CONTROL_NRF_K32SRC_XTAL=y
        CONFIG_CLOCK_CONTROL_NRF_K32SRC_RC=n
        
        # configuration of i2c and spi
        CONFIG_I2C=y
        CONFIG_SPI=y
        CONFIG_SENSOR=y
        # tfm disable for use uart1 as spi1
        CONFIG_TFM_SECURE_UART=n
        CONFIG_TFM_LOG_LEVEL_SILENCE=y
        
        # configuration for LIS2DW12
        CONFIG_LIS2DW12=y
        CONFIG_LIS2DW12_TAP=y
        CONFIG_LIS2DW12_TRIGGER_OWN_THREAD=y
        
        
        # configuration for hw id
        CONFIG_HW_ID_LIBRARY=y
        CONFIG_HW_ID_LIBRARY_SOURCE_DEVICE_ID=y
        
        # configuration for fuel gauge npm1300
        CONFIG_REGULATOR=y
        CONFIG_NRF_FUEL_GAUGE=y
        CONFIG_LED=y
        
        # Date and time configuration
        CONFIG_DATE_TIME=y
        CONFIG_DATE_TIME_UPDATE_INTERVAL_SECONDS=0
        CONFIG_DATE_TIME_TOO_OLD_SECONDS=0
        CONFIG_DATE_TIME_NTP=y
        CONFIG_NETWORKING=y
        CONFIG_NET_SOCKETS=y
        
        # for nand flash
        CONFIG_DISK_ACCESS=y
        
        # systemoff
        CONFIG_PM_DEVICE=y
        CONFIG_POWEROFF=y
        CONFIG_DISABLE_FLASH_PATCH=y
        
        # json
        CONFIG_CJSON_LIB=y
        
        
        # siw917y uart
        CONFIG_SERIAL=y
        CONFIG_UART_ASYNC_API=y
        CONFIG_UART_0_INTERRUPT_DRIVEN=y
        CONFIG_UART_0_ASYNC=y
        CONFIG_NRFX_UARTE0=y
        
        
        # nvs
        CONFIG_NVS=y
        CONFIG_FLASH=y
        CONFIG_FLASH_MAP=y
        CONFIG_FLASH_PAGE_LAYOUT=y
        
        ##########################################comment for dfu disable
        # Enable mcumgr DFU in application
        # # Enable MCUMGR 
        # CONFIG_MCUMGR=y
        
        # # Enable MCUMGR management for both OS and Images
        # CONFIG_MCUMGR_GRP_OS=y
        # CONFIG_MCUMGR_GRP_IMG=y
        
        # # Configure MCUMGR transport to UART
        # CONFIG_MCUMGR_TRANSPORT_UART=y
        # # Configure dependencies for CONFIG_MCUMGR_GRP_IMG  
        # CONFIG_IMG_MANAGER=y
        
        # # Configure dependencies for CONFIG_IMG_MANAGER  
        # CONFIG_STREAM_FLASH=y
        
        # CONFIG_DFU_TARGET=y
        # CONFIG_DFU_TARGET_MCUBOOT=y
        
        ################################################
        
        
        # Dependencies
        # Configure dependencies for CONFIG_MCUMGR  
        CONFIG_NET_BUF=y
        CONFIG_ZCBOR=y
        CONFIG_CRC=y
        
        
        
        # Configure dependencies for CONFIG_MCUMGR_TRANSPORT_UART 
        CONFIG_BASE64=y
        
        CONFIG_REBOOT=y
        
        
        
        
        # // ==================================cellular part started=====================================*/
        
        # Networking
        # CONFIG_NETWORKING=y
        # CONFIG_NET_SOCKETS=y
        CONFIG_NET_SOCKETS_SOCKOPT_TLS=y
        CONFIG_NET_UDP=y
        CONFIG_NET_TCP=y
        CONFIG_NET_IPV4=y
        # CONFIG_NET_IPV6=y
        
        # DNS
        CONFIG_DNS_RESOLVER=y
        CONFIG_DNS_RESOLVER_ADDITIONAL_BUF_CTR=2
        CONFIG_DNS_RESOLVER_MAX_SERVERS=1
        CONFIG_DNS_SERVER_IP_ADDRESSES=y
        CONFIG_DNS_SERVER1="8.8.8.8"
        CONFIG_NET_SOCKETS_DNS_TIMEOUT=5000
        
        CONFIG_JSON_LIBRARY=y
        
        # AWS IoT MQTT
        CONFIG_AWS_IOT_LOG_LEVEL_DBG=y
        CONFIG_AWS_TEST_SUITE_DQP=n
        CONFIG_MQTT_LIB=y
        CONFIG_MQTT_LIB_TLS=y
        CONFIG_MQTT_KEEPALIVE=600
        CONFIG_MQTT_LIB_TLS_USE_ALPN=y
        
        # TLS (nRF Security only)
        CONFIG_NRF_SECURITY=y
        CONFIG_MBEDTLS_TLS_LIBRARY=y
        CONFIG_MBEDTLS_LEGACY_CRYPTO_C=y
        CONFIG_MBEDTLS_ENABLE_HEAP=y
        CONFIG_MBEDTLS_HEAP_SIZE=4096
        CONFIG_MBEDTLS_SSL_MAX_CONTENT_LEN=8192
        CONFIG_MBEDTLS_PEM_CERTIFICATE_FORMAT=y
        CONFIG_MBEDTLS_SERVER_NAME_INDICATION=y
        CONFIG_MBEDTLS_AES_ROM_TABLES=y
        CONFIG_MBEDTLS_TLS_VERSION_1_2=y
        CONFIG_MBEDTLS_MEMORY_DEBUG=y
        CONFIG_MBEDTLS_HAVE_TIME_DATE=y
        CONFIG_MBEDTLS_SSL_ALPN=y
        CONFIG_MBEDTLS_SSL_CLI_C=y
        CONFIG_MBEDTLS_X509_CRT_PARSE_C=y
        CONFIG_MBEDTLS_KEY_EXCHANGE_ECDHE_ECDSA_ENABLED=y
        CONFIG_MBEDTLS_KEY_EXCHANGE_ECDH_ECDSA_ENABLED=y
        CONFIG_MBEDTLS_KEY_EXCHANGE_PSK_ENABLED=y
        CONFIG_MBEDTLS_KEY_EXCHANGE_ECDHE_PSK_ENABLED=y
        CONFIG_MBEDTLS_SSL_SRV_C=y
        CONFIG_MBEDTLS_KEY_EXCHANGE_RSA_ENABLED=y
        CONFIG_MBEDTLS_CIPHER=y
        CONFIG_MBEDTLS_MD=y
        CONFIG_MBEDTLS_PK_C=y
        CONFIG_MBEDTLS_PK_PARSE_C=y
        CONFIG_MBEDTLS_PK_WRITE_C=y
        CONFIG_MBEDTLS_RSA_C=y
        CONFIG_MBEDTLS_PKCS1_V15=y
        CONFIG_MBEDTLS_ECP_C=y
        CONFIG_MBEDTLS_ECDSA_C=y
        CONFIG_MBEDTLS_ECDH_C=y
        CONFIG_MBEDTLS_DHM_C=y
        CONFIG_MBEDTLS_GCM_C=y
        CONFIG_MBEDTLS_SHA256_C=y
        CONFIG_MBEDTLS_X509_USE_C=y
        CONFIG_MBEDTLS_SSL_CONTEXT_SERIALIZATION=y
        CONFIG_MBEDTLS_SSL_PROTO_TLS1_2=y
        
        # LTE/Modem
        CONFIG_NRF_MODEM_LIB=y
        CONFIG_LTE_LINK_CONTROL=y
        
        # Sockets offload for modem (if using LTE for AWS)
        CONFIG_NET_NATIVE=n
        CONFIG_NET_SOCKETS_OFFLOAD=y
        
        # AWS IoT MQTT Helper
        CONFIG_MQTT_HELPER_SEC_TAG=212
        
        CONFIG_MODEM_INFO=y
        
        CONFIG_BOOTLOADER_MCUBOOT=y
        
        CONFIG_DEBUG_OPTIMIZATIONS=y
        CONFIG_DEBUG_THREAD_INFO=y
        
        

      • cel_aws.c (AWS driver code)

        /**
         * cel_aws.c
         * Cellular AWS IoT Driver - Optimized for performance
         */
        
        #include "cel_aws.h"
        
        #include <zephyr/logging/log.h>
        #include <zephyr/net/socket.h>
        #include <zephyr/net/dns_resolve.h>
        #include <zephyr/net/mqtt.h>
        #include <zephyr/net/tls_credentials.h>
        #include <zephyr/random/random.h>
        #include <modem/lte_lc.h>
        #include <modem/nrf_modem_lib.h>
        #include <string.h>
        #include <stdio.h>
        
        #include <siwg917y.h>
        #include <modem/modem_info.h>
        
        LOG_MODULE_REGISTER(cel_aws, LOG_LEVEL_DBG);
        
        /* Configurable constants */
        #define AWS_BROKER_PORT CONFIG_AWS_MQTT_PORT /* AWS MQTT broker port number */
        
        /* Optimized buffer sizes */
        #define MQTT_RX_BUFFER_SIZE (512 * 4) /* MQTT RX buffer size (2 KB) */
        #define MQTT_TX_BUFFER_SIZE (512 * 4) /* MQTT TX buffer size (2 KB) */
        #define APP_BUFFER_SIZE 256           /* Temporary application buffer size */
        
        /* Retry and backoff parameters */
        #define MAX_RETRIES 10           /* Maximum number of MQTT connection attempts */
        #define BACKOFF_EXP_BASE_MS 1000 /* Base delay for exponential backoff (ms) */
        #define BACKOFF_EXP_MAX_MS 60000 /* Maximum exponential backoff delay (ms) */
        #define BACKOFF_CONST_MS 5000    /* Constant backoff delay when exponential mode is disabled */
        
        /* TLS configuration */
        static const sec_tag_t sec_tls_tags[] = {212}; /* Security tag referencing stored TLS credentials */
        
        #if (CONFIG_AWS_MQTT_PORT == 443 && !defined(CONFIG_MQTT_LIB_WEBSOCKET))
        static const char *const alpn_list[] = {"x-amzn-mqtt-ca"}; /* ALPN protocol list for AWS MQTT over port 443 */
        #endif
        
        /* MQTT client context and buffers */
        static struct mqtt_client client_ctx;          /* Global MQTT client context */
        static uint8_t rx_buffer[MQTT_RX_BUFFER_SIZE]; /* MQTT RX buffer used by the client */
        static uint8_t tx_buffer[MQTT_TX_BUFFER_SIZE]; /* MQTT TX buffer used by the client */
        static uint8_t app_buffer[APP_BUFFER_SIZE];    /* Intermediate buffer for incoming payload chunks */
        static struct sockaddr_in broker_addr;         /* Resolved AWS broker IPv4 address */
        
        /* User-defined callback */
        static cel_aws_rx_cb_t user_rx_cb = NULL; /* User-registered MQTT message RX callback */
        
        /* LTE state synchronization */
        static K_SEM_DEFINE(lte_ready, 0, 1); /* Semaphore indicating LTE registration readiness */
        
        /* MQTT socket descriptor */
        static int mqtt_sock = -1; /* MQTT socket file descriptor (-1 = invalid) */
        
        /* State flags */
        static bool mqtt_connected = false;                                   /* Flag indicating MQTT connection status */
        static bool mqtt_subscribed = false;                                  /* Flag indicating MQTT subscription status */
        static bool pending_subscribe = false;                                /* Flag indicating pending subscription request */
        static const char *pending_subscribe_topic = NULL;                    /* Topic to subscribe once connected */
        static enum mqtt_qos pending_subscribe_qos = MQTT_QOS_0_AT_MOST_ONCE; /* Stored QoS for pending subscription */
        
        /* Forward declarations */
        
        /**
         * @brief LTE link controller event handler.
         *
         * This callback is invoked by the LTE Link Controller whenever an LTE-related
         * event occurs. It processes key events such as network registration status
         * updates and RRC mode changes.
         *
         * When the device successfully registers on a home or roaming LTE network,
         * the function logs the registration state and releases the @ref lte_ready
         * semaphore to signal that LTE connectivity is established.
         *
         * RRC mode transition events are logged for debugging and performance insight.
         *
         * @param[in] evt  Pointer to the LTE event structure received from the
         *                 LTE Link Controller.
         */
        static void lte_event_handler(const struct lte_lc_evt *evt);
        
        /**
         * @brief MQTT event callback handler for AWS IoT communication.
         *
         * This function is invoked by the MQTT client library whenever an MQTT event
         * occurs. It processes connection acknowledgments, publish messages, subscription
         * acknowledgments, and disconnect events. Internal state flags are updated
         * accordingly, and user-defined receive callbacks are invoked when publish
         * payloads are received.
         *
         * **Event handling:**
         * - **MQTT_EVT_CONNACK:**
         *   Marks the MQTT client as connected and triggers automatic re-subscription
         *   if a pending subscription exists.
         *
         * - **MQTT_EVT_PUBLISH:**
         *   Logs topic, QoS, and payload size, then reads the payload in chunks using
         *   @ref mqtt_read_publish_payload_blocking. If a user RX callback is set via
         *   @ref cel_aws_set_rx_callback, it is invoked for each payload block.
         *   Sends PUBACK for QoS 1 messages.
         *
         * - **MQTT_EVT_SUBACK:**
         *   Confirms successful subscription and clears pending subscription state.
         *
         * - **MQTT_EVT_DISCONNECT:**
         *   Resets all MQTT connection and subscription state flags.
         *
         * @param[in] client  Pointer to the MQTT client instance.
         * @param[in] evt     Pointer to the MQTT event structure containing the event
         *                    type and associated parameters.
         */
        static void mqtt_event_handler(struct mqtt_client *client, const struct mqtt_evt *evt);
        
        /**
         * @brief Resolve the AWS IoT broker hostname to an IP address.
         *
         * This function performs DNS resolution for the configured AWS IoT endpoint
         * and retrieves the corresponding IPv4 address and port. The resolved address
         * is stored in the global @ref broker_addr structure for use during the MQTT
         * connection process.
         *
         * The function uses @ref getaddrinfo for hostname resolution and logs both
         * successful and failed lookups. If the resolution fails, an appropriate error
         * status is returned.
         *
         * @return CEL_AWS_SUCCESS if the broker hostname is resolved successfully,
         *         CEL_AWS_ERROR_RESOLVE if DNS resolution fails.
         */
        static cel_aws_status_t resolve_broker(void);
        
        /**
         * @brief Calculate the backoff delay for retry attempts.
         *
         * This function computes the delay to wait before retrying an AWS MQTT
         * connection attempt. If exponential backoff is enabled via
         * `CONFIG_AWS_EXPONENTIAL_BACKOFF`, the delay grows exponentially based on
         * the retry attempt count, capped at @ref BACKOFF_EXP_MAX_MS. A random value
         * within the calculated backoff window is returned to reduce retry collisions.
         *
         * If exponential backoff is disabled, a fixed backoff delay defined by
         * @ref BACKOFF_CONST_MS is returned.
         *
         * @param[in] attempt  The current retry attempt number (starting at 0).
         *
         * @return The backoff delay in milliseconds.
         */
        static int backoff_wait(uint32_t attempt);
        
        /**
         * @brief Initialize and configure the AWS IoT MQTT client instance.
         *
         * This function sets up the MQTT client context with all required parameters
         * for establishing a secure connection to the AWS IoT broker. It initializes
         * the MQTT client structure, configures the broker address, client identity,
         * MQTT protocol settings, and assigns RX/TX buffers.
         *
         * TLS security parameters are configured using the provided security tags,
         * enforcing peer verification and setting the expected AWS endpoint hostname.
         * If port 443 is used without WebSocket mode, ALPN protocols are configured
         * to support MQTT over TLS.
         *
         * This function must be called before attempting to connect to AWS IoT using
         * @ref aws_client_try_connect.
         */
        static void aws_client_setup(void);
        
        /**
         * @brief Attempt to establish a connection to the AWS IoT MQTT broker.
         *
         * This function initiates a connection to the AWS IoT broker using
         * @ref mqtt_connect. If the connection attempt fails, it performs retries
         * using an exponential or fixed backoff delay, depending on the configuration.
         *
         * On a successful connection request, the MQTT socket is stored internally
         * for later use by the MQTT processing functions.
         *
         * The function tries up to @ref MAX_RETRIES attempts before returning a
         * failure status.
         *
         * @return CEL_AWS_SUCCESS if the MQTT connection request is sent successfully,
         *         CEL_AWS_ERROR_MQTT_CONNECT if all connection attempts fail.
         */
        static cel_aws_status_t aws_client_try_connect(void);
        
        /**
         * @brief Wait for the AWS MQTT client to complete the connection process.
         *
         * This function periodically processes MQTT events and checks whether the
         * client has established a successful connection with the AWS IoT broker.
         * It continues polling until either the connection flag is set or the
         * specified timeout period elapses.
         *
         * The function relies on @ref cel_aws_process to handle incoming MQTT
         * packets and connection acknowledgments during the waiting period.
         *
         * @param[in] timeout_ms  Maximum time in milliseconds to wait for a
         *                        successful MQTT connection.
         *
         * @return CEL_AWS_SUCCESS if the client connects within the timeout period,
         *         CEL_AWS_ERROR_TIMEOUT if the connection does not complete in time.
         */
        static cel_aws_status_t wait_connected(int timeout_ms);
        
        /**
         * @brief Wait for confirmation of an MQTT subscription.
         *
         * This function polls for incoming MQTT events while monitoring the internal
         * subscription status flag. It repeatedly processes MQTT packets until the
         * broker acknowledges the subscription or the specified timeout expires.
         *
         * The function relies on @ref cel_aws_process to handle MQTT SUBACK packets
         * during the wait period.
         *
         * @param[in] timeout_ms  Maximum time in milliseconds to wait for the
         *                        subscription acknowledgment.
         *
         * @return CEL_AWS_SUCCESS if the subscription is confirmed,
         *         CEL_AWS_ERROR_TIMEOUT if no confirmation is received in time.
         */
        static cel_aws_status_t wait_subscribed(int timeout_ms);
        
        /* === LTE Functions === */
        int cel_lte_get_signal(int16_t *rsrp)
        {
            int err;
        
            err = modem_info_init();
            if (err)
            {
                LOG_ERR("modem_info_init failed: %d", err);
                return err;
            }
        
            if (rsrp)
            {
                err = modem_info_get_rsrp(rsrp);
                if (err)
                {
                    LOG_ERR("modem_info_get_rsrp failed: %d", err);
                    return err;
                }
            }
        
            if (rsrp) {
                rssi = *rsrp;  
            }
        
            LOG_INF("Signal: RSRP=%d  rssi %d",
                    rsrp ? *rsrp : 0,rssi);
            return 0;
        }
        
        cel_aws_status_t cel_lte_connect(void)
        {
            int err;
        
            err = nrf_modem_lib_init();
            if (err)
            {
                LOG_ERR("Modem lib init failed: %d", err);
                if (!READ_ERROR(HW_CELLULAR_ERROR))
                {
                    SET_ERROR(HW_CELLULAR_ERROR);
                }
                return CEL_AWS_ERROR_LTE;
            }
            else
            {
                if (READ_ERROR(HW_CELLULAR_ERROR))
                {
                    CLR_ERROR(HW_CELLULAR_ERROR);
                }
            }
        
            /* Using NB-IoT as in the working code */
            err = lte_lc_system_mode_set(LTE_LC_SYSTEM_MODE_NBIOT,
                                         LTE_LC_SYSTEM_MODE_PREFER_AUTO);
            if (err)
            {
                LOG_ERR("Failed to set NB-IoT mode: %d", err);
                return CEL_AWS_ERROR_LTE;
            }
        
            err = lte_lc_connect_async(lte_event_handler);
            if (err)
            {
                LOG_ERR("LTE connect async failed: %d", err);
                if (!READ_ERROR(HW_CELLULAR_ERROR))
                {
                    SET_ERROR(HW_CELLULAR_ERROR);
                }
                return CEL_AWS_ERROR_LTE;
            }
            else
            {
                if (READ_ERROR(HW_CELLULAR_ERROR))
                {
                    CLR_ERROR(HW_CELLULAR_ERROR);
                }
            }
        
            LOG_INF("Waiting for LTE connection...");
            k_sem_take(&lte_ready, K_FOREVER);
            LOG_INF("LTE Connected");
        
            int16_t rsrp;
        
            err = cel_lte_get_signal(&rsrp);
            if (err)
            {
                LOG_ERR("LTE get signal strength: %d", err);
            }
        
            return CEL_AWS_SUCCESS;
        }
        
        cel_aws_status_t cel_lte_disconnect(void)
        {
            int err1, err2;
        
            err1 = lte_lc_power_off();
            err2 = nrf_modem_lib_shutdown();
        
            if (err1 != 0 || err2 != 0)
            {
                LOG_ERR("LTE disconnect failed: lte_lc_power_off= %d, nrf_modem_lib_shutdown= %d",
                        err1, err2);
                return CEL_AWS_ERROR_LTE;
            }
        
            LOG_INF("LTE disconnected and modem powered off successfully");
            return CEL_AWS_SUCCESS;
        }
        
        static void lte_event_handler(const struct lte_lc_evt *evt)
        {
            switch (evt->type)
            {
            case LTE_LC_EVT_NW_REG_STATUS:
                if ((evt->nw_reg_status == LTE_LC_NW_REG_REGISTERED_HOME) ||
                    (evt->nw_reg_status == LTE_LC_NW_REG_REGISTERED_ROAMING))
                {
                    LOG_INF("Network registration status: %s",
                            evt->nw_reg_status == LTE_LC_NW_REG_REGISTERED_HOME ? "Connected - home network" : "Connected - roaming");
                    k_sem_give(&lte_ready);
                }
                break;
            case LTE_LC_EVT_RRC_UPDATE:
                LOG_INF("RRC mode: %s", evt->rrc_mode == LTE_LC_RRC_MODE_CONNECTED ? "Connected" : "Idle");
                break;
            default:
                break;
            }
        }
        
        /* === AWS MQTT Functions === */
        
        static cel_aws_status_t resolve_broker(void)
        {
            int ret;
            struct addrinfo *ai = NULL;
            char port_string[6] = {0};
        
            const struct addrinfo hints = {
                .ai_family = AF_INET,
                .ai_socktype = SOCK_STREAM,
                .ai_protocol = 0,
            };
        
            sprintf(port_string, "%d", AWS_BROKER_PORT);
            ret = getaddrinfo(CONFIG_AWS_ENDPOINT, port_string, &hints, &ai);
            if (ret == 0)
            {
                char addr_str[INET_ADDRSTRLEN];
        
                memcpy(&broker_addr, ai->ai_addr,
                       MIN(ai->ai_addrlen, sizeof(struct sockaddr_storage)));
        
                inet_ntop(AF_INET, &broker_addr.sin_addr, addr_str, sizeof(addr_str));
                LOG_INF("Resolved: %s:%u", addr_str, htons(broker_addr.sin_port));
            }
            else
            {
                LOG_ERR("Failed to resolve hostname err = %d (errno = %d)", ret, errno);
                return CEL_AWS_ERROR_RESOLVE;
            }
        
            freeaddrinfo(ai);
            return CEL_AWS_SUCCESS;
        }
        
        static void aws_client_setup(void)
        {
            mqtt_client_init(&client_ctx);
        
            client_ctx.broker = &broker_addr;
            client_ctx.evt_cb = mqtt_event_handler;
        
            client_ctx.client_id.utf8 = (uint8_t *)CONFIG_AWS_THING_NAME;
            client_ctx.client_id.size = strlen(CONFIG_AWS_THING_NAME);
            client_ctx.password = NULL;
            client_ctx.user_name = NULL;
        
            client_ctx.keepalive = CONFIG_MQTT_KEEPALIVE;
            client_ctx.protocol_version = MQTT_VERSION_3_1_1;
        
            client_ctx.rx_buf = rx_buffer;
            client_ctx.rx_buf_size = sizeof(rx_buffer);
            client_ctx.tx_buf = tx_buffer;
            client_ctx.tx_buf_size = sizeof(tx_buffer);
        
            /* TLS configuration */
            client_ctx.transport.type = MQTT_TRANSPORT_SECURE;
            struct mqtt_sec_config *tls_config = &client_ctx.transport.tls.config;
        
            tls_config->peer_verify = TLS_PEER_VERIFY_REQUIRED;
            tls_config->cipher_list = NULL;
            tls_config->sec_tag_list = sec_tls_tags;
            tls_config->sec_tag_count = ARRAY_SIZE(sec_tls_tags);
            tls_config->hostname = CONFIG_AWS_ENDPOINT;
            tls_config->cert_nocopy = TLS_CERT_NOCOPY_NONE;
        
        #if (CONFIG_AWS_MQTT_PORT == 443 && !defined(CONFIG_MQTT_LIB_WEBSOCKET))
            tls_config->alpn_protocol_name_list = alpn_list;
            tls_config->alpn_protocol_name_count = ARRAY_SIZE(alpn_list);
        #endif
        }
        
        static cel_aws_status_t aws_client_try_connect(void)
        {
            int ret;
            uint32_t backoff_ms;
        
            for (uint32_t attempt = 0; attempt <= MAX_RETRIES; attempt++)
            {
                ret = mqtt_connect(&client_ctx);
                if (ret == 0)
                {
                    mqtt_sock = client_ctx.transport.tcp.sock;
                    LOG_INF("AWS MQTT Connection request sent");
                    return CEL_AWS_SUCCESS;
                }
        
                backoff_ms = backoff_wait(attempt);
                LOG_ERR("Failed to connect: %d backoff delay: %u ms", ret, backoff_ms);
                k_msleep(backoff_ms);
            }
        
            return CEL_AWS_ERROR_MQTT_CONNECT;
        }
        
        cel_aws_status_t cel_aws_connect(int timeout_ms)
        {
            cel_aws_status_t ret;
        
            /* Reset state flags */
            mqtt_connected = false;
            mqtt_subscribed = false;
            pending_subscribe = false;
            pending_subscribe_topic = NULL;
        
            ret = resolve_broker();
            if (ret != CEL_AWS_SUCCESS)
            {
                return ret;
            }
        
            aws_client_setup();
        
            ret = aws_client_try_connect();
            if (ret != CEL_AWS_SUCCESS)
            {
                return ret;
            }
        
            /* Wait for connection to be established */
            return wait_connected(timeout_ms);
        }
        
        cel_aws_status_t cel_aws_disconnect(void)
        {
            if (mqtt_sock >= 0)
            {
                int ret = mqtt_disconnect(&client_ctx);
                if (ret != 0)
                {
                    LOG_ERR("MQTT disconnect failed: %d", ret);
                    return CEL_AWS_ERROR_MQTT_DISCONNECT;
                }
        
                close(mqtt_sock);
                mqtt_sock = -1;
                mqtt_connected = false;
                mqtt_subscribed = false;
                pending_subscribe = false;
                pending_subscribe_topic = NULL;
                LOG_INF("AWS MQTT Disconnected successfully");
                return CEL_AWS_SUCCESS;
            }
        
            LOG_INF("AWS MQTT already disconnected");
            return CEL_AWS_SUCCESS;
        }
        
        cel_aws_status_t cel_aws_subscribe(const char *topic, enum mqtt_qos qos, int timeout_ms)
        {
            if (!mqtt_connected)
            {
                LOG_ERR("Cannot subscribe: MQTT not connected");
                return CEL_AWS_ERROR_NOT_CONNECTED;
            }
        
            struct mqtt_topic topics[] = {{.topic = {
                                               .utf8 = (uint8_t *)topic,
                                               .size = strlen(topic)},
                                           .qos = qos}};
        
            const struct mqtt_subscription_list sub_list = {
                .list = topics,
                .list_count = ARRAY_SIZE(topics),
                .message_id = (uint16_t)sys_rand32_get()};
        
            LOG_INF("Subscribing to %s", topic);
            int ret = mqtt_subscribe(&client_ctx, &sub_list);
            if (ret != 0)
            {
                LOG_ERR("Failed to subscribe to topic: %d", ret);
                return CEL_AWS_ERROR_MQTT_SUBSCRIBE;
            }
        
            pending_subscribe = true;
            pending_subscribe_topic = topic;
            pending_subscribe_qos = qos;
        
            /* Wait for subscription confirmation */
            return wait_subscribed(timeout_ms);
        }
        
        static cel_aws_status_t cel_aws_publish_req_sent(const char *topic, const uint8_t *payload,
                                                         size_t payload_len, enum mqtt_qos qos)
        {
            if (!mqtt_connected)
            {
                LOG_ERR("Cannot publish: MQTT not connected");
                return CEL_AWS_ERROR_NOT_CONNECTED;
            }
        
            static uint32_t message_id = 1u;
        
            struct mqtt_publish_param param = {
                .message.topic.topic.utf8 = (uint8_t *)topic,
                .message.topic.topic.size = strlen(topic),
                .message.topic.qos = qos,
                .message.payload.data = (uint8_t *)payload,
                .message.payload.len = payload_len,
                .message_id = message_id++,
                .retain_flag = 0};
        
            int ret = mqtt_publish(&client_ctx, &param);
            if (ret == 0)
            {
                LOG_INF("PUBLISHED on topic \"%s\" [id: %u qos: %u], payload: %u B",
                        topic, param.message_id, qos, payload_len);
                return CEL_AWS_SUCCESS;
            }
            else
            {
                LOG_ERR("Failed to publish message: %d", ret);
                return CEL_AWS_ERROR_MQTT_PUBLISH;
            }
        }
        
        cel_aws_status_t cel_aws_publish(const char *topic, const uint8_t *payload,
                                         size_t payload_len, enum mqtt_qos qos)
        {
            cel_aws_status_t ret = cel_aws_publish_req_sent(topic, payload, payload_len, qos);
            if (ret != CEL_AWS_SUCCESS)
            {
                return ret;
            }
        
            int16_t rsrp;
        
            int err = cel_lte_get_signal(&rsrp);
            if (err)
            {
                LOG_ERR("LTE get signal strength: %d", err);
            }
            /* Process MQTT events for the default processing time */
            int64_t start_time = k_uptime_get();
            while (k_uptime_get() - start_time < DEFAULT_PUBLISH_PROCESSING_MS)
            {
                cel_aws_process(100);
                k_msleep(50);
            }
        
            return CEL_AWS_SUCCESS;
        }
        
        int cel_aws_process(int timeout_ms)
        {
            if (mqtt_sock < 0)
            {
                return -ENOTCONN;
            }
        
            struct pollfd fds = {
                .fd = mqtt_sock,
                .events = POLLIN};
        
            int ret = poll(&fds, 1, timeout_ms);
            if (ret < 0)
            {
                /* Don't log poll errors for timeouts, only for real errors */
                if (errno != EAGAIN && errno != EINTR)
                {
                    LOG_ERR("poll failed: %d", ret);
                }
                return -1;
            }
        
            if (ret > 0)
            {
                if (fds.revents & POLLIN)
                {
                    ret = mqtt_input(&client_ctx);
                    if (ret != 0)
                    {
                        LOG_ERR("Failed to read MQTT input: %d", ret);
                        return -1;
                    }
                }
        
                if (fds.revents & (POLLHUP | POLLERR))
                {
                    LOG_ERR("Socket closed/error");
                    return -1;
                }
            }
        
            ret = mqtt_live(&client_ctx);
            if ((ret != 0) && (ret != -EAGAIN))
            {
                LOG_ERR("Failed to live MQTT: %d", ret);
                return -1;
            }
        
            return 0;
        }
        
        void cel_aws_set_rx_callback(cel_aws_rx_cb_t cb)
        {
            user_rx_cb = cb;
        }
        
        bool cel_aws_is_connected(void)
        {
            return mqtt_connected;
        }
        
        bool cel_aws_is_subscribed(void)
        {
            return mqtt_subscribed;
        }
        
        /* === Helper Functions === */
        
        static cel_aws_status_t wait_connected(int timeout_ms)
        {
            int64_t start_time = k_uptime_get();
        
            while (k_uptime_get() - start_time < timeout_ms)
            {
                if (mqtt_connected)
                {
                    LOG_INF("AWS MQTT Connected successfully");
                    return CEL_AWS_SUCCESS;
                }
                cel_aws_process(100);
                k_msleep(50);
            }
        
            LOG_ERR("AWS MQTT Connection timeout");
            return CEL_AWS_ERROR_TIMEOUT;
        }
        
        static cel_aws_status_t wait_subscribed(int timeout_ms)
        {
            int64_t start_time = k_uptime_get();
        
            while (k_uptime_get() - start_time < timeout_ms)
            {
                if (mqtt_subscribed)
                {
                    LOG_INF("Subscription confirmed successfully");
                    return CEL_AWS_SUCCESS;
                }
                cel_aws_process(100);
                k_msleep(50);
            }
        
            LOG_ERR("Subscription timeout");
            return CEL_AWS_ERROR_TIMEOUT;
        }
        
        /* === Event Handlers === */
        
        static void mqtt_event_handler(struct mqtt_client *client, const struct mqtt_evt *evt)
        {
            LOG_DBG("MQTT event: %d result: %d", evt->type, evt->result);
        
            switch (evt->type)
            {
            case MQTT_EVT_CONNACK:
                LOG_INF("MQTT CONNACK received");
                mqtt_connected = true;
        
                /* Auto-resubscribe if we have a pending subscription */
                if (pending_subscribe_topic != NULL)
                {
                    cel_aws_subscribe(pending_subscribe_topic, pending_subscribe_qos, DEFAULT_SUBSCRIBE_TIMEOUT_MS);
                }
                break;
        
            case MQTT_EVT_PUBLISH:
            {
                const struct mqtt_publish_param *pub = &evt->param.publish;
                size_t received = 0u;
        
                LOG_INF("RECEIVED on topic \"%.*s\" [id: %u qos: %u] payload: %u B",
                        pub->message.topic.topic.size,
                        pub->message.topic.topic.utf8,
                        pub->message_id,
                        pub->message.topic.qos,
                        pub->message.payload.len);
        
                /* Read payload in chunks */
                while (received < pub->message.payload.len)
                {
                    size_t to_read = MIN(pub->message.payload.len - received, sizeof(app_buffer));
                    int len = mqtt_read_publish_payload_blocking(&client_ctx, app_buffer, to_read);
                    if (len <= 0)
                        break;
        
                    /* Call user callback if set */
                    if (user_rx_cb)
                    {
                        user_rx_cb((const char *)pub->message.topic.topic.utf8, app_buffer, len);
                    }
                    received += len;
                }
        
                /* Send ACK for QoS 1 */
                if (pub->message.topic.qos == MQTT_QOS_1_AT_LEAST_ONCE)
                {
                    struct mqtt_puback_param ack = {
                        .message_id = pub->message_id};
                    mqtt_publish_qos1_ack(&client_ctx, &ack);
                }
                break;
            }
        
            case MQTT_EVT_SUBACK:
                LOG_INF("MQTT SUBACK received");
                mqtt_subscribed = true;
                pending_subscribe = false;
                break;
        
            case MQTT_EVT_DISCONNECT:
                LOG_INF("MQTT Disconnected");
                mqtt_connected = false;
                mqtt_subscribed = false;
                pending_subscribe = false;
                break;
        
            default:
                break;
            }
        }
        
        static int backoff_wait(uint32_t attempt)
        {
        #ifdef CONFIG_AWS_EXPONENTIAL_BACKOFF
            uint32_t max_backoff = BACKOFF_EXP_BASE_MS << attempt;
            if (max_backoff > BACKOFF_EXP_MAX_MS)
            {
                max_backoff = BACKOFF_EXP_MAX_MS;
            }
            uint32_t delay = sys_rand32_get() % (max_backoff + 1u);
            return delay > 0 ? delay : BACKOFF_EXP_BASE_MS;
        #else
            return BACKOFF_CONST_MS;
        #endif
        }
        
        void on_mqtt_rx(const char *topic, const uint8_t *data, size_t len)
        {
            char buf[256] = {0};
            size_t copy = MIN(len, sizeof(buf) - 1);
            memcpy(buf, data, copy);
            LOG_INF("Received on %s: %.*s", topic, copy, buf);
        
            /* Execute command from any received message */
        }
        

      • partitions.yml From build (currently only primary slot)

        EMPTY_0:
          address: 0xc000
          end_address: 0x10000
          placement:
            before:
            - mcuboot_pad
          region: flash_primary
          size: 0x4000
        EMPTY_1:
          address: 0xfe000
          end_address: 0x100000
          placement:
            after:
            - nvs_storage
          region: flash_primary
          size: 0x2000
        app:
          address: 0x20000
          end_address: 0xf8000
          region: flash_primary
          size: 0xd8000
        mcuboot:
          address: 0x0
          end_address: 0xc000
          placement:
            align:
              end: 0x1000
            before:
            - mcuboot_primary
          region: flash_primary
          size: 0xc000
        mcuboot_pad:
          address: 0x10000
          end_address: 0x10200
          placement:
            align:
              start: 0x8000
            before:
            - mcuboot_primary_app
          region: flash_primary
          size: 0x200
        mcuboot_primary:
          address: 0x10000
          end_address: 0xf8000
          orig_span: &id001
          - app
          - mcuboot_pad
          - tfm
          region: flash_primary
          size: 0xe8000
          span: *id001
        mcuboot_primary_app:
          address: 0x10200
          end_address: 0xf8000
          orig_span: &id002
          - app
          - tfm
          region: flash_primary
          size: 0xe7e00
          span: *id002
        mcuboot_sram:
          address: 0x20000000
          end_address: 0x20008000
          orig_span: &id003
          - tfm_sram
          region: sram_primary
          size: 0x8000
          span: *id003
        nonsecure_storage:
          address: 0xf8000
          end_address: 0xfe000
          orig_span: &id004
          - nvs_storage
          region: flash_primary
          size: 0x6000
          span: *id004
        nrf_modem_lib_ctrl:
          address: 0x20008000
          end_address: 0x200084e8
          inside:
          - sram_nonsecure
          placement:
            after:
            - tfm_sram
            - start
          region: sram_primary
          size: 0x4e8
        nrf_modem_lib_rx:
          address: 0x2000a568
          end_address: 0x2000c568
          inside:
          - sram_nonsecure
          placement:
            after:
            - nrf_modem_lib_tx
          region: sram_primary
          size: 0x2000
        nrf_modem_lib_sram:
          address: 0x20008000
          end_address: 0x2000c568
          orig_span: &id005
          - nrf_modem_lib_ctrl
          - nrf_modem_lib_tx
          - nrf_modem_lib_rx
          region: sram_primary
          size: 0x4568
          span: *id005
        nrf_modem_lib_tx:
          address: 0x200084e8
          end_address: 0x2000a568
          inside:
          - sram_nonsecure
          placement:
            after:
            - nrf_modem_lib_ctrl
          region: sram_primary
          size: 0x2080
        nvs_storage:
          address: 0xf8000
          end_address: 0xfe000
          inside:
          - nonsecure_storage
          placement:
            align:
              start: 0x8000
            before:
            - end
          region: flash_primary
          size: 0x6000
        otp:
          address: 0xff8108
          end_address: 0xff83fc
          region: otp
          size: 0x2f4
        sram_nonsecure:
          address: 0x20008000
          end_address: 0x20040000
          orig_span: &id006
          - sram_primary
          - nrf_modem_lib_ctrl
          - nrf_modem_lib_tx
          - nrf_modem_lib_rx
          region: sram_primary
          size: 0x38000
          span: *id006
        sram_primary:
          address: 0x2000c568
          end_address: 0x20040000
          region: sram_primary
          size: 0x33a98
        sram_secure:
          address: 0x20000000
          end_address: 0x20008000
          orig_span: &id007
          - tfm_sram
          region: sram_primary
          size: 0x8000
          span: *id007
        tfm:
          address: 0x10200
          end_address: 0x20000
          inside:
          - mcuboot_primary_app
          placement:
            before:
            - app
          region: flash_primary
          size: 0xfe00
        tfm_nonsecure:
          address: 0x20000
          end_address: 0xf8000
          orig_span: &id008
          - app
          region: flash_primary
          size: 0xd8000
          span: *id008
        tfm_secure:
          address: 0x10000
          end_address: 0x20000
          orig_span: &id009
          - mcuboot_pad
          - tfm
          region: flash_primary
          size: 0x10000
          span: *id009
        tfm_sram:
          address: 0x20000000
          end_address: 0x20008000
          inside:
          - sram_secure
          placement:
            after:
            - start
          region: sram_primary
          size: 0x8000
        


        build log(currently only primary slot)

        [430/430] Linking C executable zephyr/zephyr.elf
        Memory region         Used Size  Region Size  %age Used
                   FLASH:      413072 B       864 KB     46.69%
                     RAM:      154644 B     211608 B     73.08%
                IDT_LIST:          0 GB        32 KB      0.00%



        Best Regards,
        Milan

  • if we debug option disable

    so build and run successfully and flash is optimized 
    413KB--> 333KB but it show error

    00> [00:02:31.842,224] <err> os: ***** SECURE FAULT *****
    00> [00:02:31.842,254] <err> os: Invalid entry point
    00> [00:02:31.842,254] <err> os: r0/a1: 0x00000000 r1/a2: 0x00000000 r2/a3: 0x00000000
    00> [00:02:31.842,285] <err> os: r3/a4: 0x00000000 r12/ip: 0x00000000 r14/lr: 0x00000000[0m
    00> [00:02:31.842,285] <err> os: xpsr: 0x00000000
    00> [00:02:31.842,285] <err> os: Faulting instruction address (r15/pc): 0x00000000
    00> [00:02:31.842,315] <err> os: >>> ZEPHYR FATAL ERROR 38: Unknown error on CPU 0
    00> [00:02:31.842,346] <err> os: Current thread: 0x2000dd90 (unknown)
    00> [00:02:32.065,582] <err> os: Halting system
    



    and with debug this issue not occur complately working but huge flash usage

  • Can you first get more details about the assert by enabling these configs in your prj.conf

    Once you add them and pristine build and flash, we get more details about the assert and all the thread stack usage in your board..

  • 00> [00:02:01.088,104] <dbg> target_plateform: max30001g_work_handler: after time start
    00> Thread analyze:
    00> HRM interrupt Timeout: STACK: unused 936 usage 88 / 1024 (8 %); CPU: 0 %
    00> : Total CPU cycles used: 0
    00> wifi_handler_id : STACK: unused 504 usage 1544 / 2048 (75 %); CPU: 0 %
    00> : Total CPU cycles used: 105
    00> uart_rx_id : STACK: unused 1696 usage 352 / 2048 (17 %); CPU: 0 %
    00> : Total CPU cycles used: 21
    00> thread_analyzer : STACK: unused 496 usage 528 / 1024 (51 %); CPU: 0 %
    00> : Total CPU cycles used: 755
    00> event_handler_id : STACK: unused 1640 usage 1432 / 3072 (46 %); CPU: 0 %
    00> : Total CPU cycles used: 483
    00> dev_read_data_id : STACK: unused 13720 usage 280 / 14000 (2 %); CPU: 0 %
    00> : Total CPU cycles used: 1
    00> dev_process_ecg_logging_data_id: STACK: unused 10000 usage 14000 / 24000 (58 %); CPU: 1 %
    00> : Total CPU cycles used: 54331
    00> dev_battery_level_id: STACK: unused 48 usage 3024 / 3072 (98 %); CPU: 0 %
    00> : Total CPU cycles used: 156
    00> cellular_handler_id : STACK: unused 3848 usage 248 / 4096 (6 %); CPU: 0 %
    00> : Total CPU cycles used: 3
    00> date_time_work_q : STACK: unused 872 usage 408 / 1280 (31 %); CPU: 0 %
    00> : Total CPU cycles used: 19
    00> work_q : STACK: unused 792 usage 232 / 1024 (22 %); CPU: 0 %
    00> : Total CPU cycles used: 0
    00> 0x2000f3a8 : STACK: unused 800 usage 224 / 1024 (21 %); CPU: 0 %
    00> : Total CPU cycles used: 1
    00> sysworkq : STACK: unused 1760 usage 4384 / 6144 (71 %); CPU: 0 %
    00> : Total CPU cycles used: 14612
    00> logging : STACK: unused 1404 usage 644 / 2048 (31 %); CPU: 0 %
    00> : Total CPU cycles used: 5576
    00> idle : STACK: unused 216 usage 104 / 320 (32 %); CPU: 97 %
    00> : Total CPU cycles used: 3881906
    00> ISR0 : STACK: unused 1688 usage 360 / 2048 (17 %)
    00> [00:02:01.991,088] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:01.992,919] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:02.895,690] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:02.895,751] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:03.798,675] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:03.800,567] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:04.703,338] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:04.703,399] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:05.606,323] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:05.858,703] <dbg> target_plateform: handle_ecg_event: hrm: 81 ecg: -4642
    00> [00:02:05.861,633] <inf> siwg917y: ECG Instance 3 appended, length=792 bytes (total buffer=2409/8192)
    00> [00:02:06.205,352] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:07.108,154] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:07.108,184] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:08.011,169] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:08.263,031] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:09.165,832] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:09.165,863] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:10.068,817] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:10.195,709] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:11.098,510] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:11.098,541] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:12.001,525] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:12.003,875] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5110
    00> [00:02:12.007,080] <inf> siwg917y: ECG Instance 4 appended, length=792 bytes (total buffer=3201/8192)
    00> [00:02:13.003,936] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 0
    00> [00:02:13.007,446] <inf> siwg917y: ECG Instance 5 appended, length=792 bytes (total buffer=3993/8192)
    00> [00:02:14.003,967] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 2282
    00> [00:02:14.007,781] <inf> siwg917y: ECG Instance 6 appended, length=792 bytes (total buffer=4785/8192)
    00> [00:02:15.003,997] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -1214
    00> [00:02:15.008,117] <inf> siwg917y: ECG Instance 7 appended, length=792 bytes (total buffer=5577/8192)
    00> [00:02:16.004,028] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5143
    00> [00:02:16.008,453] <inf> siwg917y: ECG Instance 8 appended, length=792 bytes (total buffer=6369/8192)
    00> [00:02:17.004,058] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5746
    00> [00:02:17.008,789] <inf> siwg917y: ECG Instance 9 appended, length=792 bytes (total buffer=7161/8192)
    00> [00:02:18.004,089] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -3613
    00> [00:02:18.009,094] <inf> siwg917y: ECG Instance 10 appended, length=792 bytes (total buffer=7953/8192)
    00> [00:02:18.009,857] <inf> siwg917y: Data to be written to flash: 7953 bytes
    00> [00:02:18.651,947] <dbg> flash_management: write_live_data: Data path: /LIVE/data09.txt
    00> [00:02:18.758,636] <dbg> flash_management: write_live_data: Data written to /LIVE/data09.txt, 7953 bytes
    00> [00:02:18.758,697] <dbg> flash_management: write_live_data: Live data updated: write_idx=10, read_idx=0, count=10
    00> [00:02:18.759,887] <inf> siwg917y: Data successfully stored to flash
    00> [00:02:18.959,625] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 9 cmd size: 7
    00> [00:02:19.004,119] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -643
    00> [00:02:19.004,608] <inf> data_module: ecg__live__data
    00> [00:02:19.006,896] <inf> data_module: Using static tx_post_cmd buffer: 8192 bytes (NO_OF_BATCH=10, one_batch_size=824)
    00> [00:02:19.007,354] <inf> siwg917y: ECG Instance 1 appended, length=824 bytes (total buffer=825/8192)
    00> [00:02:19.064,880] <dbg> target_plateform: silab_23_handler: silab_23_handler call
    00> [00:02:19.298,248] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 10 cmd size: 21
    00> [00:02:20.004,150] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 3109
    00> [00:02:20.006,774] <inf> siwg917y: ECG Instance 2 appended, length=792 bytes (total buffer=1617/8192)
    00> [00:02:20.983,032] <inf> wifi_sm: Wi-Fi module initialized successfully
    00> [00:02:21.004,180] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 3766
    00> [00:02:21.007,110] <inf> siwg917y: ECG Instance 3 appended, length=792 bytes (total buffer=2409/8192)
    00> [00:02:21.083,679] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 11 cmd size: 30
    00> [00:02:21.317,962] <dbg> fuel_gauge: fuel_gauge_update: V: 3.696, I: 0.077, T: 24.90, SoC: 45.02, TTE: 9596, TTF: nan
    00> [00:02:21.318,450] <dbg> vc_main: dev_battery_level: battery level: 45
    00> [00:02:22.004,211] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 4927
    00> [00:02:22.007,446] <inf> siwg917y: ECG Instance 4 appended, length=792 bytes (total buffer=3201/8192)
    00> [00:02:23.004,241] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 3374
    00> [00:02:23.007,781] <inf> siwg917y: ECG Instance 5 appended, length=792 bytes (total buffer=3993/8192)
    00> [00:02:24.004,272] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 1452
    00> [00:02:24.008,087] <inf> siwg917y: ECG Instance 6 appended, length=792 bytes (total buffer=4785/8192)
    00> [00:02:24.446,228] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 12 cmd size: 28
    00> [00:02:25.004,302] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -1093
    00> [00:02:25.008,422] <inf> siwg917y: ECG Instance 7 appended, length=792 bytes (total buffer=5577/8192)
    00> [00:02:25.372,985] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 13 cmd size: 10
    00> [00:02:25.683,654] <inf> siwg917y: Updated RSSI parsed: -43
    00> [00:02:25.683,685] <inf> wifi_sm: Updated RSSI: -43
    00> [00:02:25.683,685] <inf> wifi_sm: wifi _module and gpio initialized successfully
    00> [00:02:25.684,204] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 14 cmd size: 15
    00> [00:02:25.969,970] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 15 cmd size: 18
    00> [00:02:26.004,333] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -3771
    00> [00:02:26.008,758] <inf> siwg917y: ECG Instance 8 appended, length=792 bytes (total buffer=6369/8192)
    00> [00:02:27.004,364] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5264
    00> [00:02:27.009,094] <inf> siwg917y: ECG Instance 9 appended, length=792 bytes (total buffer=7161/8192)
    00> [00:02:28.004,394] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -4881
    00> [00:02:28.009,429] <inf> siwg917y: ECG Instance 10 appended, length=792 bytes (total buffer=7953/8192)
    00> [00:02:28.010,192] <inf> siwg917y: Data to be written to flash: 7953 bytes
    00> [00:02:28.557,220] <dbg> flash_management: write_live_data: Data path: /LIVE/data10.txt
    00> [00:02:28.658,630] <dbg> flash_management: write_live_data: Data written to /LIVE/data10.txt, 7953 bytes
    00> [00:02:28.658,691] <dbg> flash_management: write_live_data: Live data updated: write_idx=11, read_idx=0, count=11
    00> [00:02:28.659,484] <inf> siwg917y: Data successfully stored to flash
    00> [00:02:29.004,425] <dbg> target_plateform: handle_ecg_event: hrm: 3 ecg: -2887
    00> [00:02:29.004,913] <inf> data_module: ecg__live__data
    00> [00:02:29.007,202] <inf> data_module: Using static tx_post_cmd buffer: 8192 bytes (NO_OF_BATCH=10, one_batch_size=824)
    00> [00:02:29.007,659] <inf> siwg917y: ECG Instance 1 appended, length=824 bytes (total buffer=825/8192)
    00> [00:02:29.556,427] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 16 cmd size: 35
    00> [00:02:29.665,557] <inf> wifi_sm: WIFI CONNECTED TO MQTT....!
    00> [00:02:30.004,455] <dbg> target_plateform: handle_ecg_event: hrm: 47 ecg: -2515
    00> [00:02:30.007,080] <inf> siwg917y: ECG Instance 2 appended, length=792 bytes (total buffer=1617/8192)
    00> [00:02:31.004,486] <dbg> target_plateform: handle_ecg_event: hrm: 47 ecg: 2709
    00> [00:02:31.007,415] <inf> siwg917y: ECG Instance 3 appended, length=792 bytes (total buffer=2409/8192)
    00> [00:02:32.004,516] <dbg> target_plateform: handle_ecg_event: hrm: 30 ecg: 5054
    00> [00:02:32.007,751] <inf> siwg917y: ECG Instance 4 appended, length=792 bytes (total buffer=3201/8192)
    00> [00:02:33.004,547] <dbg> target_plateform: handle_ecg_event: hrm: 30 ecg: 5231
    00> [00:02:33.008,087] <inf> siwg917y: ECG Instance 5 appended, length=792 bytes (total buffer=3993/8192)
    00> [00:02:34.004,577] <dbg> target_plateform: handle_ecg_event: hrm: 30 ecg: 3599
    00> [00:02:34.008,422] <inf> siwg917y: ECG Instance 6 appended, length=792 bytes (total buffer=4785/8192)
    00> [00:02:35.004,608] <dbg> target_plateform: handle_ecg_event: hrm: 22 ecg: 572
    00> [00:02:35.008,758] <inf> siwg917y: ECG Instance 7 appended, length=792 bytes (total buffer=5577/8192)
    00> [00:02:36.004,669] <dbg> target_plateform: handle_ecg_event: hrm: 54 ecg: 3871
    00> [00:02:36.009,063] <inf> siwg917y: ECG Instance 8 appended, length=792 bytes (total buffer=6369/8192)
    00> [00:02:37.004,699] <dbg> target_plateform: handle_ecg_event: hrm: 54 ecg: -4141
    00> [00:02:37.009,399] <inf> siwg917y: ECG Instance 9 appended, length=792 bytes (total buffer=7161/8192)
    00> [00:02:38.004,730] <dbg> target_plateform: handle_ecg_event: hrm: 38 ecg: -4596
    00> [00:02:38.009,735] <inf> siwg917y: ECG Instance 10 appended, length=792 bytes (total buffer=7953/8192)
    00> [00:02:38.010,498] <inf> siwg917y: Data to be written to flash: 7953 bytes
    00> [00:02:38.562,500] <dbg> flash_management: write_live_data: Data path: /LIVE/data11.txt
    00> [00:02:38.663,604] <dbg> flash_management: write_live_data: Data written to /LIVE/data11.txt, 7953 bytes
    00> [00:02:38.663,635] <dbg> flash_management: write_live_data: Live data updated: write_idx=0, read_idx=0, count=12
    00> [00:02:38.664,459] <dbg> siwg917y: flash_store_live_data: read_sem signaled, live_file_count=12
    00> [00:02:38.664,459] <inf> siwg917y: Data successfully stored to flash
    00> [00:02:38.664,550] <inf> vc_main: read_post_thread: Starting
    00> [00:02:38.991,058] <dbg> flash_management: read_live_data: Reading data from path: /LIVE/data00.txt
    00> [00:02:39.004,760] <dbg> target_plateform: handle_ecg_event: hrm: 48 ecg: -4045
    00> [00:02:39.005,798] <inf> data_module: ecg__live__data
    00> [00:02:39.008,117] <inf> data_module: Using static tx_post_cmd buffer: 8192 bytes (NO_OF_BATCH=10, one_batch_size=824)
    00> [00:02:39.008,575] <inf> siwg917y: ECG Instance 1 appended, length=824 bytes (total buffer=825/8192)
    00> [00:02:39.031,585] <dbg> flash_management: read_live_data: Data read successfully from /LIVE/data00.txt, 7953 bytes
    00> [00:02:39.032,409] <inf> vc_main: Read 7953 bytes from /LIVE/data00.txt
    00> [00:02:39.042,266] <inf> siwg917y: Publishing 10 instances to topic Biomedical/data, total payload=7988 bytes
    00> [00:02:39.043,243] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 17 cmd size: 7988
    00> [00:02:39.889,343] <inf> vc_main: Data sent successfully
    00> [00:02:39.895,812] <err> os: ***** SECURE FAULT *****
    00> [00:02:39.895,812] <err> os: Invalid entry point
    00> [00:02:39.895,843] <err> os: r0/a1: 0x00000000 r1/a2: 0x00000000 r2/a3: 0x00000000
    00> [00:02:39.895,843] <err> os: r3/a4: 0x00000000 r12/ip: 0x00000000 r14/lr: 0x00000000
    00> [00:02:39.895,874] <err> os: xpsr: 0x00000000
    00> [00:02:39.895,874] <err> os: Faulting instruction address (r15/pc): 0x00000000
    00> [00:02:39.895,935] <err> os: >>> ZEPHYR FATAL ERROR 38: Unknown error on CPU 0
    00> [00:02:39.895,965] <err> os: Current thread: 0x2000df98 (dev_read_data_id)
    00> [00:02:40.149,810] <err> os: Halting system

  • Milan Pipaliya said:
    ASSERTION FAIL [!arch_is_in_isr()] @ WEST_TOPDIR/zephyr/kernel/thread.c:670
    Threads may not be created in ISRs

    You already had a lot of debug info here. You seems to be creating thread inside ISR which is not allowed.

    If you do not know which thread is created in ISR, then for testing and debugging purposes, edit z_impl_k_thread_create as below

    k_tid_t z_impl_k_thread_create(struct k_thread *new_thread,
    			      k_thread_stack_t *stack,
    			      size_t stack_size, k_thread_entry_t entry,
    			      void *p1, void *p2, void *p3,
    			      int prio, uint32_t options, k_timeout_t delay)
    {
    
        if(arch_is_in_isr())
        {
            static volatile int testing = 0;
            testing++;
        }
    
    	__ASSERT(!arch_is_in_isr(), "Threads may not be created in ISRs");    
    	z_setup_new_thread(new_thread, stack, stack_size, entry, p1, p2, p3,
    			  prio, options, NULL);
    
    	if (!K_TIMEOUT_EQ(delay, K_FOREVER)) {
    		thread_schedule_new(new_thread, delay);
    	}
    
    	return new_thread;
    }

    Set a breakpoint exactly at testing++; and start the debugger and let it run until the breakpoint is hit. You can then look at the function call stack to see the context in which the thread create was attempted to be created inside the ISR and fix your code and logic to create that thread not for the ISR.

  • we already do this

    like 

    static void max30001g_start_timeout_handler(struct k_work *work)
    {
    if (!max30001g_int_thd_flag) {
    k_thread_create(&hrm_timer_int, hrm_timer_int_stack, HRM_TIMER_INT_STACK_SIZE,
    max30001g_interrupt_init, NULL, NULL, NULL,
    HRM_TIMER_INT_PRIORITY, 0, K_MSEC(10000));
    k_thread_name_set(&hrm_timer_int, "HRM interrupt Timeout");
    max30001g_int_thd_flag = true;
    }
    }

    /* Statically define + initialize the work item */
    K_WORK_DEFINE(max30001g_start_timeout_work, max30001g_start_timeout_handler);

    static void max30001g_hrm_int_handler(const struct device *port,
    struct gpio_callback *cb,
    gpio_port_pins_t pins)
    {
    k_work_submit(&max30001g_work);
    k_work_submit(&max30001g_start_timeout_work);
    }

    static void max30001g_int_handler(const struct device *port,
    struct gpio_callback *cb,
    gpio_port_pins_t pins)
    {
    k_work_submit(&max30001g_work);
    k_work_submit(&max30001g_start_timeout_work);
    }

    still show

    > [00:02:01.088,104] <dbg> target_plateform: max30001g_work_handler: after time start
    00> Thread analyze:
    00> HRM interrupt Timeout: STACK: unused 936 usage 88 / 1024 (8 %); CPU: 0 %
    00> : Total CPU cycles used: 0
    00> wifi_handler_id : STACK: unused 504 usage 1544 / 2048 (75 %); CPU: 0 %
    00> : Total CPU cycles used: 105
    00> uart_rx_id : STACK: unused 1696 usage 352 / 2048 (17 %); CPU: 0 %
    00> : Total CPU cycles used: 21
    00> thread_analyzer : STACK: unused 496 usage 528 / 1024 (51 %); CPU: 0 %
    00> : Total CPU cycles used: 755
    00> event_handler_id : STACK: unused 1640 usage 1432 / 3072 (46 %); CPU: 0 %
    00> : Total CPU cycles used: 483
    00> dev_read_data_id : STACK: unused 13720 usage 280 / 14000 (2 %); CPU: 0 %
    00> : Total CPU cycles used: 1
    00> dev_process_ecg_logging_data_id: STACK: unused 10000 usage 14000 / 24000 (58 %); CPU: 1 %
    00> : Total CPU cycles used: 54331
    00> dev_battery_level_id: STACK: unused 48 usage 3024 / 3072 (98 %); CPU: 0 %
    00> : Total CPU cycles used: 156
    00> cellular_handler_id : STACK: unused 3848 usage 248 / 4096 (6 %); CPU: 0 %
    00> : Total CPU cycles used: 3
    00> date_time_work_q : STACK: unused 872 usage 408 / 1280 (31 %); CPU: 0 %
    00> : Total CPU cycles used: 19
    00> work_q : STACK: unused 792 usage 232 / 1024 (22 %); CPU: 0 %
    00> : Total CPU cycles used: 0
    00> 0x2000f3a8 : STACK: unused 800 usage 224 / 1024 (21 %); CPU: 0 %
    00> : Total CPU cycles used: 1
    00> sysworkq : STACK: unused 1760 usage 4384 / 6144 (71 %); CPU: 0 %
    00> : Total CPU cycles used: 14612
    00> logging : STACK: unused 1404 usage 644 / 2048 (31 %); CPU: 0 %
    00> : Total CPU cycles used: 5576
    00> idle : STACK: unused 216 usage 104 / 320 (32 %); CPU: 97 %
    00> : Total CPU cycles used: 3881906
    00> ISR0 : STACK: unused 1688 usage 360 / 2048 (17 %)
    00> [00:02:01.991,088] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:01.992,919] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:02.895,690] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:02.895,751] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:03.798,675] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:03.800,567] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:04.703,338] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:04.703,399] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:05.606,323] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:05.858,703] <dbg> target_plateform: handle_ecg_event: hrm: 81 ecg: -4642
    00> [00:02:05.861,633] <inf> siwg917y: ECG Instance 3 appended, length=792 bytes (total buffer=2409/8192)
    00> [00:02:06.205,352] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:07.108,154] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:07.108,184] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:08.011,169] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:08.263,031] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:09.165,832] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:09.165,863] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:10.068,817] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:10.195,709] <dbg> target_plateform: max30001g_work_handler: Skin Deattached: 0
    00> [00:02:11.098,510] <dbg> target_plateform: max30001g_work_handler: before time start
    00> [00:02:11.098,541] <dbg> target_plateform: max30001g_work_handler: after time start
    00> [00:02:12.001,525] <dbg> target_plateform: max30001g_work_handler: Skin Attached: 1
    00> [00:02:12.003,875] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5110
    00> [00:02:12.007,080] <inf> siwg917y: ECG Instance 4 appended, length=792 bytes (total buffer=3201/8192)
    00> [00:02:13.003,936] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 0
    00> [00:02:13.007,446] <inf> siwg917y: ECG Instance 5 appended, length=792 bytes (total buffer=3993/8192)
    00> [00:02:14.003,967] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 2282
    00> [00:02:14.007,781] <inf> siwg917y: ECG Instance 6 appended, length=792 bytes (total buffer=4785/8192)
    00> [00:02:15.003,997] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -1214
    00> [00:02:15.008,117] <inf> siwg917y: ECG Instance 7 appended, length=792 bytes (total buffer=5577/8192)
    00> [00:02:16.004,028] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5143
    00> [00:02:16.008,453] <inf> siwg917y: ECG Instance 8 appended, length=792 bytes (total buffer=6369/8192)
    00> [00:02:17.004,058] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5746
    00> [00:02:17.008,789] <inf> siwg917y: ECG Instance 9 appended, length=792 bytes (total buffer=7161/8192)
    00> [00:02:18.004,089] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -3613
    00> [00:02:18.009,094] <inf> siwg917y: ECG Instance 10 appended, length=792 bytes (total buffer=7953/8192)
    00> [00:02:18.009,857] <inf> siwg917y: Data to be written to flash: 7953 bytes
    00> [00:02:18.651,947] <dbg> flash_management: write_live_data: Data path: /LIVE/data09.txt
    00> [00:02:18.758,636] <dbg> flash_management: write_live_data: Data written to /LIVE/data09.txt, 7953 bytes
    00> [00:02:18.758,697] <dbg> flash_management: write_live_data: Live data updated: write_idx=10, read_idx=0, count=10
    00> [00:02:18.759,887] <inf> siwg917y: Data successfully stored to flash
    00> [00:02:18.959,625] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 9 cmd size: 7
    00> [00:02:19.004,119] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -643
    00> [00:02:19.004,608] <inf> data_module: ecg__live__data
    00> [00:02:19.006,896] <inf> data_module: Using static tx_post_cmd buffer: 8192 bytes (NO_OF_BATCH=10, one_batch_size=824)
    00> [00:02:19.007,354] <inf> siwg917y: ECG Instance 1 appended, length=824 bytes (total buffer=825/8192)
    00> [00:02:19.064,880] <dbg> target_plateform: silab_23_handler: silab_23_handler call
    00> [00:02:19.298,248] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 10 cmd size: 21
    00> [00:02:20.004,150] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 3109
    00> [00:02:20.006,774] <inf> siwg917y: ECG Instance 2 appended, length=792 bytes (total buffer=1617/8192)
    00> [00:02:20.983,032] <inf> wifi_sm: Wi-Fi module initialized successfully
    00> [00:02:21.004,180] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 3766
    00> [00:02:21.007,110] <inf> siwg917y: ECG Instance 3 appended, length=792 bytes (total buffer=2409/8192)
    00> [00:02:21.083,679] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 11 cmd size: 30
    00> [00:02:21.317,962] <dbg> fuel_gauge: fuel_gauge_update: V: 3.696, I: 0.077, T: 24.90, SoC: 45.02, TTE: 9596, TTF: nan
    00> [00:02:21.318,450] <dbg> vc_main: dev_battery_level: battery level: 45
    00> [00:02:22.004,211] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 4927
    00> [00:02:22.007,446] <inf> siwg917y: ECG Instance 4 appended, length=792 bytes (total buffer=3201/8192)
    00> [00:02:23.004,241] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 3374
    00> [00:02:23.007,781] <inf> siwg917y: ECG Instance 5 appended, length=792 bytes (total buffer=3993/8192)
    00> [00:02:24.004,272] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: 1452
    00> [00:02:24.008,087] <inf> siwg917y: ECG Instance 6 appended, length=792 bytes (total buffer=4785/8192)
    00> [00:02:24.446,228] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 12 cmd size: 28
    00> [00:02:25.004,302] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -1093
    00> [00:02:25.008,422] <inf> siwg917y: ECG Instance 7 appended, length=792 bytes (total buffer=5577/8192)
    00> [00:02:25.372,985] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 13 cmd size: 10
    00> [00:02:25.683,654] <inf> siwg917y: Updated RSSI parsed: -43
    00> [00:02:25.683,685] <inf> wifi_sm: Updated RSSI: -43
    00> [00:02:25.683,685] <inf> wifi_sm: wifi _module and gpio initialized successfully
    00> [00:02:25.684,204] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 14 cmd size: 15
    00> [00:02:25.969,970] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 15 cmd size: 18
    00> [00:02:26.004,333] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -3771
    00> [00:02:26.008,758] <inf> siwg917y: ECG Instance 8 appended, length=792 bytes (total buffer=6369/8192)
    00> [00:02:27.004,364] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -5264
    00> [00:02:27.009,094] <inf> siwg917y: ECG Instance 9 appended, length=792 bytes (total buffer=7161/8192)
    00> [00:02:28.004,394] <dbg> target_plateform: handle_ecg_event: hrm: 93 ecg: -4881
    00> [00:02:28.009,429] <inf> siwg917y: ECG Instance 10 appended, length=792 bytes (total buffer=7953/8192)
    00> [00:02:28.010,192] <inf> siwg917y: Data to be written to flash: 7953 bytes
    00> [00:02:28.557,220] <dbg> flash_management: write_live_data: Data path: /LIVE/data10.txt
    00> [00:02:28.658,630] <dbg> flash_management: write_live_data: Data written to /LIVE/data10.txt, 7953 bytes
    00> [00:02:28.658,691] <dbg> flash_management: write_live_data: Live data updated: write_idx=11, read_idx=0, count=11
    00> [00:02:28.659,484] <inf> siwg917y: Data successfully stored to flash
    00> [00:02:29.004,425] <dbg> target_plateform: handle_ecg_event: hrm: 3 ecg: -2887
    00> [00:02:29.004,913] <inf> data_module: ecg__live__data
    00> [00:02:29.007,202] <inf> data_module: Using static tx_post_cmd buffer: 8192 bytes (NO_OF_BATCH=10, one_batch_size=824)
    00> [00:02:29.007,659] <inf> siwg917y: ECG Instance 1 appended, length=824 bytes (total buffer=825/8192)
    00> [00:02:29.556,427] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 16 cmd size: 35
    00> [00:02:29.665,557] <inf> wifi_sm: WIFI CONNECTED TO MQTT....!
    00> [00:02:30.004,455] <dbg> target_plateform: handle_ecg_event: hrm: 47 ecg: -2515
    00> [00:02:30.007,080] <inf> siwg917y: ECG Instance 2 appended, length=792 bytes (total buffer=1617/8192)
    00> [00:02:31.004,486] <dbg> target_plateform: handle_ecg_event: hrm: 47 ecg: 2709
    00> [00:02:31.007,415] <inf> siwg917y: ECG Instance 3 appended, length=792 bytes (total buffer=2409/8192)
    00> [00:02:32.004,516] <dbg> target_plateform: handle_ecg_event: hrm: 30 ecg: 5054
    00> [00:02:32.007,751] <inf> siwg917y: ECG Instance 4 appended, length=792 bytes (total buffer=3201/8192)
    00> [00:02:33.004,547] <dbg> target_plateform: handle_ecg_event: hrm: 30 ecg: 5231
    00> [00:02:33.008,087] <inf> siwg917y: ECG Instance 5 appended, length=792 bytes (total buffer=3993/8192)
    00> [00:02:34.004,577] <dbg> target_plateform: handle_ecg_event: hrm: 30 ecg: 3599
    00> [00:02:34.008,422] <inf> siwg917y: ECG Instance 6 appended, length=792 bytes (total buffer=4785/8192)
    00> [00:02:35.004,608] <dbg> target_plateform: handle_ecg_event: hrm: 22 ecg: 572
    00> [00:02:35.008,758] <inf> siwg917y: ECG Instance 7 appended, length=792 bytes (total buffer=5577/8192)
    00> [00:02:36.004,669] <dbg> target_plateform: handle_ecg_event: hrm: 54 ecg: 3871
    00> [00:02:36.009,063] <inf> siwg917y: ECG Instance 8 appended, length=792 bytes (total buffer=6369/8192)
    00> [00:02:37.004,699] <dbg> target_plateform: handle_ecg_event: hrm: 54 ecg: -4141
    00> [00:02:37.009,399] <inf> siwg917y: ECG Instance 9 appended, length=792 bytes (total buffer=7161/8192)
    00> [00:02:38.004,730] <dbg> target_plateform: handle_ecg_event: hrm: 38 ecg: -4596
    00> [00:02:38.009,735] <inf> siwg917y: ECG Instance 10 appended, length=792 bytes (total buffer=7953/8192)
    00> [00:02:38.010,498] <inf> siwg917y: Data to be written to flash: 7953 bytes
    00> [00:02:38.562,500] <dbg> flash_management: write_live_data: Data path: /LIVE/data11.txt
    00> [00:02:38.663,604] <dbg> flash_management: write_live_data: Data written to /LIVE/data11.txt, 7953 bytes
    00> [00:02:38.663,635] <dbg> flash_management: write_live_data: Live data updated: write_idx=0, read_idx=0, count=12
    00> [00:02:38.664,459] <dbg> siwg917y: flash_store_live_data: read_sem signaled, live_file_count=12
    00> [00:02:38.664,459] <inf> siwg917y: Data successfully stored to flash
    00> [00:02:38.664,550] <inf> vc_main: read_post_thread: Starting
    00> [00:02:38.991,058] <dbg> flash_management: read_live_data: Reading data from path: /LIVE/data00.txt
    00> [00:02:39.004,760] <dbg> target_plateform: handle_ecg_event: hrm: 48 ecg: -4045
    00> [00:02:39.005,798] <inf> data_module: ecg__live__data
    00> [00:02:39.008,117] <inf> data_module: Using static tx_post_cmd buffer: 8192 bytes (NO_OF_BATCH=10, one_batch_size=824)
    00> [00:02:39.008,575] <inf> siwg917y: ECG Instance 1 appended, length=824 bytes (total buffer=825/8192)
    00> [00:02:39.031,585] <dbg> flash_management: read_live_data: Data read successfully from /LIVE/data00.txt, 7953 bytes
    00> [00:02:39.032,409] <inf> vc_main: Read 7953 bytes from /LIVE/data00.txt
    00> [00:02:39.042,266] <inf> siwg917y: Publishing 10 instances to topic Biomedical/data, total payload=7988 bytes
    00> [00:02:39.043,243] <dbg> siwg917y_uart: SIWG917Y_cmd_send: count : 17 cmd size: 7988
    00> [00:02:39.889,343] <inf> vc_main: Data sent successfully
    00> [00:02:39.895,812] <err> os: ***** SECURE FAULT *****
    00> [00:02:39.895,812] <err> os: Invalid entry point
    00> [00:02:39.895,843] <err> os: r0/a1: 0x00000000 r1/a2: 0x00000000 r2/a3: 0x00000000
    00> [00:02:39.895,843] <err> os: r3/a4: 0x00000000 r12/ip: 0x00000000 r14/lr: 0x00000000
    00> [00:02:39.895,874] <err> os: xpsr: 0x00000000
    00> [00:02:39.895,874] <err> os: Faulting instruction address (r15/pc): 0x00000000
    00> [00:02:39.895,935] <err> os: >>> ZEPHYR FATAL ERROR 38: Unknown error on CPU 0
    00> [00:02:39.895,965] <err> os: Current thread: 0x2000df98 (dev_read_data_id)
    00> [00:02:40.149,810] <err> os: Halting system
Related