PM_EVT_STORAGE_FULL event spammed

goldwake

asked 2017-11-17 03:26:23 +0100

updated 2017-11-20 23:35:35 +0100

I am currently doing some stress testing to ensure that I am correctly removing peers when the Peer Manager storage is full.

My device is acting as a peripheral and is connected to by Android 7 phones. I have successfully got my device to add and delete peers as required. However, when I power-cycle my device and let the phones automatically reconnect, I get a flood of PM_EVT_STORAGE_FULL events, followed by a BusFault. What could cause PM_EVT_STORAGE_FULL to be raised repeatedly?

My Peer Manager code and the log (from the restart) are below. I am running on an nRF52 with SDK 14.0 and S332 v5.0. I've removed the logged data from print_peer_data and print_ltk.

[main] Finished setup
0> CONNECTED conn handle 0 
0> CONNECTED Periph 1 of 5
0> PM_EVT_BONDED_PEER_CONNECTED 8
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> peer 2
0> PM_EVT_LOCAL_DB_CACHE_APPLIED
0> PM_EVT_CONN_SEC_START
0> PM_EVT_CONN_SEC_SUCCEEDED 8 2 procedure 0
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> peer 2
0> CONNECTED conn handle 1 
0> CONNECTED Periph 2 of 5
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> CONNECTED conn handle 2 
0> CONNECTED Periph 3 of 5
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> CONNECTED conn handle 3 
0> CONNECTED Periph 4 of 5
0> PM_EVT_BONDED_PEER_CONNECTED 8
0> peer 9
0> PM_EVT_LOCAL_DB_CACHE_APPLIED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_CONN_SEC_START
0> PM_EVT_CONN_SEC_SUCCEEDED 8 9 procedure 0
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> peer 9 
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> ble_scanning_start
0> scan_start
0> BLE_GAP_EVT_DISCONNECTED
0> Peripheral Disconnected 3
0> scan_start
0> CONNECTED conn handle 4 
0> CONNECTED Periph 4 of 5
0> PM_EVT_BONDED_PEER_CONNECTED 8
0> peer 4
0> PM_EVT_LOCAL_DB_CACHE_APPLIED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_CONN_SEC_START
0> PM_EVT_CONN_SEC_SUCCEEDED 8 4 procedure 0
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> peer 4
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> BLE_GAP_EVT_DISCONNECTED
0> Peripheral Disconnected 3
0> scan_start
0> CONNECTED conn handle 2 
0> CONNECTED Periph 4 of 5
0> PM_EVT_BONDED_PEER_CONNECTED 8
0> peer 1
0> PM_EVT_LOCAL_DB_CACHE_APPLIED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_CONN_SEC_CONFIG_REQ
0> PM_EVT_CONN_SEC_START
0> PM_EVT_CONN_SEC_SUCCEEDED 8 1 procedure 1
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> peer 1
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> BLE_GAP_EVT_DISCONNECTED
0> Peripheral Disconnected 3
0> scan_start
0> CONNECTED conn handle 4 
0> CONNECTED Periph 4 of 5
0> PM_EVT_CONN_SEC_START
0> PM_EVT_CONN_SEC_SUCCEEDED 9 5 procedure 1
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_STORAGE_FULL
0> PM_EVT_PEER_DATA_UPDATE_SUCCEEDED
0> PM_EVT_STORAGE_FULL
0> PM_EVT_STORAGE_FULL
0> PM_EVT_STORAGE_FULL
... ( 0> PM_EVT_STORAGE_FULL repeated ~250 more times ) ...
0> PM_EVT_STORAGE_FULL
0> BusFault

static uint32_t rank_peer_highest(pm_peer_id_t peer_id)
{
    uint32_t err_code = pm_peer_rank_highest(peer_id);
    //pm_peer_rank_highest can return NRF_ERROR_STORAGE_FULL so, handle it https://devzone.nordicsemi.com/question/160450/pm_peer_rank_highest-return-values/
    if (err_code == NRF_ERROR_STORAGE_FULL) 
    {
        _peer_to_rank_highest = peer_id;
    }
    else if (err_code != NRF_ERROR_BUSY)
    {
        APP_ERROR_CHECK(err_code);
    }
    return err_code;
}

#if defined(NRF5_SDK13) || defined(NRF5_SDK14)
static void print_ltk(ble_gap_enc_key_t * ltk)
{
    TRACE("ltk.enc_info ltk ");
    for(int i = 0; i < BLE_GAP_SEC_KEY_LEN; i++)
    {
        TRACE("%x ", ltk->enc_info.ltk[i]);
    }
    TRACE("\r\n");
    TRACE("lesc %d auth %d len %d\r\n", ltk->enc_info.lesc, ltk->enc_info.auth, ltk->enc_info.ltk_len);

    TRACE("ltk.master_id.rand ");
    for(int i = 0; i < BLE_GAP_SEC_RAND_LEN; i++)
    {
        TRACE("%x ", ltk->master_id.rand[i]);
    }
    TRACE("\r\n");
    TRACE("ediv  %d\r\n", ltk->master_id.ediv);
}

static void print_peer_data(pm_peer_id_t peer)
{
    pm_peer_id_t current_peer_id = pm_next_peer_id_get(PM_PEER_ID_INVALID);
    pm_peer_data_bonding_t bond_data;
    while (current_peer_id != PM_PEER_ID_INVALID)
    {
        // Compare before advancing, so the first peer ID is not skipped.
        if (peer == current_peer_id)
        {
            ret_code_t e = pm_peer_data_bonding_load(current_peer_id, &bond_data);
            if (e == NRF_SUCCESS)
            {
                TRACE("peer %d\r\n", current_peer_id);
                TRACE("own role %d\r\n", bond_data.own_role);
                TRACE("peer_ble_id.id_info.irk ");
                for (int i = 0; i < BLE_GAP_SEC_KEY_LEN; i++)
                {
                    TRACE("%x ", bond_data.peer_ble_id.id_info.irk[i]);
                }
                TRACE("\r\n");
                TRACE("peer_ble_id.id_addr_info addr_type %d addr_id_peer %d\r\n", bond_data.peer_ble_id.id_addr_info.addr_type, bond_data.peer_ble_id.id_addr_info.addr_id_peer);
                TRACE("peer_ble_id.id_addr_info.addr ");
                for (int i = 0; i < BLE_GAP_ADDR_LEN; i++)
                {
                    TRACE("%x ", bond_data.peer_ble_id.id_addr_info.addr[i]);
                }
                TRACE("\r\n");
                TRACE("own_ltk\r\n");
                print_ltk(&bond_data.own_ltk);
                TRACE("peer_ltk\r\n");
                print_ltk(&bond_data.peer_ltk);
                TRACE("\r\n\r\n");
            }
        }
        current_peer_id = pm_next_peer_id_get(current_peer_id);
    }
}
#endif
static bool delete_oldest_unused_peer(void)
{
    pm_peer_id_t highest_ranked_peer;
    pm_peer_id_t lowest_ranked_peer;
    uint32_t highest_rank;
    uint32_t lowest_rank;

    ret_code_t err_code = pm_peer_ranks_get(&highest_ranked_peer, &highest_rank, &lowest_ranked_peer, &lowest_rank);
    if (err_code == NRF_SUCCESS)
    {
        uint16_t conn_handle;
        err_code = pm_conn_handle_get(lowest_ranked_peer, &conn_handle);
        // Only delete the lowest-ranked peer if it is not currently connected.
        if (err_code == NRF_SUCCESS && ble_conn_state_status(conn_handle) == BLE_CONN_STATUS_DISCONNECTED)
        {
            err_code = pm_peer_delete(lowest_ranked_peer);
            APP_ERROR_CHECK(err_code);
            return true;
        }
    }
    APP_ERROR_CHECK(err_code);
    return false;
}

static void pm_evt_handler(pm_evt_t const * p_evt)
{
    ret_code_t err_code;
    // example shown here http://infocenter.nordicsemi.com/index.jsp?topic=%2Fcom.nordic.infocenter.sdk5.v11.0.0%2Flib_pm_usage.html&anchor=lib_pm_usage_events 
    switch(p_evt->evt_id)
    {
        case PM_EVT_STORAGE_FULL:
        {
            TRACE("PM_EVT_STORAGE_FULL\r\n");
            delete_oldest_unused_peer();
            // Run garbage collection on the flash.
            err_code = fds_gc();
            if (err_code == FDS_ERR_BUSY || err_code == FDS_ERR_NO_SPACE_IN_QUEUES)
            {
                // Retry.
            }
            else
            {
                APP_ERROR_CHECK(err_code);
            }
            if (_peer_to_rank_highest != PM_PEER_ID_INVALID)
            {
                // Retry the deferred ranking via rank_peer_highest(), which
                // re-defers it if storage is still full.
                pm_peer_id_t peer = _peer_to_rank_highest;
                _peer_to_rank_highest = PM_PEER_ID_INVALID;
                rank_peer_highest(peer);
            }
        }break;//PM_EVT_STORAGE_FULL
        case PM_EVT_BONDED_PEER_CONNECTED:
        {
            TRACE("PM_EVT_BONDED_PEER_CONNECTED %d\r\n", pm_peer_count());
            rank_peer_highest(p_evt->peer_id);
            #if defined(NRF5_SDK13) || defined(NRF5_SDK14)
            print_peer_data(p_evt->peer_id);
            #endif
        } break; // PM_EVT_BONDED_PEER_CONNECTED
        case PM_EVT_CONN_SEC_START:
            TRACE("PM_EVT_CONN_SEC_START\r\n");
            break;
        case PM_EVT_CONN_SEC_SUCCEEDED:
        {
            TRACE("PM_EVT_CONN_SEC_SUCCEEDED %d %d procedure %d\r\n", pm_peer_count(), p_evt->peer_id, p_evt->params.conn_sec_succeeded.procedure);
            rank_peer_highest(p_evt->peer_id);
            #if defined(NRF5_SDK13) || defined(NRF5_SDK14)
            print_peer_data(p_evt->peer_id);
            #endif
        } break; // PM_EVT_CONN_SEC_SUCCEEDED
        case PM_EVT_CONN_SEC_FAILED:
        {
            TRACE("PM_EVT_CONN_SEC_FAILED ");
            /** In some cases, when securing fails, it can be restarted directly. Sometimes it can
             *  be restarted, but only after changing some Security Parameters. Sometimes, it cannot
             *  be restarted until the link is disconnected and reconnected. Sometimes it is
             *  impossible, to secure the link, or the peer device does not support it. How to
             *  handle this error is highly application dependent. */
            switch (p_evt->params.conn_sec_failed.error)
            {
                case PM_CONN_SEC_ERROR_PIN_OR_KEY_MISSING:
                {
                    TRACE("PM_CONN_SEC_ERROR_PIN_OR_KEY_MISSING\r\n");
                    // Rebond if one party has lost its keys.
                    err_code = pm_conn_secure(p_evt->conn_handle, true);
                    if (err_code != NRF_ERROR_INVALID_STATE)
                    {
                        APP_ERROR_CHECK(err_code);
                    }
                } break; // PM_CONN_SEC_ERROR_PIN_OR_KEY_MISSING
                case(PM_CONN_SEC_ERROR_MIC_FAILURE):
                {
                    TRACE("PM_CONN_SEC_ERROR_MIC_FAILURE\r\n");
                    ret_code_t err_code = pm_peer_delete(p_evt->peer_id);
                    if (err_code != NRF_ERROR_INVALID_STATE)
                    {
                        APP_ERROR_CHECK(err_code);
                    }
                } break;
                case(PM_CONN_SEC_ERROR_DISCONNECT):
                {
                    TRACE("PM_CONN_SEC_ERROR_DISCONNECT\r\n");
                } break;
                case(PM_CONN_SEC_ERROR_SMP_TIMEOUT):
                {
                    TRACE("PM_CONN_SEC_ERROR_SMP_TIMEOUT\r\n");
                } break;
                default:
                    break;
            }
        } break; // PM_EVT_CONN_SEC_FAILED
        case PM_EVT_CONN_SEC_CONFIG_REQ:
        {
            TRACE("PM_EVT_CONN_SEC_CONFIG_REQ\r\n");
            // Don't reject a pairing request from an already-bonded peer.
            pm_conn_sec_config_t conn_sec_config = {.allow_repairing = true};
            pm_conn_sec_config_reply(p_evt->conn_handle, &conn_sec_config);
        } break; // PM_EVT_CONN_SEC_CONFIG_REQ
        case PM_EVT_ERROR_UNEXPECTED:
            TRACE("PM_EVT_ERROR_UNEXPECTED\r\n");
            // Assert.
            APP_ERROR_CHECK(p_evt->params.error_unexpected.error);
            break; // PM_EVT_ERROR_UNEXPECTED
        case PM_EVT_PEER_DATA_UPDATE_SUCCEEDED:
            TRACE("PM_EVT_PEER_DATA_UPDATE_SUCCEEDED\r\n");
            break;
        case PM_EVT_PEER_DATA_UPDATE_FAILED:
            TRACE("PM_EVT_PEER_DATA_UPDATE_FAILED\r\n");
            // Assert.
            APP_ERROR_CHECK_BOOL(false);
            break; // PM_EVT_PEER_DATA_UPDATE_FAILED
        case PM_EVT_PEER_DELETE_SUCCEEDED:
            TRACE("PM_EVT_PEER_DELETE_SUCCEEDED\r\n");
            err_code = fds_gc();
            if (err_code == FDS_ERR_BUSY || err_code == FDS_ERR_NO_SPACE_IN_QUEUES)
            {
                // Retry.
            }
            else
            {
                APP_ERROR_CHECK(err_code);
            }
            break;
        case PM_EVT_PEER_DELETE_FAILED:
            TRACE("PM_EVT_PEER_DELETE_FAILED\r\n");
            // Assert.
            APP_ERROR_CHECK(p_evt->params.peer_delete_failed.error);
            break; // PM_EVT_PEER_DELETE_FAILED
        case PM_EVT_PEERS_DELETE_SUCCEEDED:
            TRACE("PM_EVT_PEERS_DELETE_SUCCEEDED\r\n");
            err_code = fds_gc();
            if (err_code == FDS_ERR_BUSY || err_code == FDS_ERR_NO_SPACE_IN_QUEUES)
            {
                // Retry.
            }
            else
            {
                APP_ERROR_CHECK(err_code);
            }
            //advertising_start();
            break;
        case PM_EVT_PEERS_DELETE_FAILED:
            TRACE("PM_EVT_PEERS_DELETE_FAILED\r\n");
            // Assert.
            APP_ERROR_CHECK(p_evt->params.peers_delete_failed_evt.error);
            break; // PM_EVT_PEERS_DELETE_FAILED
        case PM_EVT_LOCAL_DB_CACHE_APPLIED:
            TRACE("PM_EVT_LOCAL_DB_CACHE_APPLIED\r\n");
            break;
        case PM_EVT_LOCAL_DB_CACHE_APPLY_FAILED:
            TRACE("PM_EVT_LOCAL_DB_CACHE_APPLY_FAILED\r\n");
            // The local database has likely changed, send service changed indications.
            pm_local_database_has_changed();
            break; // PM_EVT_LOCAL_DB_CACHE_APPLY_FAILED
        case PM_EVT_SERVICE_CHANGED_IND_SENT:
            TRACE("PM_EVT_SERVICE_CHANGED_IND_SENT\r\n");
            break;
        case PM_EVT_SERVICE_CHANGED_IND_CONFIRMED:
            TRACE("PM_EVT_SERVICE_CHANGED_IND_CONFIRMED\r\n");
            break;
        default:
            TRACE("PM DEFAULT %d\r\n", p_evt->evt_id);
            break;
    }
}

EDIT 20 Nov 17: What I think I am seeing is that when a phone connects and bonds to my device, the Peer Manager allocates space for the peer, hits an SM_EVT_ERROR_NO_MEM internally, and raises PM_EVT_STORAGE_FULL. My handler calls fds_gc, and once that finishes, bonding is automatically retried and the loop repeats.

I have noticed that in queue_process, gc_execute returns 0x10, which doesn't seem to be defined anywhere.

I have also noticed that pm_peer_count() somehow returns 0, yet I still get PM_EVT_STORAGE_FULL events.

EDIT 21 Nov 17: fds_gc only ever returns 0 (NRF_SUCCESS).

fds_stat returns the following:

 0> open_records 0
 0> valid_records 19
 0> dirty_records 119
 0> words_reserved 0
 0> words_used 14529
 0> largest_contig 54072
 0> freeable_words 2079

So there is definitely space to free, but for some reason fds_gc doesn't seem to be freeing it.

0x10 corresponds to NRF_ERROR_INVALID_ADDR. I get this error in nrf_storage_write, and it is due to the address not being within bounds: the length is 0xC08C, which pushes the write past the end address. The call stack at this point is:

nrf_fstorage_sys_evt_handler
event_send 
fs_event_handler
queue_process 
gc_execute
gc_record_find_next 
gc_record_copy
nrf_storage_write

Comments

What does fds_gc() return? FDS_ERR_NO_SPACE_IN_QUEUES? It's strange that gc_execute() returns 0x10; I'm not sure where that comes from. Maybe you could debug some more inside gc_execute()? Could you also run fds_stat() to see what it reports?

Petter Myhre ( 2017-11-20 17:17:10 +0100 )

Strange. Would it be possible for you to share your complete project, so I can try to reproduce?

Petter Myhre ( 2017-11-21 13:52:51 +0100 )

Thanks Petter, I have sent you a private message.

goldwake ( 2017-11-22 04:46:45 +0100 )