
Time for configuring node increases after deleting from mesh network

Hi guys,

I have the same issue as coca1989 had 2 years ago. Did anybody find a solution?

I am using the health model and a simple message model. The client and provisioner are running on one device (demo board). I can provision and configure up to 5 server nodes (dongles). I get health events from each connected server (every 10 seconds), and I can send and receive small messages on all server nodes.

Now I would like to remove nodes from the mesh network and reconnect them (reprovisioning and reconfiguration). These are the steps I am doing:

  1. config_client_server_bind() and config_client_server_set() for the server node I would like to remove from the network
  2. config_client_node_reset()
  3. The server gets the node reset event (CONFIG_SERVER_EVT_NODE_RESET) from the client and performs node_reset() with mesh_stack_config_clear() and mesh_stack_device_reset()
  4. The server responds to the client with CONFIG_CLIENT_EVENT_TYPE_CANCELLED and I do dsm_devkey_delete()
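
For reference, this is roughly the client-side sequence (a minimal sketch against the nRF5 SDK for Mesh config client; the handle variables come from my own provisioning bookkeeping, ERROR_CHECK is the error macro used in the SDK examples, and the exact signatures may differ between SDK versions):

    #include "config_client.h"
    #include "device_state_manager.h"

    /* Sketch of the client-side removal sequence. devkey_handle and
     * addr_handle are the DSM handles stored when the node was provisioned. */
    static void node_remove_start(dsm_handle_t devkey_handle, dsm_handle_t addr_handle)
    {
        /* Point the config client at the server node to be removed. */
        ERROR_CHECK(config_client_server_bind(devkey_handle));
        ERROR_CHECK(config_client_server_set(devkey_handle, addr_handle));

        /* Ask the node to reset itself (it clears its config and reboots). */
        ERROR_CHECK(config_client_node_reset());
    }

    /* Called when the config client reports CONFIG_CLIENT_EVENT_TYPE_CANCELLED
     * for the reset message: forget the node's device key locally. */
    static void node_remove_finish(dsm_handle_t devkey_handle)
    {
        ERROR_CHECK(dsm_devkey_delete(devkey_handle));
    }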

After removing the server node, I can reprovision and reconfigure it successfully (I get health events and can send/receive messages). But the configuration takes longer than the first time, and repeating this process (removing the node and reconnecting it) increases the configuration time each time.

Here is a time table:

First configuration: 2-3 seconds
Second configuration (after removing node from mesh): 10-11 seconds
Third configuration (after removing node from mesh): 20-30 seconds
Fourth configuration (after removing node from mesh): 45-50 seconds
Fifth configuration (after removing node from mesh): >80 seconds

This is reproducible. Rebooting the client/provisioner device after removing a server node reduces the configuration time back to 2-3 seconds, but then I get no health events and no messages.

During reconfiguration (after removing the server from the network) I get SAR mesh events on the server node. During the first configuration (fresh device) I do not get these SAR events.

I guess I have to delete more on the client side? Maybe the simple message or health model is still bound to the old address handles?

  • I have asked our developers about this; I will update you when I have something.

  • Hi Mttrinh,

    Above I wrote that rebooting the client/provisioner reduces the configuration time back to 2-3 seconds, but the health and simple message models won't work.


    Now I have managed to get the health and simple message models running correctly after the reboot. The problem was a wrong appkey.

    After rebooting, the appkey was not loaded from flash; it stayed zero. I thought dsm_appkey_get_all() would handle it, but I was wrong. Now I am using dsm_tx_secmat_get() to load the appkey from flash. With the correct appkey all mesh models work fine after the reboot. I am still wondering why there is no dsm_appkey_get() function; it would make my life easier.
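
    In case someone else hits this, below is roughly what I do now after a reboot (a sketch only: I assume dsm_appkey_get_all() returns just the stored key indices rather than the key bytes, and the dsm_tx_secmat_get() signature and secmat field names are the ones from my SDK version, so check yours):

        #include <string.h>
        #include "device_state_manager.h"
        #include "nrf_mesh.h"
        #include "nrf_error.h"

        /* After reboot the appkey bytes are no longer in my application struct,
         * so I read them back through the transmit security material.
         * NOTE: in some SDK versions dsm_tx_secmat_get() takes only
         * (app_handle, &secmat) -- adjust to your version. */
        static uint32_t appkey_restore(dsm_handle_t subnet_handle,
                                       dsm_handle_t appkey_handle,
                                       uint8_t p_key_out[NRF_MESH_KEY_SIZE])
        {
            nrf_mesh_secmat_t secmat;
            uint32_t status = dsm_tx_secmat_get(subnet_handle, appkey_handle, &secmat);
            if (status == NRF_SUCCESS)
            {
                memcpy(p_key_out, secmat.p_app->key, NRF_MESH_KEY_SIZE);
            }
            return status;
        }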

    The increasing configuration time for removed server nodes still exists if I do not reboot the client/provisioner. I guess I have to refresh something in the mesh stack on the client side.

  • Hi,

    Great that the health and simple message model issue worked out. A question from our developer:

    Are you using the same unicast address for the node every time you reprovision it?

    If that is the case, then what is happening here is that the provisioner's replay list filters out the node's incoming responses until its sequence number is higher than the last known sequence number. That is why you see an increasing time to fully configure the node: after reprovisioning, the node's sequence numbers restart at zero, while the replay list still holds the (ever higher) value left behind by the previous session.
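
    Conceptually, the replay-list check behaves like the sketch below (an illustration of the rule only, not the actual stack code): the provisioner remembers the highest sequence number accepted from each source address and drops anything that is not newer.

        #include <stdbool.h>
        #include <stdint.h>

        /* Illustration only (not SDK code): one replay-list entry per source address. */
        typedef struct
        {
            uint16_t src;     /* unicast address of the sender */
            uint32_t seqnum;  /* highest sequence number accepted from src */
        } replay_entry_t;

        /* p_entry is the replay-list entry matching the message's source address. */
        static bool replay_accept(replay_entry_t * p_entry, uint32_t seqnum)
        {
            if (seqnum <= p_entry->seqnum)
            {
                return false;          /* looks like a replay: message is dropped */
            }
            p_entry->seqnum = seqnum;  /* accepted, entry updated */
            return true;
        }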

    The only reasonable way to work around this issue is to not reuse the same node address when reprovisioning the node. Otherwise, you will have to reset the provisioner or clear the replay list with an internal API (both options carry a risk of replay attacks, so they should be chosen wisely).
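
    If you do go the internal-API route, it would look roughly like the lines below. Note that the header path and function name are assumptions based on the mesh core sources, so verify them against your SDK version, and keep the replay-attack risk mentioned above in mind:

        /* ASSUMPTION: the internal replay cache module (mesh core, replay_cache.h)
         * exposes a clear function with this name in your SDK version. Clearing the
         * list removes the stale sequence numbers, but also removes replay protection
         * for all known nodes until they have sent new traffic. */
        #include "replay_cache.h"

        static void provisioner_clear_replay_list(void)
        {
            replay_cache_clear();
        }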

  • Hi,

    Thanks for the fast response.

    Yes, I do reuse the unicast addresses, because I thought I could reuse them after deleting the node from the network. I plan to use a maximum of 50 server nodes, starting at address 500.

    What is the maximum size of this replay list?
    I guess there will be a limit; what should I do when that limit is reached?

    Do I understand the workaround correctly?

    1. For provisioning I use an increasing unicast address (1 up to the limit) at start_provisioning()/nrf_mesh_prov_provision() in nrf_mesh_prov_provisioning_data_t (see the sketch below).

    2. On NRF_MESH_PROV_EVT_COMPLETE I don't use that unicast address; instead I set my node address (500-549) with dsm_address_publish_add().

    3. Proceed with node configuration using the node address (500-549).

    Is this correct?
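
    To be explicit about step 1, I mean something like this sketch (m_next_unicast_address and m_netkey are my own variables, and the field names are from nrf_mesh_prov_provisioning_data_t in my SDK version):

        #include <string.h>
        #include "nrf_mesh_prov.h"

        /* Step 1: never hand out the same unicast address twice, even after a
         * node has been removed and is reprovisioned. */
        static uint16_t m_next_unicast_address = 1;
        static uint8_t  m_netkey[NRF_MESH_KEY_SIZE];  /* my stored network key */

        static void provisioning_data_prepare(nrf_mesh_prov_provisioning_data_t * p_data)
        {
            memset(p_data, 0, sizeof(*p_data));
            memcpy(p_data->netkey, m_netkey, NRF_MESH_KEY_SIZE);
            p_data->netkey_index = 0;
            p_data->iv_index     = 0;
            p_data->address      = m_next_unicast_address++;  /* fresh address every time */
            /* ...then passed to nrf_mesh_prov_provision() as usual. */
        }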

    Best regards,

    Jeff
