
Mesh Models in a gateway

Hey,

I'm developing a gateway using an nRF52832 & ESP12 (UART communication). Right now I have one model (in the client example) for each type of device in the mesh network (1 RGB light + 1 switch + 1 roller door + etc.). The ESP sends the element address of the RGB light whose state I want to change, that address is set as the publication address of the corresponding model handle (in this case, the RGB model handle), and the data is published to the mesh. So for each message received from the cloud, the specified address is set on the matching model, and the data is then published to the desired node.

This works with no issues right now, but is it technically the most efficient way of publishing data? Should I have one model handle per unicast address, one handle per node, or is the way I have done it alright?

  • Hi,

    While one client model of each type would technically work, it may not be a good option in the long run. The reason is that there will be a couple of flash writes every time you reconfigure the publish address, which means you run the risk of wearing out the flash. Because of that it might be a better idea to use more clients. I suggest that you do some calculations based on the expected usage scenario and figure out how many reconfigurations you would expect over time. If you reach tens of thousands during the product lifetime, then you probably want to make some changes.
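    As a back-of-the-envelope version of that calculation (the endurance figure and the usage numbers below are assumptions for illustration; check the nRF52832 datasheet and your own traffic pattern):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* Rough wear estimate. 10,000 erase cycles is a typical endurance figure
         * for nRF52 flash pages (an assumption here; verify against the datasheet),
         * and the flash manager may spread writes across pages, so this is only
         * an order-of-magnitude check. */
        const long erase_cycle_budget = 10000;
        const long reconfigs_per_day  = 50;       /* assumed publish-address changes per day */
        const long lifetime_days      = 365 * 5;  /* assumed 5-year product lifetime */

        long total_reconfigs = reconfigs_per_day * lifetime_days;
        printf("expected reconfigurations over lifetime: %ld\n", total_reconfigs);

        /* 50/day over 5 years is ~91k changes, far past the budget, so with this
         * usage pattern a pool of client models (fewer reconfigurations) is needed. */
        assert(total_reconfigs > erase_cycle_budget);
        return 0;
    }
    ```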

    In general, you should organize your network so that several server models subscribe to the same group address. Then you control the group, and not individual servers. E.g. all the lighting fixtures in a room. That also makes it easier to replace light bulbs, as you only need to reconfigure the new light bulb to subscribe to the group address.

    Regarding publishing data, there is a limit for any mesh node on the network of publishing 100 messages during a moving 10-second time frame, i.e. peak throughput is ten messages per second. This means using group addresses (controlling multiple nodes together) is better than controlling multiple nodes individually.
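    That originator limit behaves like a sliding window; the sketch below is a self-contained model of it (not stack code), using the 100-messages-per-moving-10-seconds figure from above:

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Sliding-window check for the mesh originator limit: at most 100 messages
     * published within any moving 10-second window. Timestamps in milliseconds. */
    #define WINDOW_MS  10000u
    #define WINDOW_MAX 100u

    static uint32_t g_stamps[WINDOW_MAX];
    static unsigned g_count = 0;

    static bool can_publish(uint32_t now_ms)
    {
        /* Drop timestamps that have fallen out of the 10 s window. */
        unsigned kept = 0;
        for (unsigned i = 0; i < g_count; i++) {
            if (now_ms - g_stamps[i] < WINDOW_MS) {
                g_stamps[kept++] = g_stamps[i];
            }
        }
        g_count = kept;

        if (g_count >= WINDOW_MAX) {
            return false; /* would exceed 100 messages in the last 10 s */
        }
        g_stamps[g_count++] = now_ms;
        return true;
    }

    int main(void)
    {
        /* A burst of 100 messages at t=0 all pass; the 101st is throttled... */
        for (unsigned i = 0; i < WINDOW_MAX; i++) {
            assert(can_publish(0));
        }
        assert(!can_publish(5000));
        /* ...until the window has moved past the initial burst. */
        assert(can_publish(10000));
        return 0;
    }
    ```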

    Getting messages from multiple nodes should not be an issue, other than the general issue of packet collisions if the network gets congested. While there is a limit on throughput originating from a node, there is no limit on relaying or receiving packets.

    Regards,
    Terje

  • Grouping is up to the customer, right? We have no control over it. The customer can group the devices according to his/her needs (say, all the lights in a room as a group). He/she can group them from the mobile application we are developing, so the grouping is entirely the customer's choice.

    In that case, what you recommend is to have several elements with the same model in the gateway publish data to the nodes (like round robin)?

  • Hi,

    Yes, in that case that would be the best option. (And of course, if a model is already configured with the correct publication address, that one is reused instead of reconfiguring the next one round-robin.)

    Regards,
    Terje

  • Hi,

    Take the function handle_config_model_publication_set in config_server.c: there is a call to access_flash_config_store, and according to the documentation this is the line of code where the access layer information gets stored in flash. If I comment this out, since I don't actually want to save the publication address (I always set the model's publication address before publishing data), I won't have to worry about the flash memory issue you mentioned in the earlier reply ("which means you run the risk of wearing out the flash"). The WiFi module always sends the state and the respective element address, to which the publication address of the nRF node in the gateway is updated before publishing data. So this would solve the issue, right?

    And you said earlier to use around 10 models and do round robin when publishing data. Is that just because of the flash wear issue? If the above flow is okay (commenting out access_flash_config_store()), is it fine to use one model? Will there be any performance hit compared to using 10 models?


    Thanks,
    Asiri 

  • Hi,

    The suggestion to have a pool of several models is intended as a caching mechanism: if you use the same few models often, then you don't need to configure those addresses all the time. Preferably you use a heuristic that minimizes the need for publish address reconfigurations.
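    That heuristic could look roughly like this self-contained sketch (plain C, no SDK calls; `pick_model` and the pool size are illustrative): a pooled model whose publish address already matches is reused, and otherwise the next model in round-robin order is reconfigured, so only cache misses cost a flash write.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Pool of client models used as a publish-address cache. */
    #define POOL_SIZE 4

    static uint16_t g_pool_addr[POOL_SIZE]; /* publish address per pooled model */
    static unsigned g_next = 0;             /* round-robin victim index */
    static unsigned g_reconfigs = 0;        /* counts simulated flash writes */

    static unsigned pick_model(uint16_t element_addr)
    {
        for (unsigned i = 0; i < POOL_SIZE; i++) {
            if (g_pool_addr[i] == element_addr) {
                return i; /* cache hit: reuse, no reconfiguration */
            }
        }
        unsigned victim = g_next;
        g_next = (g_next + 1) % POOL_SIZE;
        g_pool_addr[victim] = element_addr; /* cache miss: reconfigure (a flash write) */
        g_reconfigs++;
        return victim;
    }

    int main(void)
    {
        pick_model(0x0010);
        pick_model(0x0020);
        pick_model(0x0010); /* hit: no new reconfiguration */
        assert(g_reconfigs == 2);
        pick_model(0x0030);
        pick_model(0x0040);
        pick_model(0x0050); /* pool full: round-robin eviction, one more write */
        assert(g_reconfigs == 5);
        return 0;
    }
    ```

    With a single model every message is a miss; the pool turns repeated traffic to the same few addresses into hits, which is where the flash savings come from.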

    While you do have control over the flash storage done by the access module, you do not have control over the flash storage done by the device state manager (DSM) module. If you do not update the data stored for the access module (but the DSM is updated behind the scenes), then you are likely to end up in a bad state after a reset (akin to a dangling pointer in C).

    You may recover from DSM/access inconsistencies by removing the gateway node from the network and reprovisioning it on every reset, or at least whenever you get the mesh assert because DSM and access are out of sync. However, that again puts more strain on the mesh network, involves the provisioner, and leads to more flash erase cycles. This is the reason why it is generally better to have more models (to avoid reconfigurations leading to more flash usage).

    Regards,
    Terje

  • Hey,

    For the runtime environment, only the volatile memory is used, right? (Say, setting the publication address of an element and then publishing.) Does this depend only on volatile memory, or on flash memory as well? Isn't the flash memory only used to restore data to volatile memory after a reset, or does it have other purposes throughout the code?

    You see, for my application I don't necessarily require saving the publication address of the model to flash, because each time a message gets published I could set the publication address of the element. Considering this, is there a workaround other than having more models? (Even with more models, the gateway's flash erase cycles would be limited; this is what I'm trying to clarify on my end.)

    Thanks,
    Asiri

  • Hi,

    Any configuration of the models, including publish addresses and subscription addresses, is supposed to survive a reset. For that reason, all such configuration values are to be stored in non-volatile memory.

    In our Bluetooth mesh stack, the responsibility of storing configuration data to non-volatile memory is partly covered by the stack and partly covered by the model implementation. While the DSM part is done in the stack, the access layer part is done in the model implementation. As such, you only control the access part of it.

    Since the stack is provided as source code you technically have access to disabling flash writes in DSM, but do note that since that constitutes a change of the stack you cannot use our qualification ID for the stack any more if you do so.

    You are not the first customer to ask for this (or similar) scenario, and we are aware of the use case. Hopefully we can do something to better suit this use case in the future, but for the time being I am afraid using several models is the way to go. (Unless you want to qualify the stack yourself, at which point modifying the behavior of DSM becomes a valid option.)

    Regards,
    Terje
