
Mesh Models in a gateway

Hey,

I'm developing a gateway using an nRF52832 and an ESP12 (UART communication). Right now I have one model (in the client example) for each type of device in the mesh network (1 RGB light + 1 switch + 1 roller door + etc.). The ESP sends the element address of the RGB light whose state I want to switch; that address is set as the publish address for the corresponding model handle (in this case, the RGB model handle), and the data is published to the mesh. So for each message received from the cloud, the target address is set on the matching model, and the data is then published to the desired node.

This works with no issues right now, but is it technically the most efficient way of publishing data? Should I have one model handle per unicast address in the network, one handle per node, or is the way I have done it alright?

  • Hi,

    While one client model of each type would technically work, it may not be a good option in the long run. The reason is that there are a couple of flash writes every time you reconfigure the publish address, which means you run the risk of wearing out the flash. Because of that, it might be a better idea to use more clients. I suggest that you do some calculations based on your usage scenario and figure out how many reconfigurations you would expect over time. If you reach tens of thousands during the product lifetime, then you probably want to make some changes.
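    As a rough worked example (assuming the nRF52832's rated endurance of 10,000 erase/write cycles per flash page, and assuming each reconfiguration costs one write cycle on the same page; the 50 reconfigurations per day is an illustrative number, not a measurement):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned flash_endurance = 10000; /* rated erase/write cycles (nRF52832) */
        const unsigned reconfigs_per_day = 50;  /* assumed: cloud messages needing retargeting */

        /* Days until the page's rated endurance is exhausted. */
        unsigned days = flash_endurance / reconfigs_per_day;
        printf("Rated lifetime: ~%u days (~%.1f years)\n", days, days / 365.0);
        assert(days == 200); /* 10000 / 50 = 200 days: far short of a product lifetime */
        return 0;
    }
    ```

    Even modest traffic exhausts the rated endurance quickly, which is why spreading the reconfigurations over several client models helps.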

    In general, you should organize your network so that several server models subscribe to the same group address. Then you control the group, and not individual servers. E.g. all the lighting fixtures in a room. That also makes it easier to replace light bulbs, as you only need to reconfigure the new light bulb to subscribe to the group address.

    Regarding publishing data, there is a limit for any mesh node on the network of publishing 100 messages during a moving 10-second time frame, i.e. peak throughput is ten messages per second. This means using group addresses (controlling multiple nodes together) is better than individually controlling multiple nodes.
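    The 100-messages-per-moving-10-seconds budget can be modeled as a sliding window. Here is a sketch of how a gateway might pace its own publishes against that budget (the limit comes from the mesh network; this pacing code is an illustration of the arithmetic, not SDK behavior):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define WINDOW_MS 10000u  /* moving window length */
    #define MAX_MSGS  100u    /* messages allowed per window */

    static uint32_t sent_at[MAX_MSGS]; /* ring buffer of send timestamps */
    static uint32_t head, count;

    /* Returns true if a publish is allowed at time now_ms, and records it. */
    static bool try_publish(uint32_t now_ms)
    {
        /* Drop timestamps that have fallen out of the moving window. */
        while (count > 0 && now_ms - sent_at[head] >= WINDOW_MS) {
            head = (head + 1) % MAX_MSGS;
            count--;
        }
        if (count >= MAX_MSGS)
            return false; /* budget exhausted: 100 msgs in the last 10 s */
        sent_at[(head + count) % MAX_MSGS] = now_ms;
        count++;
        return true;
    }

    int main(void)
    {
        /* 100 publishes at t=0 all fit; further publishes are refused ... */
        for (uint32_t i = 0; i < MAX_MSGS; i++)
            assert(try_publish(0));
        assert(!try_publish(5000));
        /* ... until the window has moved past the burst. */
        assert(try_publish(10000));
        return 0;
    }
    ```

    Note how controlling a group costs one message from this budget regardless of how many servers subscribe to it, which is the throughput argument for group addressing.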

    Getting messages from multiple nodes should not be an issue, other than the general issue of packet collisions if the network gets congested. While there is a limit on throughput originating from a node, there is no limit on relaying or receiving packets.

    Regards,
    Terje

  • Grouping is up to the customer, right? We have no control over it. The customer can group the devices as per his/her needs (say, all the lights in a room as a group) from the mobile application that we are developing, in which case we have no control over the grouping; it is entirely the customer's requirement.

    In that case, what you recommend is to have several elements with the same model in the gateway publish data to the nodes (like round robin)?

  • Hi,

    Any configuration of the models, including publish addresses and subscription addresses, is supposed to survive a reset. For that reason, all such configuration values are stored in non-volatile memory.

    In our Bluetooth mesh stack, the responsibility of storing configuration data to non-volatile memory is partly covered by the stack and partly by the model implementation. The DSM (Device State Manager) part is handled by the stack, while the access layer part is handled by the model implementation. As such, you only control the access part of it.

    Since the stack is provided as source code, you can technically disable flash writes in the DSM, but do note that since this constitutes a change to the stack, you can no longer use our qualification ID for the stack if you do so.

    You are not the first customer to ask about this (or a similar) scenario, and we are aware of the use case. Hopefully we can do something to better suit it in the future, but for the time being I am afraid using several models is the way to go. (Unless you want to qualify the stack yourself, at which point modifying the behavior of the DSM becomes a valid option.)

    Regards,
    Terje

  • Hey,

    I get you, but as a company our product (the gateway) will have a limited lifetime in that case, even with more models, given that the nRF52832 flash is rated for 10,000 write cycles.
    If the number of cycles is exceeded and the data gets corrupted, would that lead to a mesh assert at startup? As I said earlier, I set the publication address each time I publish data, so the publication address retrieved from flash is of no use in my application. If it does run into a mesh assert (because of the corrupted data in flash), is there a way of ignoring it and continuing normal operation of the mesh stack?

    Thanks,
    Asiri

  • Hi,

    There is no practical way to avoid asserts like that, given that after an assert all bets are off as to the overall state of the mesh stack.

    Dynamically reconfiguring models (without the need for flash writes) is something that I can report to the Mesh team to look into for further improvements.

    Regards,
    Terje

  • Hey,

    Were you able to come up with a solution for reconfiguring models without the need for flash writes?

    Thanks
    Asiri

  • Hi,

    I have sent a request to the Mesh team, to see if either we can get reconfiguration of models without the need to write to flash, or if they have other solutions to your use case.

    Due to summer vacation season here in Norway, it might take some time to get a response. I am sorry for any inconvenience.

    Regards,
    Terje
