
Mesh Models in a gateway

Hey,

I'm developing a gateway using an nRF52832 and an ESP12 (UART communication). Right now, what I do is have one model (in the client example) for each type of device in the mesh network (1 RGB light + 1 switch + 1 roller door + etc.). The ESP sends the element address of the RGB light whose state I want to switch, the received address is set as the publication address for the model handle of that device type (in this case, the RGB model handle), and the data is published to the mesh. So for each message received from the cloud, the specified address is set on the corresponding model and the data is then published to the desired node.
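
Roughly, what happens for every cloud message is along these lines (a simplified sketch in the style of the nRF5 SDK for Mesh access/DSM APIs; the handle name and the opcode are placeholders rather than my actual code):

    /* Simplified sketch of the per-message flow in the gateway firmware.
     * m_rgb_client_handle stands in for the model handle allocated at model
     * init; the opcode is only an example. */
    #include "nrf_error.h"
    #include "device_state_manager.h"
    #include "access.h"
    #include "access_config.h"

    static access_model_handle_t m_rgb_client_handle;

    static uint32_t gateway_publish_to(uint16_t unicast_addr,
                                       const uint8_t * p_data, uint16_t length)
    {
        /* Resolve (or add) the target address in the DSM... */
        dsm_handle_t addr_handle;
        uint32_t status = dsm_address_publish_add(unicast_addr, &addr_handle);
        if (status != NRF_SUCCESS)
        {
            return status;
        }

        /* ...point the single RGB client model at that element... */
        status = access_model_publish_address_set(m_rgb_client_handle, addr_handle);
        if (status != NRF_SUCCESS)
        {
            return status;
        }

        /* ...and publish the payload received from the ESP12 over UART.
         * Remaining access_message_tx_t fields are left at their defaults. */
        access_message_tx_t msg =
        {
            .opcode   = ACCESS_OPCODE_SIG(0x8202), /* e.g. Generic OnOff Set */
            .p_buffer = p_data,
            .length   = length,
        };
        return access_model_publish(m_rgb_client_handle, &msg);
    }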

This works with no issues right now, but is it technically the most efficient way of publishing data? Should I have one model handle per unicast address, one handle per node, or is the way I have done it alright?

  • Hi,

    The suggestion to have a pool of several models is intended as a caching mechanism: if the same few addresses are used often, the models already configured for them do not need to be reconfigured all the time. Preferably, you use a heuristic that minimizes the need for publish address reconfigurations.
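
    As a rough illustration (the pool size and the reconfigure helper below are made up for the example, not taken from the SDK), the heuristic could be a simple least-recently-used cache:

        /* Illustration: a small pool of client model handles used as a
         * publish-address cache with a least-recently-used (LRU) policy. */
        #include <stdint.h>
        #include "access.h" /* access_model_handle_t */

        #define POOL_SIZE 8

        /* Made-up helper: whatever the application does to point a model at a
         * new publish address (DSM add + access_model_publish_address_set). */
        extern void reconfigure_publish_address(access_model_handle_t model, uint16_t address);

        typedef struct
        {
            access_model_handle_t model;     /* allocated once at init */
            uint16_t              address;   /* currently configured publish address */
            uint32_t              last_used; /* for the LRU heuristic */
        } model_slot_t;

        static model_slot_t m_pool[POOL_SIZE];
        static uint32_t     m_tick;

        static model_slot_t * slot_for_address(uint16_t address)
        {
            model_slot_t * p_lru = &m_pool[0];
            for (int i = 0; i < POOL_SIZE; i++)
            {
                if (m_pool[i].address == address) /* cache hit: no reconfiguration */
                {
                    m_pool[i].last_used = ++m_tick;
                    return &m_pool[i];
                }
                if (m_pool[i].last_used < p_lru->last_used)
                {
                    p_lru = &m_pool[i];
                }
            }
            /* Cache miss: reconfigure the least recently used slot.
             * These reconfigurations are what end up in flash. */
            reconfigure_publish_address(p_lru->model, address);
            p_lru->address   = address;
            p_lru->last_used = ++m_tick;
            return p_lru;
        }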

    While you do have control over flash storage done by the access module, you do not have control over flash storage done by the device state manager (DSM) module. If you do not update the data stored for the access module (but DSM is updated behind the scenes), you are likely to end up in a bad state after a reset. (Akin to a dangling pointer in C.)

    You may recover from DSM/access inconsistencies by removing the gateway node from the network and reprovisioning it on every reset, or at least whenever you hit the mesh assert caused by DSM and access being out of sync. However, that again puts more strain on the mesh network, involves the provisioner, and leads to more flash erase cycles. This is the reason it is generally better to have more models: it avoids the reconfigurations that lead to more flash usage.

    Regards,
    Terje

  • Hey,

    At runtime, only the volatile memory is used, right? (Say, setting the publication address of an element and then publishing.) Does this depend only on volatile memory, or does it depend on the flash memory as well? Isn't the flash memory only used to restore data to volatile memory after a reset, or does it have other purposes throughout the code?

    You see, for my application I don't necessarily need to save the publication address of the model to flash, because I could set the publication address of the element each time a message gets published. Given that, is there a workaround other than having more models? (Even with more models, the gateway's flash write cycles would still be limited; this is what I'm trying to clarify from my end.)

    Thanks,
    Asiri

  • Hi,

    Any configuration of the models, including publish addresses and subscription addresses, is supposed to survive a reset. For that reason, all such configuration values are stored in non-volatile memory.

    In our Bluetooth mesh stack, the responsibility for storing configuration data to non-volatile memory is partly covered by the stack and partly by the model implementation: the DSM part is done in the stack, while the access layer part is done in the model implementation. As such, you only control the access part of it.

    Since the stack is provided as source code, you can technically disable flash writes in DSM, but note that since this constitutes a change to the stack, you can no longer use our qualification ID for the stack if you do so.
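
    For reference only, and please verify against the release you are on since these options move around between SDK versions: as far as I recall, this boils down to overriding a single define that gates the flash_manager based storage.

        /* Illustration only, not a supported configuration: overriding
         * PERSISTENT_STORAGE (declared in nrf_mesh_config_core.h), e.g. in
         * app_config.h or on the compiler command line, disables flash storage
         * for both DSM and access. As noted above, you then lose the
         * qualification ID for the stack. */
        #define PERSISTENT_STORAGE 0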

    You are not the first customer to ask about this (or a similar) scenario, and we are aware of the use case. Hopefully we can do something to better support this use case in the future, but for the time being I am afraid using several models is the way to go. (Unless you want to qualify the stack yourself, at which point modifying the behavior of DSM becomes a valid option.)

    Regards,
    Terje

  • Hey,

    I get you, but in that case our product (the gateway) will have a limited lifetime even if we have more models, given that the nRF52832 flash is rated for 10,000 write/erase cycles.
    If that number of cycles is exceeded and the flash gets corrupted, would the gateway run into a mesh assert at startup? As I said earlier, I could set the publication address each time I publish data, so the publication address retrieved from flash memory is of no use in my application. If it does run into a mesh assert (because of corrupted data in flash), is there a way to ignore it and continue the normal operation of the mesh stack?
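
    Just to show where my worry comes from, here is a back-of-the-envelope estimate (every number is an example I picked myself; the entries-per-page figure in particular is a guess):

        /* Back-of-the-envelope lifetime estimate; all inputs are example
         * numbers, not measurements. */
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            const uint32_t erase_cycles      = 10000; /* nRF52832 page endurance */
            const uint32_t entries_per_page  = 100;   /* reconfigurations stored before a page erase (guess) */
            const uint32_t reconfigs_per_day = 1000;  /* cloud messages needing a new publish address (guess) */

            uint32_t erases_per_day = reconfigs_per_day / entries_per_page; /* = 10 */
            uint32_t lifetime_days  = erase_cycles / erases_per_day;        /* = 1000 days, roughly 2.7 years */

            printf("Estimated lifetime: %u days\n", (unsigned) lifetime_days);
            return 0;
        }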

    Thanks,
    Asiri

  • Hi,

    There is no practical way to avoid asserts like that from happening, given that after an assert all bets are off as to the overall working of the Mesh stack.

    Dynamically reconfiguring models (without the need for flash writes) is something that I can report to the Mesh team so that they can look into it for future improvements.

    Regards,
    Terje
