nRF Connect SDK 3.2.4 DFU distributor Q&A

Hi:

What is the maximum number of targets that a DFU distributor can support for simultaneous upgrade, and what determines this upper limit? I have not found a detailed explanation of it in the DFU interface documentation I reviewed:

https://docs.nordicsemi.com/bundle/zephyr-apis-latest/page/group_bt_mesh_dfd_srv.html#ga126e2d4df8f58b7de36da46c14b2c804

#define CONFIG_BT_MESH_DFD_SRV_TARGETS_MAX  

In particular, how many can it support at most while ensuring stability in practical applications? This is very important for evaluating my project.

Thank you.

  • Hi,

    For practical applications, the number of concurrent updates depends on the system capacity of the distributor node: most importantly the size of the replay protection list (RPL, CONFIG_BT_MESH_CRPL), the mechanism used for RPL storage, and the extra wear-leveling tolerance reserved in the settings partition.

    If the default settings are used, the RPL is stored with continuous writes, as governed by CONFIG_BT_MESH_RPL_STORE_TIMEOUT. It is recommended to reserve 2.5-3x the RPL footprint for the settings partition to allow for wear leveling (more space is needed if the distributor receives heavy incoming traffic). Each RPL entry has a 64-byte footprint.
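
    As a rough illustration, the partition reserve described above can be computed with a back-of-envelope sketch. The CRPL value of 255 below is an assumed example, not a recommendation:

    ```c
    #include <stdio.h>

    /* Back-of-envelope sizing sketch for the RPL portion of the settings
     * partition, assuming the 64-byte-per-entry footprint and the 2.5-3x
     * wear-leveling reserve described above. CRPL=255 is an example value. */
    #define RPL_ENTRY_FOOTPRINT 64u

    int main(void)
    {
        unsigned crpl = 255;                        /* CONFIG_BT_MESH_CRPL */
        unsigned rpl_bytes = crpl * RPL_ENTRY_FOOTPRINT;

        printf("RPL footprint:   %u bytes\n", rpl_bytes);
        printf("Reserve at 2.5x: %u bytes\n", rpl_bytes * 5u / 2u);
        printf("Reserve at 3x:   %u bytes\n", rpl_bytes * 3u);
        return 0;
    }
    ```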

    If you use emergency data storage (EMDS), the partition should be sized as described in the EMDS documentation. However, be aware that the RRAM write current and write speed figures in the datasheet are both typical values (not maximums), so when using EMDS you must size the backup capacitor with enough tolerance to allow the entire RPL to be written safely upon power failure.
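
    To make the capacitor-tolerance point concrete, a hypothetical worst-case write-time estimate might look like this. The throughput figure below is a placeholder, not a datasheet value; substitute your device's worst-case number with margin:

    ```c
    #include <stdio.h>

    /* Hypothetical EMDS flush-time estimate. ASSUMED_WRITE_BPS is a
     * placeholder throughput, NOT a datasheet figure; replace it with a
     * worst-case value for your device, with margin, since datasheet
     * numbers are typical, not max. */
    #define RPL_ENTRY_FOOTPRINT 64u
    #define ASSUMED_WRITE_BPS   100000u  /* placeholder: 100 kB/s */

    int main(void)
    {
        unsigned crpl = 255;                        /* CONFIG_BT_MESH_CRPL */
        unsigned rpl_bytes = crpl * RPL_ENTRY_FOOTPRINT;
        /* Time (in microseconds) to flush the whole RPL on power failure. */
        unsigned long long us = (unsigned long long)rpl_bytes * 1000000ull
                                / ASSUMED_WRITE_BPS;

        printf("RPL to flush: %u bytes -> ~%llu us at assumed speed\n",
               rpl_bytes, us);
        return 0;
    }
    ```

    The backup capacitor must sustain the device for at least this long (plus margin) at the RRAM write current.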

    In summary, the number of nodes that can be upgraded simultaneously depends on the RPL size (64 bytes per node) multiplied by the wear-leveling reserve, as well as on the capacity to write out the RPL (when using EMDS).
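
    Putting the pieces together, a rough ceiling on the number of targets for a given settings-partition budget might be sketched as follows. The 64 KiB partition size and the 3x reserve factor are assumed example figures:

    ```c
    #include <stdio.h>

    /* Rough ceiling on simultaneous upgrade targets for a given settings
     * partition budget, assuming each target needs one 64-byte RPL entry
     * plus a 3x wear-leveling reserve. Both figures below are example
     * assumptions, not recommendations. */
    #define RPL_ENTRY_FOOTPRINT 64u
    #define WEAR_LEVEL_FACTOR   3u

    int main(void)
    {
        unsigned partition_bytes = 64u * 1024u;   /* assumed 64 KiB budget */
        unsigned per_target = RPL_ENTRY_FOOTPRINT * WEAR_LEVEL_FACTOR;
        unsigned max_targets = partition_bytes / per_target;

        printf("Approx. max targets: %u\n", max_targets);
        return 0;
    }
    ```

    Whatever this estimate yields, the distributor is still capped by CONFIG_BT_MESH_DFD_SRV_TARGETS_MAX mentioned in the question, so that option must be set at least as high.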

    Regards,
    Terje
