mesh dfu timeout

Hello

I'm exploring the mesh DFU transfer and getting some strange results.

Here is the setup:

1 distributor, 1 target

On the distributor I start the transfer with the mesh models dfd start 0 0 command.

It completes in about 40 minutes, and the image is applied, but I get a timeout message on the distributor side.

However, on the target I see a message from MCUboot that the new image was applied successfully, and it is actually working just fine.

Logs from the distributor and the target are below.

Another question

The progress in the JSON below is "50", which I think is because the function that calculates progress divides by 2:

Function cmd_dfd_receivers_get() in dfd.c has this line:
uint8_t progress = bt_mesh_dfu_cli_progress(&dfd_srv->dfu) / 2;

However, if I use the mesh models dfu cli progress command, it returns:
DFU progress: 100 %
Bug?
Thank you
Andy
--------------------------------
Logs

Distributor

rtt:~$ [00:40:51.061,126] <dbg> bt_mesh_blob_cli: broadcast_complete: continuing
rtt:~$ [00:40:51.061,157] <dbg> bt_mesh_dfu_cli: dfu_complete:
rtt:~$ [00:40:51.061,187] <dbg> bt_mesh_dfd_srv: dfu_ended: reason: 0, phase: 1, apply: 1
rtt:~$ Distribution phase changed to Applying Update
rtt:~$ [00:40:51.061,218] <dbg> bt_mesh_dfu_cli: apply:
rtt:~$ rtt:~$ [00:40:51.061,248] <dbg> bt_mesh_blob_cli: blob_cli_broadcast: 1 targets
rtt:~$ rtt:~$ [00:40:51.271,606] <dbg> bt_mesh_dfu_cli: handle_status: 0 phase: 6, cur state: 4
rtt:~$ [00:40:51.271,636] <dbg> bt_mesh_dfu_cli: handle_status: Target 0x01a2 receiving transfer
rtt:~$ rtt:~$ [00:41:00.282,928] <dbg> bt_mesh_blob_cli: retry_timeout: 5
rtt:~$ rtt:~$ [00:41:09.441,864] <dbg> bt_mesh_blob_cli: retry_timeout: 4
rtt:~$ rtt:~$ [00:41:18.604,400] <dbg> bt_mesh_blob_cli: retry_timeout: 3
rtt:~$ rtt:~$ [00:41:27.766,143] <dbg> bt_mesh_blob_cli: retry_timeout: 2
rtt:~$ rtt:~$ [00:41:36.924,011] <dbg> bt_mesh_blob_cli: retry_timeout: 1
rtt:~$ [00:41:36.924,041] <dbg> bt_mesh_blob_cli: retry_timeout: Transfer timed out.
rtt:~$ [00:41:36.924,041] <dbg> bt_mesh_blob_cli: drop_remaining_targets:
rtt:~$ [00:41:36.924,072] <wrn> bt_mesh_blob_cli: Dropping 0x01a2: 9
rtt:~$ [00:41:36.924,072] <err> bt_mesh_dfu_cli: Target 0x01a2 failed: 3
rtt:~$ [00:41:36.924,102] <dbg> bt_mesh_blob_cli: broadcast_complete: continuing
rtt:~$ [00:41:36.924,102] <dbg> bt_mesh_dfu_cli: dfu_failed: 3
rtt:~$ [00:41:36.924,133] <dbg> bt_mesh_dfd_srv: dfu_ended: reason: 3, phase: 3, apply: 1
rtt:~$ Distribution phase changed to Failed
rtt:~$ rtt:~$ mesh models dfd receivers-get 0 1
{
"target_cnt": 1,
"targets": {
"0": { "blob_addr": 418, "phase": 6, "status": 3, "blob_status": 9, "progress": 50, "img_idx": 0 }
}
}

-----------------------------------------

Target

*** Booting nRF Connect SDK v3.5.99-ncs1-1 ***
Current image version: 2.1.0+8

~$ Firmware upload started

~$ Firmware upload succeeded
Applying the new firmware
*** Booting nRF Connect SDK v3.5.99-ncs1-1 ***
I: Starting bootloader
I: Primary image: magic=unset, swap_type=0x1, copy_done=0x1, image_ok=0x3
I: Secondary image: magic=good, swap_type=0x2, copy_done=0x3, image_ok=0x3
I: Boot source: none
I: Image index: 0, Swap type: test
I: Starting swap using move algorithm.
I: Bootloader chainload address offset: 0x8000
I: Jumping to the first image slot
*** Booting nRF Connect SDK v3.5.99-ncs1-1 ***
Current image version: 2.1.0+9

  • Hi, 

    Are you using the unmodified samples or your own?

    Please check if you call bt_mesh_dfu_srv_applied():

    https://github.com/nrfconnect/sdk-nrf/blob/dece2a7f6c36674e2714b8dad7294c911548bb62/samples/bluetooth/mesh/dfu/common/src/dfu_target.c#L268-L278

    https://github.com/nrfconnect/sdk-nrf/blob/dece2a7f6c36674e2714b8dad7294c911548bb62/samples/bluetooth/mesh/dfu/target/src/main.c#L132

    In the new image, are you changing the node's composition data (like the number of models and elements)? I will check if we can reproduce this. Are you changing any timeouts manually?

    "50" is not a bug. The output in the json is formatted in the spec units where transfer progress is calculated in 2 precent step. in other words, the output in json is not a percentage but the spec-defined format

    -Amanda H.

  • I am using the unmodified samples you provided. Yes, bt_mesh_dfu_srv_applied() is called on the target side.

    No composition changes in the image, just the version number.

    Tried with and without a custom timeout; no difference. The image is transferred, applied, and runs on the target just fine, but on the distributor side there is an error after the "apply" step. See the logs above.

    Also, what is phase 10? I saw it several times, and it's not defined in the headers. An uninitialized variable?

    Another quirk I'd like to resolve is why it fails to proceed with the batch transfer if one of the target nodes is offline.

    Thanks for clarifying the "50" progress thing

  • Tried 8 devices

    On the target side I subscribed
    BT_MESH_MODEL_ID_BLOB_SRV (0x1400) and
    BT_MESH_MODEL_ID_DFU_SRV (0x1402) to the group address 0xC008.
    Then I used the command:
    mesh models dfd start 0 0 0xC008 false 7 20
    and I get the following:
    { "status": 0, "phase": 1, "group": 49160, "app_idx": 0, "ttl": 7, "timeout_base": 20, "xfer_mode": 1, "apply": 0, "slot_idx": 0 }
    mdist:~$ Starting the firmware distribution.
    Slot:
    Size: 403239 bytes
    FWID: 020100000c000000
    Metadata:
    [00:02:10.500,823] <dbg> bt_mesh_dfd_srv: bt_mesh_dfd_srv_start: Distribution Start: slot: 0, appidx: 0, tb: 20, addr: C008, ttl: 7, apply: 0
    Distribution phase changed to Transfer Active
    [00:02:10.500,976] <dbg> bt_mesh_dfu_cli: initiate:
    [00:02:10.501,007] <dbg> bt_mesh_blob_cli: blob_cli_broadcast: 3 targets
    [00:02:10.667,205] <wrn> bt_mesh_transport: Replay: src 0x0199 dst 0x0356 seq 0x000102
    [00:02:10.749,511] <wrn> bt_mesh_transport: Replay: src 0x0196 dst 0x0356 seq 0x000083
    [00:02:10.750,885] <wrn> bt_mesh_transport: Replay: src 0x0199 dst 0x0356 seq 0x000103
    [00:02:10.787,658] <wrn> bt_mesh_transport: Replay: src 0x0197 dst 0x0356 seq 0x000226
    [00:02:10.810,913] <wrn> bt_mesh_transport: Replay: src 0x01a2 dst 0x0356 seq 0x000082
    [00:02:10.855,224] <wrn> bt_mesh_transport: Replay: src 0x01a0 dst 0x0356 seq 0x0002ac
    [00:02:10.862,030] <wrn> bt_mesh_transport: Replay: src 0x0197 dst 0x0356 seq 0x000227
    [00:02:10.890,319] <wrn> bt_mesh_transport: Replay: src 0x01a2 dst 0x0356 seq 0x000083
    [00:02:10.913,360] <wrn> bt_mesh_transport: Replay: src 0x01a0 dst 0x0356 seq 0x0002ad
    [00:02:10.938,659] <wrn> bt_mesh_transport: Replay: src 0x03e3 dst 0x0356 seq 0x000082
    [00:02:10.954,711] <wrn> bt_mesh_transport: Replay: src 0x03e3 dst 0x0356 seq 0x000083
    [00:02:11.038,574] <wrn> bt_mesh_transport: Replay: src 0x00d6 dst 0x0356 seq 0x000082
    [00:02:11.095,733] <wrn> bt_mesh_transport: Replay: src 0x00d6 dst 0x0356 seq 0x000083
    [00:02:11.128,143] <wrn> bt_mesh_transport: Replay: src 0x03e5 dst 0x0356 seq 0x0001a0
    [00:02:11.205,993] <wrn> bt_mesh_transport: Replay: src 0x03e5 dst 0x0356 seq 0x0001a1
    [

    mdist:~$ mesh models dfd receivers-get 0 20
    {
    "target_cnt": 3,
    "targets": {
    "0": { "blob_addr": 406, "phase": 10, "status": 0, "blob_status": 0, "progress": 0, "img_idx": 0 },
    "1": { "blob_addr": 409, "phase": 10, "status": 0, "blob_status": 0, "progress": 0, "img_idx": 0 },
    "2": { "blob_addr": 995, "phase": 10, "status": 0, "blob_status": 0, "progress": 0, "img_idx": 0 }
    }
    }
    The replay warnings continue until eventually I get:
    [00:05:55.908,813] <dbg> bt_mesh_blob_cli: retry_timeout: Transfer timed out.
    [00:05:55.908,813] <dbg> bt_mesh_blob_cli: drop_remaining_targets:
    [00:05:55.908,843] <wrn> bt_mesh_blob_cli: Dropping 0x0196: 9
    [00:05:55.908,843] <err> bt_mesh_dfu_cli: Target 0x0196 failed: 3
    [00:05:55.908,843] <wrn> bt_mesh_blob_cli: Dropping 0x0199: 9
    [00:05:55.908,874] <err> bt_mesh_dfu_cli: Target 0x0199 failed: 3
    [00:05:55.908,874] <wrn> bt_mesh_blob_cli: Dropping 0x03e3: 9
    [00:05:55.908,874] <err> bt_mesh_dfu_cli: Target 0x03e3 failed: 3
    [00:05:55.908,905] <dbg> bt_mesh_blob_cli: broadcast_complete: continuing
    [00:05:55.908,935] <dbg> bt_mesh_dfu_cli: transfer:
    [00:05:55.908,966] <dbg> bt_mesh_dfu_cli: dfu_failed: 3
    [00:05:55.908,966] <dbg> bt_mesh_dfd_srv: dfu_ended: reason: 3, phase: 1, apply: 0
    Distribution phase changed to Failed
    Another quirk: I have 8 devices. I only added 3 addresses to the distributor list, yet the dfd_start() callback is called on ALL 8 devices. Why?
    What else do I need to do to make it work?
    Thanks
  • Hi, 

    Unicast transfer time scales up almost linearly with the number of devices, with a pretty steep slope (because the distributor sends chunks to each individual device one by one in a round-robin manner). Group transfer time also scales up linearly with the number of devices in the group, but with a much shallower slope.

    AndyM said:
    get the  dfd_start() callback called on ALL 8 devices.

    What does this mean? Can you clarify this a bit?

    AndyM said:

    Tried using a group address to transfer; the results are unexpected.

    It took 2 hours and 40 minutes to transfer to 1 device. Without using a group address it takes 40 minutes.

    I wouldn't be surprised. This happens because during a unicast transfer each chunk transfer invokes a unicast SAR message, and in a unicast SAR transfer the next Chunk Transfer message cannot be sent until all segments of the previous one have been acknowledged by the receiver. This causes the slowness. The slow transfer is beneficial in a large network, because you don't want to disrupt network operation while a DFU transfer is going on by pushing too much traffic into it.

    For group SAR, there are no segment acknowledgment messages; the sender of the SAR message simply repeats all segments three times (CONFIG_BT_MESH_SAR_TX_MULTICAST_RETRANS_COUNT=0x02). Once all are sent, the next Chunk Transfer message goes out. This is what makes the group transfer fast.

    Our recommendation is to make the DFU transfer as slow as possible, to reduce disruption of the network for its end users. If this is not a concern, then disregard the recommendation. In older NCS revisions we could not control the maximum allowed chunk size for the BLOB server, but now it can be done via the CONFIG_BT_MESH_RX_BLOB_CHUNK_SIZE option. If you are using an older SDK, then for your experiments I suggest you either reduce BT_MESH_RX_SDU_MAX (which will in turn reduce the maximum size of incoming messages the node supports for all mesh messages) or edit blob_srv.c and set CHUNK_SIZE_MAX to smaller or bigger values (say, 30, 40, or 60) to experiment with the effect of the Chunk Transfer message size on transfer time. We haven't fully characterized this parameter (i.e. the size of the Chunk Transfer message); however, my judgment is that a chunk transfer message 8-12 segments long should be a good balance.
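    For such experiments, the knobs mentioned above can go into the application's prj.conf. A sketch only: the option names are the ones quoted in this thread, their availability depends on your NCS/Zephyr revision, and 64 is just an example value, not a recommendation:

    ```
    # Cap the BLOB chunk size the server accepts (newer SDKs only);
    # smaller chunks mean a slower but less disruptive transfer.
    CONFIG_BT_MESH_RX_BLOB_CHUNK_SIZE=64

    # Extra retransmissions of multicast (group) SAR segments; 0x02 means
    # each segment is sent three times in total.
    CONFIG_BT_MESH_SAR_TX_MULTICAST_RETRANS_COUNT=0x02
    ```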

    AndyM said:
    [00:02:10.667,205] <wrn> bt_mesh_transport: Replay: src 0x0199 dst 0x0356 seq 0x000102

    The replay warnings indicate that the three nodes are somehow sending messages with old sequence numbers. Are you reflashing and re-configuring the "target" devices (using the iOS mobile app's 'provision and reconfigure' feature) between your experiments? If you do that, a target device's sequence number resets, and all communication from it to the distributor will be filtered out by the distributor's replay protection mechanism. To avoid falling into this trap, every time you want to retry the DFU on the target devices, reflash the targets without a flash erase (i.e. "west flash" instead of "west flash --erase"); as a bonus, you won't have to re-provision and reconfigure them from the mobile app.

    -Amanda H.

  • Thank you very much for your insights; that's very helpful.

    I still have 2 unresolved issues:

    1. A transfer to a group produced all those replay messages and eventually failed. Yes, I was doing a full erase and reprovision of the targets between my experiments, but not of the distributor. I will start from scratch, reflash and reprovision all devices, and report back. However, your comments raise a question: what if I need to provision a new device into an existing network? Will a group DFU fail for that device?

    2. Why was the dfu_start() callback called on all 8 devices when only 3 were in the distributor list? I made sure to clear the list and then add 3 devices before starting the DFU.

  • If you get a timeout error, you most likely need to play with the timeout_base value. You can try enabling logs on the target nodes, checking the timestamp between the last received message in the BLOB model and this log message: https://github.com/zephyrproject-rtos/zephyr/blob/23fb3b77a2b2cf53eff1697888f443236069b30c/subsys/bluetooth/mesh/blob_srv.c#L289, and estimating the timeout_base value from that.

    As an explanation: for more than 2 devices it is better to use a group address to increase the probability of delivering chunks to the target nodes. With unicast, the transport layer tries to guarantee delivery of a segmented message to a specific target node; this significantly slows down the DFU transfer when there are more than 2 devices, because the other devices sit and wait until the transport layer confirms delivery to that node. In this case, a short timeout_base value causes timeouts between the distributor and the target nodes.

    Also, because the timeout_base value is used throughout the whole procedure, including the apply phase, it may take the distributor much longer to confirm that all target nodes have applied the firmware. This is because the distributor sleeps for that time before it checks the firmware status on the target nodes again.
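    To put a number on that waiting time: a rough client-side timeout estimate, assuming the formula I read in the MBT model spec and Zephyr's blob_cli.c at the time of writing (10 s * (timeout_base + 2) plus 100 ms per TTL hop; verify against your SDK revision before relying on it):

    ```c
    #include <stdint.h>

    /* Rough BLOB Transfer Client timeout estimate (an assumption, see above):
     * timeout = 10 s * (timeout_base + 2) + 100 ms * ttl */
    uint32_t blob_cli_timeout_ms(uint16_t timeout_base, uint8_t ttl)
    {
        return 10000u * (timeout_base + 2u) + 100u * ttl;
    }
    ```

    With the values from the group-transfer log above (timeout_base 20, TTL 7) this gives 220700 ms, i.e. roughly 3.7 minutes of waiting per retry cycle.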

    AndyM said:
    2. Why was the "dfu_start()" callback called on all 8 devices when only 3 were in the distributor list? I made sure to clear the list, then add 3 devices  before starting dfu. 

    If you are using a group address, all devices subscribed to that address will receive the message. When doing DFU with a group address, subscribe to that group address only the nodes that will participate in the transfer, and unsubscribe them from it after the DFU completes.

    Regarding the replay warnings, try increasing CONFIG_BT_MESH_MSG_CACHE_SIZE to a much bigger value.

    AndyM said:
    1.A transfer to a group failed with  all those replay messages and eventually failed . Yes, I was doing full erase and reprovision of targets between my experiments  but not the distributor   I will start from scratch, reflash and reprovision all devices and report back. However your comments raise a question: What if I need to provision a new device into an existing network? Will a group  DFU fail for that device?

    That won't fail as long as the new device does not use an already-used address.

    If a full erase of the targets is done between the experiments, those targets should be provisioned with new addresses, and then the replay issues won't occur. Note, though, that every new address that communicates with the distributor occupies one entry in its replay protection list.
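    Since every new unicast address that talks to the distributor consumes a replay protection list entry, it can be worth sizing the distributor's RPL (and the message cache suggested above) generously. A prj.conf sketch; the values are illustrative, not recommendations:

    ```
    # Replay protection list capacity: one entry per unicast source address
    # the node hears from, including targets re-provisioned with new addresses.
    CONFIG_BT_MESH_CRPL=128

    # Network message cache, as suggested above for the replay warnings.
    CONFIG_BT_MESH_MSG_CACHE_SIZE=64
    ```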

  • Thank you so much for the insights.

    I have yet to verify the speed improvements with the group address; I will update when I have the data.
