Mesh back and forth seems to break connection

Hi,

We have one customer with two CoAP hosts and a number of CoAP clients in the form of wireless sensors. Each sensor is paired to a single host. The pairing is actually done at the application level, where the sensor discovers the network IP of the host it is pairing with. All the devices share the same PAN ID and network key.

Recently we have seen a scenario where some sensors seemingly stopped communicating with their paired host. Looking at the RSSI graphs, we wondered whether this could be caused by a sensor constantly swinging back and forth between the two hosts (one host acting as a router). We don't have access to the CLI interface of the hosts, as this is a remote site. We do see the sensor RSSI reported back; this is its RSSI with the router/leader it is immediately connected to at the time.

Any ideas?

Cheers,

Kaushalya

Parents
  • Hello,

    Can you please try to capture a sniffer trace using the nRF Sniffer for 802154?

    What do you mean by "swinging back and forth between two hosts"? Do you mean that it (an End Device, I assume) keeps changing between two routers?

    Does it ever re-enter the network, or does it disconnect completely?

    Do the nodes move around (physically)? Or are the nodes more or less stationary?

    Best regards,

    Edvin

  • kaushalyasat said:

    Ok, I got your point in Q1. I was not sure of the order in which the #defines are evaluated.

    I agree that this is confusing, especially if it is defined in several files; it is not easy to know the order in which the files are included. To be sure, I suggest that you print the value of OPENTHREAD_CONFIG_CHILD_SUPERVISION_ENABLE to your log, to confirm that it is in fact set to 1. Since this is resolved at compile time, whatever is written to your log should be valid throughout the application.
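
    For example, something along these lines (just a rough sketch; the exact include needed to make the OpenThread core config visible depends on your build, and LOG_INF assumes Zephyr logging is already registered in that file):

    /* Compile-time sanity check in a file where the macro should be visible: */
    #if OPENTHREAD_CONFIG_CHILD_SUPERVISION_ENABLE != 1
    #warning "Child supervision is NOT enabled in this translation unit"
    #endif

    /* Runtime confirmation in the log: */
    static void log_child_supervision_config(void)
    {
        LOG_INF("OPENTHREAD_CONFIG_CHILD_SUPERVISION_ENABLE = %d",
                OPENTHREAD_CONFIG_CHILD_SUPERVISION_ENABLE);
    }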

    kaushalyasat said:
    My idea is that the FTD will send supervision data every 30 sec and the SED will initiate an MLE attach if it doesn't get this within 60 sec. Am I correct?

    That is correct. Please see:

    https://openthread.io/guides/build/features/child-supervision

    However, setting these alone doesn't ensure that the SED wakes up often enough to receive the supervision messages.

    CONFIG_OPENTHREAD_POLL_PERIOD decides how often the SED will wake up to listen for packets from its parent. I am not sure whether you changed it, but I see that the default value for this is 30 seconds in some of our samples (overlay-mtd.conf).

    You may want to set your OPENTHREAD_CONFIG_CHILD_SUPERVISION_CHECK_TIMEOUT to a little more than 60 seconds, e.g. 70. This is to allow for a second attempt in the case where the initial child supervision message doesn't reach the child; the child then gets a second chance before deciding to leave the network.
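
    If the #define route keeps being ambiguous, these values can also be set at runtime through the OpenThread API. A minimal sketch, assuming you already have the otInstance pointer (both values are in seconds):

    #include <openthread/child_supervision.h>

    /* On the parent (FTD) build: send a supervision message if nothing else
       has been sent to the child within the last 30 seconds. */
    static void set_supervision_interval_ftd(otInstance *instance)
    {
        otChildSupervisionSetInterval(instance, 30);
    }

    /* On the child (SED) build: expect to hear from the parent within
       70 seconds (the parent's interval plus margin for one retry). */
    static void set_supervision_timeout_sed(otInstance *instance)
    {
        otChildSupervisionSetCheckTimeout(instance, 70);
    }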

    kaushalyasat said:

    When I set the child supervision check timeout to 10 in the SED, I see "<wrn> [W] ChildSupervsn-: Supervision timeout. No frame from parent in 10 sec", but I can't see any MLE reattachment.

    At this time the childsupervision interval in FTD is 129.

    Then I set the child supervision interval in the FTD to 5, hoping this would stop the previous message in the SED, but it didn't.

    Even after I set it back to its original value of 129, the SED keeps on sending the same message.

    I am really puzzled what is going on here.

    I believe this is all caused by the poll period being too long. Even if you set the child supervision interval and check timeout to 5 and 10 seconds, the poll period for the child is still probably 30 or 236 seconds unless you specified it to be something else, and with such a long poll period the child will not wake up until that interval has passed.
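
    If you want to rule this out, the poll period can also be set at runtime. A rough sketch, again assuming an initialized otInstance (note that otLinkSetPollPeriod() takes milliseconds):

    #include <openthread/link.h>

    /* On the SED build: poll the parent every 10 s, well inside the
       supervision check timeout, so the child actually wakes up and gets
       a chance to receive the supervision messages before it times out. */
    static void set_sed_poll_period(otInstance *instance)
    {
        otError err = otLinkSetPollPeriod(instance, 10 * 1000); /* milliseconds */
        (void)err; /* a real application would check this */
    }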

    Best regards,

    Edvin

  • Hi Edvin,

    My poll period is 10 sec, as set in overlay_mtd.conf. The child supervision interval in the FTD and the check timeout in the SED were #defined to 30 and 60 respectively, but when I checked them from the CLI they were still at the defaults of 129 and 190. Even when I print these values to the log, my #define values seem to get overridden. So I used the API functions otChildSupervisionSetCheckTimeout() in the SED and otChildSupervisionSetInterval() in the FTD. After this, I can see the values are 30 and 60 as I expected, so I think this part is solved.

    Now when I set the interval to something like 70, I get "ChildSupervsn-: Supervision timeout. No frame from parent in 60 sec" from the SED. When the interval is set to 30, I see "ChildSupervsn-: restart timer" from the SED. So apparently the child supervision mechanism is functioning.

    Also I enabled CONFIG_OPENTHREAD_LOG_LEVEL_INFO=y in prj.conf of the FTD and I can see "ChildSupervsn-: sec sinse ast sup [number]" regularly from FTD. What is this message?

    Now the only thing is that I still can't see the full effect of child supervision, as I can't manually drop a SED from the child table. I can hide the SED in a Faraday pouch until the FTD drops it from its child table, but when I take the SED out, it connects back even without the child supervision feature, after the backoff periods. Also, when I increase the supervision interval on the FTD, the SED reports supervision timeouts, but I can't see any console messages indicating an MLE reconnect. Can this be seen in a Wireshark session? What should I look for?

    Thanks,

    Kay

  • kaushalyasat said:
    Also I enabled CONFIG_OPENTHREAD_LOG_LEVEL_INFO=y in prj.conf of the FTD and I can see "ChildSupervsn-: sec sinse ast sup [number]" regularly from FTD. What is this message?

    I would assume that this is the time since the last supervision message. Does that seem to be correct, according to your supervision interval?

    kaushalyasat said:
    Now the only thing is that I still can't see the full effect of child supervision, as I can't manually drop a SED from the child table. I can hide the SED in a Faraday pouch until the FTD drops it from its child table,

    Or you can power off or halt one of the devices, either the parent or the child, if that is easier to do in practice.

    kaushalyasat said:
    Now the only thing is that I still can't see the full effect of child supervision

    I am not sure I understand you correctly here. Do you expect the child to be completely removed from the network? Because I don't think that is the case. I believe child supervision is intended to keep track of which devices are active in your network: if a child drops out due to a supervision timeout, it no longer makes sense to send messages to that device. But you wouldn't typically want a device that runs out of battery to actually drop out of the network, so that you would need to re-commission it when you change the batteries. And in the case of a power outage, you don't want to have to set up the entire network from scratch when the power comes back on.

    Best regards,

    Edvin

  • Hi Edvin,

    Now it seems to be working correctly, based on the console messages and their timing.

    On SED console

    [00:00:36.580,535] <inf> [N] ChildSupervsn-: restart timer
    [00:01:06.582,611] <inf> [N] ChildSupervsn-: restart timer
    [00:01:36.582,122] <inf> [N] ChildSupervsn-: restart timer

    On FTD console

    [01:11:59.735,687] <inf> [I] ChildSupervsn-: sec sinse ast sup 28
    [01:12:00.731,903] <inf> [I] ChildSupervsn-: sec sinse ast sup 29
    [01:12:01.736,145] <inf> [I] ChildSupervsn-: Sending supervision message to child 0xcc01
    [01:12:02.740,173] <inf> [I] ChildSupervsn-: sec sinse ast sup 1
    [01:12:03.738,311] <inf> [I] ChildSupervsn-: sec sinse ast sup 2

    It seems the FTD counts the supervision interval every 1 sec, and every 30 sec it sends a supervision packet to the child? (I have set the interval to 30 sec in the FTD and the timeout to 60 in the SED.) I think child supervision is functioning in this instance.

    "Now the only thing is I still cant see the full effect of child supervision" What I meant was I cant see the child supervision bringing back a SED from this 'fallen off' state. For me to verify the effect of child supervision, I have to see a child which has dropped off the child table, rejoining the network. 

    What I have seen when this drop-off issue comes up in the field is that the child has been dropped from the leader's child table. I don't think the child stopped sending poll packets for longer than 240 sec, and there were no other routers for the child to jump to either. So I have the feeling this may be some Thread-related issue.

    Cheers,

    Kaushalya

  • Even without child supervision, the SEDs reattach to the network and start sending data to the paired FTD. For example, if I take a SED out of reach and back, I can see the disconnection and reconnection. So in this rare problem it doesn't seem that the SED actually disconnects from the network: it is dropped on the FTD's side but seemingly still connected from the SED's side.

Reply Children
  • Hello,

    Child supervision isn't used to determine whether a child is allowed to re-join the network or not. It is just a way of keeping track of the current status of the child, whether it is still alive or not. If the child supervision times out, then the child is considered disconnected from the network (but still allowed to rejoin). 

    I think we are a bit off track. At least I am. We are trying to figure out why some devices are suddenly not reachable from the network, right? Or, you said that some devices that are initially routers suddenly become leaders? This would suggest that they have determined that they can't reach the rest of the network and have formed a new network with themselves as the leader.

    It is a common mechanism in OpenThread to partition into two separate networks, each maintaining operational status as best it can. These partitions will have the same name and the same PAN ID, and if they are brought back within reach of each other, they will merge into a single partition again. The nodes typically don't move around, but they may lose contact with each other if a particular node (a router) that is the only device within range of both partitions goes offline or runs out of battery.

    Since you mentioned that the device is suddenly a leader, it suggests that something like this has happened. Then we were trying to investigate whether something in particular happened right before these nodes fall out, right? Were you able to catch some logs during these events from the devices that fall out?

    kaushalyasat said:

    [00:00:36.580,535] <inf> [N] ChildSupervsn-: restart timer
    [00:01:06.582,611] <inf> [N] ChildSupervsn-: restart timer
    [00:01:36.582,122] <inf> [N] ChildSupervsn-: restart timer

    On FTD console

    [01:11:59.735,687] <inf> [I] ChildSupervsn-: sec sinse ast sup 28
    [01:12:00.731,903] <inf> [I] ChildSupervsn-: sec sinse ast sup 29
    [01:12:01.736,145] <inf> [I] ChildSupervsn-: Sending supervision message to child 0xcc01
    [01:12:02.740,173] <inf> [I] ChildSupervsn-: sec sinse ast sup 1
    [01:12:03.738,311] <inf> [I] ChildSupervsn-: sec sinse ast sup 2

    This is during normal operation, right? It is not from an instance when the child falls out of the child table?

    Best regards,

    Edvin

  • Hi Edvin,

    Child supervision isn't used to determine whether a child is allowed to re-join the network or not. It is just a way of keeping track of the current status of the child, whether it is still alive or not. If the child supervision times out, then the child is considered disconnected from the network (but still allowed to rejoin). 

    I don't intend to use child supervision as a mechanism to determine whether a child node can rejoin or not. I intend it to make the child rejoin if, for some reason, it falls off from the leader.

    We are trying to figure out why some devices are suddenly not reachable from the network, right?

    Yes, more precisely, why all of a sudden some SEDs fall off from the child table of the leader.

    you said that some devices that are initially routers suddenly become leaders? This would suggest that they have determined that they can't reach the rest of the network and have formed a new network with themselves as the leader.

    Not really. Normally we have only one leader and many child devices. When I first started this thread, we saw this happening at a customer site with two FTDs. Since we hadn't seen this behavior until then, we thought it could be SEDs connecting back and forth between the two FTDs. But since then we have seen this drop-off happening in places with only one FTD.

    It is a common mechanism in OpenThread to partition into two separate networks, each maintaining operational status as best it can.

    For this to happen, I guess there should be more than one FTD? So this can't explain the issue happening in systems with one leader.

    When we look at the RSSI data, we can't see any RSSI drops on the SEDs before they disconnect either.

    Since you mentioned that the device is suddenly a leader, it suggests that something like this has happened.

    Now I don't think this is the case anymore. We have seen this in multiple cases where there is only one leader.

    Were you able to catch some logs during these events from the devices that fall out?

    Unfortunately no, since this is a very rare occurrence. The only captures we have are from after it has happened. That's how we know that the leader has dropped the child from the child table while the child is seemingly still in the network. The console is disabled in the child, so we cannot query or see anything. Unfortunately we forgot about RTT; we could at least have used the RTT viewer to see some logs from a child in this state.

    Assuming that the child actually stopped sending polls for 240 sec, what could trigger such behavior? The SED's UI was functioning. The only thing I can think of is that some of the OpenThread threads stopped running.

    Now I have added the thread analyzer to the SEDs to detect whether all the OpenThread threads are still active, and we have deployed about 20 SEDs with child supervision to see if we hit this again. I couldn't find a way to forcefully drop a child off the child table to verify the reattach mechanism in action.
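
    (For reference, the thread analyzer is enabled through Kconfig in Zephyr; what I added to the SED's prj.conf is roughly the following, assuming the standard thread analyzer options:)

    CONFIG_THREAD_ANALYZER=y
    CONFIG_THREAD_ANALYZER_AUTO=y
    CONFIG_THREAD_ANALYZER_AUTO_INTERVAL=60
    CONFIG_THREAD_ANALYZER_USE_LOG=y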

    Cheers,

    Kaushalya 

  • Hi Edvin,

    Unfortunately I have seen some sensors falling off over the weekend with child supervision enabled. 

    This is a capture at the app level.

    I will gather more details on them this evening.

    The first thing is that the fall-off on the 23rd around 2.15 pm is actually us losing the Wi-Fi connection, as we get these graphs over our dashboard, which requires the system to connect to Wi-Fi to send the data to the cloud. Wi-Fi came back around 8.30 am on the 25th. But the thing is that S2, S5 and S7 seem to be not connected.

    The S7 sensor has a low battery, around 2.4 V. We have seen sensors not working at this level, so S7 falling off may have been a power issue.

    Cheers,

    Kaushalya

  • Hi, we continue to see these SEDs failing over time. I now have 4 SEDs in the lab that seemingly 'fell off' the network. I don't have any Wireshark captures from before this happened, but I do have captures from after.

    1. To filter the Wireshark captures, I want to filter based on the extended MAC of the SEDs, but I can't seem to find any field in the captured packets that contains the extended MAC. Is there a way to target the extended MAC of the SEDs from Wireshark?

    2. On one of these fallen-off SEDs I can view the RTT logs (the console is disabled in the SEDs). From those I can see that the SED apparently sends data out, but I can't see these packets in Wireshark. I have filtered based on the RLOC16 of the destination server, which should be receiving these packets. As I don't know a way to filter based on the extended MAC of the SED, I can't target the SED directly, and I don't know the RLOC16 of the SED. At the moment we have many CoAP hosts in the lab, so I don't know which path this SED has taken either. Can you give me a way to find the packets sent by this SED in Wireshark? I only know the extended MAC of the SED.

    Any help is much appreciated.

    Cheers,

    Kaushalya  

  • You can apply any field that you see in a packet as a filter. Just right-click it and select "Apply as Filter" -> "Selected".

    This will paste it as a filter at the top of Wireshark. You can also combine filters with logical expressions like || and &&.

    However, if you can't find that field in the packet, it is not part of the packet itself, and in that case, the filter will not be able to pick it up (it will be filtered out). 
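
    If the IEEE 802.15.4 layer is decoded, the 64-bit (extended) source and destination addresses should be exposed as filterable fields in the frames where they appear. A display filter along these lines may work (field names assumed from Wireshark's 802.15.4 dissector; the address is a placeholder for your SED's extended MAC):

    wpan.src64 == f4:ce:36:00:00:00:00:01 || wpan.dst64 == f4:ce:36:00:00:00:00:01

    Keep in mind that once the SED is attached, most of its frames use the short (RLOC16) address at the MAC layer, so the extended address may only show up in a handful of frames, typically during the attach/MLE exchanges.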

    I would assume the information is present in the trace. Can you upload it? Does it contain the data all the way from the start? There must be some packets where the RLOC address is assigned to the node.
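
    If I remember correctly, the RLOC16 is assigned in the MLE Child ID Response during attach (assuming the trace can be decrypted), so filtering on the child's extended address as above should reveal that exchange. Once you know the assigned short address, you can switch to filtering on it instead (using the 0xcc01 child from your earlier log purely as an example value):

    wpan.src16 == 0xcc01 || wpan.dst16 == 0xcc01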

    BR,
    Edvin
