Mesh back and forth seems to break connection

Hi,

We have one customer with two CoAP hosts and a number of CoAP clients in the form of wireless sensors. Each sensor is paired to a single host. The pairing is done at the application level, where the sensor discovers the network IP of its paired host during pairing. All the devices have the same PAN ID and network key.

Recently we have seen a scenario where some sensors seemingly stopped communicating with their paired host. Looking at the RSSI graphs, we wondered whether this is caused by a sensor constantly swinging back and forth between the two hosts (one host acting as a router). We don't have access to the CLI interface of the hosts, as this is a remote site. We do see the RSSI reported back by each sensor, which is its RSSI towards the router/leader it is attached to at the time.

Any ideas?

Cheers,

Kaushalya

  • Hello,

    Can you please try to capture a sniffer trace using the nRF Sniffer for 802.15.4?

    What do you mean by "swinging back and forth between two hosts"? Do you mean that it (an End Device, I assume) keeps changing between two routers?

    Does it ever re-enter the network, or does it disconnect completely?

    Do the nodes move around (physically)? Or are the nodes more or less stationary?

    Best regards,

    Edvin

  • Even without child supervision, the SEDs reattach to the network and start sending data to the paired FTD. For example, if I take an SED out of reach and bring it back, I can see the disconnection and reconnection. So in this rare problem, it seems the SED is actually dropped from the network on the FTD's side while seemingly still connected on the SED's side.

  • Hello,

    Child supervision isn't used to determine whether a child is allowed to re-join the network or not. It is just a way of keeping track of the current status of the child, whether it is still alive or not. If the child supervision times out, then the child is considered disconnected from the network (but still allowed to rejoin). 
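
    For reference, a sketch of how the relevant settings can be inspected on a device where the standard OpenThread CLI is enabled (the values shown are the OpenThread defaults):

    > childsupervision interval
    129
    > childsupervision checktimeout
    190

    The interval is how often the parent is expected to send a supervision message to a sleepy child; the check timeout is how long the child waits without hearing from its parent before it considers the link lost and tries to re-attach.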

    I think we are a bit off track. At least I am. We are trying to figure out why some devices are suddenly not reachable from the network, right? Or you said that some devices that are initially routers suddenly become leaders? This would suggest that they have determined that they can't reach the rest of the network, forming a new network with themselves as the leader.

    It is a common mechanism in OpenThread to partition into two separate networks, each maintaining its operational status as best it can. These partitions will have the same name and the same PAN ID, and if they are brought back together, they will merge into a single partition. The nodes typically don't move around, but the two partitions may lose contact with each other if a particular router, which happens to be the only device within range of both partitions, goes offline or runs out of battery.

    Since you mentioned that the device is suddenly a leader, it suggests that something like this has happened. Then we were trying to investigate whether something in particular happened right before these nodes fell out, right? Were you able to catch any logs during these events from the devices that fall out?

    kaushalyasat said:

    [00:00:36.580,535] <inf> [N] ChildSupervsn-: restart timer
    [00:01:06.582,611] <inf> [N] ChildSupervsn-: restart timer
    [00:01:36.582,122] <inf> [N] ChildSupervsn-: restart timer

    On the FTD console:

    [01:11:59.735,687] <inf> [I] ChildSupervsn-: secs since last sup 28
    [01:12:00.731,903] <inf> [I] ChildSupervsn-: secs since last sup 29
    [01:12:01.736,145] <inf> [I] ChildSupervsn-: Sending supervision message to child 0xcc01
    [01:12:02.740,173] <inf> [I] ChildSupervsn-: secs since last sup 1
    [01:12:03.738,311] <inf> [I] ChildSupervsn-: secs since last sup 2

    This is during normal operation, right? It is not from an instance when the child falls out of the child table?

    Best regards,

    Edvin

  • Hi Edvin,

    Edvin said:
    Child supervision isn't used to determine whether a child is allowed to re-join the network or not. It is just a way of keeping track of the current status of the child, whether it is still alive or not. If the child supervision times out, then the child is considered disconnected from the network (but still allowed to rejoin).

    I don't intend to use child supervision as a mechanism to determine whether a child node can rejoin or not. I intend for it to rejoin if, for some reason, it falls off from the leader.

    Edvin said:
    We are trying to figure out why some devices are suddenly not reachable from the network, right?

    Yes. More precisely, why some SEDs suddenly fall off the leader's child table.

    Edvin said:
    You said that some devices that are initially routers suddenly become leaders? This would suggest that they have determined that they can't reach the rest of the network, forming a new network with themselves as the leader.

    Not really. Normally we have only one leader and many child devices. When I first started this thread, we saw this happening at a customer site with two FTDs. Since we hadn't seen this behavior until then, we thought it could be the SEDs connecting back and forth between the two FTDs. But since then we have seen this drop-off happening in places with only one FTD.

    Edvin said:
    It is a common mechanism in OpenThread to partition into two separate networks, each maintaining its operational status as best it can.

    For this to happen, I guess there would need to be more than one FTD? So this can't explain the issue happening in systems with one leader.

    When we look at the RSSI data, we can't see any RSSI drops on the SEDs before they disconnect either.

    Edvin said:
    Since you mentioned that the device is suddenly a leader, it suggests that something like this has happened.

    I don't think this is the case anymore. We have seen this in multiple cases where there is only one leader.

    Edvin said:
    Were you able to catch any logs during these events from the devices that fall out?

    Unfortunately no, since this is a very rare occurrence. The only logs we have captured are from after it has happened. That's how we know that the leader has dropped the child from the child table while the child is seemingly still in the network. The console is disabled on the child, so we cannot query or see anything. Unfortunately we forgot about RTT; we could at least have used the RTT Viewer to see some logs from a child in this state.

    Assuming that the child actually stopped sending data polls for 240 s, what could trigger such behavior? The SED's UI was still functioning. The only thing I can think of is that some of the OpenThread threads stopped running.

    I have now added the thread analyzer to the SEDs to detect whether all the OpenThread threads are active (see the configuration sketch below). We have deployed about 20 SEDs with child supervision to see if we get this again. I couldn't find a way to forcefully drop a child off the child table to verify the reattach mechanism in action.
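
    A sketch of the kind of prj.conf options this involves, assuming the standard Zephyr thread analyzer Kconfig names (the exact set may vary with SDK version):

    # Periodically log the state and stack usage of all threads
    CONFIG_THREAD_ANALYZER=y
    CONFIG_THREAD_ANALYZER_AUTO=y
    CONFIG_THREAD_ANALYZER_AUTO_INTERVAL=60
    CONFIG_THREAD_ANALYZER_USE_LOG=y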

    Cheers,

    Kaushalya 

  • Hi Edvin,

    Unfortunately I have seen some sensors falling off over the weekend with child supervision enabled. 

    This is a capture at the application level.

    I will gather more details on them this evening.

    The first thing to note is that the fall-off on the 23rd around 2:15 pm is actually us losing the Wi-Fi connection: we get these graphs through our dashboard, which needs the system to connect to Wi-Fi to send the data to the cloud. Wi-Fi came back around 8:30 am on the 25th. But S2, S5 and S7 still seem disconnected.

    The S7 sensor has a low battery, around 2.4 V. We have seen sensors stop working at this level, so S7 falling off may have been a power issue.

    Cheers,

    Kaushalya

  • Hi, We continue to see these SEDs failing over time. I now have 4 SEDs in the lab which seemingly 'fell off' the network. I don't have any Wireshark captures from before this happened, but I have captures from after.

    1. To filter the Wireshark captures, I want to filter based on the extended MAC of the SEDs, but I can't seem to find any field in the captured packets which contains the extended MAC. Is there a way to target the extended MAC of an SED in Wireshark?

    2. From one of these fallen-off SEDs, I can see the RTT Viewer output (the console is disabled in the SEDs). From that I can see the SED apparently sends data out, but I can't see these packets in Wireshark. I have filtered based on the RLOC16 of the destination server, which should be receiving these packets. As I don't know a way to filter based on the extended MAC of the SED, I can't target the SED directly, and I don't know the RLOC16 of the SED. At the moment we have many CoAP hosts in the lab, so I don't know the path this SED has taken either. Can you give me a way to find the packets sent by this SED in Wireshark? I only know the extended MAC of the SED.

    Any help is much appreciated.

    Cheers,

    Kaushalya  

  • You can apply any field that you see in a packet as a filter. Just right-click it and select "Apply as Filter" -> "Selected".

    This will paste the field as a filter at the top of Wireshark. You can also combine filters with logical expressions such as || and &&.
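
    For example, assuming the standard IEEE 802.15.4 dissector field names (the addresses below are placeholders), a filter on the extended (64-bit) address could look like:

    wpan.src64 == 00:11:22:33:44:55:66:77 || wpan.dst64 == 00:11:22:33:44:55:66:77

    and a filter on the short (RLOC16) address like:

    wpan.src16 == 0xcc01 || wpan.dst16 == 0xcc01

    Note that the 64-bit address fields are only present in the relatively few frames that actually carry the extended address over the air, such as the initial attach frames.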

    However, if you can't find that field in the packet, it is not part of the packet itself, and in that case, the filter will not be able to pick it up (it will be filtered out). 

    I would assume the information is present in the trace. Can you upload it? Does it contain the data all the way from the start? There must be some packets where the RLOC address is assigned to the node.

    BR,
    Edvin

  • Hi Edvin,

    Apr-16-1.pcapng

    This is one of the logs. 

    1. How can I filter RLOC address assignment packets? 

    2. In SED-to-FTD transmissions I can't seem to decrypt the 802.15.4 packets, but FTD-to-FTD packets are fully decryptable.

    How can I fix this? I have only one network key, which is 0x00112233445566778899aabbccdd0001.

    Cheers,

    Kaushalya

  • Also, I came across this thread from an old DevZone ticket:

     nRF Sniffer integration for 802.15.4 in a python scipt (Pcap file problems) 

    Here a Nordic engineer mentions that the extended address is required in a packet for decryption, and that this can be 'fixed' by moving packets containing the extended address to the top of the capture. I tried doing this with the attached log, but I am not 100% sure how to do it.

    Can you shed some light?

    Thanks,

    Kaushalya

  • Also, from the log I think I can see that the sensors (SEDs) which have disappeared from the network are actually still connected to the network from the sensor's point of view. I can see my log message just before the call to coap_send_request(). The following is the code section.

    ...
    LOG_INF ("ZS %d, RSSI %d, LQI %d, LQO %d, FW %04x", the_sensor_device->zoneState, RSSI, linkQalIn, linkQualOut, FWRevNum);
    ...
    coap_send_request(COAP_METHOD_PUT, (const struct sockaddr *)&unique_local_addr, sensor_option, payload, sizeof(payload), NULL);
    ...
    

    Here I have not handled the return value from coap_send_request(), which is my bad, but I don't get any error logs from this either. However, I can't see the packet being transmitted in my Wireshark logs, so I have the feeling that this is related to either the CoAP stack or the OpenThread stack.

    There is also the possibility that I don't see the actual packet in the Wireshark log because I don't have sufficient data to filter on. As I said, I don't know the RLOC of these sensors; I only know their MAC, and I can't easily filter based on the MAC as I can't see it in any frames.

    How can we further debug this?

    Cheers,

    Kaushalya

  • kaushalyasat said:
    Can you shed some light?

    I believe the takeaway from that thread is that if the sniffer didn't pick up the packets from when the node joined the network (for the first time), where it uses its extended address and is assigned a short address, then the sniffer doesn't know how to map the short RLOC16 addresses to the extended addresses, and hence it can't decrypt the packets to/from these devices.
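
    For reference, a sketch of a display filter that can help locate those attach frames, assuming Wireshark's MLE dissector field names and the standard MLE command numbering (11 = Child ID Request, 12 = Child ID Response):

    mle.cmd == 11 || mle.cmd == 12

    If these frames are present in the capture, the Child ID Response is where the node's RLOC16 is assigned, so that is where the short address can be tied to the extended address.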

    kaushalyasat said:
    Here I have not handled the return value from coap_send_request(), which is my bad,

    So is it possible to check these return values? If you don't see the packets, it may mean that they are never sent. And if that is the case, then a clue is probably found in the return value from this function. 

    kaushalyasat said:
    How can we further debug this?

    Check the return value for coap_send_request() when the packets aren't sent correctly. 
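
    A minimal sketch of that check, reusing the call from your snippet (the variable names are taken from that snippet):

    int err = coap_send_request(COAP_METHOD_PUT,
                                (const struct sockaddr *)&unique_local_addr,
                                sensor_option, payload, sizeof(payload), NULL);
    if (err < 0) {
        /* The request never left the stack; log the errno-style error code. */
        LOG_ERR("coap_send_request() failed (err %d)", err);
    }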

    Then, try to reset the entire network, or at least factory reset the sensor devices, so that they run a new provisioning sequence when you turn them on. You then need to enable the sniffer before you provision the devices, so that the sniffer can pick up the extended addresses being used before an RLOC16 (short) address is assigned. You can experiment with this at a small scale in your office: set up a small network with two devices, try starting the sniffer both before and after the provisioning process, and compare the results.
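
    Since the console is disabled on the sensors, the factory reset can also be triggered from application code; a minimal sketch, assuming the standard Zephyr/OpenThread APIs:

    #include <zephyr/net/openthread.h>
    #include <openthread/instance.h>

    /* Erase the stored Thread settings (network key, addresses, etc.) and
     * reboot, forcing a full re-attach that the sniffer can capture. */
    otInstanceFactoryReset(openthread_get_default_instance());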

    Best regards,

    Edvin
