
Recommended scan window and scan interval for a slow advertising device

I'm running an nRF52840 as a central to discover and connect to a sensor. This sensor advertises once every 5 seconds. I find it time-consuming to discover the sensor, and it usually takes multiple tries to connect to it. What would be the recommended scan window and scan interval in this case?

When connecting, would it be better to set a short scan timeout so that BLE_GAP_EVT_TIMEOUT fires and sd_ble_gap_connect() can be called again, or to make the timeout long and wait for the connection to the sensor to eventually be established?

Thanks

  • Hi,

    I assume there are no other connections going on, or anything else radio-related, on the scanning device. If there are, please share what else is supposed to run concurrently with scanning, as that may affect the best parameters for the scan (as well as the parameters for the concurrent activity).

    If you use a scan interval slightly above 5 seconds, and a scan window equal to the scan interval, then in theory you should receive an advertisement within the first advertising event, provided there is no noise or packet collision. The maximum scan interval and scan window are both 10.24 seconds. This means you are in RX 100 % of the time, which means high power consumption, but only for a short amount of time (until you get a connection). (The alternative is lower power consumption over a longer period of time, but with a long average delay before getting a connection, and roughly the same total power consumption.)
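
    For reference, here is roughly how such scan parameters could be set up in application code. This is only a sketch assuming the S140 SoftDevice API; field names and the sd_ble_gap_scan_start() signature differ between SoftDevice versions, so check the ble_gap.h header in your SDK. Interval and window values are in 0.625 ms units.

        #include "ble_gap.h"
        #include "app_error.h"

        /* Sketch only: scan window equal to scan interval, both slightly above
         * the 5 s advertising interval. */
        static uint8_t m_adv_report_buffer[BLE_GAP_SCAN_BUFFER_MIN];

        static ble_data_t m_adv_report =
        {
            .p_data = m_adv_report_buffer,
            .len    = sizeof(m_adv_report_buffer),
        };

        static const ble_gap_scan_params_t m_scan_params =
        {
            .active    = 0,                              /* Passive scanning is enough for connecting. */
            .scan_phys = BLE_GAP_PHY_1MBPS,
            .interval  = 8320,                           /* 5.2 s, just above the 5 s advertising interval. */
            .window    = 8320,                           /* Window equal to interval: RX 100 % of the time. */
            .timeout   = BLE_GAP_SCAN_TIMEOUT_UNLIMITED, /* Scan until an advertisement is received. */
        };

        void scanning_start(void)
        {
            ret_code_t err_code = sd_ble_gap_scan_start(&m_scan_params, &m_adv_report);
            APP_ERROR_CHECK(err_code);
        }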

    I don't see any reason for using a short scan timeout as you describe; rather, you should scan continuously and connect when you receive an advertisement.

    Regards,
    Terje

  • Hi Terje,

    This central is connected to a power outlet, so I'm not worried about power consumption.

    Regarding other activities, yes, I would like to have multiple sensors of this type connected simultaneously and listen to reading update notifications (1 Hz rate, <20 bytes each). So there will be concurrent connection activity. I have been playing with the scan parameters. Even with a 100 or 200 ms window and interval, it's not too hard to connect the first one or two units, but once 3 or 4 are already connected, it becomes harder and harder to connect more.

    I also see occasional disconnections right after connection with reason code 0x3E (connection failed to be established) and random disconnections with reason code 0x08 (connection timeout). Are these more likely caused by the sensor side?

    Edit: I find that if I use a larger scan interval and scan window (both 150 ms), it's hard to maintain the connections with already connected devices (reason code 0x08) while trying to connect to a new device. If I use a smaller scan interval and scan window (both 30 ms), the connections are more stable, and I don't see an obvious difference in the time required to establish a connection. My questions:

    1. How does the central arrange connection events with already connected devices while establishing new connections concurrently?

    2. Is there any rule for determining proper scan parameters and connection parameters for multiple connections?

    3. Is it necessary to make the scan window the same size as the scan interval when the purpose is to find and connect to a device rather than to receive data from advertisements?

    My current settings:

    - Peripheral advertises with a 5 s advertising interval and sends data updates at a 1 Hz rate after connection

    - 20 ms min connection interval, 100 ms max connection interval, 5 s supervision timeout when establishing the connection; the peripheral then requests a change to a 240 ms connection interval and a 750 ms supervision timeout, and the central approves (see the sketch after this list)
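
    For concreteness, the initial connection parameters above could be expressed roughly like this. This is only a sketch using the nRF5 SDK's ble_gap_conn_params_t; connection intervals are in 1.25 ms units and the supervision timeout is in 10 ms units.

        #include "ble_gap.h"

        /* Sketch of the initial connection parameters listed above. */
        static const ble_gap_conn_params_t m_conn_params =
        {
            .min_conn_interval = 16,    /* 20 ms  */
            .max_conn_interval = 80,    /* 100 ms */
            .slave_latency     = 0,
            .conn_sup_timeout  = 500,   /* 5 s    */
        };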

    Thanks

  • Hi,

    With connections going on concurrently with the scanning, a different approach is better: keep the scan interval equal to the scan window, but keep them short. That means 30 ms is better than 150 ms, and you can even experiment with going lower than 30 ms.
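
    In terms of the earlier scan parameter sketch, that simply means shrinking both values (still in 0.625 ms units; again only a sketch, assuming the S140 SoftDevice struct layout):

        /* Sketch: short 100 % duty-cycle scanning that coexists better with
         * ongoing connection events. */
        static const ble_gap_scan_params_t m_scan_params_concurrent =
        {
            .active    = 0,
            .scan_phys = BLE_GAP_PHY_1MBPS,
            .interval  = 48,                             /* 30 ms */
            .window    = 48,                             /* 30 ms, equal to the interval */
            .timeout   = BLE_GAP_SCAN_TIMEOUT_UNLIMITED,
        };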

    The SoftDevice will, depending on scheduling collisions, either schedule a scan event in its entirety, or not schedule the scan event at all. This means if the event collides with a connection event, there will be no scanning for one full scan window period. For more information on this, see SoftDevice timing-activities and priorities.

    1. See Connection timing as a Central.

    2. See Suggested intervals and windows.

    3. Yes. Regardless of the purpose of receiving the advertisement, you need to receive the full advertisement, preferably as quickly as possible, so the scan parameter recommendations are the same whether you receive advertising data as an observer or as a scanner/initiator. A connection is initiated by responding to an advertisement.
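
    As a sketch of that flow with the S140 SoftDevice API (signatures vary between SoftDevice versions): m_scan_params, m_conn_params and m_adv_report are the structures from the sketches above, sensor_addr_match() is a hypothetical application helper for recognizing the sensor, and APP_BLE_CONN_CFG_TAG is the connection configuration tag the application has registered.

        #include "ble.h"
        #include "app_error.h"

        /* Sketch: connect as soon as an advertisement from the target sensor is received. */
        static void on_ble_evt(ble_evt_t const * p_ble_evt)
        {
            if (p_ble_evt->header.evt_id == BLE_GAP_EVT_ADV_REPORT)
            {
                ble_gap_evt_adv_report_t const * p_adv =
                    &p_ble_evt->evt.gap_evt.params.adv_report;

                if (sensor_addr_match(&p_adv->peer_addr))   /* hypothetical helper */
                {
                    /* Initiate a connection in response to the advertisement. */
                    ret_code_t err_code = sd_ble_gap_connect(&p_adv->peer_addr,
                                                             &m_scan_params,
                                                             &m_conn_params,
                                                             APP_BLE_CONN_CFG_TAG);
                    APP_ERROR_CHECK(err_code);
                }
                else
                {
                    /* Not our sensor: resume scanning, reusing the same report buffer. */
                    ret_code_t err_code = sd_ble_gap_scan_start(NULL, &m_adv_report);
                    APP_ERROR_CHECK(err_code);
                }
            }
        }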

    Regards,
    Terje

  • Hi Terje,

    I have another sensor (sensor 2) whose connection parameters I have full control over, and it can stay connected without any disconnection for days. So I'm thinking of replacing the sensor mentioned in my initial post (sensor 1) with it.

    Sensor 1 settings:

    - Advertising interval 5s

    - Connection interval 240ms

    - Slave latency 0

    - Supervision timeout 750ms

    Sensor 2 settings:

    - Advertising interval 180ms

    - Connection interval 90ms

    - Slave latency 0

    - Supervision timeout 4s

    I just did a test where I set sensor 2's parameters to be the same as sensor 1's, and I began to see similar disconnections. Then I changed the supervision timeout back to 4 s and the disconnections disappeared. It looks like the supervision timeout is the deciding factor here. Besides the delay before the central knows a disconnection has actually happened (if the sensor moves out of range or turns off because of a low battery), what other issues would a long supervision timeout cause?

    Also, reducing the connection interval allows more frequent packet exchanges to keep the connection alive, but at the cost of higher power consumption. Is that right?

    Thanks

  • Hi,

    A long supervision timeout means both devices will try to keep the connection alive for a longer time. It has the biggest impact on the central, as the central needs to listen at every connection interval.

    If you set a slave latency higher than 0, e.g. to N, then the peripheral may sleep through up to N connection events before it has to participate again. This way the peripheral can save power.
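
    As a side note, the Bluetooth Core Specification requires the supervision timeout to be larger than (1 + slave latency) * max connection interval * 2. A small sketch of that check, using the 240 ms interval from this thread as an example:

        #include <stdbool.h>
        #include <stdint.h>

        /* Sketch: supervision timeout (ms) must exceed
         * (1 + slave_latency) * max_conn_interval_ms * 2.
         * With a 240 ms interval and latency 0 the timeout must exceed 480 ms,
         * so the 750 ms requested by sensor 1 is legal but allows only about
         * three consecutive missed connection events before the link is dropped. */
        static bool conn_params_are_valid(uint32_t max_conn_interval_ms,
                                          uint16_t slave_latency,
                                          uint32_t sup_timeout_ms)
        {
            return sup_timeout_ms > (1u + slave_latency) * max_conn_interval_ms * 2u;
        }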

    Of course if you have some mechanism to detect that the device is getting out of range, then you can shut down the connection prematurely, but please note that:

    • RSSI is unreliable
    • if advertisements can still be heard while you are at the edge of connection range, you risk repeatedly connecting, disconnecting, reconnecting, and so on, which would certainly use more power than continuing a "normal" connection that eventually times out anyway.

    Instead of a shorter connection interval to maintain the connection in case of packet loss, you can increase the supervision timeout. That way you still use the same amount of power in normal connections (but slightly more when timing out, due to the longer timeout).
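
    If the central drives the parameters, such a change can be requested at runtime with sd_ble_gap_conn_param_update(). A sketch, keeping the 240 ms interval from this thread and using the 4 s timeout from sensor 2 as an illustrative value (intervals in 1.25 ms units, timeout in 10 ms units):

        /* Sketch: keep the 240 ms connection interval but ask for a longer
         * supervision timeout, so more consecutive packet losses are tolerated
         * before the link is dropped. */
        static void request_longer_timeout(uint16_t conn_handle)
        {
            ble_gap_conn_params_t const params =
            {
                .min_conn_interval = 192,   /* 240 ms */
                .max_conn_interval = 192,   /* 240 ms */
                .slave_latency     = 0,
                .conn_sup_timeout  = 400,   /* 4 s    */
            };

            ret_code_t err_code = sd_ble_gap_conn_param_update(conn_handle, &params);
            APP_ERROR_CHECK(err_code);
        }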

    Regards,
    Terje
