Gazell frame rate too low?

I have continued to build upon the Nordic sdk10:gzll_ack_payload example using gcc-4.9-2015q1 on Linux. The differences are:

  • added UART for debugging (running at 460800 baud)
  • added RTC for debugging
  • running on pca10031 (Nordic USB dongle) instead of pca10028 (Nordic dev board)

All other Gazell-related configuration remains unchanged.

I get a steady stream of packets from 3 devices (pca10031) connected to a common host (pca10028). Typical packet statistics on the devices are:

  • num_tx_attempts = 1
  • num_channel_switches = 0
  • RSSI ~= -40 dBm
  • no calls to nrf_gzll_device_tx_failed()
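
These values are read from the tx_info argument of the Gazell TX-success callback. A minimal sketch of how that could look (log_stats() is a hypothetical helper, e.g. writing to the debug UART; num_tx_attempts and num_channel_switches are fields of nrf_gzll_device_tx_info_t, while the RSSI reading is omitted here):

    #include <stdint.h>
    #include "nrf_gzll.h"

    /* Hypothetical helper that forwards the statistics to the debug UART. */
    extern void log_stats(unsigned attempts, unsigned switches);

    void nrf_gzll_device_tx_success(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
    {
        /* Typical values observed: num_tx_attempts == 1, num_channel_switches == 0. */
        log_stats(tx_info.num_tx_attempts, tx_info.num_channel_switches);
    }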

If I shield the antenna, the number of TX attempts and channel switches increases until calls to nrf_gzll_device_tx_failed() start coming in. So I assume that in the typical case the data transfer actually succeeds on the first attempt.

1st problem:

The frame rate for each device is different from what I would expect. Changing set_timeslot_period() to different values results in the following table:
[Table image: nordic-question-1.png — measured frame rate vs. timeslot period]

Any suggestions on why the throughput is only 10% of the expected value for small timeslot periods?

2nd problem:

In the typical scenario above I always get 0.11 packets per timeslot (based on the Gazell timeslot counter from nrf_gzll_get_tick_count()). Any suggestions on why the throughput is so low? I would assume an average of 0.5-1 packets per timeslot would be normal, depending on the actual sync between device and host.
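
For reference, a minimal sketch of how the packets-per-timeslot figure can be computed, assuming a counter incremented in the TX-success callback and a periodic poll from the main loop (variable names are illustrative):

    #include <stdint.h>
    #include "nrf_gzll.h"

    static volatile uint32_t m_tx_success_count;

    void nrf_gzll_device_tx_success(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
    {
        m_tx_success_count++;
    }

    /* Called periodically from the main loop, e.g. once per second. */
    float packets_per_timeslot(void)
    {
        static uint32_t last_ticks, last_count;
        uint32_t ticks = nrf_gzll_get_tick_count();  /* Gazell timeslot counter */
        uint32_t count = m_tx_success_count;
        float ratio = (float)(count - last_count) / (float)(ticks - last_ticks);
        last_ticks = ticks;
        last_count = count;
        return ratio;  /* ~0.11 in the scenario described above */
    }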

Added 2016-01-20:

Screenshot of my logging application (each sample is a 1-second sliding average), where the lower-than-expected performance is visualized for a timeslot period of 600 us:

/Pablo

  • The NRF_GZLL_DEFAULT_TIMESLOT_PERIOD should be set to the minimum value for the on-air data rate (2 Mbps: 600 us, 1 Mbps: 900 us, 250 kbps: 2700 us). The host will then switch channel every NRF_GZLL_DEFAULT_TIMESLOT_PERIOD [us] * NRF_GZLL_DEFAULT_TIMESLOTS_PER_CHANNEL [2], e.g. for 2 Mbps that equals 1200 us. The setting must be the same on both host and device(s). I don't see any reason to use a higher setting, as the lowest should give the best overall latency and throughput. The device will sync to the channel switching of the host, and start transmission on the next channel switch when a packet is written to the FIFO.

    The device should be able to transmit up to 1 packet per channel switch on average (e.g. up to 1 packet / 1200 us = 833 packets/sec). It is limited to this value because there will be some clock drift between the device(s) and the host, but there is enough time to ensure at least one successful transmission per switch. If several devices try to do this simultaneously, the effective throughput will drop drastically, as they will very likely collide on-air. I am not sure if this might be the case for you.

    I assume you are using nrf_gzll_add_packet_to_tx_fifo() and then waiting for the Gazell callback before calling nrf_gzll_add_packet_to_tx_fifo() again (see the sketch at the end of this reply).

    For highest throughput you should use nrf_gzll_set_device_channel_selection_policy(NRF_GZLL_DEVICE_CHANNEL_SELECTION_POLICY_USE_CURRENT);
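
    Putting these points together, a minimal device-side sketch (2 Mbps assumed, hence the 600 us timeslot period; the pipe number and payload are placeholders, and the return values of the Gazell calls are left unchecked for brevity):

        #include <stdint.h>
        #include "nrf_gzll.h"

        #define PIPE_NUMBER 0          /* assumed pipe */

        static uint8_t m_payload[8];   /* assumed payload */

        void gazell_device_setup(void)
        {
            nrf_gzll_init(NRF_GZLL_MODE_DEVICE);

            /* Must match the host: 600 us timeslots, channel switch every
               2 timeslots => a new channel every 1200 us at 2 Mbps. */
            nrf_gzll_set_timeslot_period(600);
            nrf_gzll_set_timeslots_per_channel(2);

            /* Keep transmitting on the current channel instead of restarting
               from the first entry of the channel table. */
            nrf_gzll_set_device_channel_selection_policy(
                NRF_GZLL_DEVICE_CHANNEL_SELECTION_POLICY_USE_CURRENT);

            nrf_gzll_enable();

            /* Queue the first packet; the callback below keeps the stream going. */
            nrf_gzll_add_packet_to_tx_fifo(PIPE_NUMBER, m_payload, sizeof(m_payload));
        }

        void nrf_gzll_device_tx_success(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
        {
            /* Previous packet acknowledged: queue the next one. */
            nrf_gzll_add_packet_to_tx_fifo(pipe, m_payload, sizeof(m_payload));
        }

        /* The remaining mandatory Gazell callbacks, empty for brevity. */
        void nrf_gzll_device_tx_failed(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info) {}
        void nrf_gzll_host_rx_data_ready(uint32_t pipe, nrf_gzll_host_rx_info_t rx_info) {}
        void nrf_gzll_disabled(void) {}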

  • Kenneth, I agree entirely with your description of Gazell; this is my view too. So I agree that on average I should be able to reach 833 pkts/sec @ 600 us timeslot period.

    The problem is that I consistently reach 166 pkts/sec @ 600 us over many hours of testing.

    • tx_info.num_tx_attempts == 1 for almost all packets when read in nrf_gzll_device_tx_success().
    • Verifying against nrf_gzll_get_tick_count(), I see that I successfully send one packet every 10th timeslot, not one every other timeslot.
    • I can have 3 devices connected to 1 host simultaneously. Each device sends at 166 pkts/sec, meaning that my host services 3 * 166 = 498 pkts/sec (which is still below 833 pkts/sec, so this should not be the source of the throttling).
    • I have run at 3 distinctly different geographical locations (more than 5 km apart) with exactly the same results.

    I tested your suggested send loop, "using nrf_gzll_add_packet_to_tx_fifo() and then wait for the Gazell callback before calling next nrf_gzll_add_packet_to_tx_fifo()", but the results are the same. Originally I had a slightly different loop that focused on keeping the pipe's TX FIFO full at all times: while nrf_gzll_get_tx_fifo_packet_count() >= 3, go to sleep (__SEV(); __WFE(); __WFE();); when a spot frees up in the FIFO, call nrf_gzll_add_packet_to_tx_fifo() and then return to the while loop to wait for the next free spot (see the sketch at the end of this reply).

    All this leads me to believe that this is not an interference problem, but rather that I have some configuration problem, like priority conflicts, or some clock set too low, which results in the device only attempting to send a packet every 10th timeslot...
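
    For reference, the FIFO-keeping loop described above as a sketch (the pipe number and payload are placeholders; __SEV()/__WFE() come from CMSIS via nrf.h):

        #include <stdbool.h>
        #include <stdint.h>
        #include "nrf.h"             /* __SEV() / __WFE() */
        #include "nrf_gzll.h"

        #define PIPE_NUMBER 0        /* assumed pipe */

        static uint8_t m_payload[8]; /* assumed payload */

        void keep_tx_fifo_full(void)
        {
            while (true)
            {
                /* Sleep while the pipe's TX FIFO already holds 3 or more packets. */
                while (nrf_gzll_get_tx_fifo_packet_count(PIPE_NUMBER) >= 3)
                {
                    __SEV();
                    __WFE();
                    __WFE();
                }
                /* A slot is free: queue another packet. */
                nrf_gzll_add_packet_to_tx_fifo(PIPE_NUMBER, m_payload, sizeof(m_payload));
            }
        }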

  • This last comment solved my issues! Thank you. I now see ~826 packets/second throughput. I also now understand the 166 packets/second I got before: it's the maximum throughput divided by the channel table size. The default channel table contains 5 elements, so 833 / 5 ≈ 166 packets/second.

    Reading the documentation again, it is obvious that the channel selection policy should be set to USE_CURRENT for maximum throughput. This setting should be the default, if you ask me.
