I have continued to build upon the Nordic sdk10:gzll_ack_payload example using gcc-4.9-2015q1 on Linux. The differences are:
- added UART for debugging (running at 460800 baud; see the sketch below)
- added RTC for debugging
- running on pca10031 (Nordic usb dongle) instead of pca10028 (Nordic dev board)
All other Gazell-related configuration remains unchanged.
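The UART baud rate is the only non-default clocking setting; roughly what that change amounts to (register-level sketch only, assuming the example's usual UART pin setup is done elsewhere by the SDK code):

```c
#include "nrf.h"

// Sketch only: push the debug UART to 460800 baud on the nRF51.
// Pin selection and enabling are handled by the SDK UART code as in the example.
static void debug_uart_set_baudrate(void)
{
    NRF_UART0->BAUDRATE = (UART_BAUDRATE_BAUDRATE_Baud460800 << UART_BAUDRATE_BAUDRATE_Pos);
}
```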
I get a steady stream of packets from 3 devices (pca10031) connected to a common host (pca10028). Typical packet statistics on the devices are:
- num_tx_attempts = 1
- num_channel_switches = 0
- RSSI ~= -40 dBm
- no calls to nrf_gzll_device_tx_failed()
If I shield the antenna, the TX attempts and channel switches increase until calls to nrf_gzll_device_tx_failed() start coming in. So I assume that in the typical case the data transfer actually succeeds on the first attempt.
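The numbers above come from the tx_info struct passed to the Gazell device callbacks; my logging hooks look roughly like this (sketch only; please check nrf_gzll.h in your SDK for the exact fields and types of nrf_gzll_device_tx_info_t):

```c
#include <stdint.h>
#include "nrf_gzll.h"

volatile uint32_t m_tx_success_count = 0;  // used later for packets-per-timeslot

// Gazell device callback: a packet was transmitted and ACKed by the host.
void nrf_gzll_device_tx_success(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
{
    m_tx_success_count++;

    // Typical values in my setup: 1 attempt, 0 channel switches, RSSI ~ -40 dBm.
    // These are the values I print over the debug UART.
    uint32_t attempts = tx_info.num_tx_attempts;
    uint32_t switches = tx_info.num_channel_switches;
    int32_t  rssi     = tx_info.rssi;   // field name may differ depending on SDK version
    (void)pipe; (void)attempts; (void)switches; (void)rssi;
}

// Never called in the typical (unshielded) scenario.
void nrf_gzll_device_tx_failed(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
{
    (void)pipe; (void)tx_info;
}
```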
1st problem:
The frame rate for each device is different from what I would expect. Changing set_timeslot_period() to different values results in the following table:
nordic-question-1.png
Any suggestions on why the throughput is only about 10% of what I would expect for small timeslot periods?
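For reference, the period is changed between test runs roughly like this (sketch only; as far as I understand, nrf_gzll_set_timeslot_period() takes microseconds and must be called while Gazell is disabled, so I call it between nrf_gzll_init() and nrf_gzll_enable()):

```c
#include <stdbool.h>
#include <stdint.h>
#include "nrf_gzll.h"

// Sketch: vary the timeslot period between test runs (600 us is the value
// from the table/screenshot below).
static bool gzll_device_configure(uint32_t timeslot_period_us)
{
    bool ok = nrf_gzll_init(NRF_GZLL_MODE_DEVICE);
    ok = ok && nrf_gzll_set_timeslot_period(timeslot_period_us);
    ok = ok && nrf_gzll_enable();
    return ok;
}
```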
2nd problem:
In the typical scenario above I always get 0.11 packets per timeslot (based on the Gazell timeslot counter from nrf_gzll_get_tick_count()). Any suggestions on why the throughput is so low? I would assume an average between 0.5 and 1 packets per timeslot would be normal, depending on the actual sync between device and host.
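The 0.11 figure is computed on the device roughly like this, once per second (sketch only; m_tx_success_count is the counter incremented in the TX success callback above):

```c
#include <stdint.h>
#include "nrf_gzll.h"

extern volatile uint32_t m_tx_success_count;  // incremented in nrf_gzll_device_tx_success()

// Sketch: packets per timeslot over the last measurement interval, using the
// Gazell timeslot counter as the denominator.
void report_packets_per_timeslot(void)
{
    static uint32_t last_ticks   = 0;
    static uint32_t last_packets = 0;

    uint32_t ticks   = nrf_gzll_get_tick_count();
    uint32_t packets = m_tx_success_count;

    uint32_t dticks   = ticks - last_ticks;
    uint32_t dpackets = packets - last_packets;

    last_ticks   = ticks;
    last_packets = packets;

    if (dticks > 0)
    {
        // This is the value printed over the debug UART (0.11 in the scenario above).
        float pkts_per_slot = (float)dpackets / (float)dticks;
        (void)pkts_per_slot;
    }
}
```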
Added 2016-01-20:
Screenshot of my logging application (each sample is a 1-second sliding average), visualizing the lower-than-expected performance for a timeslot period of 600 us:
- 166 fps
- 0.11 packets per timeslot
- almost non-existent packet loss
nordic-question-1b.png
/Pablo