I'm a little confused by the packet rate I'm getting on Gazell. Or rather I'm confused by the low number of 'timeouts' I'm seeing - I'd expect to see more, and would like to understand why I'm not.
I'm currently using mostly default settings for the Device and Host, i.e. timeslot period = 600us, data rate = 2Mbps, 2 timeslots per channel, and 5 channels in the channel table. I have the channel selection policy set to 'use current' so that the Device hops with the Host, and I'm seeing 1 packet every 1.2ms on average, which makes sense given imperfect frame synchronisation. But what puzzles me is this: surely I should see a transmission timeout / failure reported for every timeslot where the Device and Host have slipped out of sync by one channel? If synchronisation were perfect I'd expect a successful transaction every 600us, i.e. 2 per channel. Since I'm seeing half that, I'd expect the number of failed transactions to roughly equal the number of successful ones, yet only a very small handful are reported, typically < 10. What am I missing? Is there another failure 'bucket' that I'm not watching?
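For reference, this is roughly how I'm configuring the Device, sketched from memory against the nRF5 SDK Gazell API (nrf_gzll.h), so exact function names / enum values may differ slightly from what's actually in my project, and the channel numbers shown are just illustrative:

```c
#include <stdint.h>
#include <stdbool.h>
#include "nrf_gzll.h"

/* 5-channel table (values illustrative, not my real channels). */
static uint8_t m_channel_table[] = {4, 25, 42, 63, 77};

static bool gazell_device_setup(void)
{
    bool ok = nrf_gzll_init(NRF_GZLL_MODE_DEVICE);

    ok = ok && nrf_gzll_set_datarate(NRF_GZLL_DATARATE_2MBIT);
    ok = ok && nrf_gzll_set_timeslot_period(600);          /* 600 us per timeslot */
    ok = ok && nrf_gzll_set_timeslots_per_channel(2);      /* 2 timeslots per channel */
    ok = ok && nrf_gzll_set_channel_table(m_channel_table,
                                          sizeof(m_channel_table));
    ok = ok && nrf_gzll_set_device_channel_selection_policy(
                   NRF_GZLL_DEVICE_CHANNEL_SELECTION_POLICY_USE_CURRENT);

    return ok && nrf_gzll_enable();
}
```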
When I change the channel selection policy to 'use successful', the rate drops by a factor of 10, which I now understand. But again, the reported failures are very low. Does the Device stop transmitting on that channel until it knows the Host will [probably] have returned to it? And does it then attempt only 1 packet transmission on that last successful channel, knowing that the Host is unlikely to receive 2? Or does it keep attempting transmission on every slot, eventually getting an Ack from the Host when it comes back on channel? I assume not, as that would be a waste of power.
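In case it matters, for both tests ('use current' and 'use successful', which I switch by changing the policy enum in the setup above) the only 'buckets' I'm watching are counters bumped in the Device TX callbacks, along these lines (again from memory, so the callback signatures may not be exact):

```c
#include <stdint.h>
#include "nrf_gzll.h"

/* Simple success/failure counters bumped from the Gazell Device callbacks.
 * These are the only "buckets" I'm currently looking at. */
static volatile uint32_t m_tx_success_count = 0;
static volatile uint32_t m_tx_failed_count  = 0;

void nrf_gzll_device_tx_success(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
{
    m_tx_success_count++;   /* packet was ACKed by the Host */
}

void nrf_gzll_device_tx_failed(uint32_t pipe, nrf_gzll_device_tx_info_t tx_info)
{
    m_tx_failed_count++;    /* no ACK received for this packet */
}

/* Host-side / disabled callbacks still need to be defined, even if unused. */
void nrf_gzll_host_rx_data_ready(uint32_t pipe, nrf_gzll_host_rx_info_t rx_info) {}
void nrf_gzll_disabled(void) {}
```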
Thanks,
Pete