
Testing GPS Accuracy

We've recently been testing the accuracy of the GPS on our devices. My basic method has been to trigger on GPS_TRIG_DATA_READY/GPS_CHAN_NMEA, and at each trigger fetch both the NMEA and PVT data and log everything to the console. I'll enable the GPS for ten minutes at a time, then analyze the logs later. We usually get a fix in under 60 seconds, which gives at least 500 reports with a fix per run.
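As a sketch of the logging step, assuming an illustrative stand-in for the PVT fields we record (not the real driver's data type):

```c
/* Sketch of the per-trigger logging described above. The struct is an
 * illustrative stand-in for the PVT fields we log, not the real
 * driver's data type. */
#include <stdio.h>
#include <stdint.h>

struct pvt_report {
    double latitude;      /* degrees */
    double longitude;     /* degrees */
    float accuracy;       /* PVT "Accuracy" estimate, meters */
    uint8_t sv_in_fix;    /* satellites used in the fix */
};

/* Formats one CSV line per report for later offline analysis; returns
 * the number of characters written (excluding the terminator). */
static int pvt_csv(char *buf, size_t n, const struct pvt_report *p)
{
    return snprintf(buf, n, "%.6f,%.6f,acc=%.1f,sv=%u",
                    p->latitude, p->longitude,
                    (double)p->accuracy, p->sv_in_fix);
}
```

Each trigger then appends one such line, which keeps the later analysis a simple matter of parsing CSV.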

Our modem is enabled in LTE-M1 mode with ~40 second EDRX, and data is only passed every 5 minutes.
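For reference, an eDRX request in this ballpark is typically made with the +CEDRXS AT command; the encoding should be verified against the AT command reference, but a ~40.96 s request for LTE-M (AcT-type 4) would look something like:

```
AT+CEDRXS=1,4,"0011"
```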

We understand that the "Accuracy" field in the PVT is not a guaranteed distance, but more of a probability.  We have found that lower Accuracy does correlate with more accurate positioning.  The number of healthy satellites and the max C/N0 reading also correlate somewhat, but not quite as well.

We noticed the location move off-target immediately after each 5-minute data burst.  This makes sense, since the GPS has to turn off for that interval, and we're willing to live with it because the loss of accuracy is also reflected in the PVT "Accuracy".

I don't see any strong correlation between loss of accuracy and the non-data EDRX intervals.  Or rather, nothing apparent in the logs seems to line up with our EDRX interval.

Our problem is that even when the GPS has a strong, stable fix, with 10 or more healthy satellites and PVT "Accuracy" less than 5, we still see the reported locations drift around within a radius of roughly 25 meters.

For example, I took all the reports from one of my test runs, filtered them down to only those where the PVT "Accuracy" was less than 5, and plotted those.  You can see the result below, with some cars in the parking lot for scale:
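The filtering step can be sketched like this, with a hypothetical struct standing in for the logged fields:

```c
/* Offline filter matching the analysis described above: keep only
 * reports whose PVT accuracy is below a threshold. The struct is an
 * illustrative stand-in for the logged fields. */
#include <stddef.h>

struct fix_report {
    double lat, lon;
    float accuracy;   /* meters */
};

/* Copies reports with accuracy < max_acc into out; returns the count.
 * out must have room for at least n entries. */
static size_t filter_by_accuracy(const struct fix_report *in, size_t n,
                                 float max_acc, struct fix_report *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i].accuracy < max_acc)
            out[kept++] = in[i];
    }
    return kept;
}
```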

This is nowhere near the 5 meter accuracy in the spec for periodic fixes.

The questions I would like answered are these:

  • What is required to achieve the 5m periodic accuracy in the spec?
  • Are there any metrics I can validate against to know that I should be getting the specified accuracy?  I'm thinking of things like a specific number of healthy satellites, C/N0 levels, or PVT "Accuracy".

This testing was done using recent modem firmware (1.1.1-66.rc), NCS based on 1.1.0, and BSDLIB 0.5.1.

  • Hello,

    I just forwarded your questions to the IoT team. I will let you know when I have the answers.

  • Hello again,

    I got this response from the IoT team:

    "Without having the PVT/NMEA logs, it is impossible to know exactly what caused the inaccuracy in this particular case.

    Some general guidelines for measuring best accuracy can be given though (in order of importance):

    1) There needs to be enough satellites, more is always better. Attention should be paid to not just the number of tracked satellites, but the number of satellites that are actually used in the positioning solution. CN0 should be no less than 30 dB (a few satellites can be below this if the total number of satellites is high, but the vast majority should be > 30 dB, the higher the better).

    2) Special attention should be paid to the DOP numbers. While these are calculated into the "Accuracy" figure, DOP is a 4-dimensional issue (time being the 4th) while Accuracy is just a scalar and does not cater for all the flavors of DOP.

    In this particular picture, DOP is a prime suspect, as it very typically manifests itself as inaccuracy along a single axis (in the picture, the SW-NE axis has much less variance than the NW-SE axis). Ideally, xDOP should be 1 (meaning no dilution), the higher the number the poorer the accuracy.

    3) In order to avoid multipath interference, the measurement should be made from a rooftop, rather than from street level (particularly important in a dense urban environment).

    4) In order to avoid ionospheric delay, the measurements should be made at night or early morning, rather than in the afternoon. nRF91 can (partially) compensate the ionospheric delay, however the correction parameters are transmitted only once per 12.5 minutes from the satellites. If the nRF91 is continuously tracking for less than 12.5 minutes, or particularly if the nRF91 is doing interval fixing, chances are it has not received said parameters and thus cannot perform the compensation."

  • 1) There needs to be enough satellites, more is always better. Attention should be paid to not just the number of tracked satellites, but the number of satellites that are actually used in the positioning solution. CN0 should be no less than 30 dB (a few satellites can be below this if the total number of satellites is high, but the vast majority should be > 30 dB, the higher the better).

    How many satellites, as reported by PVT in_fix, is "enough"?  Also, in what unit and scaling are the internal CN0 numbers reported?  Is it just 0.1 dB, so dividing by 10 would give dB?
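    If the scaling really is 0.1 dB as guessed above, the conversion and a used-in-fix filter might look like this sketch (struct and flag names are illustrative, not the driver's actual API):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Assuming the raw C/N0 field is in units of 0.1 dB. */
static float cn0_to_db(uint16_t cn0_raw)
{
    return cn0_raw / 10.0f;
}

/* Illustrative per-satellite record; the used-in-fix flag here is a
 * stand-in for whatever flag the driver actually exposes. */
struct sv_info {
    uint16_t cn0_raw;     /* C/N0 in assumed 0.1 dB units */
    bool used_in_fix;
};

/* Counts satellites that are both used in the solution and at or
 * above the suggested 30 dB C/N0 level. */
static size_t count_strong_used(const struct sv_info *sv, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        if (sv[i].used_in_fix && cn0_to_db(sv[i].cn0_raw) >= 30.0f)
            count++;
    }
    return count;
}
```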

    2) Special attention should be paid to the DOP numbers. While these are calculated into the "Accuracy" figure, DOP is a 4-dimensional issue (time being the 4th) while Accuracy is just a scalar and does not cater for all the flavors of DOP.

    I wasn't logging all of the DOP in my test runs.  I'll probably add the rest in and do another test...
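    In the meantime, a first-order way to sanity-check DOP once logged: horizontal error is roughly HDOP times the user equivalent range error (UERE). A sketch, where the UERE figure passed in is an assumed nominal value rather than anything reported by the receiver:

```c
/* First-order approximation: estimated horizontal error in meters is
 * HDOP multiplied by the user equivalent range error (UERE). The UERE
 * value is an assumed nominal figure, not a receiver-reported one. */
static float est_horizontal_error_m(float hdop, float uere_m)
{
    return hdop * uere_m;
}
```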

    3) In order to avoid multipath interference, the measurement should be made from a rooftop, rather than from street level (particularly important in a dense urban environment).

    4) In order to avoid ionospheric delay, the measurements should be made at night or early morning, rather than in the afternoon.

    Telling users to preferably take readings from rooftops at night or in the early morning is a bit frustrating.  I understand that those conditions will give more accurate readings, but only being able to reach your product's specified accuracy under those conditions feels wrong.  On the other hand, I've worked on radios for years and have learned to take any "open air distance" specification with a huge grain of salt.  I suppose this is just the GPS market's equivalent of that collective fibbing.

    nRF91 can (partially) compensate the ionospheric delay, however the correction parameters are transmitted only once per 12.5 minutes from the satellites. If the nRF91 is continuously tracking for less than 12.5 minutes, or particularly if the nRF91 is doing interval fixing, chances are it has not received said parameters and thus cannot perform the compensation.

    Is there any way for the application layer to tell if those correction parameters have been received and are in use or not?  In our application, we're extremely battery constrained, so we can't stay on continuously.  We might be able to stay on just until those corrections have been received, though.  In other words, I don't think we could justify the battery consumption of turning on the GPS for 12.5 minutes every time we want a fix, but we could justify turning it on just as long as needed until we know the correction is in place, which would be a variable length of time averaging 7 minutes or up to 12.5 minutes worst case.

    Are there any other suggestions or planned features from the IoT team regarding getting the best GPS accuracy while optimizing for battery life?  I haven't dug through the impending 1.2 release candidate changes, and am mostly looking at 1.1, but I see hints of low power GPS modes in headers that no samples use yet.

  • From the developers:

    "It is not possible to give a single number of what is "enough"; enough depends on the situation. The customer might try to filter the fixes with the most satellites and see if that correlates (inversely) with the positioning error. Dividing the CN0 field by 10 gives the dB number.

    The modem does not pass the information of whether it has the ionosphere correction parameters. The parameters are renewed by the system every 6 days, hence they do not need to be fetched for every fix. During periodic fixing, new ephemerides need to be fetched every hour or so, and the modem will fetch the ionospheric parameters also at some advantageous opportunity. Since the ionospheric delay error is presumably the smallest of the listed inaccuracy causes, if current consumption is a concern, I don't think it necessarily makes sense to wait 12.5 minutes for the corrections.

    Basically in periodic fixing, the modem does the same operations (hot start) regardless of fix interval length. Hence, the positioning accuracy should be approximately the same for any interval, and the interval could be chosen as large as possible for minimum power consumption. For best positioning accuracy, it may be advantageous to have the modem continuously track for 1-2 minutes before it switches to periodic fixing. This ensures that it finds as many satellites as possible from the beginning, as it does not spend as much time hunting for satellites during subsequent periodic starts."
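As a back-of-the-envelope way to weigh fix interval against battery life, the duty-cycle average current can be sketched like this (all current figures are assumed placeholders, not measured nRF91 numbers):

```c
/* Average current over one fix cycle: on_s seconds drawing on_ma
 * during the fix, then off_s seconds drawing off_ma while idle.
 * The numbers plugged in are assumed placeholders, not measured
 * nRF91 figures. */
static float avg_current_ma(float on_s, float on_ma,
                            float off_s, float off_ma)
{
    return (on_s * on_ma + off_s * off_ma) / (on_s + off_s);
}
```

Lengthening the off interval drives the average toward the idle current, which is the trade-off the developers describe.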
