
Best way to implement system clock on nRF51

Hi everybody,

A while ago I used the approach proposed here devzone.nordicsemi.com/.../ to keep the date and time on an nRF51822 device while using SoftDevice S110 and the Timeslot advertiser-scanner (github.com/.../nRF51-multi-role-conn-observer-advertiser).

In practice, I created an app_timer timer that executes every 250 milliseconds (the resolution I wanted) and, inside the handler, incremented some variables accordingly to keep the date and time. The problem is that the time was not accurate: it was sometimes off by several seconds after only an hour. Here is the first question: I think the time was wrong mainly because the app_timer handler gets delayed by the BLE stack, am I correct?
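The handler-based approach described above amounts to something like the following minimal sketch (names and layout are illustrative, not the actual code from the thread):

```c
#include <stdint.h>

/* Time state advanced by a periodic 250 ms app_timer handler. */
typedef struct {
    uint32_t unix_seconds;   /* seconds since 1970-01-01 */
    uint8_t  quarter;        /* 0..3, counts 250 ms steps */
} soft_clock_t;

/* Called from the app_timer handler every 250 ms. Note that nothing
 * here compensates for handler latency: the clock only advances when
 * the handler actually runs. */
static void soft_clock_tick(soft_clock_t *c)
{
    if (++c->quarter == 4) {
        c->quarter = 0;
        c->unix_seconds++;
    }
}
```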

My idea to improve this is to connect an RTC1 COMPARE event to the TIMER1 COUNT task through PPI. RTC1 will be configured to fire the event every second. This way I could keep the time in TIMER1 as a Unix timestamp (seconds since 1970-01-01) and read the number of milliseconds from RTC1 (using the number of ticks together with the prescaler to compute the milliseconds). With this setup there is no software handler involved, so it cannot get delayed, is that correct? Is this going to be more accurate than the app_timer solution? Will it consume more power? Do I need to keep the 16 MHz clock running?
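The milliseconds-from-ticks computation mentioned above is plain arithmetic; a sketch, assuming the usual 32768 Hz LFCLK driving the RTC (the function name is illustrative):

```c
#include <stdint.h>

/* LFCLK frequency driving the RTC on the nRF51 */
#define LFCLK_HZ 32768u

/* Milliseconds elapsed within the current second, computed from the raw
 * RTC COUNTER value and the RTC PRESCALER register value. The effective
 * tick rate is LFCLK_HZ / (prescaler + 1). */
static uint32_t rtc_ticks_to_ms(uint32_t ticks, uint32_t prescaler)
{
    /* 64-bit intermediate avoids overflow in the multiplication */
    return (uint32_t)(((uint64_t)ticks * (prescaler + 1) * 1000u) / LFCLK_HZ);
}
```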

The implication of this solution is that I can no longer use the app_timer library as-is. Alternatively, I could modify it to connect the RTC COMPARE event to the TIMER COUNT task, which should not interfere with the normal operation of the library, right?

I could try to implement the solution but since I am not sure it will work I would like some feedback from you.

Thanks a lot. Alessandro

  • I implemented something along these lines tonight.

    I used TIMER1 in counter mode and PPI to trigger TASKS_COUNT from EVENTS_OVRFLW.

    The one thing that took me a little while to figure out is that the only way to read the counter value is to trigger a capture; I'm not sure why TIMERn doesn't have a COUNTER field like the RTC.

    So far it seems to work; I'll see how it's doing tomorrow.

    The one thing I'm not sure about is what to do with app_timer when no timers are pending. The code wants to stop the timer. Once I'm sure things are working with the overflow counter I'll try clearing the CC interrupt.

    Here is my access function:

    void app_timer_ticks(uint32_t *p_overflow, uint32_t *p_ticks)
    {
        uint32_t overflow0, overflow1, counter;
        
        APP_TIMER_OVERFLOW_COUNTER->TASKS_CAPTURE[0] = 1; // trigger capture 
        overflow0 = APP_TIMER_OVERFLOW_COUNTER->CC[0]; // before 
        counter   = NRF_RTC1->COUNTER;
        APP_TIMER_OVERFLOW_COUNTER->TASKS_CAPTURE[1] = 1; // trigger capture 
        overflow1 = APP_TIMER_OVERFLOW_COUNTER->CC[1]; // after 
       
        if (overflow0 != overflow1)
        {
            /* An overflow occurred between the two captures (rare).
             * We don't know whether overflow1 changed before or after
             * we sampled counter, so we just sample counter again.
             */
            counter = NRF_RTC1->COUNTER;
        }
    
        *p_overflow = overflow1;
        *p_ticks    = counter;
    }
    

    When I fetch the time with CTS, I sample this value and that becomes my epoch offset.

    Now if Android bothered to implement CTS this would be so much easier.
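Turning the sampled (overflow, ticks) pair plus a CTS-derived base time into a current Unix timestamp is a small calculation; a sketch, assuming a 24-bit RTC counter at 32768 Hz with prescaler 0 (the function and parameter names are illustrative):

```c
#include <stdint.h>

#define RTC_BITS      24u      /* RTC1 COUNTER is 24 bits wide */
#define TICKS_PER_SEC 32768u   /* LFCLK with prescaler 0 */

/* Current Unix time, given the wall-clock time obtained at sync
 * (e.g. from CTS) and the (overflow, ticks) samples taken at sync
 * time and now. */
static uint32_t unix_time_now(uint32_t base_unix,
                              uint32_t sync_overflow, uint32_t sync_ticks,
                              uint32_t now_overflow,  uint32_t now_ticks)
{
    uint64_t sync = ((uint64_t)sync_overflow << RTC_BITS) + sync_ticks;
    uint64_t now  = ((uint64_t)now_overflow  << RTC_BITS) + now_ticks;
    return base_unix + (uint32_t)((now - sync) / TICKS_PER_SEC);
}
```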

  • Yes, this should be fine, but I had a different approach in mind. At the link is the code I wrote to modify app_timer.c; it is a bit speculative because I cannot try it until tomorrow :)

    Code

    Then I call ppi_init() and timer1_init() before rtc1_init(prescaler) and don't clear the COMPARE[1] event in RTC1_IRQHandler.

    I receive the base_time from an Android phone, but I don't use CTS: from the phone I advertise a packet that contains the current Unix time in the manufacturer data. It's probably not the best solution, but it was quick to implement :)

    UPDATE 19/10/2015: I have tried it on the hardware and it works fine. I had to use PPI channels 3 and 7 because these are the only ones available when using the SoftDevice and the Timeslot advertiser-scanner library. The problem, of course, is that the app_timer module cannot manage timers with a period greater than 1 second, because RTC1 is cleared every second in order to generate the COMPARE1 event. This limits the app_timer module a lot, so it is not a good solution.

    The solution proposed by Clem is probably the best: you don't need to update the CC[1] register or clear RTC1 every second, and from the number of overflows and ticks you can extract seconds and milliseconds. Of course, RTC1 cannot be stopped in this case, so all the calls to rtc1_stop() should be commented out.
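Extracting seconds and milliseconds from the overflow count and the current ticks, as described above, can be sketched like this (assuming a 24-bit RTC counter at 32768 Hz with prescaler 0; the function name is illustrative):

```c
#include <stdint.h>

#define RTC_BITS      24u      /* RTC1 COUNTER width */
#define TICKS_PER_SEC 32768u   /* LFCLK with prescaler 0 */

/* Split an overflow count plus the current RTC ticks into whole
 * seconds and a millisecond remainder. Each 24-bit overflow is
 * exactly 2^24 / 32768 = 512 seconds. */
static void ticks_to_time(uint32_t overflow, uint32_t ticks,
                          uint32_t *p_sec, uint32_t *p_ms)
{
    *p_sec = overflow * ((1u << RTC_BITS) / TICKS_PER_SEC)
           + ticks / TICKS_PER_SEC;
    *p_ms  = (ticks % TICKS_PER_SEC) * 1000u / TICKS_PER_SEC;
}
```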

  • Thanks to both of you for your ideas. I think Alessandro is right: truncating app_timer to 1 second is a non-starter for me. It took a couple of reads, but I think I get Clem's solution now. I don't even need millisecond resolution, just 1 second, and I can get that with a quick divide of the RTC counter register.

  • So my changes to app_timer seem to be working.

    gist.github.com/.../26da578862ffe083bf72

    Now I just need to figure out a less horrible way of doing long-term app_timer events, specifically tasks that happen hourly and daily (synced to real time). My app_timer rewrite for the nRF52 uses int64_t for the internal timestamps and will happily queue events with timeouts >2^24 base ticks. It really won't work on the nRF51 because it makes heavy use of __builtin_clz(), which is the CLZ instruction on the Cortex-M4, but a pile of code on the Cortex-M0.
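For reference, the "pile of code" a CLZ-less core like the Cortex-M0 ends up with is typically a binary-search reduction along these lines (an illustrative portable fallback, not the author's code):

```c
#include <stdint.h>

/* Count leading zeros without a hardware CLZ instruction
 * (e.g. on the Cortex-M0). Returns 32 for x == 0. */
static uint32_t clz32(uint32_t x)
{
    uint32_t n = 0;
    if (x == 0) return 32;
    if (!(x & 0xFFFF0000u)) { n += 16; x <<= 16; }
    if (!(x & 0xFF000000u)) { n += 8;  x <<= 8;  }
    if (!(x & 0xF0000000u)) { n += 4;  x <<= 4;  }
    if (!(x & 0xC0000000u)) { n += 2;  x <<= 2;  }
    if (!(x & 0x80000000u)) { n += 1; }
    return n;
}
```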

    I have it set up to decouple the hardware prescaler from the API, so the API always operates at 32768 Hz, but under the hood the RTC can run at a lower frequency. I mostly use timestampGet(), but the other two routines might be useful in some cases.

    typedef struct
    {
        int32_t seconds;       // seconds (signed for delta) 
        int32_t ticks;         // in TIMESTAMP_TICKS_PER_SECOND 
    } timestamp_t;
    
    void timestampGet(timestamp_t *ts)
    {
        uint32_t overflow, ticks, s, t;
    
        app_timer_ticks(&overflow, &ticks);
    
        s = ticks / TIMER_TICKS_PER_SECOND; // becomes >> 
        t = ticks % TIMER_TICKS_PER_SECOND; // becomes & 
    
        ts->seconds =
            overflow * ((1 << RTC_TICKS_BITS) / TIMER_TICKS_PER_SECOND) + s;
        ts->ticks = t * (TIMER_PRESCALER + 1); // timestamp_t.ticks always mod 32768 
    }
    
    /* getTicks32: Convert overflow count and ticks to a 32 bit value 
     *    Note: This rolls over every 36 hours, 24 minutes, 32 seconds (2^32 / 32768 s) for a 32768Hz RTC. 
     *    Note: This is in RTC ticks, not timestamp_t.ticks which are always 32768Hz. 
     */
    uint32_t getTicks32(void)
    {
        uint32_t overflow, ticks;
        uint32_t count;
     
        app_timer_ticks(&overflow, &ticks);
       
        count = (overflow << RTC_TICKS_BITS) + ticks;
     
        return count;
    }
     
    /* getTicks64: Convert overflow count and ticks to a 64 bit monotonic value 
     *    Note: This is in RTC ticks, not timestamp_t.ticks which are always 32768Hz. 
     */
    int64_t getTicks64(void)
    {
        uint32_t overflow, ticks;
        int64_t count;
     
        app_timer_ticks(&overflow, &ticks);
     
        count = (((int64_t) overflow) << RTC_TICKS_BITS) + ticks;
     
        return count;
    }
    
  • One thing I forgot: I'm still using a continuously running timer to prevent app_timer from stopping the RTC. I didn't want to go through the effort of figuring out exactly how to get it to work properly with no events pending. Luckily, in my application I use an app_timer + app_sched task to ack the watchdog, and that has to run continuously.
