
SAADC scan + burst?

On SDK16.0.0 + Mesh SDK 4.1.0

I know that oversampling when using the SAADC in scan mode on multiple channels does not work, because the buffer results get out of order.

But I believe burst mode can be used in scan mode where it samples the channel multiple times as fast as possible, averages it, and then puts the result in the buffer in the right order. Burst mode uses the oversample setting to determine how many readings to average. Is this all correct?

Does this work in SDK16.0.0 by simply setting nrf_drv_saadc_config_t .oversample value and nrf_saadc_channel_config_t .burst value? I did this and everything seems to be working, but I can't tell whether burst is actually taking effect. I initially tried nrf_saadc_burst_set(channel, NRF_SAADC_BURST_ENABLED) to enable burst for each channel, but that did not work and the readings were all wrong.

Or do some modifications need to be made like in this thread? https://devzone.nordicsemi.com/f/nordic-q-a/26659/saacd-scan-oversample/

Thanks.

Parents
  • Hello,

    I believe burst mode can be used in scan mode where it samples the channel multiple times as fast as possible, averages it, and then puts the result in the buffer in the right order.

    This is correct - using scan mode with burst (and oversampling) will make the SAADC sample a single channel 2^OVERSAMPLE times as fast as it can, average the results, and place the average in RAM. This will happen for each enabled channel.
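
    For reference, a minimal sketch of this combination with the legacy nrf_drv_saadc driver (the oversample factor and input pin here are arbitrary examples):

        nrf_drv_saadc_config_t cfg = NRF_DRV_SAADC_DEFAULT_CONFIG;
        cfg.oversample = NRF_SAADC_OVERSAMPLE_4X; // shared setting: 4 conversions averaged per result

        nrf_saadc_channel_config_t ch =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0);
        ch.burst = NRF_SAADC_BURST_ENABLED; // per-channel burst: one SAMPLE task runs all 4 conversions back-to-back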

    Does this work in SDK16.0.0 by simply setting nrf_drv_saadc_config_t .oversample value and nrf_saadc_channel_config_t .burst value?

    If you are not using nrfx v2 - and SDK 16 does not - then you will have to modify an assert in the driver to allow this.
    Specifically, line 294 of nrfx_saadc.c, which asserts if oversampling is enabled with multiple channels:

    NRFX_ASSERT((nrf_saadc_oversample_get() == NRF_SAADC_OVERSAMPLE_DISABLED) ||
                    (m_cb.active_channels == 0));
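
    One way to do this - a sketch, not an official patch - is simply to comment the assert out, since burst mode keeps the results in channel order:

        // Relaxed to allow scan + burst + oversampling (see the linked ticket):
        // NRFX_ASSERT((nrf_saadc_oversample_get() == NRF_SAADC_OVERSAMPLE_DISABLED) ||
        //             (m_cb.active_channels == 0));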



    You must also implement the changes that Jørgen detailed in the answer to the ticket you linked, for best performance.

    Best regards,
    Karl

  • Ok, so I added Jørgen's changes to the end of saadc_init():

    static void saadc_init(void) {
        nrf_drv_saadc_config_t saadc_config = NRF_DRV_SAADC_DEFAULT_CONFIG;
        saadc_config.resolution = NRF_SAADC_RESOLUTION_12BIT;
        saadc_config.oversample = (nrf_saadc_oversample_t)NRF_SAADC_OVERSAMPLE_256X;
        saadc_config.interrupt_priority = NRFX_SAADC_CONFIG_IRQ_PRIORITY;
        saadc_config.low_power_mode = NRFX_SAADC_CONFIG_LP_MODE;
    
        nrf_saadc_channel_config_t channel_0_config = NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN0);
        channel_0_config.gain = NRF_SAADC_GAIN1_6;
        channel_0_config.reference = NRF_SAADC_REFERENCE_INTERNAL;
        channel_0_config.burst = NRF_SAADC_BURST_ENABLED;
        //nrf_saadc_burst_set(0, NRF_SAADC_BURST_ENABLED);
    
        nrf_saadc_channel_config_t channel_1_config = NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN1);
        channel_1_config.gain = NRF_SAADC_GAIN1_6;
        channel_1_config.reference = NRF_SAADC_REFERENCE_INTERNAL;
        channel_1_config.burst = NRF_SAADC_BURST_ENABLED;
        //nrf_saadc_burst_set(1, NRF_SAADC_BURST_ENABLED);
    
        nrf_saadc_channel_config_t channel_2_config = NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN5);
        channel_2_config.gain = NRF_SAADC_GAIN1_6;
        channel_2_config.reference = NRF_SAADC_REFERENCE_INTERNAL;
        channel_2_config.burst = NRF_SAADC_BURST_ENABLED;
        //nrf_saadc_burst_set(2, NRF_SAADC_BURST_ENABLED);
    
        nrf_saadc_channel_config_t channel_3_config = NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN4);
        channel_3_config.gain = NRF_SAADC_GAIN1_6;
        channel_3_config.reference = NRF_SAADC_REFERENCE_INTERNAL;
        channel_3_config.burst = NRF_SAADC_BURST_ENABLED;
        //nrf_saadc_burst_set(3, NRF_SAADC_BURST_ENABLED);
    
        ERROR_CHECK(nrf_drv_saadc_init(&saadc_config, saadc_event_handler));
    
        ERROR_CHECK(nrf_drv_saadc_channel_init(0, &channel_0_config));
        ERROR_CHECK(nrf_drv_saadc_channel_init(1, &channel_1_config));
        ERROR_CHECK(nrf_drv_saadc_channel_init(2, &channel_2_config));
        ERROR_CHECK(nrf_drv_saadc_channel_init(3, &channel_3_config));
    
        //https://devzone.nordicsemi.com/f/nordic-q-a/63074/buffer-order-swap-of-saadc-used-with-nrf-mesh
        //Chain EVENTS_END -> TASKS_START in hardware so the next buffer is started automatically
        nrf_ppi_channel_t saadc_buffer_swap_ppi_channel;
        ERROR_CHECK(nrf_drv_ppi_channel_alloc(&saadc_buffer_swap_ppi_channel));
        ERROR_CHECK(nrf_drv_ppi_channel_assign(saadc_buffer_swap_ppi_channel,
                                               (uint32_t)&NRF_SAADC->EVENTS_END,
                                               (uint32_t)&NRF_SAADC->TASKS_START));
        ERROR_CHECK(nrf_drv_ppi_channel_enable(saadc_buffer_swap_ppi_channel));
    
        __LOG(LOG_SRC_APP, LOG_LEVEL_INFO, "Starting initial SAADC calibration\n");
        while(nrf_drv_saadc_calibrate_offset() != NRF_SUCCESS); //trigger calibration task
        saadc_calibrate = false;
    }

    I commented out the NRF_SAADC_TASK_START trigger on line 128 of nrfx_saadc.c.
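
    Presumably that change looks like this (the exact line and surrounding context vary between SDK versions):

        // Removed: the END->START PPI channel now restarts the SAADC in hardware instead.
        // nrf_saadc_task_trigger(NRF_SAADC_TASK_START);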

    Line 294 of nrfx_saadc.c that you mentioned doesn't seem to affect me; it is not asserting with that line as-is.

    It's fine after the initial calibration, but after I trigger another one, the buffer is still out of sync.

    Before these modifications, I ran it for a whole day without any periodic calibration and everything was fine. Are you saying that even if I don't do periodic calibrations, the buffer will eventually get out of sync?

    Thanks.

  • Hello,

    Thank you for the clear illustration.

    ftjandra said:
    Instead of nrf_pwr_mgmt_run(), I used sd_app_evt_wait(). I think they do the same thing when the Soft Device is used.

    Yes, using sd_app_evt_wait will function just as well - the CPU will sleep until an application event is generated.
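
    A minimal sketch of such a main loop, assuming the SoftDevice is enabled:

        for (;;)
        {
            // Sleep until an application event or interrupt wakes the CPU.
            (void)sd_app_evt_wait();
        }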

    ftjandra said:
    I just realized: isn't nrf_saadc_calibrate_offset() being non-blocking already taken care of in the saadc_event_handler() event handler? This is where I reset the buffers and restart the timer.

    Yes, come to think of it, this should already be taken care of. Keenly spotted.
    Then only the _abort needs to be called from the main context, not the wait_for_calibrate_done.

    A colleague of mine mentioned that he had experienced something similar when testing a combination of burst + scan + oversampling and low-power modes.
    Are you currently using the SAADC Low-Power mode? If not, could you set it to 1 and see if this changes anything with regard to your buffers?

    Perhaps you could share your project with me, so that I could attempt to replicate and debug this on my end?
    I can convert the ticket to private if you would like me to, just let me know.

    Best regards,
    Karl

  • Ok, I started from a fresh light switch server and only added what was needed to test this. The project is attached (SDK16.0.0 + Mesh SDK 4.1.0)

    With low_power_mode set to 0, you can see that the buffer gets offset by one after the first calibration.

    With low_power_mode set to 1, it looks like the buffer stays correct, but the timing is not working anymore. It's not sampling every second; instead, it seems to be sampling continuously, as fast as possible. What is low power mode?

    Thanks.


    server_light_switch_saadc.zip

  • Hello,

    ftjandra said:
    Ok, I started from a fresh light switch server and only added what was needed to test this. The project is attached (SDK16.0.0 + Mesh SDK 4.1.0)

    Thank you for providing me with this project. I have allocated time to test this tomorrow.

    I have just gotten back from conferring with our SAADC expert, and he had some thoughts on the matter:

    First, on the subject of Low-Power mode negating the buffer shift:
    This might be because Low-Power mode stops the SAADC between every sample, which in turn ensures that the workaround for Errata 237 is in place.

    To test if this is the case, could you define DEBUG in your preprocessor defines (as shown in the included picture) and see if your call to _abort asserts due to a hardware timeout while waiting for the SAADC to be stopped?
    If it does, that points to the SAADC never truly having been stopped, and thus the calibration being started as described in the errata I mentioned earlier, leaving an unexpected sample in the buffer.
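
    To illustrate the idea, this is the general stop-before-calibrate pattern (a sketch using the SAADC HAL, not the exact driver code):

        nrf_saadc_event_clear(NRF_SAADC_EVENT_STOPPED);
        nrf_saadc_task_trigger(NRF_SAADC_TASK_STOP);
        // The SAADC is only truly idle once EVENTS_STOPPED has fired; calibrating
        // before that point can leave an unexpected sample in the buffer.
        while (!nrf_saadc_event_check(NRF_SAADC_EVENT_STOPPED))
        {
            // busy-wait (production code should add a timeout here)
        }
        nrf_saadc_task_trigger(NRF_SAADC_TASK_CALIBRATEOFFSET);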

    ftjandra said:
    What is low power mode?

    The documentation is indeed a little sparse on this, so it is good that you ask! Low-power mode stops the SAADC between each sample.
    It is ideal for low-frequency sampling; however, it adds some time to each conversion.

    ftjandra said:
    With low_power_mode set to 1, it looks like the buffer stays correct, but the timing is not working anymore.

    What is your sampling frequency? Since you are using PPI, there is a change when using Low-Power mode: instead of the PPI triggering TASK_SAMPLE, it will trigger TASK_START, which adds some time to each conversion.
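
    For reference, a sketch of the timer-to-SAMPLE PPI hookup in question, reusing the m_saadc_timer instance from your project (in Low-Power mode the trigger would go to TASKS_START instead of TASKS_SAMPLE):

        nrf_ppi_channel_t sample_ppi;
        ERROR_CHECK(nrf_drv_ppi_channel_alloc(&sample_ppi));
        ERROR_CHECK(nrf_drv_ppi_channel_assign(sample_ppi,
                        nrf_drv_timer_compare_event_address_get(&m_saadc_timer, NRF_TIMER_CC_CHANNEL0),
                        (uint32_t)&NRF_SAADC->TASKS_SAMPLE));
        ERROR_CHECK(nrf_drv_ppi_channel_enable(sample_ppi));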

    Best regards,
    Karl

    Adding DEBUG to the preprocessor defines didn't change anything; there was no assert.

    I set the PPI to trigger every second. You make it sound like sampling should take longer with low_power_mode set to 1, but when I set it to 1, it samples much faster, around 10 times per second.

    Thanks.

  • Hello,

    Thank you for your patience.
    I just ran the project you provided. My only modifications were to some of the preprocessor include directories and setting the DEBUG flag.
    I am able to reproduce the issue, and I clearly see the behavior you described: the buffer is shifted by one following the first calibration, and afterwards stays consistent.

    ftjandra said:
    I set the PPI to trigger every second. You make it sound like sampling should take longer with low_power_mode set to 1, but when I set it to 1, it samples much faster, around 10 times per second.

    I tested with LP mode as well, with the same result.
    I was not able to reproduce the behavior you described regarding seemingly more frequent sampling in Low-Power mode, but on my end the buffer shift happens with Low-Power mode as well.

    I will now delve deeper into this issue and see if I can figure out its root cause and how to mitigate it. I will update you as soon as I have something.

    Best regards,
    Karl

Children
  • Hello again,

    I have done some more testing, and I might have identified the source of the error.
    The START task seems to be triggered too soon after the CALIBRATE DONE event.
    I observe that adding a delay of >= 3 ms in the CALIBRATE DONE event handler, before the buffer convert calls, negates the buffer shift altogether.
    I have run this multiple times now, for extended periods, and not seen any buffer shifts. Please try this on your end as well, and let me know what you observe.
    This is the addition which you might make to your project:

        else if(p_event->type == NRF_DRV_SAADC_EVT_CALIBRATEDONE) {
            __LOG(LOG_SRC_APP, LOG_LEVEL_INFO, "SAADC calibration complete\n");
            saadc_calibrate_stage = 0; //reset
          
            // INSERTED DELAY
            // 2 ms delay yields buffer shift by 1 position following 2nd calibration.
            // 3 ms delay yields constant buffer, no shift.
            nrf_delay_ms(3);
    
            //Need to setup both buffers, as they were both removed with the call to nrf_drv_saadc_abort before calibration
            ERROR_CHECK(nrf_drv_saadc_buffer_convert(saadc_buffer_pool[0], SAADC_SAMPLES_IN_BUFFER));
            ERROR_CHECK(nrf_drv_saadc_buffer_convert(saadc_buffer_pool[1], SAADC_SAMPLES_IN_BUFFER));
    
            nrf_drv_timer_enable(&m_saadc_timer);
        }

    You will also have to include nrf_delay.h in your project.
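
    That is, at the top of the source file:

        #include "nrf_delay.h" // provides nrf_delay_ms()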

    This issue is especially strange since the way you have implemented the CALIBRATE DONE event handler is the normal way to set up the buffers following a calibration, which leads me to believe this might be an artifact of the scan + oversampling + burst configuration. This is just my current suspicion, which I will continue working to verify. I have opened an internal request to have this reviewed by the module's engineers too.
    Thank you for bringing this up, this is absolutely an interesting find!

    Best regards,
    Karl

    I can confirm that adding the 3 ms delay stops the buffer shift. But isn't it odd that, before adding the delay, the buffer only shifted once, after the first calibration, and never again? I would expect it to shift every time.

    I will wait for your follow up before I put this into production firmware.

    Thanks.

  • Hi,

    ftjandra said:
    I can confirm that adding the 3 ms delay stops the buffer shift. But isn't it odd that, before adding the delay, the buffer only shifted once, after the first calibration, and never again? I would expect it to shift every time.

    Yes, I too find this odd, as I would expect it to happen at other times as well. All my tests thus far indicate that it does not, however. So it is absolutely interesting.
    I am in the process of creating a bare-metal demonstration of this behavior, to begin debugging the hardware and to isolate the driver from the problem.
    I hope to know more about this soon.

    ftjandra said:
    I will wait for your follow up before I put this into production firmware.

    Great, I will get back to you on this as soon as I have got something to share.
    I have asked for a meeting with the HW and SAADC driver engineers as soon as possible to discuss this.

    Best regards,
    Karl

  • Hello again,

    Thank you for your patience.

    I just wanted to update you that I have spoken with our HW engineers and provided them with a minimal application that demonstrates the problem.
    They will take a look and get back to me shortly, before scheduling a deeper investigation into the issue.
    I will keep you updated.

    Best regards,
    Karl
