
Offset in SAADC samples with Easy DMA and BLE

I have an nRF52 application that samples four SAADC channels at 1 kHz. That is: I map four pins to ADC inputs and let EasyDMA take care of the sampling, so that the data is processed ten times a second (100 * 4 samples). This works pretty well, except...

When I enable the BLE connection, the data is shifted in the buffer. Without BLE enabled, the data layout in memory is as follows: {{1,2,3,4}, {1,2,3,4}, ...}. But when BLE is activated, the memory layout is: {{4,1,2,3}, {4,1,2,3}, ...}. I really don't know what causes the difference, and I have no way to tell whether the data is shifted or the samples just swapped places. I wonder if the SoftDevice blocks some of the samples, which would cause the problem.

The SAADC implementation is double buffered, like in "saadc_sample_from_two_pins - scan mode" here.

The BLE implementation is based on ble_app_hrs_freertos in SDK 12.1.0, which is also the SDK version I'm using.

Any help would be appreciated.

Parents
  • Hi,

    There has actually been some progress on this issue. The reason why this swap/shift in the buffer occurs is related to how samples are triggered and how events are handled.

    The EasyDMA chapter in the SAADC documentation shows the order of tasks and events for proper operation.

    The problem arises when the SAADC is configured in continuous mode, using PPI to trigger the SAMPLE task at a regular interval while the END event and START task are handled by CPU interrupt. When the SAMPLE task is triggered, each channel is sampled and written to RAM with DMA as fast as possible. When the buffer has been filled, the DMA transfer is delayed until the START task has been triggered.

    You are triggering the START task in the interrupt handler after receiving the END event. If you receive the END event while the IRQ is disabled, or while a higher-priority interrupt is executing, the triggering of the START task can be delayed until after the SAMPLE task has already been triggered through PPI. Triggering the SAMPLE task generates a DMA transfer request, but this request is not acknowledged until the START task has been triggered. The scan cycle of the SAADC will nevertheless expect the DMA transfer to finish, and will sample the next channel. When the START task is finally triggered, the pending DMA transfer is executed, but the transferred sample will correspond to the most recently sampled channel. Samples from the previous channels will have been lost.
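
    For illustration, here is a minimal sketch of the kind of setup described above, using the SDK 12 timer and PPI drivers to trigger the SAMPLE task every 1 ms. The timer instance, the 1 kHz rate, and the function names are my assumptions, not taken from the original post:

    ```c
    #include "nrf_drv_timer.h"
    #include "nrf_drv_ppi.h"
    #include "nrf_drv_saadc.h"

    static const nrf_drv_timer_t m_timer = NRF_DRV_TIMER_INSTANCE(1); /* assumed instance */
    static nrf_ppi_channel_t     m_ppi_sample;

    /* Unused: the SAMPLE task is triggered in hardware via PPI. */
    static void timer_handler(nrf_timer_event_t event_type, void * p_context)
    {
    }

    static void saadc_sampling_ppi_setup(void)
    {
        nrf_drv_timer_config_t timer_cfg = NRF_DRV_TIMER_DEFAULT_CONFIG;
        APP_ERROR_CHECK(nrf_drv_timer_init(&m_timer, &timer_cfg, timer_handler));

        /* Fire a compare event every 1 ms (1 kHz sample rate). */
        uint32_t ticks = nrf_drv_timer_ms_to_ticks(&m_timer, 1);
        nrf_drv_timer_extended_compare(&m_timer, NRF_TIMER_CC_CHANNEL0, ticks,
                                       NRF_TIMER_SHORT_COMPARE0_CLEAR_MASK, false);

        APP_ERROR_CHECK(nrf_drv_ppi_init());
        APP_ERROR_CHECK(nrf_drv_ppi_channel_alloc(&m_ppi_sample));

        /* Timer compare event -> SAADC SAMPLE task, no CPU involvement. */
        APP_ERROR_CHECK(nrf_drv_ppi_channel_assign(
            m_ppi_sample,
            nrf_drv_timer_compare_event_address_get(&m_timer, NRF_TIMER_CC_CHANNEL0),
            nrf_drv_saadc_sample_task_get()));

        APP_ERROR_CHECK(nrf_drv_ppi_channel_enable(m_ppi_sample));
        nrf_drv_timer_enable(&m_timer);
    }
    ```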

    There are two possible solutions to this problem:

    1. Use PPI to trigger the START task on the END event. This avoids the delayed triggering of the START task due to a queued interrupt generated by the END event, but in the case of a high sample frequency and long delays, it can cause your buffer to be overwritten before you are able to process it. With this solution, it is necessary to use double buffering and large enough buffers to avoid data loss (see the sketch after this list).
    2. Trigger sampling from a CPU interrupt. If the SAMPLE task is triggered from an interrupt with lower priority than the SAADC IRQ handler, the START task will always be triggered between an END event and a new SAMPLE task. This solution will make the sampling timing vary a bit, as higher-priority interrupts can delay the triggering of the SAMPLE task.
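
    A minimal sketch of solution #1, using the SDK 12 PPI driver together with the SAADC HAL (the channel variable and function names are mine):

    ```c
    #include "nrf_drv_ppi.h"
    #include "nrf_saadc.h"

    static nrf_ppi_channel_t m_ppi_end_start;

    static void saadc_end_to_start_ppi_setup(void)
    {
        APP_ERROR_CHECK(nrf_drv_ppi_channel_alloc(&m_ppi_end_start));

        /* END event -> START task in hardware: the SAADC switches to the
         * next (pre-loaded) buffer without waiting for the CPU interrupt. */
        APP_ERROR_CHECK(nrf_drv_ppi_channel_assign(
            m_ppi_end_start,
            nrf_saadc_event_address_get(NRF_SAADC_EVENT_END),
            nrf_saadc_task_address_get(NRF_SAADC_TASK_START)));

        APP_ERROR_CHECK(nrf_drv_ppi_channel_enable(m_ppi_end_start));
    }
    ```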

    This is a typical case of hard real-time requirements that cannot be guaranteed by a task/event-based system. Since many users are experiencing this issue, we will try to update the documentation to make this requirement more visible.

    Best regards,

    Jørgen

  • Hi, 

    As I'm experiencing the same problems, I would like to try solution #1 above.

    I can trigger the START task on the END event using a PPI channel; however, I don't see how this solves the problem, since switching to the secondary buffer (nrfx_saadc_buffer_convert()) runs within the SAADC interrupt handler.
    So the DMA will be 're-armed' (by the START task) in real time, but it will overwrite the original buffer rather than the secondary buffer if the interrupt is delayed...

    What did you mean when recommending "...large enough buffers"?

    Some other questions:

    1. My use case is sampling 4 channels (a 'scan set') at 1 kHz. There are other interrupts in the system, but they are extremely 'light'. How vulnerable is my app to the described problem?
    2. Will lowering the sample rate to e.g. 800 Hz alleviate/solve the problem? (I assume yes, but want to verify.)
    3. Will setting the SAADC interrupt priority to 3 (it's at the default, 7, at the moment) alleviate/solve the problem? I assume so, but there are still SoftDevice interrupts at levels 0/1 running.

    Following the '...large enough buffers' advice, I was thinking of allocating large buffers (e.g. to hold 100 'scan sets' each - buffers[2][100*4]) and triggering the START task from the END event (PPI). This would tolerate up to 99 'lost' interrupts without losing data. The thing is that I can't find a way to tell how many SAADC 'scans' were actually written to the buffer before the interrupt was triggered (ideally 1 'scan set', but up to 99 sets in this example). RESULT.AMOUNT seems to reflect the number of samples since the last START, and not how many were written since RESULT.PTR was set (nrfx_saadc_buffer_convert()).

    Thanks for any advice.

Children
  • The SAADC buffer pointer is double buffered in hardware, meaning that you can set the next buffer immediately after the STARTED event is received. If you look at the implementation of nrf_drv_saadc_buffer_convert, you can see that the START task is triggered right after the first buffer is set. When you call it a second time to set the second buffer, the call will block until the STARTED event has been received before setting the second buffer. When you connect the END event to the START task, the second buffer will be taken into use without any CPU activity.

    "Large enough buffers" will depend on your application. If you have a high sample rate on the SAADC and heavy BLE activity, you may need large buffers to make sure there is some CPU time available to set up a new buffer before the current one is filled. Lowering the sample rate and increasing the interrupt priority can improve the chances of the interrupt being handled in time, but I would not count on it without thorough testing.
