
PDM data during interrupt

Hi, 

I would like to use the PDM interface for my application on the nRF52840.

I configured the driver and started sampling with the SDK's API successfully.

Question 1:

As part of development, I am exploring different possibilities for handling the data buffer during the (p_evt->buffer_released) event.

I see that if I take too long inside the handler, the data is corrupted. My initial suspicion was that (p_evt->buffer_requested) was raised while the CPU was still handling (p_evt->buffer_released) (perhaps to keep the measurement continuous), and since I always supply the same buffer, it was overwritten. However, when debugging I saw that (p_evt->buffer_requested) did not start until (p_evt->buffer_released) was done, which contradicts this hypothesis.

What is causing the data to be corrupted if I spend too long inside the (p_evt->buffer_released) handler? Is the hardware still writing through the pointer I supplied earlier?

More importantly, how much time am I allowed to spend inside (p_evt->buffer_released) before the data becomes unreliable, as described above?

Question 2:

In some cases, the first 1-2 measurements at the beginning of the buffer are bad (a perfect sine wave is distorted in its first samples). Is this a known stabilization issue, or something else?

The two pictures below show the recorded results for the two questions, respectively.

Thanks,

Alon Barak

Qcore medical


  • Hello,

    The first one looks like you spent too long in the interrupt, so an interrupt was possibly missed, meaning there should actually be (at least) one sample buffer in between the ones you recorded.

    I have not seen this before. Are you sure that the points in your graph are correct, and not some values left in RAM from something else?

    I noticed that all your samples start with the value right below 500. Is this a coincidence? Or is the first value not part of the sample?

    Best Regards,

    Edvin

  • About the first one, can you tell me how long I am allowed to spend in the interrupt?

    I plan to do no more than memcpy the buffer to a permanent location, but I want to make sure the timing is OK.

    Regarding the second one, I will check what you say.

    From the nRF52840 documentation:

    "

    The PDM acquisition can be started by the START task, after the SAMPLE.PTR and SAMPLE.MAXCNT registers have been written. When starting the module, it will take some time for the filters to start outputting valid data. Transients from the PDM microphone itself may also occur. The first few samples (typically around 50) might hence contain invalid values or transients. It is therefore advised to discard the first few samples after a PDM start.

    "

    The statement above refers to 50 samples, not buffers, correct?

    After these ~50 samples everything should be fine.

    P.S. This means that if I use a 64 kB buffer I should discard the first result, correct?

    Thanks,

    Alon

  • I have never really looked too deeply into the PDM peripheral, and we don't have an example, but I would guess that you have until the buffer is filled up the next time. I assume that it is double buffered, so that you can use one buffer freely while the other one is being filled. The time this takes depends on your sample rate and buffer size. You can of course start a timer and check the tick count in these events (do this and nothing else) to see how frequently the interrupts occur. In that case, you should only copy the timestamp and set some flag in the interrupt, and let the main context check for the flag and print the timestamp, since the logging feature is relatively slow compared to many of the peripherals.

     

    AlonBarak said:
    I plan to do no more than memcpy the buffer to a permanent place, but want to make sure the timing is OK.

     I understand. That sounds like a good plan.

     

    AlonBarak said:

    The statement above refers to 50 samples, not buffers, correct?

    After these ~50 samples everything should be fine.

    P.S. This means that if I use a 64 kB buffer I should discard the first result, correct?

    I agree. That is 50 samples, not buffers. In that case you don't necessarily need to discard the entire first buffer, only the first ~50 samples in it.

    I just remembered that we have a user who shared a PDM + fatfs project (found here). Perhaps you can compare your PDM implementation with it.

    BR,

    Edvin

  • Hi, 

    Thanks for your reply.

    In my code I disregard the first 50 samples.

    However, I still see a problem with the first buffers: big offsets in the sine wave, which is not centered around 0. It becomes stable only after ~10k buffers (64 x int16_t each).

    In the attached picture, you can see the result as a function of how many buffers I allowed before recording.

    This is a problem for me since I am interested in single-use operation and in getting the data as fast as possible.

    Thanks,

    Alon

  • Hello,

    Is the offset supposed to be different in all the graphs, or is that part of the problem? And what is the first graph? Is it supposed to be a sine wave as well?

    Just to be clear: the numbers 50-30000 are numbers of samples, not buffers, right?

    And in the graph, the irregularities are only in the first 2-3 samples of that buffer. Is that correct?

    How do you trigger each of these recordings? Do you stop sampling between each sample set, or is it a continuous stream? E.g. in the 5000-samples graph: was it sampling continuously for 5000 samples, or did you sample 5000 samples, stop, and then start sampling and recording again?

    Is there any way for me to reproduce this on a DK? Do you have a project folder that you can zip and send that will run on a DK? Please test the project in an unmodified SDK to make sure you didn't change anything outside the project folder before sending it.

    BR,

    Edvin
