Zephyr I2S: Next buffers not supplied on time

Hello,

I am setting up a Zephyr project using custom hardware with the nRF52840 microcontroller. I am working from the Echo example (ncs/v2.5.0/zephyr/samples/drivers/i2s/echo/src/main.c); however, my application is TX-only. My audio code runs in its own thread. I am setting up control signals (start/stop audio, select audio to play) from separate threads and passing them in using a message queue.

At this point, I am able to pass in a Play Audio signal, and I can see that I2S clocks start operating and clock out data. Then, I get an error in my logs: "<err> i2s_nrfx: Next buffers not supplied on time".

I want my audio thread to supply a block of audio data to the I2S peripheral, then yield so other threads can run. I would like to have my audio thread wake up when the I2S peripheral signals that the next buffers are needed. I've found that the old nrfx SDK has a signal for this: NRFX_I2S_STATUS_NEXT_BUFFERS_NEEDED. However, I don't see any mention of this in the Zephyr I2S Echo example.

How do I detect the NRFX_I2S_STATUS_NEXT_BUFFERS_NEEDED in Zephyr with the new SDK?

Or, if this is not the right signal for Zephyr, how do I detect that I need to send the I2S peripheral the next buffers, and wake up my audio thread to execute that work?

  • Hi Chris

    The operation of the I2S driver in Zephyr is quite different from that of the nrfx driver. The nrfx driver is designed with a more bare-metal approach in mind, and is based on asynchronous function calls and events. The Zephyr I2S driver, on the other hand, is designed to be used with the Zephyr RTOS, and uses a synchronous/blocking approach.

    Ideally you should be able to simply call the i2s_write(..) function in a loop from your audio thread. Whenever the buffers are full, the call will simply be delayed until there is more room in the buffer, and at the same time the calling thread will yield so that other threads can run in the meantime.

    The driver also has a timeout parameter for the write and read calls, making the function return if no room frees up in the buffer before the timeout expires.

    For more details please read the i2s_write(..) documentation

    Best regards
    Torbjørn

  • Hi, thanks for the timely response.

    I am trying to keep my total number of threads to a minimum in order to avoid the overhead of having separate stacks for each thread. I'd like my audio thread to be able to do other work while waiting for the I2S buffer to free up space.

    If the Zephyr RTOS I2S driver is intended to use a blocking approach, then won't this prevent any other work from happening in the thread that is managing the I2S interface?

  • Hi Chris

    My reply was probably a bit imprecise. 

    Any API that provides a timeout parameter can be used in either a blocking or a more asynchronous manner. To make an API call non-blocking, simply set the timeout to 0, and make sure you check the return value to see whether the call timed out.

    For the I2S driver you define the timeout statically in the config, rather than providing it as an argument to every function call (as with many other APIs), but the basic principle is the same. In other words, you can share the thread with other functions if you just make sure to use a small enough timeout.

    As for stack usage: keep in mind that each thread's stack size is configured separately, so overall stack usage is not necessarily any larger with multiple threads. A small thread doing less work will typically need a smaller stack than one large thread performing multiple tasks.

    Best regards
    Torbjørn

  • Okay, thank you for the clarification. So as long as my audio thread is checking the state of the I2S buffer often enough that the buffer does not dry up, I should be fine to do other work in my audio thread. Is that correct?

    -------------

    I have updated my logic to attempt to write data to the I2S buffer periodically.
    My audio blocks are 200ms long.
    I am having my audio thread attempt to write data to the I2S buffer every 100ms, because I want to make sure that the audio buffer does not dry up.

    With this setup, I have found that my audio sounds good (no playback gaps), but I am regularly getting this error:

              audio_task: Failed to allocate TX block 1: -12

    According to the Zephyr error number table, the -12 error code translates into "ENOMEM: Not enough core." I am seeing this error about every other time that I call my function to add data to the I2S buffer.

    I think this all makes sense: I'm trying to add a block to the I2S buffer every 100ms, but the I2S peripheral is only removing a block every 200ms. So the I2S buffer fills up, and every other time I attempt to write a block to the buffer, I get an error because there isn't room for the new block.

    I think this is fine - my thread attempts to add data to the I2S buffer, and if there is no room, it waits another 100ms and tries again.

    Ideally, I would only attempt to write a block of data when there is space to add it. I could change my 'Add to buffer' frequency to 200ms as well, which should reduce the frequency of this error. But I think, unless the timing is PERFECT between the buffer consumer and producer, I will always get some kind of buffer error:

    • If I add data to the I2S buffer faster than the data is being clocked out by the I2S peripheral, then eventually I will attempt to add data to the buffer when there isn't room in the buffer, generating the "Failed to allocate TX block" error. (Acceptable, but the error is annoying)
    • If I add data to the I2S buffer slower than the data is being clocked out, then eventually the buffer will dry up and I will have a gap in the audio playback. (Unacceptable)

    I think the only way to get the timing perfectly in sync between the consumer and producer would be to use some kind of feedback from the I2S peripheral (like NEXT_BUFFERS_NEEDED) to signal that there is room for the next block. Any blind timer writing data to the I2S buffer will eventually have one of the two issues I listed, I believe.

    Can I safely ignore this "Failed to allocate TX block" error? Or is there a better approach I should use that would not generate this error?

  • Hi Chris

    It makes sense that you get the ENOMEM error if you try to write more often than the data is being consumed, yes. You should make sure that your code checks for this error and handles it accordingly.

    If you cannot afford a dedicated thread to handle the I2S driver, then I don't think there is any way to get perfect synchronization between the consumer and producer, since there is no way to define a callback in the driver.

    If you were to use the nrfx_i2s driver directly, then you could use the callback from the driver to synchronize everything, but then you also have to work with an API that is quite a bit more low level.
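    If you do drop down to nrfx, the event handler has roughly the shape below. Treat this as an outline only: the exact signatures vary between nrfx versions (newer releases add an instance parameter), and my_next_tx_buffer() is a hypothetical application helper, so check the nrfx_i2s.h shipped with your SDK.

```c
/* Rough sketch of a legacy nrfx_i2s data handler for a TX-only stream. */
static void i2s_data_handler(nrfx_i2s_buffers_t const *p_released,
                             uint32_t status)
{
    if (status & NRFX_I2S_STATUS_NEXT_BUFFERS_NEEDED) {
        /* Either queue the next buffers right here in the handler ... */
        nrfx_i2s_buffers_t next = {
            .p_tx_buffer = my_next_tx_buffer(),  /* hypothetical helper */
            .p_rx_buffer = NULL,                 /* TX-only             */
        };
        nrfx_i2s_next_buffers_set(&next);

        /* ... or instead give a semaphore here to wake the audio thread,
         * which then prepares and submits the buffers at thread level. */
    }
}
```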

    Best regards
    Torbjørn
