I2S help on nRF5340 Audio DK

Hi,

For my university master's group project I need to connect my nRF5340 Audio DK to another board (an Ambiq Apollo 510B). I need to send audio from the mic to the Apollo via I2S on the GPIO pins. The Apollo will then send back processed audio via I2S, which will be output through the Nordic board's headphone jack.

I have a basic understanding of nRF Connect for VS Code, and I have the nRF5340 Audio application working, streaming bidirectional audio over Bluetooth.

My question is: what is the best way to go about this? Is there sample code I can modify? Can I use the nRF5340 Audio application with Bluetooth disabled? Should I start from scratch? Is there a course on I2S I can follow? The project ends in two weeks; is it feasible to get this working in that timeframe?

Many thanks,

Lawrence

  • Hi,

    For your setup there is one important limitation: the nRF5340 Audio DK has only one I2S peripheral (i2s0). This peripheral is normally used to communicate with the on-board codec (CS47L63), which provides both the microphone input and the headphone output. If you instead route I2S to the external header (P10) to talk to the Apollo, the codec path is bypassed, so you cannot run a second, simultaneous I2S link back to the headphone jack.

    In other words, the "mic to Apollo over I2S, and back over I2S to the Audio DK headphone" architecture is not directly supported as two parallel I2S connections. The practical options are: either use I2S on P10 to/from the Apollo and do the audio output on the Apollo side, or keep the standard Audio DK codec path for mic and headphone and exchange PCM with the Apollo over another interface (such as SPI). The latter is usually the simpler way to keep the headphone jack working.

    As far as sample code is concerned, the nRF5340 Audio application already contains a complete audio pipeline (mic, codec, buffering), and you can reuse it and disable the Bluetooth part. For your project, the most practical approach is probably to keep the normal Audio DK mic and headphone path (nRF5340 to CS47L63 over I2S) and exchange PCM audio with the Apollo over another interface such as SPI. This avoids rerouting I2S and is typically simpler and more robust, though it requires some custom code, as there is no ready-made PCM-over-SPI example.

    Best Regards,
    Syed Maysum

  • Thanks! We have decided to go with this approach: stream mic audio from the nRF5340 Audio DK to the Apollo via SPI. The Apollo will then send back processed audio via SPI, and this processed audio will be output to the headphone jack on the nRF5340.

    Is there any example code for the nRF5340 that can do this or something similar?

  • Hi,

    A good starting point could be nrf5340_audio/unicast_server (headset). That application already implements the full local audio path on the nRF5340 Audio DK: microphone input, buffering, and playback through the on-board CS47L63 codec to the headphone jack.

    There is no Nordic out-of-the-box example where audio is sent over SPI to another chip and back; however, you could use Zephyr's SPI API for it.

    Best Regards,
    Syed Maysum

  • Thanks, 

    For SPI, I have been able to set up my Audio DK as a master (with the help of this example: https://github.com/too1/ncs-spi-master-slave-example), and it sends a 2-byte transaction every second.

    I have had a look at the unicast_server application, and I'm struggling to find the code I need due to its complexity. I have also had a look at the documentation. Could you point me to the specific part of the code that takes the mic input, and the part that sends it to the headphone jack?

    Also, would you recommend that I build my application using my SPI example as a base, adding the mic input and headphone output to that code? Or would you recommend modifying the unicast_server application to add SPI?

    Finally, do you think it's feasible for me to get this working within the next 10 days? That is when my university project ends.

    Thanks

    Lawrence

  • Hi,

    We recommend starting from your working SPI example and adding the audio parts into it, rather than starting from unicast_server. The unicast_server app is complex because it includes a full Bluetooth LE Audio stack which you don't need.

    Mic input: the microphone data is captured in audio_datapath.c, in the function audio_datapath_i2s_blk_complete(). This is called automatically every 1 ms when a new block of raw PCM audio from the mic is ready. It puts that audio into a queue, which is then picked up by encoder_thread() in audio_system.c. That is the exact place where you replace the BLE send with your SPI send to the Apollo.

    Headphone output: call hw_codec_default_conf_enable() from hw_codec.c once at startup to initialize the on-board CS47L63 codec. Then, when you receive processed PCM back from the Apollo over SPI, feed it into audio_datapath_stream_out() in audio_datapath.c; it will automatically be played out through the headphone jack.

    You will also need to bring in audio_i2s.c and cs47l63_comm.c, as these are required dependencies of the files above.

    And yes, it should be feasible if you keep the scope simple. You already have SPI working, which is a great start, so focus next on getting the codec initialized and audio playing through the headphone jack before wiring in the SPI audio transfer.

    Best Regards,
    Syed Maysum
