nRF5340 Audio with sensor data collection

Hello Team,

We are planning to use the nRF5340 for one of our projects, which requires low-energy audio and sensor data transfer. Could you please evaluate whether our use case is possible with an existing or upcoming revision of the nRF Connect SDK?

I have already tested the nRF5340 Audio DK. Right now I am trying to connect an external hardware audio codec to reproduce stereo audio; we need a common headphone solution. (Microphone input is also required, but I understand that it takes some time and cannot be implemented with the existing nRF Connect SDK release, NCS v2.0.2.)

We are designing a headphone which should be able to communicate with multiple peripheral sensors apart from the smartphone. (These are standalone BLE sensors which transfer sensor data at 64 Hz. I am aware that we have to wait for the Android 13 release or higher for LE Audio streaming.) So apart from audio streaming, the headphone should also be able to collect sensor data from four peripheral BLE sensors. There should be separate services and characteristics available to transfer the collected sensor data to the mobile phone alongside the audio streaming.

So we need a solution where our headphone acts as both central (collecting sensor data from the four peripheral BLE sensors) and peripheral (connected to the mobile phone for audio streaming). The sensor data should also be transferred to the mobile device alongside the audio streaming. Is this supported at the moment? Can this be accomplished?

Would it be possible for an Android device to act as the central device, where it collects data from the four BLE sensors and streams audio to the headset at the same time? Then there would be no need for the extra complication (central and peripheral roles) at the headset end. Would this be possible?

I would highly appreciate your suggestions and solutions.

Thank you for your time.

Best regards,

Adarsh

  • Hello Adarsh,

    Right now I am trying to connect an external hardware audio codec to reproduce stereo audio.

    Please see the attached image for an explanation of how you can use an external codec with the Audio DK.

    (Microphone input is also required, but I understand that it takes some time and cannot be implemented with the existing nRF Connect SDK release, NCS v2.0.2.)

    You are correct that bi-directional streams are not supported by the reference application at this moment. If you have questions about the roadmap for the reference application I must ask that you reach out to your Regional Sales Director (RSD) with these questions, since we do not discuss future releases here on DevZone.
    The RSD for your region is Thomas Page, and you may reach out to him on [email protected].

    The sensor data should also be transferred to the mobile device alongside the audio streaming. Is this supported at the moment? Can this be accomplished?

    Yes, this can be accomplished. You can definitely use regular BLE communication alongside the audio streams - the ACL (Asynchronous Connection-oriented Link) connections that LE Audio uses to control the audio streams are 'regular' BLE connections running alongside the isochronous streams.
    The radio time available for each of the other connections will however be limited by your stream configuration (i.e. how much 'free time' the radio has for maintaining the connections with the other peripherals), so the feasibility of this will depend on the requirements for these connections - most prominently: how much data will the sensors transmit, how often, and what is the latency requirement for this communication?

    Would it be possible for an Android device to act as the central device, where it collects data from the four BLE sensors and streams audio to the headset at the same time? Then there would be no need for the extra complication (central and peripheral roles) at the headset end. Would this be possible?

    Yes, this could be possible, and it would likely reduce the complexity of your design by a lot. You would however need an app running on the smartphone side to achieve this, since the native Bluetooth behavior of the smartphone is only to connect to audio, HID, or known sensor devices - i.e. it will not receive custom sensor data without an application that actively looks for and accepts data from these custom characteristics.

    Best regards,
    Karl

  • Hi Karl,

    As always, thank you for your well-structured and precise explanation.

    Yes, this could be possible, and it would likely reduce the complexity of your design by a lot.

    Most probably we will opt to design a mobile application to perform this task: connecting to all the BLE sensors at once and receiving data from all of them together, including the headset for audio streaming.

    how much data will the sensors transmit, how often, and what is the latency requirement for this communication?

    Here a few compromises can be made. At the moment each sensor transmits 64 bits of data at a 64 Hz rate. As per the current BLE audio design (in the reference application, nrf5340_audio), we are planning to transmit 48 kHz stereo audio.
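
    For reference, my back-of-envelope math on the sensor side: 64 bits × 64 Hz = 4096 bit/s (about 0.5 kB/s) per sensor, so roughly 16 kbit/s for all four sensors combined - small compared to the audio stream itself.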

    1. However, what happens when the P0.21 pin is pulled low? Does the CS47L63 use different pins for I2S configuration? Or are the pins shared with the P10 pin header?

    There are a few issues I am facing when trying to connect the stereo codec to the application. We are using the DA7212 from Renesas. The datasheet can be found here. I have created a driver for the codec and replaced all the CS47L63 calls in hw_codec.c. The codec is successfully initialized and configured over the I2C bus. However, I am finding it difficult to input digital audio to the codec through the I2S bus.

    Hence I started to investigate the audio_i2s.c file. I noticed that the application uses BCLK: 1.536 MHz, MCLK: 6.144 MHz, and SYNC/WS: 48 kHz.

    2. For my codec to work, I need to feed a 12.288 MHz MCLK signal from the nRF5340 to the DA7212 (DA7212 in slave mode). I need PLL Bypass Mode, as explained on page 51 of the datasheet. The datasheet states: "MCLK must be exactly 12.288 MHz or 11.2896 MHz or a multiple thereof and synchronous with BCLK and WCLK". I also need SYNC to be 48 kHz.

    Can you help me achieve this? Is this configuration possible? I browsed through this documentation, but I was not able to find a solution.
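
    To make my question concrete: since the nRF5340 I2S RATIO register defines the MCK/LRCK ratio, getting MCLK = 12.288 MHz with SYNC at 48 kHz should mean a ratio of 256 (12.288 MHz / 256 = 48 kHz). Below is my guess for the change in audio_i2s.c, but I have not verified the mck_setup value - that is exactly the part I need help with:

    /* My unverified guess: double MCK relative to the working 6.144 MHz
     * configuration and raise the ratio so that LRCK stays at 48 kHz:
     *   MCK  = 12.288 MHz -> mck_setup doubled from 0x66666000 to 0xCCCCC000 (?)
     *   LRCK = 12.288 MHz / 256 = 48 kHz -> ratio = NRF_I2S_RATIO_256X
     */
    cfg.mck_setup = 0xCCCCC000;	/* please verify - I am not sure this MCKFREQ value is right */
    cfg.ratio = NRF_I2S_RATIO_256X;	/* 256x keeps LRCK at 48 kHz with the doubled MCK */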

    Edit: 

    I am able to hear quality audio from the DA7212 EVM board (through its headphone jack) with this configuration:

    MCLK: 6.144 MHz, BCLK: 1.536 MHz, and SYNC: 48 kHz. However, audio is audible only on the left speaker of the headphone (same as with the nRF5340 Audio DK using the CS47L63). So this kept me wondering whether any configuration changes are required in the firmware to output stereo.

    I hooked up an oscilloscope to DIN and SYNC and observed that only the left audio is received.

    How can I send stereo audio on both CIS channels so that I can reproduce stereo audio on both connected headsets?

    Are there any configurations to be altered in the reference application so that it outputs stereo audio?

    Here is the audio_i2s.c file. I have not changed anything apart from commenting out certain if-else conditions. At the moment the codec works with the 6.144 MHz MCLK signal. If I were to generate a 12.288 MHz clock and a 48 kHz SYNC, what should the configuration be? Can you help me here as well?

    #include "audio_i2s.h"
    
    #include <zephyr/kernel.h>
    #include <zephyr/device.h>
    #include <zephyr/drivers/pinctrl.h>
    #include <nrfx_i2s.h>
    #include <nrfx_clock.h>
    
    #include "audio_sync_timer.h"
    
    #include <zephyr/logging/log.h>
    LOG_MODULE_REGISTER(audio_i2s, 2);
    
    #define I2S_NL DT_NODELABEL(i2s0)
    
    #define HFCLKAUDIO_12_288_MHZ 0x9BAE
    //#define NRF_I2S_HAS_CLKCONFIG 1
    
    enum audio_i2s_state {
    	AUDIO_I2S_STATE_UNINIT,
    	AUDIO_I2S_STATE_IDLE,
    	AUDIO_I2S_STATE_STARTED,
    };
    
    static enum audio_i2s_state state = AUDIO_I2S_STATE_UNINIT;
    
    PINCTRL_DT_DEFINE(I2S_NL);
    
    /*********
     *  CONFIG_AUDIO_BIT_DEPTH_XX lines deleted here
     * ****************/
    
    static nrfx_i2s_config_t cfg = {
    	/* Pins are configured by pinctrl. */
    	.skip_gpio_cfg = true,
    	.skip_psel_cfg = true,
    	.irq_priority = DT_IRQ(I2S_NL, priority),
    	.mode = NRF_I2S_MODE_MASTER,
    	.format = NRF_I2S_FORMAT_I2S,
    	.alignment = NRF_I2S_ALIGN_LEFT,
    //#if (CONFIG_AUDIO_BIT_DEPTH_16)
    	.sample_width = NRF_I2S_SWIDTH_16BIT,
    	.mck_setup = 0x66666000,
    	.ratio = NRF_I2S_RATIO_128X,
    //#elif (CONFIG_AUDIO_BIT_DEPTH_24)
    //	.sample_width = NRF_I2S_SWIDTH_24BIT,
    	/* Clock mismatch warning: See CONFIG_AUDIO_24_BIT in KConfig */
    //	.mck_setup = 0x2BE2B000,
    //	.ratio = NRF_I2S_RATIO_48X,
    //#elif (CONFIG_AUDIO_BIT_DEPTH_32)
    //	.sample_width = NRF_I2S_SWIDTH_32BIT,
    //	.mck_setup = 0x66666000,
    //	.ratio = NRF_I2S_RATIO_128X,
    //#else
    //#error Invalid bit depth selected
    //#endif /* (CONFIG_AUDIO_BIT_DEPTH_16) */
    	.channels = NRF_I2S_CHANNELS_STEREO,
    	.clksrc = NRF_I2S_CLKSRC_ACLK,
    	.enable_bypass = false,
    };
    
    static i2s_blk_comp_callback_t i2s_blk_comp_callback;
    
    static void i2s_comp_handler(nrfx_i2s_buffers_t const *released_bufs, uint32_t status)
    {
    	if ((status == NRFX_I2S_STATUS_NEXT_BUFFERS_NEEDED) && released_bufs &&
    	    i2s_blk_comp_callback && (released_bufs->p_rx_buffer || released_bufs->p_tx_buffer)) {
    		i2s_blk_comp_callback(audio_sync_timer_i2s_frame_start_ts_get(),
    				      released_bufs->p_rx_buffer, released_bufs->p_tx_buffer);
    	}
    }
    
    void audio_i2s_set_next_buf(const uint8_t *tx_buf, uint32_t *rx_buf)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_STARTED);
    	__ASSERT_NO_MSG(rx_buf != NULL);
    #if (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET))
    	__ASSERT_NO_MSG(tx_buf != NULL);
    #endif /* (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET)) */
    
    	const nrfx_i2s_buffers_t i2s_buf = { .p_rx_buffer = rx_buf,
    					     .p_tx_buffer = (uint32_t *)tx_buf };
    
    	nrfx_err_t ret;
    
    	ret = nrfx_i2s_next_buffers_set(&i2s_buf);
    	__ASSERT_NO_MSG(ret == NRFX_SUCCESS);
    }
    
    void audio_i2s_start(const uint8_t *tx_buf, uint32_t *rx_buf)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_IDLE);
    	__ASSERT_NO_MSG(rx_buf != NULL);
    #if (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET))
    	__ASSERT_NO_MSG(tx_buf != NULL);
    #endif /* (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET)) */
    
    	const nrfx_i2s_buffers_t i2s_buf = { .p_rx_buffer = rx_buf,
    					     .p_tx_buffer = (uint32_t *)tx_buf };
    
    	nrfx_err_t ret;
    
    	/* Buffer size in 32-bit words */
    	ret = nrfx_i2s_start(&i2s_buf, I2S_SAMPLES_NUM, 0);
    	__ASSERT_NO_MSG(ret == NRFX_SUCCESS);
    
    	state = AUDIO_I2S_STATE_STARTED;
    }
    
    void audio_i2s_stop(void)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_STARTED);
    
    	nrfx_i2s_stop();
    
    	state = AUDIO_I2S_STATE_IDLE;
    }
    
    void audio_i2s_blk_comp_cb_register(i2s_blk_comp_callback_t blk_comp_callback)
    {
    	i2s_blk_comp_callback = blk_comp_callback;
    }
    
    void audio_i2s_init(void)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_UNINIT);
    
    	nrfx_err_t ret;
    
    	nrfx_clock_hfclkaudio_config_set(HFCLKAUDIO_12_288_MHZ);
    
    	NRF_CLOCK->TASKS_HFCLKAUDIOSTART = 1;
    
    	/* Wait for ACLK to start */
    	while (!nrf_clock_event_check(NRF_CLOCK, NRF_CLOCK_EVENT_HFCLKAUDIOSTARTED)) {
    		k_sleep(K_MSEC(1));
    	}
    
    	ret = pinctrl_apply_state(PINCTRL_DT_DEV_CONFIG_GET(I2S_NL),
    				  PINCTRL_STATE_DEFAULT);
    	__ASSERT_NO_MSG(ret == 0);
    	//IRQ_DIRECT_CONNECT(DT_IRQN(I2S_NL), DT_IRQ(I2S_NL, priority), nrfx_isr, 0);
    	IRQ_CONNECT(DT_IRQN(I2S_NL), DT_IRQ(I2S_NL, priority), nrfx_isr, nrfx_i2s_irq_handler, 0);
    	irq_enable(DT_IRQN(I2S_NL));
    
    
    	ret = nrfx_i2s_init(&cfg, i2s_comp_handler);
    	__ASSERT_NO_MSG(ret == NRFX_SUCCESS);
    
    	state = AUDIO_I2S_STATE_IDLE;
    }
    
    

    Thanks for your time and effort.

    Best regards,

    Adarsh

  • Hello again,

    Thank you for your patience with this.

    Adarsh_1 said:
    As always, thank you for your well-structured and precise explanation.

    No problem, Adarsh - I am happy to help! :) 

    Adarsh_1 said:
    I am able to hear quality audio from the da7212 EVM board (through its headphone jack) with this configuration:

    I am glad to read that you were able to get the nRF5340 Audio DK up and running with your external codec - great!

    Adarsh_1 said:
    I hooked up the oscilloscope to DIN and SYNC and observed that only left audio is received.

    This is as expected, since the nRF5340 Audio reference application is made specifically for the nRF5340 Audio DK, and the default headset configuration only listens to one of the two transmitted stereo channels (either LEFT or RIGHT, depending on the build configuration of the specific headset device).

    The default CIS gateway device will set up two channels, one for left and one for right, and each headset DK then chooses which one it would like to listen to. In your case you would like to receive both channels, so you could configure your headset device to accept both streams and merge them before outputting - but this would require some work to get the timing just right.
    Instead, you could modify the gateway to send the stereo data in a single stream, and modify the headset device to receive/decode a stereo stream.

    The following is an explanation for working with the current NCS main branch (since it now uses BAP, which grants interoperability with other LE Audio devices):

    1. In audio_system.c you will need to change the decode type from SW_CODEC_MONO to SW_CODEC_STEREO, which will make the headset device expect to receive a stereo stream. (A rough sketch follows after this list.)

    2. You will also have to modify the gateway behavior, so that it does not split the data into two separate streams in the le_audio_send function.

    3. Lastly, you will also have to change the capabilities published in PACS to let other devices know that the headset supports receiving stereo, by changing CHANNEL_COUNT_1 to CHANNEL_COUNT_2.
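
    To illustrate steps 1 and 3, here is a rough sketch. Treat the exact macro and field names as approximate - they may differ between NCS revisions:

    /* Step 1, in audio_system.c: make the headset decode a stereo stream.
     * (Sketch; the field names follow the reference app's sw_codec configuration.)
     */
    sw_codec_cfg.decoder.channel_mode = SW_CODEC_STEREO; /* was SW_CODEC_MONO */

    /* Step 3, in the PACS capabilities: advertise support for two channels.
     * If the supported-channel-count field is the bitfield from the BAP
     * specification, two channels is BIT(1) rather than BIT(0).
     */
    #define CHANNEL_COUNT_2 BIT(1) /* replaces CHANNEL_COUNT_1 (BIT(0)) */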

    Give this a try, and let me know if you should encounter any issues or questions! :) 

    Best regards,
    Karl

  • Hello Karl,

    Thank you for your suggestions.

    Instead, you could modify the gateway to send the stereo data in a single stream, and modify the headset device to receive/decode a stereo stream.

    Yes, this is exactly what I need.

    However, here I am facing difficulties. How can I modify the le_audio_send function to send stereo data? Should I create a buffer here to load the audio samples? Or can I use the iso_stream_send function to send stereo audio without splitting the data?

    Can you show an example / code snippets of how this can be achieved?

    Best regards,

    Adarsh

  • Hi Karl,

    While working through the steps you provided, I found out that the SDK no longer supports the SBC codec.

    I do not intend to use LC3 at the moment, and I need to test with SBC alone due to internal reasons.

    So is it possible to work with stereo audio using just the SBC codec? Can I make the changes on the NCS v2.0.0 tag to let the gateway send stereo in a single CIS using the SBC codec?

    Best regards,

    Adarsh 

  • Hello again, Adarsh

    Adarsh_1 said:

    However, here I am facing difficulties. How can I modify the le_audio_send function to send stereo data? Should I create a buffer here to load the audio samples? Or can I use the iso_stream_send function to send stereo audio without splitting the data?

    Can you show an example / code snippets of how this can be achieved?

    Are you still facing difficulties with this? You should modify the le_audio_send function so that it sends stereo data (by changing its current behavior, which is to split the data into two streams).
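
    To make the idea concrete, here is a rough sketch. The function and stream names are illustrative, and the exact signatures differ between NCS versions:

    /* Illustrative sketch - not a drop-in patch.
     * The reference gateway (roughly) splits each encoded frame and sends
     * one half per CIS:
     *
     *     iso_stream_send(enc_data, enc_size / 2, LEFT_STREAM);
     *     iso_stream_send(enc_data + enc_size / 2, enc_size / 2, RIGHT_STREAM);
     *
     * To send a single stereo stream instead, encode as stereo on the gateway
     * side and transmit the whole frame on one stream:
     */
    static int le_audio_send_stereo(uint8_t const *const enc_data, size_t enc_size)
    {
    	/* STREAM_0 is a placeholder for whichever CIS you keep. */
    	return iso_stream_send(enc_data, enc_size, STREAM_0);
    }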

    Adarsh_1 said:
    While working through the steps you provided, I found out that the SDK no longer supports the SBC codec.

    This is only the case for the main branch - it does not yet apply to the v2.0.2 nRF Connect SDK release that you are working with.

    Adarsh_1 said:
    I do not intend to use LC3 at the moment, and I need to test with SBC alone due to internal reasons.

    I understand. If you stick with v2.0.2 you will still be able to use SBC, no problem.
    If you upgrade to future releases, you will need to add the non-LC3 codec yourself if you are still not working with LC3.
    Please note that we will soon be making changes to the LC3 agreement, so that the evaluation agreement is no longer necessary in order to evaluate LC3 (I mention this in case it is relevant to your reasoning for keeping the SBC codec).

    Adarsh_1 said:
    So is it possible to work with stereo audio using just the SBC codec? Can I make the changes on the NCS v2.0.0 tag to let the gateway send stereo in a single CIS using the SBC codec?

    The CIS does not inspect the contents of the stream - it does not know whether the stream is encoded using LC3, SBC, or any other codec, as long as it fits into the isochronous channel; the size is the main parameter.
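
    In practice, "the size is the main parameter" means that the ISO SDU size configured in the QoS must be large enough to hold one encoded SBC frame. A minimal Zephyr-style sketch, with purely illustrative numbers:

    #include <zephyr/bluetooth/gap.h>
    #include <zephyr/bluetooth/iso.h>

    /* The controller does not care which codec produced the payload - it only
     * needs the SDU to fit the encoded frame. Replace 120 with your actual
     * SBC frame length.
     */
    static struct bt_iso_chan_io_qos tx_qos = {
    	.sdu = 120,              /* bytes per SDU - must fit one encoded frame */
    	.phy = BT_GAP_LE_PHY_2M, /* 2M PHY gives more air-time headroom */
    	.rtn = 2,                /* number of retransmission attempts */
    };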

    Best regards,
    Karl

  • Hi Karl,

    Are you still facing difficulties with this? You should modify the le_audio_send function so that it sends stereo data (by changing its current behavior, which is to split the data into two streams).

    Yes, I have not yet made it work. I should have mentioned the use of SBC before; anyway, now you know that I am working with NCS v2.0.0 and testing with SBC at the moment. I can see that the master branch has added and changed a few files.

    I was not able to find an equivalent function for le_audio_send in NCS v2.0.0. How do I proceed from here? I am not able to get my head around this properly.

    Where are the equivalent code/files located in NCS v2.0.0? I am sorry, I was not able to find them.

    Can you provide a bit more information here?

    Thank you for your time.

    Best regards,

    Adarsh

  • Hi Karl,

    1. In audio_system.c you will need to change the decode type from SW_CODEC_MONO to SW_CODEC_STEREO, which will make the headset device expect to receive a stereo stream.

    Just like the steps you mentioned for the main branch, can I have the steps for v2.0.0 / v2.0.2 to modify the gateway code to send stereo audio so that it does not split the data into two different streams?

    This is where I am finding it difficult. Where exactly are the streams split into left and right in the gateway code? For the past week I have been looking into the "bluetooth" folder inside "src", but I cannot figure it out.

    Can you please explain how/where to modify the code in NCS v2.0.0/v2.0.2 so that the gateway sends a stereo stream rather than two separate streams for left and right?

    This is not helping, since I am using NCS V2.0.0. 

    Thank you for your time. Any suggestions will be highly appreciated.

    Best regards,

    Adarsh  

  • Hi Karl,

    Patiently waiting for your reply.

    Best regards,

    Adarsh

  • Hello Adarsh,

    Thank you for your extreme patience with this. I have been out of office for some time, but now I am back.

    Do you still require technical support with this issue?

    Best regards,
    Karl
