nRF5340 audio with sensor data collection

Hello Team,

We are planning to use the nRF5340 for one of our projects, which requires low-energy audio and sensor data transfer. Could you please evaluate whether our use case is possible with an existing or upcoming revision of the nRF Connect SDK?

I have already tested the nRF5340 Audio DK. Right now I am trying to connect an external hardware audio codec to reproduce stereo audio. We need a common headphone solution. (Microphone input is also required, but I understand that it takes some time and cannot be implemented with the existing nRF Connect SDK release, NCS v2.0.2.)

We are designing a headphone which should be able to communicate with multiple peripheral sensors in addition to the smartphone. (These are standalone BLE sensors which transfer sensor data at 64 Hz. I am aware that we have to wait for Android 13 or higher for LE Audio streaming.) So apart from audio streaming, the headphone should also be able to collect sensor data from four peripheral BLE sensors, and there should be a separate service and characteristics available to transfer the collected sensor data to the mobile phone alongside the audio stream.

So we need a solution where our headphone acts as both a central (collecting sensor data from the four peripheral BLE sensors) and a peripheral (connected to the mobile phone for audio streaming). The sensor data should also be transferred to the mobile device along with the audio stream. Is this supported at the moment? Can this be accomplished?

Would it be possible for an Android device to act as the central device, where it collects data from the four BLE sensors and streams audio to the headset at the same time? Then there would be no need for the extra complication (central plus peripheral role) at the headset end. Would this be possible?

I would highly appreciate your suggestions and solutions.

Thank you for your time.

Best regards,

Adarsh

  • Hello Adarsh,

    Right now I am trying to connect an external hardware audio codec to reproduce stereo audio.

    Please see the attached image for an explanation of how you can use an external codec with the Audio DK.

    (Microphone input is also required, but I understand that it takes some time and cannot be implemented with the existing nRF Connect SDK release, NCS v2.0.2.)

    You are correct that bi-directional streams are not supported by the reference application at the moment. If you have questions about the roadmap for the reference application, I must ask that you reach out to your Regional Sales Director (RSD), since we do not discuss future releases here on DevZone.
    The RSD for your region is Thomas Page, and you may reach out to him on [email protected].

    The sensor data should also be transferred to the mobile device along with the audio stream. Is this supported at the moment? Can this be accomplished?

    Yes, this can be accomplished. You can definitely use regular BLE communication alongside the audio streams - LE Audio already uses 'regular' BLE communication over the ACL (Asynchronous Connection-oriented) link to control the audio streams, alongside the streams themselves.
    The radio time available for each of the other connections will, however, be limited by your stream configuration (i.e. how much 'free time' the radio has left for maintaining the connections with the other peripherals), so the feasibility of this depends on the requirements for those connections - most prominently: how much data will the sensors be transmitting, how often, and what is the latency requirement for this communication?
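
    As a rough sketch of what the relay side on the headset could look like, a custom GATT service with a single notify characteristic is usually all that is needed. Note that the UUIDs and names below are placeholders I made up for illustration - they are not part of the reference application:

    #include <zephyr/bluetooth/bluetooth.h>
    #include <zephyr/bluetooth/gatt.h>
    #include <zephyr/bluetooth/uuid.h>
    
    /* Placeholder 128-bit UUIDs - generate your own for a real product. */
    #define SENSOR_RELAY_SVC_UUID \
    	BT_UUID_128_ENCODE(0x12345678, 0x1234, 0x5678, 0x1234, 0x56789abcdef0)
    #define SENSOR_DATA_CHRC_UUID \
    	BT_UUID_128_ENCODE(0x12345678, 0x1234, 0x5678, 0x1234, 0x56789abcdef1)
    
    static struct bt_uuid_128 relay_svc_uuid = BT_UUID_INIT_128(SENSOR_RELAY_SVC_UUID);
    static struct bt_uuid_128 sensor_data_uuid = BT_UUID_INIT_128(SENSOR_DATA_CHRC_UUID);
    
    static void ccc_changed(const struct bt_gatt_attr *attr, uint16_t value)
    {
    	/* Called when the phone subscribes to or unsubscribes from notifications */
    	ARG_UNUSED(attr);
    	ARG_UNUSED(value);
    }
    
    BT_GATT_SERVICE_DEFINE(sensor_relay_svc,
    	BT_GATT_PRIMARY_SERVICE(&relay_svc_uuid),
    	BT_GATT_CHARACTERISTIC(&sensor_data_uuid.uuid, BT_GATT_CHRC_NOTIFY,
    			       BT_GATT_PERM_NONE, NULL, NULL, NULL),
    	BT_GATT_CCC(ccc_changed, BT_GATT_PERM_READ | BT_GATT_PERM_WRITE),
    );
    
    /* Forward one sensor sample to the phone; a NULL conn notifies all subscribers. */
    int sensor_relay_send(struct bt_conn *phone_conn, const uint8_t *sample, uint16_t len)
    {
    	return bt_gatt_notify(phone_conn, &sensor_relay_svc.attrs[1], sample, len);
    }

    The central role would then maintain the four sensor connections and call sensor_relay_send() whenever new data arrives, while the peripheral role exposes this service to the phone.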

    Would it be possible for an Android device to act as the central device, where it collects data from the four BLE sensors and streams audio to the headset at the same time? Then there would be no need for the extra complication (central plus peripheral role) at the headset end. Would this be possible?

    Yes, this could be possible, and would likely reduce the complexity of your design by a lot. You would, however, need an app running on the smartphone side in order to achieve this, since the native Bluetooth behavior of the smartphone is only to connect to audio, HID, or known sensor devices - i.e. it will not be able to receive custom sensor data without an application that actively looks for and accepts data from these custom characteristics.

    Best regards,
    Karl

  • Hi Karl,

    As always, thank you for your well-structured and precise explanation.

    Yes, this could be possible, and would likely reduce the complexity of your design by a lot.

    Most probably we will opt to design a mobile application to perform this task - that is, to connect to all the BLE sensors at once and receive data from all of them, while also being connected to the headset for audio streaming.

    how much data will the sensors be transmitting, how often, and what is the latency requirement for this communication?

    Here a few compromises can be made. At the moment the sensors transmit 64 bits of data at a 64 Hz rate. And as per the current BLE audio design (in the reference application, nrf5340_audio), we are planning to transmit 48 kHz stereo audio.
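
    For a rough sense of scale: each sensor produces 64 bits × 64 Hz = 4096 bit/s = 512 B/s, so the four sensors together generate only about 2 kB/s, which I hope is small next to the audio stream.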

    1. However, what happens when the P0.21 pin is pulled low? Does the CS47L63 use different pins for the I2S configuration? Or are the pins shared with the P10 pin header?

    There are a few issues I am facing when trying to connect the stereo codec to the application. We are using the DA7212 from Renesas; the datasheet can be found here. I have created a driver for the codec and replaced all the CS47L63 calls in hw_codec.c. The codec is successfully initialised and configured over the I2C bus. However, I am facing difficulty feeding digital audio to the codec through the I2S bus.

    Hence I started to investigate the audio_i2s.c file. I noticed that the application uses BCLK: 1.536 MHz, MCLK: 6.144 MHz, and SYNC/WS: 48 kHz.
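
    If I read the configuration correctly, these numbers line up with 16-bit stereo at 48 kHz: BCLK = 2 channels × 16 bits × 48 kHz = 1.536 MHz, and MCLK = 128 × 48 kHz = 6.144 MHz, which matches .ratio = NRF_I2S_RATIO_128X and .sample_width = NRF_I2S_SWIDTH_16BIT in the file below.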

    2. For my codec to work, I need to feed a 12.288 MHz MCLK signal from the nRF5340 to the DA7212 (DA7212 in slave mode). I need the PLL bypass mode explained on page 51 of the datasheet, which states: "MCLK must be exactly 12.288 MHz or 11.2896 MHz or a multiple thereof and synchronous with BCLK and WCLK". I also need SYNC to be 48 kHz.

    Can you help me achieve this? Is this configuration possible? I browsed through this documentation, but I was not able to find a solution.

    Edit: 

    I am able to hear good-quality audio from the DA7212 EVM board (through its headphone jack) with this configuration:

    MCLK: 6.144 MHz, BCLK: 1.536 MHz, and SYNC: 48 kHz. However, audio is audible only on the left speaker of the headphone (the same as on the nRF5340 Audio DK with the CS47L63). This kept me wondering whether any configuration changes are required in the firmware to output stereo.

    I hooked up an oscilloscope to DIN and SYNC and observed that only the left audio channel is received.

    How can I send stereo audio on both CIS channels so that I can reproduce stereo audio on both connected headsets?

     Are there any configurations to be altered in the reference application so as to output stereo audio?

    Here is the audio_i2s.c file. I have not changed anything apart from commenting out certain #if/#else conditions. At the moment the codec works with the 6.144 MHz MCLK signal. If I were to generate a 12.288 MHz MCLK and a 48 kHz SYNC, what should the configuration be? Can you help me here as well? My own untested guess follows after the listing.

    #include "audio_i2s.h"
    
    #include <zephyr/kernel.h>
    #include <zephyr/device.h>
    #include <zephyr/drivers/pinctrl.h>
    #include <nrfx_i2s.h>
    #include <nrfx_clock.h>
    
    #include "audio_sync_timer.h"
    
    #include <zephyr/logging/log.h>
    LOG_MODULE_REGISTER(audio_i2s, 2);
    
    #define I2S_NL DT_NODELABEL(i2s0)
    
    #define HFCLKAUDIO_12_288_MHZ 0x9BAE
    //#define NRF_I2S_HAS_CLKCONFIG 1
    
    enum audio_i2s_state {
    	AUDIO_I2S_STATE_UNINIT,
    	AUDIO_I2S_STATE_IDLE,
    	AUDIO_I2S_STATE_STARTED,
    };
    
    static enum audio_i2s_state state = AUDIO_I2S_STATE_UNINIT;
    
    PINCTRL_DT_DEFINE(I2S_NL);
    
    /*********
     *  CONFIG_AUDIO_BIT_DEPTH_XX lines deleted here
     * ****************/
    
    static nrfx_i2s_config_t cfg = {
    	/* Pins are configured by pinctrl. */
    	.skip_gpio_cfg = true,
    	.skip_psel_cfg = true,
    	.irq_priority = DT_IRQ(I2S_NL, priority),
    	.mode = NRF_I2S_MODE_MASTER,
    	.format = NRF_I2S_FORMAT_I2S,
    	.alignment = NRF_I2S_ALIGN_LEFT,
    //#if (CONFIG_AUDIO_BIT_DEPTH_16)
    	.sample_width = NRF_I2S_SWIDTH_16BIT,
    	.mck_setup = 0x66666000,
    	.ratio = NRF_I2S_RATIO_128X,
    //#elif (CONFIG_AUDIO_BIT_DEPTH_24)
    //	.sample_width = NRF_I2S_SWIDTH_24BIT,
    	/* Clock mismatch warning: See CONFIG_AUDIO_24_BIT in KConfig */
    //	.mck_setup = 0x2BE2B000,
    //	.ratio = NRF_I2S_RATIO_48X,
    //#elif (CONFIG_AUDIO_BIT_DEPTH_32)
    //	.sample_width = NRF_I2S_SWIDTH_32BIT,
    //	.mck_setup = 0x66666000,
    //	.ratio = NRF_I2S_RATIO_128X,
    //#else
    //#error Invalid bit depth selected
    //#endif /* (CONFIG_AUDIO_BIT_DEPTH_16) */
    	.channels = NRF_I2S_CHANNELS_STEREO,
    	.clksrc = NRF_I2S_CLKSRC_ACLK,
    	.enable_bypass = false,
    };
    
    static i2s_blk_comp_callback_t i2s_blk_comp_callback;
    
    static void i2s_comp_handler(nrfx_i2s_buffers_t const *released_bufs, uint32_t status)
    {
    	if ((status == NRFX_I2S_STATUS_NEXT_BUFFERS_NEEDED) && released_bufs &&
    	    i2s_blk_comp_callback && (released_bufs->p_rx_buffer || released_bufs->p_tx_buffer)) {
    		i2s_blk_comp_callback(audio_sync_timer_i2s_frame_start_ts_get(),
    				      released_bufs->p_rx_buffer, released_bufs->p_tx_buffer);
    	}
    }
    
    void audio_i2s_set_next_buf(const uint8_t *tx_buf, uint32_t *rx_buf)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_STARTED);
    	__ASSERT_NO_MSG(rx_buf != NULL);
    #if (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET))
    	__ASSERT_NO_MSG(tx_buf != NULL);
    #endif /* (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET)) */
    
    	const nrfx_i2s_buffers_t i2s_buf = { .p_rx_buffer = rx_buf,
    					     .p_tx_buffer = (uint32_t *)tx_buf };
    
    	nrfx_err_t ret;
    
    	ret = nrfx_i2s_next_buffers_set(&i2s_buf);
    	__ASSERT_NO_MSG(ret == NRFX_SUCCESS);
    }
    
    void audio_i2s_start(const uint8_t *tx_buf, uint32_t *rx_buf)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_IDLE);
    	__ASSERT_NO_MSG(rx_buf != NULL);
    #if (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET))
    	__ASSERT_NO_MSG(tx_buf != NULL);
    #endif /* (CONFIG_STREAM_BIDIRECTIONAL || (CONFIG_AUDIO_DEV == HEADSET)) */
    
    	const nrfx_i2s_buffers_t i2s_buf = { .p_rx_buffer = rx_buf,
    					     .p_tx_buffer = (uint32_t *)tx_buf };
    
    	nrfx_err_t ret;
    
    	/* Buffer size in 32-bit words */
    	ret = nrfx_i2s_start(&i2s_buf, I2S_SAMPLES_NUM, 0);
    	__ASSERT_NO_MSG(ret == NRFX_SUCCESS);
    
    	state = AUDIO_I2S_STATE_STARTED;
    }
    
    void audio_i2s_stop(void)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_STARTED);
    
    	nrfx_i2s_stop();
    
    	state = AUDIO_I2S_STATE_IDLE;
    }
    
    void audio_i2s_blk_comp_cb_register(i2s_blk_comp_callback_t blk_comp_callback)
    {
    	i2s_blk_comp_callback = blk_comp_callback;
    }
    
    void audio_i2s_init(void)
    {
    	__ASSERT_NO_MSG(state == AUDIO_I2S_STATE_UNINIT);
    
    	nrfx_err_t ret;
    
    	nrfx_clock_hfclkaudio_config_set(HFCLKAUDIO_12_288_MHZ);
    
    	NRF_CLOCK->TASKS_HFCLKAUDIOSTART = 1;
    
    	/* Wait for ACLK to start. Check the event register itself:
    	 * the bare NRF_CLOCK_EVENT_HFCLKAUDIOSTARTED symbol is a constant
    	 * register offset, so it is always non-zero.
    	 */
    	while (NRF_CLOCK->EVENTS_HFCLKAUDIOSTARTED == 0) {
    		k_sleep(K_MSEC(1));
    	}
    
    	ret = pinctrl_apply_state(PINCTRL_DT_DEV_CONFIG_GET(I2S_NL),
    				  PINCTRL_STATE_DEFAULT);
    	__ASSERT_NO_MSG(ret == 0);
    	//IRQ_DIRECT_CONNECT(DT_IRQN(I2S_NL), DT_IRQ(I2S_NL, priority), nrfx_isr, 0);
    	IRQ_CONNECT(DT_IRQN(I2S_NL), DT_IRQ(I2S_NL, priority), nrfx_isr, nrfx_i2s_irq_handler, 0);
    	irq_enable(DT_IRQN(I2S_NL));
    
    
    	ret = nrfx_i2s_init(&cfg, i2s_comp_handler);
    	__ASSERT_NO_MSG(ret == NRFX_SUCCESS);
    
    	state = AUDIO_I2S_STATE_IDLE;
    }
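
    Based on the relations above, my own untested guess for the 12.288 MHz configuration is shown below. I am assuming that mck_setup scales the ACLK-derived MCLK linearly (doubling the value that currently produces 6.144 MHz) and that NRF_I2S_RATIO_256X then keeps SYNC at 12.288 MHz / 256 = 48 kHz - please correct me if either assumption is wrong:

    static nrfx_i2s_config_t cfg = {
    	/* Pins are configured by pinctrl. */
    	.skip_gpio_cfg = true,
    	.skip_psel_cfg = true,
    	.irq_priority = DT_IRQ(I2S_NL, priority),
    	.mode = NRF_I2S_MODE_MASTER,
    	.format = NRF_I2S_FORMAT_I2S,
    	.alignment = NRF_I2S_ALIGN_LEFT,
    	.sample_width = NRF_I2S_SWIDTH_16BIT,
    	/* Assumption: 2 x 0x66666000, i.e. twice the value that gives 6.144 MHz */
    	.mck_setup = 0xCCCCC000,
    	/* LRCK (SYNC) = MCLK / 256 = 12.288 MHz / 256 = 48 kHz */
    	.ratio = NRF_I2S_RATIO_256X,
    	.channels = NRF_I2S_CHANNELS_STEREO,
    	.clksrc = NRF_I2S_CLKSRC_ACLK,
    	.enable_bypass = false,
    };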
    
    

    Thanks for your time and effort.

    Best regards,

    Adarsh

  • Hi Karl,

    1. In audio_system.c you will need to change the decode type from SW_CODEC_MONO to SW_CODEC_STEREO, which will make the headset device expect to receive a stereo stream.

    Just like the steps you mentioned for the main branch, can I have the steps for v2.0.0/v2.0.2 to modify the gateway code to send stereo audio, so that it does not split the data into two different streams?

    This is where I am finding it difficult: where exactly are the streams being split into left and right in the gateway code? For the past week I have been looking into the "bluetooth" folder inside "src", but I cannot figure it out.

    Can you please explain how and where to modify the code in NCS v2.0.0/v2.0.2 so that the gateway sends a stereo stream rather than two separate streams for left and right?

    This is not helping, since I am using NCS v2.0.0.

    Thank you for your time. Any suggestions will be highly appreciated.

    Best regards,

    Adarsh  

  • Hi Karl,

    Patiently waiting for your reply.

    Best regards,

    Adarsh

  • Hello Adarsh,

    Thank you for your extreme patience with this. I have been out of the office for some time, but now I am back.

    Do you still require technical support with this issue?

    Best regards,
    Karl

  • Hello Karl,

    Do you still require technical support with this issue?

    No. In your absence I opened another ticket.

    Thanks for enquiring.

    Best regards,

    Adarsh

  • Adarsh_1 said:
    In your absence I opened another ticket.

    Alright, thank you for updating me, I will then close this ticket.

    Best regards,
    Karl
