BLE Audio - many to one, power consumption of DK?

Excited to have ordered my BLE Audio DK, which uses the nRF5340.

In my use case I am looking to send audio from many microphones (audio transmitters) to a single listening device (audio receiver). Ideally they'd be synchronised reasonably well, but that is something we can deal with in the back-end system. In terms of link robustness, will this work - e.g. 5 senders to one listener? 16-bit, 16 kHz would be ideal, but could be reduced if needed. 5 was a random number; even two would be useful. Can one use variable rates on each stream? I also wasn't sure whether you'd recommend connected or broadcast - I understand connected minimises packet loss thanks to feedback.

Finally, any idea of the estimated power consumption of the DK / nRF5340 when sending a single audio stream?

Thanks in advance,

  • Hillman007 said:
    Karl thank you for responding.

    No problem at all, I am happy to help! :) 

    Hillman007 said:
    My boards have arrived so I am going to start to implement two to one and see how we do.

    Great! Please do not hesitate to let us know if you should encounter any other issues or questions when working with this.
    I highly recommend familiarizing yourself with the audio application's 'buildprog' tool, by the way, since it automates building and flashing multiple boards with different versions of the same application (debug/release, gateway/headset).
    You can read about the buildprog tool in the documentation, or run 'python buildprog.py --help' to see its options.
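    From memory of the nrf5340_audio documentation, an invocation might look like the following - the exact flag names are my assumption, so please verify them against 'python buildprog.py --help' for your NCS version:

    ```shell
    # Build (debug) and program both cores of every kit listed in dk_devices.json,
    # producing the gateway and headset variants in one go (flags assumed; verify with --help)
    python buildprog.py --core both --build debug --device both --program
    ```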

    Hillman007 said:
    I will then look into power optimisitions - is there any specific areas you'd recommend looking at?

    In general, the best way to save power is to maximize the time spent in SYSTEM_ON sleep. How to best achieve this will vary for each application / use case, but if you can reduce radio time (reduce retransmissions, increase connection intervals, etc.), the device can spend more time in SYSTEM_ON sleep and thereby draw less average current.
    My recommendation here would be to first measure the unmodified nrf5340_audio application from the v1.9.99-dev1 tag, and then use those measurements as a reference when you tweak the different parameters and configurations, to see their impact.
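    If you end up tweaking connection parameters through Kconfig, a minimal overlay sketch could look like the below. Note that these are generic Zephyr Bluetooth options and the values are purely illustrative - whether and how they interact with the audio application's ISO streams should be verified against your NCS version:

    ```conf
    # Prefer longer connection intervals and some peripheral latency
    # (intervals in 1.25 ms units; values here are illustrative only)
    CONFIG_BT_PERIPHERAL_PREF_MIN_INT=24
    CONFIG_BT_PERIPHERAL_PREF_MAX_INT=40
    CONFIG_BT_PERIPHERAL_PREF_LATENCY=4
    CONFIG_BT_PERIPHERAL_PREF_TIMEOUT=400
    CONFIG_BT_GAP_AUTO_UPDATE_CONN_PARAMS=y
    ```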

    Best regards,
    Karl

  • Yes, I stand corrected, the underlying LE Audio framework is quite flexible, and can support that use case.

    I was basing that on the demonstrated application modes:

    See https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/applications/nrf5340_audio/README.html#application-modes

    But I now understand those are not the only configurations that LE Audio supports, but rather the limits of the nRF Audio DK demo applications.

    As you have indicated, CPU resources may be the limiting factor when attempting to decode 5+ microphone streams simultaneously. This might be overcome depending on the microphones' audio bandwidth needs, and possibly by using other lower-complexity codecs.

  • Hello,

    mtsunstrum said:

    Yes, I stand corrected, the underlying LE Audio framework is quite flexible, and can support that use case.

    I was basing that on the demonstrated application modes:

    Thank you for elaborating - I see how the note in the documentation could be interpreted this way.

    mtsunstrum said:
    But I now understand those are not the only configurations that LE Audio supports, but rather the limits of the nRF Audio DK demo applications.

    Yes - that is to say, the nrf5340_audio application is still being developed, with new features and improvements in the making.
    However, there are no technical limitations in either the nRF5340 SoC or the LE Audio specification that prevent anyone from implementing a multiple-sink scenario themselves, based on the nrf5340_audio application.

    mtsunstrum said:
    As you have indicated, CPU resources may be the limiting factor when attempting to decode 5+ microphone streams simultaneously. This might be overcome depending on the microphones' audio bandwidth needs, and possibly by using other lower-complexity codecs.

    Yes - the flexibility of the LE Audio specification and the performance of the LC3 codec open many possibilities here. The biggest bottleneck for 5+ microphones would likely be the CPU load, but this could very well be alleviated by the configurations you note in your comment, or for example by relaxing the latency constraint - so long as that is feasible for the application, of course.
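    To put some rough numbers on the radio side (illustrative arithmetic only, not measured figures - the 32 kbps LC3 target is an assumption), the bandwidth for five 16-bit / 16 kHz microphones can be sketched as:

    ```c
    #include <stdint.h>

    /* Raw PCM bitrate in bits per second for one mono stream. */
    uint32_t pcm_bitrate_bps(uint32_t sample_rate_hz, uint32_t bits_per_sample)
    {
        return sample_rate_hz * bits_per_sample;
    }

    /* Total bitrate in bits per second for n identical streams. */
    uint32_t total_bitrate_bps(uint32_t per_stream_bps, uint32_t n_streams)
    {
        return per_stream_bps * n_streams;
    }
    ```

    With these assumed numbers, one raw 16-bit / 16 kHz stream is 256 kbps, so five raw streams would need 1.28 Mbps, while five LC3-coded streams at an assumed 32 kbps each total only 160 kbps - which is why the decode CPU load, rather than the radio bandwidth, tends to be the limiting factor.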

    Best regards,
    Karl

  • Progress so far is good. I have got the buildprog chain working and tested, and have had a good look over the code in terms of its architecture. I have disabled LC3 for now, whilst I get repo access.

    So I wanted to run my approach past you:

    Aim: "I would like two audio senders to one audio receiver."

    Approach: modify the audio application sample - good?

    So this would mean building two gateways and one headset. The gateways currently have the same name - do you think that is a problem? I couldn't figure out how to generate two individual names. It seems CONFIG_BT_DEVICE_NAME is somewhat hard-coded, and I couldn't find where the build script pulls that value into the auto_config.h file. For example, I thought I could perhaps use the unused channel parameter in dk_devices.json. I also thought about setting CONFIG_BT_DEVICE_NAME_DYNAMIC=y in the overlay, but I was unable to find a "BLE set name" function...

    To get the headset to allow two connections, I will modify the advertising behaviour to re-advertise so the second gateway/controller can connect, and increase the maximum number of connections. I plan to then use a button to select which of the two received audio streams is output to the headphones. Maybe as an extension I might add a stereo codec and have left for controller A and right for controller B streams. What do you think?

  • Hello,

    Hillman007 said:
    Approach: modify the audio application sample - good?

    Yes - if you are using the buildprog tool, you will need to modify the application sample directly.
    If you would like to use the traditional NCS approach you can do so as well, through the 'generate application from sample' option in VS Code. You will then have to build and flash each kit with each configuration / variation that you make, but there will be a clearer distinction between the applications, and easier versioning for complex projects.

    Hillman007 said:
    So this would mean building two gateways and one headset. The gateways currently have the same name. Do you think that is a problem?

    Are you working with the CIS or the BIS mode? In the case of CIS mode, the gateways are traditional central devices, so their names will not be advertised - it is also the centrals that initiate connections, etc.
    You should then see your headset device advertise as a traditional peripheral device, displaying its device name.
    So as long as the headset keeps advertising as connectable after the first connection, there should not be any problem.
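    That said, if you do still want distinct names for debugging, the CONFIG_BT_DEVICE_NAME_DYNAMIC route you mention should work - the runtime API in Zephyr is bt_set_name(). A minimal overlay sketch (option names from the Zephyr Bluetooth stack; please verify against your NCS version):

    ```conf
    # Allow the advertised device name to be changed at runtime via bt_set_name()
    CONFIG_BT_DEVICE_NAME_DYNAMIC=y
    CONFIG_BT_DEVICE_NAME_MAX=32
    ```

    You could then call e.g. bt_set_name("GATEWAY_A") early in main(), deriving the suffix from something unique such as the device address.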

    Hillman007 said:
    To get the headset to allow two connections, I will modify the advertising behaviour to re-advertise so the second gateway/controller can connect, and increase the maximum number of connections. I plan to then use a button to select which of the two received audio streams is output to the headphones.

    This is what I would recommend as well. Do I understand correctly that you would like the device to stay in a connection and receive a stream that it is not necessarily outputting to a speaker/headphone, simultaneously with playing the audio from the other source?
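    For the connection count, a minimal Kconfig sketch could be the following (CONFIG_BT_MAX_CONN is a standard Zephyr option; whether the audio application needs additional ISO channel configuration on top of this should be verified for your NCS version):

    ```conf
    # Allow the headset to hold connections to two gateways simultaneously
    CONFIG_BT_MAX_CONN=2
    ```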

    Hillman007 said:
    Maybe as an extention I might add a stereo codec and have left for controller A and right for controller B streams. what do you think? 

    The Audio DK can encode and stream stereo (multiple streams concurrently), but the issue you would face here is the stereo playback capability of the Audio DK: it is made to demonstrate / develop for the 'earbud scenario', i.e. the HEADPHONES audio jack can only output mono audio. That is to say, if you plug a stereo headset into the HEADPHONES jack you will only get audio in the left ear.
    There is however no issue for the gateway to encode and transfer multiple streams at the same time.
    To demonstrate stereo sound you would therefore have to use 2 Audio DKs as headphones, where each of them outputs one of the stereo channels.

    Please do not hesitate to ask if any part of this should be unclear, so that I may elaborate! :) 

    Best regards,
    Karl
