Building Audio Broadcast SDK example in VS Code

I've been trying to build the BLE Audio Broadcast source to run on my nRF5340 Audio DKs, using the line-in as the audio source. I've been developing on nRF52 for a while using SDK 17.1, but VS Code / Zephyr is new to me.

Nordic's training material uses the VS Code "copy a sample" flow. But as I found out the hard way, these samples come from Zephyr and are not quite the same as the Nordic examples bundled with the SDK; specifically, the VS Code sample project doesn't include I2S. Also, the VS Code UI has changed since the training videos were made, which doesn't help.

The SDK application now builds & programs using the Python build script, but I would like to get this working in VS Code.

So I'm attempting to get the SDK app running in VS Code, which leads to my first question: how does one build & program for the app core and the net core?

I see conflicting information. For example, the Nordic docs say:

"The SoftDevice Controller is an RTOS-agnostic library built for the Nordic Semiconductor devices that support Bluetooth.

For the nRF53 Series, the requirements described in this document are only relevant for applications running alongside the SoftDevice Controller on the network processor. For the nRF54H Series, some peripherals in the global domain are reserved so the requirements described here are relevant for all processors."

and a support case says:

"The nRF Connect SDK is quite different from nRF5SDK. There is no separate "SoftDevice" for the nRF Connect. nRF Connect SDK is based on the Zephyr RTOS and you could go through the Tutorial Series here. This will help you understand the way a project flows in the nRF Connect SDK. 

The programmer app is mainly used to flash the .hex files of projects that you build, onto the board. You should also go through the Programmer Guide.

Regards,

Priyanka"

I'm hoping this is buried in the configurations; I really don't want to be dropping out to a command shell to build & flash boards.

  • Hi,

    I will try to clear up some of the confusion.

    In nRF5 SDK, we had a separate Bluetooth stack, named "SoftDevice", which was delivered as a separate binary.

    In nRF Connect SDK, the Bluetooth stack is delivered in two parts: the Host subsystem (from Zephyr, provided as source code) and the SoftDevice Controller subsystem (based on the lower layers of the SoftDevice, provided as a binary specifically in nRF Connect SDK).
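    In practice, this split is mostly invisible to the application: you enable the Bluetooth host in prj.conf, and the build system pulls in the controller for the network core. As a rough sketch (option names taken from the Zephyr and nRF Connect SDK Kconfig trees; names and defaults can vary between SDK versions):

        # App core prj.conf: enable the Zephyr Bluetooth host
        CONFIG_BT=y
        CONFIG_BT_BROADCASTER=y   # plus whichever other roles you need

        # The controller runs in the network core image; in nRF Connect SDK,
        # CONFIG_BT_LL_SOFTDEVICE=y selects the SoftDevice Controller there.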

    Nick_RA said:
    Nordic's training material uses the VS Code "copy a sample" flow. But as I found out the hard way, these samples come from Zephyr and are not quite the same as the Nordic examples bundled with the SDK; specifically, the VS Code sample project doesn't include I2S. Also, the VS Code UI has changed since the training videos were made, which doesn't help.

    There are samples both under the zephyr\samples folder and under the nrf\samples folder in the SDK. The former is part of Zephyr, and the latter is specific to nRF Connect SDK. Still, the Zephyr samples support nRF SoCs and boards. The approach of "copy a sample" is still valid, and as with any such approach, you may need to pull in functionality from other samples (both zephyr and nrf).

    For Bluetooth LE Audio, the nRF Connect SDK specific samples are filed not under samples but under applications. Applications typically include fully integrated software stacks that are better suited as a starting point than samples are. See nRF5340 Audio applications for full documentation on the Bluetooth LE Audio application samples in nRF Connect SDK.

    Regarding training, we have updated course material on https://academy.nordicsemi.com/ covering both nRF Connect SDK Fundamentals and more intermediate nRF Connect SDK concepts, as well as other topics.

    Regards,
    Terje

    Terje said:
    There are samples both under the zephyr\samples folder and under the nrf\samples folder in the SDK. The former is part of Zephyr, and the latter is specific to nRF Connect SDK. Still, the Zephyr samples support nRF SoCs and boards. The approach of "copy a sample" is still valid, and as with any such approach, you may need to pull in functionality from other samples (both zephyr and nrf).

    For Bluetooth LE Audio, the nRF Connect SDK specific samples are filed not under samples but under applications. Applications typically include fully integrated software stacks that are better suited as a starting point than samples are. See nRF5340 Audio applications for full documentation on the Bluetooth LE Audio application samples in nRF Connect SDK.

    I already found that out :( Specifically, I am only looking at the project bap_broadcast_source. I tried building the sample in VS Code after going through the LED Blinky rite of passage, but quickly found out it doesn't include support for I2S, which is essential to my application.

    The \nrf\applications\ project does support I2S, but according to the docs it isn't supported using VS Code, only a Python or command-line build.

    So I have two choices: either try to import the nrf application into VS Code, or add the I2S code/config/etc. into the Zephyr sample. Which is best (for somebody unfamiliar with the structure of nRF Connect & Zephyr)?

    Terje said:
    In nRF Connect SDK, the Bluetooth stack is delivered in two parts: the Host subsystem (from Zephyr, provided as source code) and the SoftDevice Controller subsystem (based on the lower layers of the SoftDevice, provided as a binary specifically in nRF Connect SDK).

    OK, that's clear enough, thanks. But how does that relate to generating a build? Using the nRF5340DK in VS Code, I have the choice of cpuapp or cpunet. Obviously my user application goes to cpuapp, but the flashing process erases the device, so cpunet needs programming too. (This is where I am right now, trying to get the nrf application running in VS Code.)

  • Hello,

    Nick_RA said:
    The \nrf\applications\ project does support I2S, but according to the docs it isn't supported using VS Code, only a Python or command-line build.

    It is not documented, but building with the VS Code extension is actually possible. So you can follow the application creation flow you are used to and use the nRF5340 Audio application as a starting point. Refer to the instructions for building with the command line to see which CMake arguments you should add if you don't want to edit prj.conf before making a build configuration. "Building the application" and "Building and running" for the Broadcast source are good places to look.
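    For reference, the documented command-line invocation looks roughly like the following; the board target and the Kconfig option names vary between nRF Connect SDK releases, so treat this as a sketch and check the pages above. The same -D arguments can be entered in the extension's "Extra CMake arguments" field when creating a build configuration.

        # Illustrative: build the Audio application as a gateway (broadcast
        # source) for the nRF5340 Audio DK app core.
        west build -b nrf5340_audio_dk_cpuapp --pristine -- \
            -DCONFIG_AUDIO_DEV=2 -DCONFIG_TRANSPORT_BIS=y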

    If you find any issues when trying to use nRF Connect for VS Code to build the Audio application, please share them with us.

    Nick_RA said:
    So I have two choices: either try to import the nrf application into VS Code, or add the I2S code/config/etc. into the Zephyr sample. Which is best (for somebody unfamiliar with the structure of nRF Connect & Zephyr)?

    Regarding what is best, it depends on what your goal is: an application which is functional from the start, or a more minimal project. The Audio application is quite large, so if you don't need many of its features, it could be too large for your project.

    If you do want a more minimal project, I recommend that you go through our DevAcademy courses, which are created to get you familiar with nRF Connect SDK. The nRF Connect SDK Fundamentals course is my recommended start, and you can expand with nRF Connect SDK Intermediate and Bluetooth Low Energy Fundamentals.

    In terms of I2S samples, there are three in nRF Connect SDK (from Zephyr): I2S codec, echo, and output.
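    If you end up adding I2S to a smaller project yourself, the Zephyr driver API is fairly compact. A minimal TX-side sketch (illustrative only: the "i2s0" node label, sample rate, and block sizes are assumptions; see the samples above for complete, board-specific code):

        /* Requires CONFIG_I2S=y and an enabled I2S node in the devicetree. */
        #include <string.h>
        #include <zephyr/kernel.h>
        #include <zephyr/drivers/i2s.h>

        #define SAMPLE_FREQ_HZ 48000
        #define BLOCK_SIZE     512
        #define BLOCK_COUNT    4

        K_MEM_SLAB_DEFINE(tx_slab, BLOCK_SIZE, BLOCK_COUNT, 4);

        int i2s_tx_init(void)
        {
            /* "i2s0" is a placeholder node label; check your board's devicetree. */
            const struct device *i2s_dev = DEVICE_DT_GET(DT_NODELABEL(i2s0));
            struct i2s_config cfg = {
                .word_size      = 16,
                .channels       = 2,
                .format         = I2S_FMT_DATA_FORMAT_I2S,
                .options        = I2S_OPT_FRAME_CLK_MASTER | I2S_OPT_BIT_CLK_MASTER,
                .frame_clk_freq = SAMPLE_FREQ_HZ,
                .mem_slab       = &tx_slab,
                .block_size     = BLOCK_SIZE,
                .timeout        = 1000,
            };
            void *block;
            int ret;

            if (!device_is_ready(i2s_dev)) {
                return -ENODEV;
            }
            ret = i2s_configure(i2s_dev, I2S_DIR_TX, &cfg);
            if (ret < 0) {
                return ret;
            }
            /* Queue one (silent) block, then start the TX stream. */
            ret = k_mem_slab_alloc(&tx_slab, &block, K_NO_WAIT);
            if (ret < 0) {
                return ret;
            }
            memset(block, 0, BLOCK_SIZE);
            ret = i2s_write(i2s_dev, block, BLOCK_SIZE);
            if (ret < 0) {
                return ret;
            }
            return i2s_trigger(i2s_dev, I2S_DIR_TX, I2S_TRIGGER_START);
        }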

    Nick_RA said:
    But how does that relate to generating a build? Using the nRF5340DK in VS Code, I have the choice of cpuapp or cpunet. Obviously my user application goes to cpuapp, but the flashing process erases the device, so cpunet needs programming too.

    The Audio application is not supported for the nRF5340DK out of the box. You mention in your original ticket that you have the nRF5340 Audio DK, so I will assume that you are using that for your LE Audio project.

    About the programming: when using the app core of the nRF5340 SoC as the build target for an application which enables Bluetooth, the build system will build an image for the network core as well. For the Audio application, this image is the IPC radio firmware. When using the nRF Connect for VS Code extension, both cores will be programmed if the main build directory is selected.
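    Concretely, the flow from the command line looks something like this (the VS Code extension drives the same tools under the hood; the board target name is illustrative and release-dependent):

        # Building for the app core also produces the network core
        # (IPC radio) image.
        west build -b nrf5340_audio_dk_cpuapp

        # Flashing from the main build directory programs both cores.
        west flash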

    Best regards,

    Maria

  • Hi Maria, thanks for your reply. 

    Maria said:
    It is not documented, but building with the VS Code extension is actually possible.

    OK, good, I will pursue that. Before, I was working within the parameters of what I had read, both from the docs and from the (not very good) advice in a previous support ticket.

    Maria said:
    Regarding what is best, it depends on what your goal is:

    Multi-faceted. Firstly, I need to get a working example of a BLE broadcast source [done, from the Python build example]. Next, I need to add some basic features, e.g. a UART CLI to start/stop and change the name. Maybe add a custom GATT service. Yes, I purchased a couple of nRF5340DKs for the eval/demo. If we decide to develop further, then I will move to a custom PCB.

    Also, my goal is to learn the nRF Connect / VS Code ecosystem. I have already followed the Academy videos to some extent, certainly as far as covering the devicetree and building some examples.

    Finally, I have another application that at some point I would like to port from the nRF52832 to a multi-core MCU. This runs a custom BLE service but, amongst other things, it also uses a GPIO & timer to perform some real-time Manchester serial decoding. It works well enough for the intended application, but some packets get missed because the SoftDevice always has higher IRQ priority. If I can offload that to a dedicated core, it should work much better.

    Maria said:
    The Audio application is not supported for the nRF5340DK out of the box.

    Would you mind clarifying that? Or are you just describing what I've already discovered?

    (considering the whole purpose of the nRF5340DK's existence is to evaluate audio applications, it seems crazy not to have ready software support)

    Regards,

    Nick

  • Hi Nick,

    Nick_RA said:
    Multi-faceted. Firstly, I need to get a working example of a BLE broadcast source [done, from the Python build example]. Next, I need to add some basic features, e.g. a UART CLI to start/stop and change the name. Maybe add a custom GATT service. Yes, I purchased a couple of nRF5340DKs for the eval/demo. If we decide to develop further, then I will move to a custom PCB.

    From this description, it looks like starting with the broadcast source version of the Audio application is feasible. Note that the Audio application uses UART for logging by default. If you want to keep logging while using the UART for a shell, remember to make these changes in the configuration, either in prj.conf or as CMake options when creating a build configuration.
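    For the CLI part, Zephyr's shell makes start/stop commands straightforward. A minimal, hypothetical sketch (broadcast_start()/broadcast_stop() are placeholders for whatever control functions your application exposes; you need CONFIG_SHELL=y plus a shell backend in prj.conf):

        #include <zephyr/kernel.h>
        #include <zephyr/shell/shell.h>

        static int cmd_bcast_start(const struct shell *sh, size_t argc, char **argv)
        {
            ARG_UNUSED(argc);
            ARG_UNUSED(argv);
            shell_print(sh, "Starting broadcast");
            /* broadcast_start();  placeholder for your application's hook */
            return 0;
        }

        static int cmd_bcast_stop(const struct shell *sh, size_t argc, char **argv)
        {
            ARG_UNUSED(argc);
            ARG_UNUSED(argv);
            shell_print(sh, "Stopping broadcast");
            /* broadcast_stop();  placeholder for your application's hook */
            return 0;
        }

        /* Registers "bcast start" and "bcast stop" shell commands. */
        SHELL_STATIC_SUBCMD_SET_CREATE(sub_bcast,
            SHELL_CMD(start, NULL, "Start broadcasting.", cmd_bcast_start),
            SHELL_CMD(stop,  NULL, "Stop broadcasting.",  cmd_bcast_stop),
            SHELL_SUBCMD_SET_END
        );
        SHELL_CMD_REGISTER(bcast, &sub_bcast, "Broadcast control commands", NULL);

    With the shell enabled on a backend, typing "bcast start" at the prompt invokes the handler.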

    Please clarify if you currently only have nRF5340DKs or if you have nRF5340 Audio DKs as well.

    Nick_RA said:
    Finally, I have another application that at some point I would like to port from the nRF52832 to a multi-core MCU. This runs a custom BLE service but, amongst other things, it also uses a GPIO & timer to perform some real-time Manchester serial decoding. It works well enough for the intended application, but some packets get missed because the SoftDevice always has higher IRQ priority. If I can offload that to a dedicated core, it should work much better.

    Is this in the same project as your LE audio project? If not, please create a new ticket if you have issues with porting the application.

    Nick_RA said:
    Would you mind clarifying that? Or are you just describing what I've already discovered?

    I don't mind at all. The nRF5340 Audio application fully supports the nRF5340 Audio DK, not the nRF5340DK.

    Best regards,

    Maria

  • Ah, my bad, sorry. I currently have 2pcs of the *audio* DK.

    The porting of my nRF52832 app is something for the future, nothing to do with this ticket; I just mentioned it as one of the reasons I wanted to know the mechanisms behind the dual-core programming.

    Regards,

    Nick

  • No worries! Thank you for the clarifications, and please let me know if I can help with anything.

    Best regards,

    Maria
