nRF5340, evaluate options for fast/reliable data transfer method between cores

I have a project with real-time requirements for a medical device (see my other post about fast BLE/ESB transfer for the exact details). The main requirement is that one nRF5340 ("receiver") needs to receive 10-byte packets every 1-2 ms from another nRF5340 ("sender") via some wireless protocol (BLE/ESB). I'm currently interested in programming the "receiver" device to use its network core solely for the wireless comms and its application core to take that data and output it via UART. I would imagine the data transfer between the network core and the application core would need to be at least as fast as my "10-bytes-per-ms" requirement.

I have read several documentation pages, forum posts, etc., and the following options seem to be what's available (FYI, I'm quite new to these libraries and protocols):

  • RPC - I got the Entropy sample code working, but before trying to adapt it for my project, I wanted to clarify - is the data being sent wirelessly, even in the dual-core device setup? I take it that's why it's considered "remote" but I'm not too sure.
  • IPC
  • OpenAMP - Seems to be based on shared memory (see here)
  • Bluetooth HCI - Seems quite popular, but if it's Bluetooth-based I'm skeptical about its feasibility. So far, my experiments with fast BLE transfer show that BLE isn't fast enough for my project, but ESB most likely will work.
  • Shared SRAM - I would imagine this is the "fastest" option as it's "wired", but I only found some forum posts with unofficial solutions like here. I was wondering if there is official sample code for this, especially code that handles race conditions via semaphores/mutexes and the like.

All in all, I'm hoping to get some advice/feedback on what options are out there, when to use which, and which ones I should put my effort into first. Thanks in advance!

------------------------------------------------------------------------------------------

Development Setup:

  • Board: nRF5340-DK
  • Development Environment: VS Code
  • SDK: nRF Connect 2.5.0
  • OS: Windows 10
  • Investigated IPC (OP replying here)...

    Tried out this sample code and the results were wonderful! I could get about 30 packets sent per 1 ms, which is more than enough leeway for my particular project. It's been a week or two since I first tried the sample, and my team has reached a good milestone using IPC, so we will likely proceed with it.
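
    Just to give a sense of scale: a rough way to sanity-check this kind of throughput (only a sketch, not my actual test code) is to count how many ipc_service_send() calls succeed on the application core within a fixed time window:

        /* Sketch: count how many small packets go through ipc_service in a fixed
         * time window. Assumes 'ep' is an endpoint that has already been bound. */
        #include <zephyr/kernel.h>
        #include <zephyr/ipc/ipc_service.h>

        static uint32_t count_sends_in_window(struct ipc_ept *ep, int64_t window_ms)
        {
            uint8_t packet[10] = {0};   /* same size as the radio payload */
            uint32_t sent = 0;
            int64_t start = k_uptime_get();

            while ((k_uptime_get() - start) < window_ms) {
                /* ipc_service_send() returns a negative errno on failure, e.g.
                 * when the shared-memory buffers are momentarily full */
                if (ipc_service_send(ep, packet, sizeof(packet)) >= 0) {
                    sent++;
                }
            }
            return sent;
        }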

  • Hi 

    Sorry, I forgot to answer your specific questions earlier. 

    'Remote' in RPC does not refer to wireless, no. It can be either a wired connection between two chips or a direct internal connection between two cores, as in the nRF5340.

    afok-esmart said:
    Seems like most/all inter-processor comms between the nRF53 cores ultimately boil down to using shared memory space (see here). That's all I've investigated in this regard.

    Correct. The only way to share data between the two cores is through the use of shared memory. 

    At the lowest level you have the IPC peripheral, which is essentially just a way for either core in the nRF5340 to trigger an event (and optionally an interrupt) in the other core. The ipc_service takes this one step further and allows you to send data as well, by utilizing the shared memory mentioned earlier, but it is still a relatively lightweight wrapper around the IPC peripheral.
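
    To give a rough idea of what using the ipc_service looks like in practice, here is an untested sketch of the application core side (it assumes an 'ipc0' instance is defined in the devicetree, as the ipc_service samples do; the endpoint name 'sensor_ep' is just a placeholder):

        #include <zephyr/kernel.h>
        #include <zephyr/device.h>
        #include <zephyr/ipc/ipc_service.h>

        K_SEM_DEFINE(bound_sem, 0, 1);
        static struct ipc_ept ep;

        static void ep_bound(void *priv)
        {
            /* Called once the matching endpoint on the network core is registered */
            k_sem_give(&bound_sem);
        }

        static void ep_recv(const void *data, size_t len, void *priv)
        {
            /* Handle the 10-byte packet from the network core here, e.g. push it
             * into a queue that the UART thread drains */
        }

        static struct ipc_ept_cfg ep_cfg = {
            .name = "sensor_ep",   /* must match the name used on the other core */
            .cb = {
                .bound = ep_bound,
                .received = ep_recv,
            },
        };

        int main(void)
        {
            const struct device *ipc0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));

            ipc_service_open_instance(ipc0);
            ipc_service_register_endpoint(ipc0, &ep, &ep_cfg);
            k_sem_take(&bound_sem, K_FOREVER);

            /* From here on, data can be sent with ipc_service_send(&ep, buf, len) */
            return 0;
        }

    The network core side registers an endpoint with the same name on its side of the instance, and the bound callback fires on both cores once the two endpoints have found each other.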

    The RPC library is a bit more complex and handles things such as serialization of structures, as well as emulating function calls in a way that makes it seem as though you are calling a local function rather than a function running on another core. This is convenient, and allows you to implement APIs that run in a very similar way on both single-core and multi-core systems, but it does add some overhead compared to the ipc_service.

    Either way, it is good to hear that the ipc_service works well for you. If you are planning to send a large number of messages, it makes sense to use a more lightweight and streamlined interface between the two cores.

    And in general, the more data you can buffer into a single transaction, rather than splitting it up into multiple transactions, the less overhead you will incur.
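
    As an illustration (just a sketch, the sizes and struct layout below are made up for the example), you could collect several of your 10-byte samples into one buffer and hand the whole batch to a single ipc_service_send() call:

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>
        #include <zephyr/ipc/ipc_service.h>

        #define SAMPLE_SIZE      10
        #define SAMPLES_PER_SEND 8

        struct sample_batch {
            uint8_t count;
            uint8_t data[SAMPLES_PER_SEND][SAMPLE_SIZE];
        };

        static struct sample_batch batch;

        /* Call this for every packet received from the radio; it only triggers an
         * IPC transfer once the batch is full, cutting the per-message overhead. */
        static int queue_sample(struct ipc_ept *ep, const uint8_t *sample)
        {
            memcpy(batch.data[batch.count], sample, SAMPLE_SIZE);
            batch.count++;

            if (batch.count == SAMPLES_PER_SEND) {
                size_t len = offsetof(struct sample_batch, data) +
                             (size_t)batch.count * SAMPLE_SIZE;
                int err = ipc_service_send(ep, &batch, len);

                batch.count = 0;
                return err;
            }
            return 0;
        }

    The trade-off is a bit of added latency: with 10-byte samples every 1-2 ms, the batch size has to stay small enough that the delayed samples still fit within your real-time budget.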

    Best regards
    Torbjørn

  • No worries! It was quite the packed question haha, so I didn't expect answers to all the parts.

    Thank you for the deeper dive into all these protocols/libraries; very nice to know!

    -----

    EDIT1: Just curious, do you know where OpenAMP and Bluetooth HCI "rank"? For example, if IPC is the lowest level and RPC is a level above/abstracted from IPC, are those even more abstracted, with even more overhead?

  • Hi 

    I was wondering the exact same thing after reading through the docs, which is why I didn't mention OpenAMP at all :)
    Ideally we should have some diagrams breaking all of this down...

    OpenAMP is one of the possible transport layers for the RPC library, meaning it sits between RPC and the lower layers. 

    So when running RPC through OpenAMP the various layers would look something like this:

    RPC -> OpenAMP -> IPM -> IPC peripheral

    This introduces yet another related module, IPM, which is a module designed to handle data passing between cores in multi-core systems.

    When running Bluetooth it looks a bit different, as it uses the rpmsg service (also part of OpenAMP) to handle the Bluetooth communication. At that point it should look something like this:

    HCI -> OpenAMP/rpmsg -> IPM -> IPC

    Best regards
    Torbjørn
