SPI & Bluetooth best practices

Hi All,

I'm a little rusty on my embedded C and was wondering if I could get a few broad pointers on the best way to organize this research system I'm putting together.

The system is basic: I have an nRF52840 that talks to a custom IC via SPI and then sends whatever it reads out to an nRF52 dongle via Bluetooth. If this were one-way it would be straightforward, but unfortunately the nRF52840 can also receive commands from the dongle (over Bluetooth) that may or may not trigger specific SPI transactions. It seems to me that the best way to organize this is to make the nRF52840 (the one talking to the custom IC) the peripheral and the dongle the central device. That way the peripheral can use an interrupt to query the IC and then notify the central device with a new packet.

I'm having a little more trouble with the other main function (the dongle sending commands to the peripheral), though, since it's really important that this Bluetooth connection runs as fast as possible without dropping any packets.

Is it fair to have a main while loop on all my devices that spins (just checking whether an incoming command has arrived) while they're in their main interrupt-driven operating mode? Then, if there has been a command, they break out of their interrupt-driven mode and work on whatever the command is. I worry that this constant spinning may burn unnecessary power and may also slow down the main BLE operation (which needs to run as fast as possible!). Are there other alternatives that would let the Bluetooth and SPI operate as fast as possible in parallel?
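
For concreteness, here's roughly the loop I have in mind (process_command() and the flag are placeholders for my actual command handling):

    static volatile bool m_cmd_pending = false;  // set from the BLE event handler on an incoming command

    int main(void)
    {
        // ... init: clocks, SPI, BLE stack, etc. ...
        for (;;)
        {
            if (m_cmd_pending)             // spin, checking for a command
            {
                m_cmd_pending = false;
                process_command();         // drop out of interrupt-driven mode and handle it
            }
            // could I sleep here instead of spinning, e.g. with sd_app_evt_wait()?
        }
    }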

Thanks so much for any advice/help. If anything was unclear please let me know and I can clarify. I tried to distill everything so it isn't too confusing, but I may have distilled too much.

- Ryan

  • Hello again Ryan,

    ryerye120 said:
    As always, thank you for your patience and help! I realize a lot of these are basic questions - I can't describe how appreciative I am of your time/effort.

    I am glad to hear you say that, and happy to hear that you have found my comments helpful to your project!

    ryerye120 said:
    EDIT: sorry for the double post, I accidentally 'replied' to the wrong message so I copied my response to here so it's easier for you/others to follow.

    No need to apologize - thank you for editing it to make it easier to follow for both me and others!

    ryerye120 said:
    I'm using an nrf52 dongle that's acting like a BLE<->USB pipe. I built this based off of the ble_uart_c example and took stuff from the usbd_ble_uart peripheral when needed. 

    This is a good way to go about implementing this.

    ryerye120 said:
    I've heard of it but haven't used it - I'll order another dev kit and set a sniffer up! I completely forgot this was a thing.

    Great! Please do not hesitate to open another ticket about it if you encounter any issues. The sniffer will make it easy to spot any bottlenecks in your throughput, or other unintended slow-downs.

    ryerye120 said:
    Oh, you were spot on - I had added it to the release build config. This time I added it to the project's common build config and I'm getting a proper error out now.

    This is a common occurrence, no worries! The proper error logs are very helpful in debugging any driver related issues.

    ryerye120 said:
    "Too many notifications queued."  For that user it seems to have boiled down to their notification queue being too small

    This is the most common reason for the NRF_ERROR_RESOURCES error code being returned from sd_ble_gatts_hvx: notifications are being queued faster than they are being sent. Increasing the queue size only solves the root cause when the queue is merely too small to hold all the notifications queued in one connection interval, while all queued notifications are in fact being sent each connection event.
    Usually, one would additionally have to look at increasing the number of notifications sent per connection event. This can be done by increasing the MTU size, using a different PHY, or increasing the connection event length so that multiple notifications can be sent in each connection event.
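
    One related knob is the connection event length extension option, which lets the SoftDevice extend a connection event when there is radio time to spare. A minimal sketch using the S140 option API (adapt to your SoftDevice version):

      ble_opt_t opt;
      memset(&opt, 0, sizeof(opt));
      opt.common_opt.conn_evt_ext.enable = 1;  // allow connection events to be extended
      err_code = sd_ble_opt_set(BLE_COMMON_OPT_CONN_EVT_EXT, &opt);
      APP_ERROR_CHECK(err_code);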

    ryerye120 said:
    The linked thread implies that this queue size is part of the BLE stack. If that's the case, the BLE 'stack' includes not just the BLE drivers but also the memory interface between the CPU & the drivers. Is that right?

    The SoftDevice will need to be allocated enough FLASH and RAM to be able to meet its configured operation. So, if you increase the buffer sizes, or number of concurrent connections, etc. the SoftDevice may need to be allocated more resources.
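
    For reference, this is handled in ble_stack_init() in the SDK examples. If the configuration no longer fits in the RAM assigned to the application, nrf_sdh_ble_enable() will fail and, with logging enabled, report the RAM start address the SoftDevice requires, so you can update your linker settings accordingly:

      uint32_t ram_start = 0;
      err_code = nrf_sdh_ble_default_cfg_set(APP_BLE_CONN_CFG_TAG, &ram_start);
      APP_ERROR_CHECK(err_code);

      // Checks the configuration against the RAM assigned to the application,
      // and logs the required RAM start address if it does not fit.
      err_code = nrf_sdh_ble_enable(&ram_start);
      APP_ERROR_CHECK(err_code);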

    ryerye120 said:
    Firstly, is there a max queue size? Ideally I'd just set it to the max using sd_ble_cfg_set() - BUT I can't find a single reference to sd_ble_cfg_set() in the existing solution. Do you have any pointers to where I may be able to find an example so I can safely call sd_ble_cfg_set() with a new queue size?

    I would rather recommend that you estimate how many notifications may be queued in a single connection interval, and increase that to account for possible retransmissions (alternatively, you could implement specific error handling for NRF_ERROR_RESOURCES that buffers the notification for later retransmission).
    sd_ble_cfg_set may be called at any time while the SoftDevice is enabled, as long as the BLE part of the SoftDevice is not yet enabled. Please see the note in the sd_ble_cfg_set API reference for more information about this.
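
    The buffering approach could look roughly like this (buffer_notification() and resend_buffered_notifications() are placeholders for whatever storage you choose):

      uint32_t err_code = sd_ble_gatts_hvx(m_conn_handle, &hvx_params);
      if (err_code == NRF_ERROR_RESOURCES)
      {
          // The queue is full: stash the notification and retry later.
          buffer_notification(&hvx_params);          // placeholder
      }
      else
      {
          APP_ERROR_CHECK(err_code);
      }

      // In the BLE event handler:
      case BLE_GATTS_EVT_HVN_TX_COMPLETE:
          // One or more notifications went out, so there is room in the queue again.
          resend_buffered_notifications();           // placeholder
          break;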

    ryerye120 said:
    Now, assuming I get that set, in order to maximize my throughput, I can set my connection interval to as small as possible (7.5ms, right?), maximize my MTU (to 251, right?), set my PHY to 2M, and then set my total number of links to just 1 (so that I don't have to share bandwidth).

    This depends on whether latency is more important than throughput.
    The highest throughput is not achieved with a 7.5 ms connection interval, but rather with a longer one during which you send multiple maximum-length packets.
    This increases the throughput by reducing the number of bytes lost to overhead.
    You can see the exact numbers for the difference in the throughput documentation I referenced earlier.
    The increased latency might not be worth it for certain applications, and you can of course still achieve good throughput using the 7.5 ms connection interval (parameters for that are also specified in the throughput documentation).

    Best regards,
    Karl

  • Karl!


    Karl said:
    Great! Please do not hesitate to open another ticket about it if you encounter any issues. The sniffer will make it easy to spot any bottlenecks in your throughput, or other unintended slow-downs.

    Yeah, sorry for having this ticket spiral out of control - luckily it looks like we may have solved my issues.

    Karl said:
    The SoftDevice will need to be allocated enough FLASH and RAM to be able to meet its configured operation. So, if you increase the buffer sizes, or number of concurrent connections, etc. the SoftDevice may need to be allocated more resources.

    This makes a lot of sense. I'll keep an eye out for any more RAM warnings. I actually updated the memory allocation to satisfy my larger MTU - that last increase may have given me enough space for my latest updates.

    Karl said:
    I would rather recommend that you estimate how many notifications may be queued in a single connection interval, and increase that to account for possible retransmissions (alternatively, you could implement specific error handling for NRF_ERROR_RESOURCES that buffers the notification for later retransmission).

    What I ended up doing was calculating how many notifications I expect to have in 7.5 ms (~8), rounding that up to 10, and then tripling it. Does that seem fair? The arithmetic is sketched below.
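
    In numbers (using my 1 Mbps target; the per-notification payload here is specific to my packing):

      // 1 Mbps target over a 7.5 ms connection interval:
      //   125000 bytes/s * 0.0075 s  ~= 937 bytes per interval
      //   937 bytes / my per-notification payload ~= 8 notifications
      //   round up to 10, then triple for retransmission headroom:
      #define HVN_TX_QUEUE_SIZE 30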

    Right now things seem to be working. I'd like to rubber-duck/walk through my fixes. Hopefully you'll be able to point out anything seriously wrong in my thought process, and if not, others will be able to see my full debrief and learn from it.

    Ultimately, my problem was that I was unable to send data over Bluetooth as fast as I wanted. My 1 Mbps spec is well within BLE 5's capabilities, but my code wasn't up to snuff. To achieve a demo of what will eventually be a full 1 Mbps SPI <-> BLE <-> USB link, I had to make a few distinct changes. I started with the ble_app_uart peripheral example.

    First I had to make sure my peripheral and central devices were operating in the 2M PHY mode. To achieve this I put in an option to run the following block of code after my link was established:

      ble_gap_phys_t const phys =
      {
          .rx_phys = BLE_GAP_PHY_2MBPS,
          .tx_phys = BLE_GAP_PHY_2MBPS,
      };
      err_code = sd_ble_gap_phy_update(m_conn_handle, &phys);
      APP_ERROR_CHECK(err_code);

    To avoid weird specifics, let's just say that every time I press one of my dev kit's buttons, this block is run.
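
    (The peer has to accept the update too - the SDK examples already handle this in ble_evt_handler() with something like the following, so I didn't need to touch it:)

      case BLE_GAP_EVT_PHY_UPDATE_REQUEST:
      {
          // Respond to the peer's PHY request, letting the SoftDevice pick.
          ble_gap_phys_t const phys =
          {
              .rx_phys = BLE_GAP_PHY_AUTO,
              .tx_phys = BLE_GAP_PHY_AUTO,
          };
          err_code = sd_ble_gap_phy_update(p_ble_evt->evt.gap_evt.conn_handle, &phys);
          APP_ERROR_CHECK(err_code);
      } break;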

    Second, I had to increase my notification queue size. This was a little tricky because I'm not that good yet at navigating SDKs, but thanks to Karl's help I was able to figure out where to actually increase my queue size without breaking anything. In ble_stack_init(), there's a call to nrf_sdh_ble_default_cfg_set() (defined in nrf_sdh_ble.c). This function sets certain BLE stack parameters: the MTU, number of links, etc. By default it doesn't touch your notification queue size, but you can add that yourself. I added the block below at the end of that function. After the fact I found this thread, which discusses a similar fix - the OP in that thread was kind of an *** though.

        memset(&ble_cfg, 0, sizeof(ble_cfg));          // clear leftover config from earlier in the function
        ble_cfg.conn_cfg.conn_cfg_tag = conn_cfg_tag;  // use the same tag as the other configs set here
        ble_cfg.conn_cfg.params.gatts_conn_cfg.hvn_tx_queue_size = 30;
        ret_code = sd_ble_cfg_set(BLE_CONN_CFG_GATTS, &ble_cfg, *p_ram_start);
        if (ret_code != NRF_SUCCESS)
        {
            NRF_LOG_ERROR("sd_ble_cfg_set() returned %s when attempting to set BLE_CONN_CFG_GATTS.", nrf_strerror_get(ret_code));
        }

    These two fixes helped me push my BLE transfers to run faster and without breaking but I still wasn't able to run everything as fast as I wanted. The final puzzle piece dawned on me when re-reading something Karl said:

    The highest throughput is not achieved with a 7.5 ms connection interval, but rather with a longer one during which you send multiple maximum-length packets.
    This increases the throughput by reducing the number of bytes lost to overhead.

    While I have kept a small connection interval, it occurred to me that I wasn't maximizing how much data I was sending at once. To minimize overhead, it's advantageous to pack as many of your samples as possible into a single BLE packet. Due to the nature of my system, I can pack three of my sample packets into a maximally sized BLE packet (MTU = 251). Once I implemented this (sketched below), my code ran without hitting any NRF_ERROR_RESOURCES errors!
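
    The packing itself is simple - roughly this (the sample size and the helper/handle names are specific to my setup, so treat them as placeholders):

      #define SAMPLE_LEN       80  // one SPI sample packet (placeholder size)
      #define SAMPLES_PER_NTF  3   // 3 * 80 = 240 bytes fits in one max-size notification

      static uint8_t m_ntf_buf[SAMPLE_LEN * SAMPLES_PER_NTF];
      static uint8_t m_sample_count = 0;

      // Called for each sample read over SPI; sends one notification per three samples.
      void on_spi_sample(uint8_t const * p_sample)
      {
          memcpy(&m_ntf_buf[m_sample_count * SAMPLE_LEN], p_sample, SAMPLE_LEN);
          if (++m_sample_count == SAMPLES_PER_NTF)
          {
              uint16_t length = sizeof(m_ntf_buf);
              // ble_nus_data_send() is the NUS send function used by ble_app_uart
              uint32_t err_code = ble_nus_data_send(&m_nus, m_ntf_buf, &length, m_conn_handle);
              APP_ERROR_CHECK(err_code);
              m_sample_count = 0;
          }
      }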

    I am running into some errors on the RX side of my link but that's for another thread!

    Hopefully this helps others - Karl, thank you so much for all of your help!

