
How to increase the sampling rate of the ble_app_uart__saadc_timer_driven__scan_mode example?

Hi,

In my project I have a custom board with an nRF52832 chip that reads SAADC data and transmits it over BLE (peripheral role). This data is received by an nRF52840-DK board (central role) and written to a COM port of a PC. I flashed the "ble_app_uart_c" example onto the nRF52840-DK and the "ble_app_uart__saadc_timer_driven__scan_mode" example onto the custom board with the nRF52832 chip (nRF5_SDK_17.0.0_9d13099). This configuration works fine: the SAADC data is transmitted by the nRF52832 custom board, received by the nRF52840-DK, and written to the COM port of my PC.

My problem is the SAADC sampling interval (SAADC_SAMPLE_RATE) of the "ble_app_uart__saadc_timer_driven__scan_mode" example. By default the SAADC sampling interval is 250 ms (4 Hz). I need to increase the sampling rate to one sample per 1 ms (1000 Hz). But if I choose SAADC_SAMPLE_RATE < 10 ms (> 100 Hz), an error occurs and the chip resets and tries to reconnect. I receive the following error code when I debug the ble_app_uart__saadc_timer_driven__scan_mode example:

which is produced by the ble_nus_data_send(...) function in main.c:

Unfortunately, I am not able to find an error description for this error code.

Can you please tell me how to increase the sampling rate of the "ble_app_uart__saadc_timer_driven__scan_mode" example to one sample per 1 ms (1000 Hz)?

  • Hello,

    Michael01101 said:
    sorry for my late response.

     No problem at all, do not worry about it.

    Michael01101 said:
    1. Is this the right place for counting the BLE_GATTS_EVT_HVN_TX_COMPLETE events?

    Yes, if this is the event handler that is provided during the "ble_stack_init" call.
    The code that is setting this up looks like this:

        // Register a handler for BLE events.
        NRF_SDH_BLE_OBSERVER(m_ble_observer, APP_BLE_OBSERVER_PRIO, ble_evt_handler, NULL);


    If you have changed the name of the event handler, or created a new one, you will need to register it as an observer to the BLE events, so that it will receive them.
    I see from the GitHub repository code that the name of the event handler is ble_evt_handler, but the event handler you are referring to in your reply is ble_nus_on_ble_evt - have you made sure this event handler is registered as an observer of the BLE events?
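    A minimal sketch of such a counting handler, with the SoftDevice event plumbing stubbed out so the counting logic can be shown in isolation (the counter and helper names here are invented for illustration):

```c
/* Sketch only, not the full example code. In the real ble_evt_handler
 * you would switch on p_ble_evt->header.evt_id and, for
 * BLE_GATTS_EVT_HVN_TX_COMPLETE, subtract
 * p_ble_evt->evt.gatts_evt.params.hvn_tx_complete.count. */
#include <stdint.h>

static volatile int32_t m_notifications_in_flight = 0;

/* Call right after a successful ble_nus_data_send(). */
void on_notification_queued(void)
{
    m_notifications_in_flight++;
}

/* Call from the BLE event handler on BLE_GATTS_EVT_HVN_TX_COMPLETE,
 * passing the event's count field. */
void on_hvn_tx_complete(uint8_t count)
{
    m_notifications_in_flight -= count;
}

/* Current number of notifications queued but not yet completed. */
int32_t notifications_in_flight(void)
{
    return m_notifications_in_flight;
}
```

    If this counter grows without bound, notifications are being queued faster than the link can complete them.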

    Looking forward to solving this issue together,

    Best regards,
    Karl

  • Hello,

    thank you for your fast reply.

    I tried to count the BLE_GATTS_EVT_HVN_TX_COMPLETE events in the wrong event handler. Now I am using the right event handler and I am able to count the BLE_GATTS_EVT_HVN_TX_COMPLETE events.

    I implemented a counter which increases when ble_nus_data_send() is called, and decreases when a BLE_GATTS_EVT_HVN_TX_COMPLETE event is received. 

    Overall the counter is pretty stable; it usually only fluctuates by 1 or 2. Sometimes, after 5-30 minutes, the counter increases by 3. That is enough to cause an overflow (NRF_ERROR_RESOURCES). It looks like the buffer is pretty small.

    I think I can handle this by increasing the buffer size. I already tried increasing NRF_SDH_BLE_GAP_EVENT_LENGTH, which is 6 by default, but this does not help.

    So here are my questions:

    1. How can I increase the buffer size of the ble_nus_data_send() or the sd_ble_gatts_hvx() function?

    2. Do you suggest increasing the buffer size?

    Thank you very much in advance.

  • Hello,

    Michael01101 said:
    thank you for your fast reply.

    No problem at all, I am happy to help.

    Michael01101 said:
    I tried to count the BLE_GATTS_EVT_HVN_TX_COMPLETE events in the wrong event handler. Now I am using the right event handler and I am able to count the BLE_GATTS_EVT_HVN_TX_COMPLETE events.

    Great, I am glad to hear that this identified and resolved the issue.

    Michael01101 said:
    Do you suggest increasing the buffer size?

    It sounds to me like the case here is a somewhat stable buffer that is nonetheless strictly increasing as time passes.
    This leads me to believe that increasing the buffer size would just increase the time it takes for an overflow to occur.
    So, instead, I would look at what is causing the buildup in the buffer and how this can be alleviated. For example, I mentioned earlier that you have a constant 0.5 ms data buildup per connection interval, since you are queuing a notification every 8 ms.
    Did you make any changes based on my previous comments regarding the 0.5 ms data buildup per connection interval?
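    For completeness, should you still want to experiment with a larger notification queue: on the S132/S140 SoftDevices the HVN TX queue length is normally configured with sd_ble_cfg_set before the stack is enabled in ble_stack_init. Roughly like this (a sketch only; the connection configuration tag and the queue size of 10 are assumptions on my part):

```c
/* Sketch: place in ble_stack_init() before nrf_sdh_ble_enable().
 * APP_BLE_CONN_CFG_TAG and the queue size are assumed values. */
ble_cfg_t ble_cfg;
memset(&ble_cfg, 0, sizeof(ble_cfg));
ble_cfg.conn_cfg.conn_cfg_tag                            = APP_BLE_CONN_CFG_TAG;
ble_cfg.conn_cfg.params.gatts_conn_cfg.hvn_tx_queue_size = 10;

err_code = sd_ble_cfg_set(BLE_CONN_CFG_GATTS, &ble_cfg, ram_start);
APP_ERROR_CHECK(err_code);
```

    Note that a larger queue also increases the SoftDevice RAM requirement, so the RAM start address may need adjusting. But as said above, this only postpones the overflow if the buildup itself is not addressed.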

    By the way, did you also take a look at the contents of the BLE_GATTS_EVT_HVN_TX_COMPLETE event? It contains the number of completed notification transmissions, and it might be useful to make use of this. Could you let me know if this value is always 1, or does it sometimes increase?

    Best regards,
    Karl

  • Hello,

    thank you for your quick response.

    Regarding the 0.5 ms data buildup:

    Sorry, I should have told you this.

    I made the following changes to the code:

    #define MIN_CONN_INTERVAL               MSEC_TO_UNITS(22.5, UNIT_1_25_MS)
    #define MAX_CONN_INTERVAL               MSEC_TO_UNITS(24, UNIT_1_25_MS)
    #define SAADC_SAMPLE_RATE               1    // sample every 1 ms

    void saadc_callback(nrf_drv_saadc_evt_t const * p_event)
    {
        if (p_event->type == NRF_DRV_SAADC_EVT_DONE)
        {
            ...
            if (counter == 24)
            {
                // 24 ms worth of samples collected: send them in one notification.
                counter = 0;
                bytes_to_send = 120;

                if (send_enable == 1)
                {
                    err_code = ble_nus_data_send(&m_nus, ch_values, &bytes_to_send, m_conn_handle);

                    if ((err_code != NRF_ERROR_INVALID_STATE) && (err_code != NRF_ERROR_NOT_FOUND))
                    {
                        APP_ERROR_CHECK(err_code);
                    }
                    counter_Send++;
                }

                NRF_LOG_INFO("Counter Send: %d", counter_Send);
            }
            counter++;
            m_adc_evt_counter++;
        }
    }

    I save the SAADC readings for 24 ms, so I call the ble_nus_data_send() function every 24 ms and send them all together (10 bits x 4 channels x 24 samples = 960 bits = 120 bytes). "send_enable" is TRUE once the connection to the central has been established successfully.

    The MIN_CONN_INTERVAL and the MAX_CONN_INTERVAL are equal on the central side.

    Regarding the number of notification transmissions:

    Counter von TX_COMPLETE: ble_gatts_evt_hvn_tx_complete_t::count. It gets printed every time a BLE_GATTS_EVT_HVN_TX_COMPLETE event happens.

    Counter Send: the counter that increases when the ble_nus_data_send() function is called and decreases when a BLE_GATTS_EVT_HVN_TX_COMPLETE event is received.

        ble_gatts_evt_hvn_tx_complete_t const * p_evt_complete = &p_ble_evt->evt.gatts_evt.params.hvn_tx_complete;
    
        switch (p_ble_evt->header.evt_id)
        {
            case BLE_GATTS_EVT_HVN_TX_COMPLETE:
                NRF_LOG_INFO("Counter von TX_COMPLETE: %d",p_evt_complete->count);
                counter_Send--;
                break;
                
            ...

    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter Send: 15
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 15
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 15
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14

    It looks like a notification occasionally gets buffered for a short amount of time, and then two notifications are sent in one interval.

    So here are my questions:

    1. Does it make sense to call the ble_nus_data_send() function every 24 ms, given that 24 ms is not a multiple of the 1.25 ms tick?

    2. How should I choose the MIN_CONN_INTERVAL and the MAX_CONN_INTERVAL?

    3. It would be possible for me to save SAADC readings for up to 100 ms before sending them to the central. Do you have any suggestions on how to implement this with the ble_nus_data_send() function?

    Thank you very much in advance.

  • Hi, 

    Michael01101 said:

    Sorry, I should have told you this.

    I made the following changes to the code:

    Thank you for telling me.

    Michael01101 said:
    Then two notifications are sent in one interval.

    This is correct, and it can happen.
    If your data shift were in the other direction, -0.5 ms, then your buffer would never overflow, regardless of "when" the function call to send is placed.

    Michael01101 said:
    1. Does it make sense to call the ble_nus_data_send() function every 24 ms, given that 24 ms is not a multiple of the 1.25 ms tick?

    In short: no. If you are not receiving an error, then I suppose the SoftDevice goes for "the next best thing". Have you monitored your connection interval while in a connection, using the nRF Sniffer?

    Michael01101 said:
    2. How should I choose the MIN_CONN_INTERVAL and the MAX_CONN_INTERVAL?

    What are your application's real-time requirements? I see that you are OK with a delay of up to 100 ms before sending, which gives us a lot of leeway.
    If you want it to match exactly (i.e. receive the same number of samples from all channels in each notification), the connection interval would have to be a multiple of 20 ms (16 x 1.25 ms). This would mean that you get 100 bytes ready to transfer per interval, which is well within the MTU size you currently have set.
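    As a quick sanity check of those numbers (4 channels x 10 bits per sample at 1 kHz is 5 bytes of payload per millisecond):

```c
/* Payload accumulated over one connection interval, using the figures
 * from this thread: 4 channels, 10 bits per sample, 1 sample/ms. */
int payload_bytes(int interval_ms)
{
    const int channels        = 4;
    const int bits_per_sample = 10;
    const int samples_per_ms  = 1;
    return interval_ms * channels * bits_per_sample * samples_per_ms / 8;
}
```

    A 20 ms interval gives 100 bytes, and your current 24 ms batching gives the 120 bytes you mentioned.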

    Alternatively, you could have the notifications sent with a negative data shift, for example by queuing a notification every 12th sample (every 12 ms) with a 10 ms connection interval. This ensures that the buffer does not overflow, since you are processing data faster than you are producing it. This also fits within your 100 ms requirement.
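    To illustrate why this drains rather than fills the queue, here is a toy model of mine (an assumption for illustration, not SDK code): one notification is produced every 12 ms, and the link completes at most one per 10 ms connection interval. The worst-case queue depth stays bounded at 1:

```c
#include <stdint.h>

/* Toy model of the negative data shift: produce one notification every
 * PRODUCE_MS, complete at most one per CONSUME_MS (the connection
 * interval), and track the maximum queue depth over the run. */
enum { PRODUCE_MS = 12, CONSUME_MS = 10 };

int32_t max_queue_depth(int32_t duration_ms)
{
    int32_t depth = 0, max_depth = 0;
    for (int32_t t = 1; t <= duration_ms; t++) {
        if (t % PRODUCE_MS == 0) {
            depth++;               /* a notification is queued */
        }
        if (t % CONSUME_MS == 0 && depth > 0) {
            depth--;               /* one notification completes */
        }
        if (depth > max_depth) {
            max_depth = depth;
        }
    }
    return max_depth;
}
```

    Because consumption is strictly faster than production, the queue can never build up, in contrast to the +0.5 ms per interval case you had before.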

    What do you think about either of these alternatives?

    In general, please remember that shorter connection intervals increase power consumption. So, if your device is battery powered (which I suspect it is not?), you would want to pick the largest possible connection interval.

    Best regards,
    Karl
