
How to increase the sampling rate of the ble_app_uart__saadc_timer_driven__scan_mode example?

Hi,

In my project I have a custom board with an nRF52832 chip that reads SAADC data and transmits it over BLE (peripheral role). The data is received by an nRF52840-DK board (central role) and written to a COM port on a PC. I flashed the "ble_app_uart_c" example onto the nRF52840-DK and the "ble_app_uart__saadc_timer_driven__scan_mode" example onto the custom nRF52832 board (nRF5_SDK_17.0.0_9d13099). This configuration works fine: the SAADC data is transmitted by the nRF52832 custom board, received by the nRF52840-DK, and written to the COM port of my PC.

My problem is the SAADC sampling rate (SAADC_SAMPLE_RATE) of the "ble_app_uart__saadc_timer_driven__scan_mode" example. By default the SAADC sampling interval is 250 ms (4 Hz). I need to increase the rate to one sample per 1 ms (1000 Hz). But if I choose SAADC_SAMPLE_RATE < 10 ms (> 100 Hz), an error occurs and the chip resets and tries to reconnect. I receive the following error code when I debug the ble_app_uart__saadc_timer_driven__scan_mode example:

which is produced by the ble_nus_data_send(...) function in main.c:

Unfortunately, I am not able to find an error description for this error code.

Can you please tell me how to increase the sampling rate of the "ble_app_uart__saadc_timer_driven__scan_mode" example to one sample per 1 ms (1000 Hz)?

  • Hello,

    "ble_app_uart__saadc_timer_driven__scan_mode" example into the custom board with the nRF52832 chip (nRF5_SDK_17.0.0_9d13099).

    I am unfamiliar with this code. Which example is this, and where did you find the project code?
    Is this code a merge between the saadc peripheral example, modified to sample multiple channels, and the ble_app_uart example?

    My problem is the SAADC sampling rate (SAADC_SAMPLE_RATE) of the "ble_app_uart__saadc_timer_driven__scan_mode" example. By default the SAADC sampling interval is 250 ms (4 Hz). I need to increase the rate to one sample per 1 ms (1000 Hz).

    Could you share with me the SAADC configuration, and how you are modifying it?
    It would also be good to see how you are using the SAADC - how the timer is set up, etc.
    Please use the "Insert -> Code" option when sharing code here on DevZone.

    I receive the following error code when I debug the ble_app_uart__saadc_timer_driven__scan_mode example:
    Unfortunately, I am not able to find an error description for this error code.

    Could you ensure that DEBUG is defined in your preprocessor defines?
    The included image shows how you can check this, or add DEBUG if it is missing.
    Please run the program again with DEBUG defined, to see the proper error message.

    Can you please tell me how to increase the sampling rate of the "ble_app_uart__saadc_timer_driven__scan_mode" example to one sample per 1 ms (1000 Hz)?

    Yes - depending on your answer to my question above, we will get started on changing the sampling rate to 1000 Hz, no problem.

    Looking forward to resolving this issue together,

    Best regards,
    Karl 

  • Hi,

    thank you so much for your reply.

    1. I found this project code here: 

    https://github.com/NordicPlayground/nRF52-ADC-examples/tree/master/ble_app_uart__saadc_timer_driven__scan_mode

    You are right, this code is a merge between the saadc peripheral example, modified to sample four channels, and the ble_app_uart example.

    2. Here you can see the SAADC configuration in the "ble_app_uart__saadc_timer_driven__scan_mode" example:

    #define SAADC_SAMPLES_IN_BUFFER         4
    #define SAADC_SAMPLE_RATE               250                                         /**< SAADC sample rate in ms. */               
    
    volatile uint8_t state = 1;
    
    static const nrf_drv_timer_t   m_timer = NRF_DRV_TIMER_INSTANCE(3);
    static nrf_saadc_value_t       m_buffer_pool[2][SAADC_SAMPLES_IN_BUFFER];
    static nrf_ppi_channel_t       m_ppi_channel;
    static uint32_t                m_adc_evt_counter;
    
    void timer_handler(nrf_timer_event_t event_type, void* p_context)
    {
    
    }
    
    
    void saadc_sampling_event_init(void)
    {
        ret_code_t err_code;
        err_code = nrf_drv_ppi_init();
        APP_ERROR_CHECK(err_code);
        
        nrf_drv_timer_config_t timer_config = NRF_DRV_TIMER_DEFAULT_CONFIG;
        timer_config.frequency = NRF_TIMER_FREQ_31250Hz;
        err_code = nrf_drv_timer_init(&m_timer, &timer_config, timer_handler);
        APP_ERROR_CHECK(err_code);
    
        /* setup m_timer for compare event */
        uint32_t ticks = nrf_drv_timer_ms_to_ticks(&m_timer,SAADC_SAMPLE_RATE);
        nrf_drv_timer_extended_compare(&m_timer, NRF_TIMER_CC_CHANNEL0, ticks, NRF_TIMER_SHORT_COMPARE0_CLEAR_MASK, false);
        nrf_drv_timer_enable(&m_timer);
    
        uint32_t timer_compare_event_addr = nrf_drv_timer_compare_event_address_get(&m_timer, NRF_TIMER_CC_CHANNEL0);
        uint32_t saadc_sample_event_addr = nrf_drv_saadc_sample_task_get();
    
        /* setup ppi channel so that timer compare event is triggering sample task in SAADC */
        err_code = nrf_drv_ppi_channel_alloc(&m_ppi_channel);
        APP_ERROR_CHECK(err_code);
        
        err_code = nrf_drv_ppi_channel_assign(m_ppi_channel, timer_compare_event_addr, saadc_sample_event_addr);
        APP_ERROR_CHECK(err_code);
    }
    
    
    void saadc_sampling_event_enable(void)
    {
        ret_code_t err_code = nrf_drv_ppi_channel_enable(m_ppi_channel);
        APP_ERROR_CHECK(err_code);
    }
    
    
    void saadc_callback(nrf_drv_saadc_evt_t const * p_event)
    {
        if (p_event->type == NRF_DRV_SAADC_EVT_DONE)
        {
            ret_code_t err_code;
            uint16_t adc_value;
            uint8_t value[SAADC_SAMPLES_IN_BUFFER*2];
            uint16_t bytes_to_send;
         
            // set buffers
            err_code = nrf_drv_saadc_buffer_convert(p_event->data.done.p_buffer, SAADC_SAMPLES_IN_BUFFER);
            APP_ERROR_CHECK(err_code);
    						
            // print samples on hardware UART and parse data for BLE transmission
            printf("ADC event number: %d\r\n",(int)m_adc_evt_counter);
            for (int i = 0; i < SAADC_SAMPLES_IN_BUFFER; i++)
            {
                printf("%d\r\n", p_event->data.done.p_buffer[i]);
    
                adc_value = p_event->data.done.p_buffer[i];
                value[i*2] = adc_value;
                value[(i*2)+1] = adc_value >> 8;
            }
    
             // Send data over BLE via NUS service. Create string from samples and send string with correct length.
            uint8_t nus_string[50];
            bytes_to_send = sprintf(nus_string, 
                                    "CH0: %d\r\nCH1: %d\r\nCH2: %d\r\nCH3: %d",
                                    p_event->data.done.p_buffer[0],
                                    p_event->data.done.p_buffer[1],
                                    p_event->data.done.p_buffer[2],
                                    p_event->data.done.p_buffer[3]);
    
            err_code = ble_nus_data_send(&m_nus, nus_string, &bytes_to_send, m_conn_handle);
            if ((err_code != NRF_ERROR_INVALID_STATE) && (err_code != NRF_ERROR_NOT_FOUND))
            {
                APP_ERROR_CHECK(err_code);
            }
    	
            m_adc_evt_counter++;
        }
    }
    
    
    void saadc_init(void)
    {
        ret_code_t err_code;
    	
        nrf_drv_saadc_config_t saadc_config = NRF_DRV_SAADC_DEFAULT_CONFIG;
        saadc_config.resolution = NRF_SAADC_RESOLUTION_12BIT;
    	
        nrf_saadc_channel_config_t channel_0_config =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN4);
        channel_0_config.gain = NRF_SAADC_GAIN1_4;
        channel_0_config.reference = NRF_SAADC_REFERENCE_VDD4;
    	
        nrf_saadc_channel_config_t channel_1_config =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN5);
        channel_1_config.gain = NRF_SAADC_GAIN1_4;
        channel_1_config.reference = NRF_SAADC_REFERENCE_VDD4;
    	
        nrf_saadc_channel_config_t channel_2_config =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN6);
        channel_2_config.gain = NRF_SAADC_GAIN1_4;
        channel_2_config.reference = NRF_SAADC_REFERENCE_VDD4;
    	
        nrf_saadc_channel_config_t channel_3_config =
            NRF_DRV_SAADC_DEFAULT_CHANNEL_CONFIG_SE(NRF_SAADC_INPUT_AIN7);
        channel_3_config.gain = NRF_SAADC_GAIN1_4;
        channel_3_config.reference = NRF_SAADC_REFERENCE_VDD4;				
    	
        err_code = nrf_drv_saadc_init(&saadc_config, saadc_callback);
        APP_ERROR_CHECK(err_code);
    
        err_code = nrf_drv_saadc_channel_init(0, &channel_0_config);
        APP_ERROR_CHECK(err_code);
        err_code = nrf_drv_saadc_channel_init(1, &channel_1_config);
        APP_ERROR_CHECK(err_code);
        err_code = nrf_drv_saadc_channel_init(2, &channel_2_config);
        APP_ERROR_CHECK(err_code);
        err_code = nrf_drv_saadc_channel_init(3, &channel_3_config);
        APP_ERROR_CHECK(err_code);	
    
        err_code = nrf_drv_saadc_buffer_convert(m_buffer_pool[0],SAADC_SAMPLES_IN_BUFFER);
        APP_ERROR_CHECK(err_code);   
        err_code = nrf_drv_saadc_buffer_convert(m_buffer_pool[1],SAADC_SAMPLES_IN_BUFFER);
        APP_ERROR_CHECK(err_code);
    }
    
    /**@brief Application main function.
     */
    int main(void)
    {
        bool erase_bonds;
    
        // Initialize.
        uart_init();
        log_init();
        timers_init();
        buttons_leds_init(&erase_bonds);
        power_management_init();
        ble_stack_init();
        gap_params_init();
        gatt_init();
        services_init();
        advertising_init();
        conn_params_init();
    
        saadc_sampling_event_init();
        saadc_init();
        saadc_sampling_event_enable();
    
        // Start execution.
        printf("\r\nUART started.\r\n");
        NRF_LOG_INFO("Debug logging for UART over RTT started.");
        advertising_start();
    
        // Enter main loop.
        for (;;)
        {
            idle_state_handle();
        }
    }

    I tried setting SAADC_SAMPLE_RATE to 1, and then I receive the error I already mentioned.

    3. Where can I find the image you mentioned for checking or adding DEBUG? How can I ensure that DEBUG is defined in my preprocessor defines?

    Thank you in advance.

  • Hello,

    thank you for your fast reply.

    I had been counting the BLE_GATTS_EVT_HVN_TX_COMPLETE events in the wrong event handler. Now I am using the right event handler and I am able to count the BLE_GATTS_EVT_HVN_TX_COMPLETE events.

    I implemented a counter which increases when ble_nus_data_send() is called, and decreases when a BLE_GATTS_EVT_HVN_TX_COMPLETE event is received. 

    Overall the counter is pretty stable; it rarely fluctuates by more than 1 or 2. But sometimes, after 5-30 minutes, the counter increases by 3. That is enough to cause an overflow (NRF_ERROR_RESOURCES). It looks like the buffer is pretty small.

    I think I can handle this by increasing the buffer size. I already tried increasing NRF_SDH_BLE_GAP_EVENT_LENGTH, which is 6 by default, but this does not help.

    So here are my questions:

    1. How can I increase the buffer size of the ble_nus_data_send() or the sd_ble_gatts_hvx() function?

    2. Do you suggest increasing the buffer size?

    Thank you very much in advance.

  • Hello,

    Michael01101 said:
    thank you for your fast reply.

    No problem at all, I am happy to help.

    Michael01101 said:
    I had been counting the BLE_GATTS_EVT_HVN_TX_COMPLETE events in the wrong event handler. Now I am using the right event handler and I am able to count the BLE_GATTS_EVT_HVN_TX_COMPLETE events.

    Great, I am glad to hear that you correctly identified and resolved the issue.

    Michael01101 said:
    Do you suggest to increase the buffer size?

    It sounds to me like you have a mostly stable queue whose depth is slowly but strictly increasing as time passes.
    This leads me to believe that increasing the buffer size will only increase the time it takes for an overflow to occur.
    So, instead, I would look at what is causing the buildup in the buffer and how this can be alleviated. For example, I mentioned earlier that you have a constant 0.5 ms data buildup per connection interval, since you are queuing a notification every 8 ms.
    Did you make any changes based on my previous comments regarding the 0.5 ms data buildup per connection interval?

    By the way, did you also take a look at the contents of the BLE_GATTS_EVT_HVN_TX_COMPLETE event? It contains the number of notification transmissions completed, which might be useful here. Could you let me know whether this value is always 1, or whether it sometimes increases?

    Best regards,
    Karl

  • Hello,

    thank you for your quick response.

    Regarding the 0.5 ms data buildup:

    Sorry, I should have told you this.

    I made the following changes to the code:

    #define MIN_CONN_INTERVAL               MSEC_TO_UNITS(22.5, UNIT_1_25_MS)
    #define MAX_CONN_INTERVAL               MSEC_TO_UNITS(24 , UNIT_1_25_MS)
    #define SAADC_SAMPLE_RATE               1
    
    void saadc_callback(nrf_drv_saadc_evt_t const * p_event)
    {
        if (p_event->type == NRF_DRV_SAADC_EVT_DONE)
        {
            ...
            if (counter == 24){     
                            
                counter = 0;
                bytes_to_send = 120;
        
                if (send_enable == 1){
                
                    err_code = ble_nus_data_send(&m_nus, ch_values, &bytes_to_send, m_conn_handle);
                    
                    if ((err_code != NRF_ERROR_INVALID_STATE) && (err_code != NRF_ERROR_NOT_FOUND)){
                        APP_ERROR_CHECK(err_code); 
                    }
                    counter_Send++;
                }
        
                NRF_LOG_INFO("Counter Send: %d",counter_Send);
        
            }
    	    counter++;
            m_adc_evt_counter++;
        }
    }

    I save the SAADC readings for 24 ms, so I call the ble_nus_data_send() function every 24 ms and send them all together (10 bit * 4 channels * 24 = 120 bytes). "send_enable" is TRUE once the connection to the central has been established successfully.

    MIN_CONN_INTERVAL and MAX_CONN_INTERVAL are set to the same values on the central side.

    Regarding the number of notification transmissions:

    "Counter von TX_COMPLETE" is ble_gatts_evt_hvn_tx_complete_t::count; it gets printed every time a BLE_GATTS_EVT_HVN_TX_COMPLETE event happens.

    "Counter Send" is the counter which increases when the ble_nus_data_send() function is called and decreases when a BLE_GATTS_EVT_HVN_TX_COMPLETE event is received.

        ble_gatts_evt_hvn_tx_complete_t const * p_evt_complete = &p_ble_evt->evt.gatts_evt.params.hvn_tx_complete;
    
        switch (p_ble_evt->header.evt_id)
        {
            case BLE_GATTS_EVT_HVN_TX_COMPLETE:
                NRF_LOG_INFO("Counter von TX_COMPLETE: %d",p_evt_complete->count);
                counter_Send--;
                break;
                
            ...

    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter Send: 15
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 15
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 15
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14
    <info> app: Counter von TX_COMPLETE: 1
    <info> app: Counter Send: 14

    It looks like a notification occasionally gets buffered for a short amount of time, and then two notifications are sent in one interval.

    So here are my questions:

    1. Does it make sense to call the ble_nus_data_send() function every 24 ms, given that 24 ms is not a multiple of the 1.25 ms tick?

    2. How should I choose the MIN_CONN_INTERVAL and the MAX_CONN_INTERVAL?

    3. It would be possible for me to save SAADC readings for up to 100 ms before sending them to the central. Do you have any suggestions on how to implement this with the ble_nus_data_send() function?

    Thank you very much in advance.

  • Hi, 

    Michael01101 said:

    Sorry, I should have told you this.

    I made the following changes to the code:

    Thank you for telling me.

    Michael01101 said:
    Then two notifications are sent in one interval.

    This is correct, and it can happen.
    If your data shift were in the other direction (-0.5 ms), your buffer would never overflow - regardless of "when" the call to send is placed.

    Michael01101 said:
    1. Does it make sense to call the ble_nus_data_send() function every 24 ms, given that 24 ms is not a multiple of the 1.25 ms tick?

    In short: no. If you are not receiving an error, then I suppose the SoftDevice goes for "the next best thing". Have you monitored your connection interval while you are in a connection, using the nRF Sniffer?

    Michael01101 said:
    2. How should I choose the MIN_CONN_INTERVAL and the MAX_CONN_INTERVAL?

    What are your application's real-time requirements? I see that you are OK with a delay of up to 100 ms before sending, which gives us a lot of leeway.
    If you want it to match exactly (the same number of samples from all channels in each notification), the interval would have to be a multiple of 20 ms (16 x 1.25 ms). This would mean you get 100 bytes ready to transfer per interval, which is well within the MTU size you currently have set.

    Alternatively, you could have the notification sent with a negative data shift, for example by queuing a notification every 12th sample (12 ms) with a 10 ms connection interval. This ensures that the buffer does not overflow, since you are processing data faster than you are creating it. It also fits within your 100 ms requirement.

    What do you think about either of these alternatives?

    In general, please remember that shorter connection intervals increase power consumption. So, if your device is battery powered (which I suspect it is not?), you would want to pick the largest possible connection interval.

    Best regards,
    Karl

  • Hello,

    thank you for your response.

    Have you monitored your connection interval while you are in a connection, using the nRF Sniffer?

    I haven't done this yet. I tried to use the nRF52840 Dongle for the nRF Sniffer, but unfortunately the dongle is not supported by the nRF Sniffer. I have already ordered an nRF52-DK for this, which will arrive soon.

    I see that you are ok with a delay up to 100 ms before sending, this gives us a lot of leeway.

    A delay of up to 100 ms is okay.

    What do you think about either of these alternatives?

    I would prefer the 20 ms alternative. I implemented it with MIN_CONN_INTERVAL and MAX_CONN_INTERVAL set to 20 ms, and it works fine.

    I also tried a 40 ms (32 x 1.25 ms) interval (MIN_CONN_INTERVAL and MAX_CONN_INTERVAL of 40 ms), so the ble_nus_data_send() function gets called every 40 ms with 200 bytes of data. This should be within my MTU size. But almost immediately after a connection is established between central and peripheral, I receive NRF_ERROR_RESOURCES - although I receive a BLE_GATTS_EVT_HVN_TX_COMPLETE event for every call of the ble_nus_data_send() function, so it does not look like a buffer overflow.

    Do you know why this is happening?

    please remember that shorter connection intervals increases power consumption

    My goal is the largest possible connection interval for lower power consumption.

    Thank you very much in advance.

  • Hello,

    Michael01101 said:
    thank you for your response.

    No problem at all, I am happy to help!

    Michael01101 said:
    I haven't done this yet. I tried to use the nRF52840 Dongle for the nRF Sniffer, but unfortunately the dongle is not supported by the nRF Sniffer. I have already ordered an nRF52-DK for this, which will arrive soon.

    That is great! The nRF Sniffer is a very powerful tool to wield when developing BLE applications.
    It makes it a lot easier to see what is actually going on over your links.

    Michael01101 said:
    I would prefer the 20ms alternative. I implemented this with a MIN_CONN_INTERVAL and a MAX_CONN_INTERVAL of 20ms and it works fine.

    I am glad to hear that it is working as intended now!

    Michael01101 said:

    Almost immediately after a connection is established between central and peripheral, I receive NRF_ERROR_RESOURCES - although I receive a BLE_GATTS_EVT_HVN_TX_COMPLETE event for every call of the ble_nus_data_send() function, so it does not look like a buffer overflow.

    Do you know why this is happening?



    An excerpt from the documentation of the sd_ble_gatts_hvx() function (which is used by ble_nus_data_send()) reads:

    The number of Handle Value Notifications that can be queued is configured by ble_gatts_conn_cfg_t::hvn_tx_queue_size. When the queue is full, the function call will return NRF_ERROR_RESOURCES. A BLE_GATTS_EVT_HVN_TX_COMPLETE event will be issued as soon as the transmission of the notification is complete.

    What is your queue size defined to?
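For reference, the queue size is set through the connection configuration before the stack is enabled, typically in ble_stack_init() between nrf_sdh_ble_default_cfg_set() and nrf_sdh_ble_enable(). A rough sketch (the value 10 is only an example, not a recommendation):

```c
ble_cfg_t ble_cfg;
memset(&ble_cfg, 0, sizeof(ble_cfg));
ble_cfg.conn_cfg.conn_cfg_tag = APP_BLE_CONN_CFG_TAG;
/* Example value only -- each queued notification costs SoftDevice RAM. */
ble_cfg.conn_cfg.params.gatts_conn_cfg.hvn_tx_queue_size = 10;
err_code = sd_ble_cfg_set(BLE_CONN_CFG_GATTS, &ble_cfg, ram_start);
APP_ERROR_CHECK(err_code);
```

Note that increasing SoftDevice configuration values may change the required application RAM start address; with logging enabled, the SoftDevice handler will report the adjusted value during nrf_sdh_ble_enable() if it needs changing.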

    Michael01101 said:
    My goal is the largest possible connection interval for lower power consumption.

    Then I would go for the largest possible payload with the longest acceptable connection interval, to minimize active radio time.
    Have you seen our Online Power Profiler? It is very useful for seeing the difference in estimated power consumption between different connection parameters.

    Best regards,
    Karl

  • Hello, 

    sorry for my late response.

    After I had installed the nRF Sniffer, I found the issue. In the ble_app_uart_c example, which I use on my central device, something called ECHOBACK_BLE_UART_DATA was enabled.

    When this is enabled, all data received over NUS by the central is sent back to the peripheral over NUS. After I disabled it, I managed to get a stable 40 ms connection interval with 200 bytes in every interval.
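For anyone else hitting this: the define lives in the ble_app_uart_c example's main.c, and disabling it looks roughly like this (the exact spelling and default may differ between SDK versions):

```c
/* In ble_app_uart_c/main.c -- stop echoing received NUS data back
 * to the peripheral. */
#define ECHOBACK_BLE_UART_DATA  false
```

With the echo enabled, every notification from the peripheral triggers a write back over the same link, which doubles the traffic and can exhaust the queue at high data rates.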

    Have you seen our Online Power Profiler

    Thank you for this. I will take a look at the Power Profiler, but I am good for now.

    Finally, thank you for providing such great support and development tools.

  • Hello,

    Michael01101 said:
    sorry for my late response.

    It is no problem at all! :) 

    Michael01101 said:

    After I had installed the nRF Sniffer, I found the issue. In the ble_app_uart_c example, which I use on my central device, something called ECHOBACK_BLE_UART_DATA was enabled.

    When this is enabled, all data received over NUS by the central is sent back to the peripheral over NUS. After I disabled it, I managed to get a stable 40 ms connection interval with 200 bytes in every interval.



    Fantastic! I am happy to hear that you were able to find the source of the issue and achieve the desired connection interval and transfer size!

    Michael01101 said:
    Thank you for this. I will take a look at the Power Profiler, but I am good for now.

    Great - it is a very useful tool for estimating the power consumption of different connection parameters, and it will definitely be worth a look when you want to optimize your application's power consumption.

    Michael01101 said:
    Finally, thank you for providing such a great support and developing tools.

    It is no problem at all, I am happy to help.
    I am glad to hear you say that, thank you!

    Please do not hesitate to open a new ticket if you should encounter any issues or questions in the future.

    Good luck with your development!

    Best regards,
    Karl
     
