How to read a large file (about 240 MB) from an SD card

Hello all, I hope you are doing well. 

I have a project on the nRF52840 in which multiple sensors are connected to the nRF52, and the sensor data is stored on the SD card as a CSV file of about 240 MB. I want to read this large file from the SD card and then send it to a mobile app using a BLE peripheral. I have used this example and modified it, but the problem is that I can only read 220000 bytes from the SD card into the buffer at once, and the build fails when I try to increase

    #define FILE_SIZE_MAX 220001   // any value greater than 220000 triggers the errors below

    static uint8_t file_buffer[FILE_SIZE_MAX];

    ff_result = f_read(&file, file_buffer, FILE_SIZE_MAX, (UINT *) &bytes_read);

Then it gives the following linker errors:

    .bss is too large to fit in RAM1 memory segment
    .heap is too large to fit in RAM1 memory segment
    section .heap overlaps absolute placed section .stack
    section .stack VMA [000000002003e000,000000002003ffff] overlaps section .bss VMA [0000000020002c64,0000000020224b08]

                                               

I don't understand this error or how to solve it. I simply want to read the whole file (234 MB) from the SD card at once and then send it in chunks to the mobile app. Is this possible, or do I have to read from the SD card in chunks as well?

Any help regarding this will be highly appreciated.

Parents
  • Hi 

    Unfortunately you only have 256kB of RAM available in the nRF52840 device, and parts of this RAM will be needed by the rest of your application, which is probably why the code won't build if you make your buffer larger than 220000 bytes. 

    In other words you will have to split up the transaction into smaller chunks, yes. 

    The most efficient solution would be to match the reads from your external memory to the size of your Bluetooth packets, and make sure that you buffer multiple BLE packets at once to make sure the BLE communication doesn't have to wait for the reads from external memory. 
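    For illustration, this producer/consumer buffering could be sketched with a simple byte FIFO (a minimal sketch with made-up names, not code from any particular SDK example): SD-card reads fill the FIFO one packet at a time, and the BLE send path drains it one packet at a time.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PACKET_SIZE 244               /* matches the negotiated BLE payload       */
#define FIFO_SIZE   (PACKET_SIZE * 8) /* room for several packets in flight       */

typedef struct {
    uint8_t data[FIFO_SIZE];
    size_t  head, tail, used;
} byte_fifo_t;

static size_t fifo_free(const byte_fifo_t *f) { return FIFO_SIZE - f->used; }
static size_t fifo_used(const byte_fifo_t *f) { return f->used; }

/* Producer side: copy `len` bytes (len <= fifo_free()) into the FIFO. */
static void fifo_push(byte_fifo_t *f, const uint8_t *src, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        f->data[f->head] = src[i];
        f->head = (f->head + 1) % FIFO_SIZE;
    }
    f->used += len;
}

/* Consumer side: pop up to `len` bytes; returns how many were actually popped. */
static size_t fifo_pop(byte_fifo_t *f, uint8_t *dst, size_t len)
{
    if (len > f->used) len = f->used;
    for (size_t i = 0; i < len; i++) {
        dst[i] = f->data[f->tail];
        f->tail = (f->tail + 1) % FIFO_SIZE;
    }
    f->used -= len;
    return len;
}
```

    In a main loop you would then f_read() into a temporary array whenever fifo_free() is at least one packet, and pop one packet into ble_nus_data_send() whenever the stack has free TX buffers, so the radio never waits on the SD card.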

    Best regards
    Torbjørn

  • Hi Torbjørn, thanks for the quick help. 

    Unfortunately you only have 256kB of RAM available in the nRF52840 device, and parts of this RAM will be needed by the rest of your application, which is probably why the code won't build if you make your buffer larger than 220000 bytes. 

    Okay, I understand, and thanks for the thorough explanation. 

    The most efficient solution would be to match the reads from your external memory to the size of your Bluetooth packets, and make sure that you buffer multiple BLE packets at once to make sure the BLE communication doesn't have to wait for the reads from external memory. 

    Can you please refer me to a ticket/blog etc. where reading from the SD card in chunks is shown (my Bluetooth packet is 244 bytes)? I have tried different approaches, but I am unable to read from the SD card in chunks. I don't know how to advance the pointer to the next byte/line of the SD card data. When I read the first 220000 bytes (#define FILE_SIZE_MAX 220000) of data from a file on the SD card, I am unable to move the pointer to the next index (the 220001st byte). Can you please help me with that?
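    For what it's worth, FatFs's f_read() advances the file's internal read pointer automatically, so a second call continues at the byte after the last one read, and f_lseek(&file, offset) repositions it explicitly. Standard C stdio behaves the same way, so the pattern can be sketched with fread()/fseek() (the helper names below are made up for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Read the next `chunk_size` bytes from an already-open file.
 * Like FatFs f_read(), fread() advances the file position automatically,
 * so each call continues where the previous one stopped. */
static size_t read_next_chunk(FILE *fp, unsigned char *buf, size_t chunk_size)
{
    return fread(buf, 1, chunk_size, fp);
}

/* Jump to an absolute byte offset, the stdio analogue of f_lseek(&file, offset).
 * Returns 0 on success. */
static int seek_to(FILE *fp, long offset)
{
    return fseek(fp, offset, SEEK_SET);
}
```

    In the FatFs case the equivalents would simply be repeated f_read(&file, buf, 244, &bytes_read) calls, or f_lseek(&file, 220000) to continue from the 220001st byte after reopening the file.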

    Thanks & kind Regards,

    Sami

  • Hi Sami

    Are you able to share sniffer traces for the two cases, with the transfer happening in 1M and 2M modes respectively? 

    Also, which phone is showing this behavior?

    I actually have seen similar behavior on the Huawei P20 Pro, where the 2M mode showed significantly lower transfer speed. The reason for this was that when enabling 2M phy the phone was only able to send a single packet in each connection event, while in 1M mode it could send multiple packets. In this case it is clearly a bug on the phone side, where the 2M support is not properly implemented. 
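    As a rough model (assumed numbers, not measurements): throughput is approximately the number of packets per connection event times the payload size, divided by the connection interval. A phone that schedules only one packet per event therefore loses badly on 2M even though each packet is faster on air:

```c
#include <assert.h>
#include <stdint.h>

/* Back-of-envelope notification throughput in kbit/s: `packets_per_event`
 * full packets of `payload_bytes` are sent once per connection interval.
 * The packets-per-event count is decided by the phone's scheduler, so this
 * is a model for reasoning, not a measurement. */
static uint32_t est_throughput_kbps(uint32_t packets_per_event,
                                    uint32_t payload_bytes,
                                    uint32_t conn_interval_ms)
{
    return (packets_per_event * payload_bytes * 8) / conn_interval_ms;
}
```

    With 244-byte payloads, ten packets per 15 ms event gives ~1301 kbps, while a single packet per 15 ms event gives only ~130 kbps; the per-event packet count, not the PHY or the interval alone, dominates the result.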

    Best regards
    Torbjørn

  • Hi Torbjørn, thanks for the reply.

    Are you able to share sniffer traces for the two cases, with the transfer happening in 1M and 2M modes respectively? 

    Yes, sure. I am attaching four sniffer files:

        1) 1M PHY, conn. interval = 50
        2) 1M PHY, conn. interval = 10
        3) 2M PHY, conn. interval = 50
        4) 2M PHY, conn. interval = 10

    What I conclude from the four traces is:

    1) Increasing the conn. interval on 1M PHY leads to higher throughput, that is:

        Throughput at conn. interval 50 >> Throughput at conn. interval 10

    2) Increasing the conn. interval on 2M PHY leads to lower throughput, that is:

        Throughput at conn. interval 10 >> Throughput at conn. interval 50

    What is your comment on that? I am totally confused about what's going on.

    Also, which phone is showing this behavior?

    Samsung Galaxy A30, Android version 11.

    I actually have seen similar behavior on the Huawei P20 Pro, where the 2M mode showed significantly lower transfer speed. The reason for this was that when enabling 2M phy the phone was only able to send a single packet in each connection event, while in 1M mode it could send multiple packets. In this case it is clearly a bug on the phone side, where the 2M support is not properly implemented. 

    I don't think the problem is with the phone, because I have tested another project (the same project, modified by me, in which I read data from the SD card and send it over BLE directly without a FIFO or other temporary storage), and in that case 2M PHY achieved a decent throughput (2284066 bytes in about 23-24 seconds) with the same phone.

    I don't know why this project has a problem with 2M PHY. Have you tested it on your side? 

    If you want more information, just let me know, and thanks for your thorough explanation and help.

    Best Regards,

    Sami

    1M_PHY_Sniffer_trace_conn_interval_10.pcapng

    1M_PHY_conn_interval_50_sniffer_trace.pcapng

    2M_PHY_conn_interval_10_sniffer_trace.pcapng

    2M_PHY_conn_interval_50_sniffer_trace.pcapng

  • Hi Sami

    Thanks for sharing all the traces. 

    I only had limited time to work on your case today, so I prioritized fixing my example. 

    You were correct that there were some issues with it. I wasn't sending full length packets in all cases, and also discovered an issue with the code going to sleep even if you had more data to send. 

    I pushed an update to the repo which should fix these issues.

    Are you able to check out the updated example and see if it works better?

    I will have to come back to you on the traces and the 2M phy issues next week. 

    Best regards
    Torbjørn 

  • Hey Torbjørn, thanks.

    Are you able to check out the updated example and see if it works better?

    I am going to test it now and let you know about the outcomes. Thanks for updating the project.

    I will have to come back to you on the traces and the 2M phy issues next week. 

    Okay sir sure, thanks. 

    Best Regards,

    Sami

  • Hey Torbjørn Sir, I hope you are doing well.

    ** UPDATE

    Sir, I have updated the project with the changes you made, and I have now achieved a decent throughput of ~1063 kbps with a very good phone which is capable of achieving a throughput of >1300 kbps.

    Now I have some questions, but first, let me explain the different scenarios, and then I will come to my questions. 

    First Scenario: 

    I have another project which performs the same task (reading a file from the SD card and sending it over BLE), but the logic is different and simpler (it reads data from the SD card and sends it directly over BLE without temporary storage or the TX complete event); see the code snippet below.

    #define MIN_CONN_INTERVAL               MSEC_TO_UNITS(20, UNIT_1_25_MS)             /**< Minimum acceptable connection interval (20 ms), Connection interval uses 1.25 ms units. */
    #define MAX_CONN_INTERVAL               MSEC_TO_UNITS(75, UNIT_1_25_MS)             /**< Maximum acceptable connection interval (75 ms), Connection interval uses 1.25 ms units. */
    
    
    static uint8_t file_found_on_sdcard = false;
    static uint8_t file_buffer[FILE_SIZE_MAX];
    static uint32_t file_actual_read_size = 0;
    static uint8_t file_send_to_peripheral = false;
    
    uint32_t timestamp_test_start, timestamp_test_stop;

        // Enter main loop.
        for (;;)
        {
            if(file_send_to_peripheral)
            {
                timestamp_test_start = app_timer_cnt_get();

                if(size > 0)   // size of file
                {
                    uint32_t remaining_bytes = size;
                    uint16_t chunk_length    = BLE_NUS_MAX_DATA_LEN;
                    ret_code_t err_code;

                    while(remaining_bytes > 0)
                    {
                        ff_result = f_read(&file, file_buffer, sizeof(file_buffer), (UINT *) &bytes_read);
                        do
                        {
                            err_code = ble_nus_data_send(&m_nus, file_buffer, &chunk_length, m_conn_handle);

                            if ((err_code != NRF_ERROR_INVALID_STATE) &&
                                (err_code != NRF_ERROR_RESOURCES) &&
                                (err_code != NRF_ERROR_NOT_FOUND))
                            {
                                APP_ERROR_CHECK(err_code);
                                remaining_bytes   -= chunk_length;
                                m_total_num_bytes += chunk_length;
                                if(remaining_bytes < chunk_length)
                                {
                                    chunk_length = remaining_bytes;
                                }
                                //NRF_LOG_INFO("Remaining bytes to send: %d", remaining_bytes);
                            }
                        } while(err_code == NRF_ERROR_RESOURCES);
                    }
                }

                file_send_to_peripheral = false;
                timestamp_test_stop = app_timer_cnt_get();
                uint32_t diff = app_timer_cnt_diff_compute(timestamp_test_stop, timestamp_test_start);
                uint32_t kbps = (uint32_t)((float)(size * 8) / ((float)diff / 16384.0f) / 1024.0f);
                NRF_LOG_INFO("Test complete! Time passed: %i, ~%i kbps", diff, kbps);
            }
            //idle_state_handle();
            while(NRF_LOG_PROCESS());
        }
    }
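    One bookkeeping detail worth double-checking in a loop like the one above is that each ble_nus_data_send() call should send from an advancing offset into file_buffer rather than always from the start. A minimal sketch of that offset arithmetic (the helper name is made up for illustration, not project code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Compute the next chunk to send from a buffer of `total` bytes, given how
 * many bytes have been sent so far.  Writes the offset of the chunk's first
 * byte to *offset and returns the chunk length (at most `max_chunk`). */
static uint16_t next_chunk(size_t total, size_t sent, size_t max_chunk, size_t *offset)
{
    size_t remaining = total - sent;
    *offset = sent;
    return (uint16_t)(remaining < max_chunk ? remaining : max_chunk);
}
```

    Inside the send loop this would look like: len = next_chunk(size, sent, BLE_NUS_MAX_DATA_LEN, &off); then ble_nus_data_send(&m_nus, file_buffer + off, &len, m_conn_handle); and sent += len on success.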
    

     

    The throughput achieved by this project is the following.

    The Sniffer trace is the following: 

    MyProject_Sniffer_trace.pcapng

    Questions Related to this Scenario:

    1) If you look at the throughput in the attached screenshot, you will see that it sometimes goes above 1200 kbps and even above 1300 kbps, but it is not stable: it varies, sometimes very low and sometimes very high. I cannot find the reason for this behavior. Can you please explain it?

    2) The throughput in this case increases when I increase the connection interval, and vice versa. I expected higher throughput at small connection intervals, but it is just the opposite. What do you think about this?

    Second Scenario:

    I have made some necessary changes to the project you provided; see the code snippet below:

    #define MIN_CONN_INTERVAL               MSEC_TO_UNITS(15, UNIT_1_25_MS)             /**< Minimum acceptable connection interval (15 ms), Connection interval uses 1.25 ms units. */
    #define MAX_CONN_INTERVAL               MSEC_TO_UNITS(15, UNIT_1_25_MS)             /**< Maximum acceptable connection interval (15 ms), Connection interval uses 1.25 ms units. */
    
    
    // Create a FIFO structure
    app_fifo_t dummy_fifo;
    
    uint32_t dummy_fifo_bytes_used;
    
    // Create a buffer for the FIFO
    #define DUMMY_FIFO_SIZE 512
    uint8_t dummy_buffer[DUMMY_FIFO_SIZE];
    
    static int dummy_buffer_write(uint8_t *data, uint32_t len)
    {
        app_fifo_write(&dummy_fifo, data, &len);
        dummy_fifo_bytes_used += len;
        return len;
    }
    
    static int dummy_buffer_bytes_free(void)
    {
        return DUMMY_FIFO_SIZE - dummy_fifo_bytes_used;
    }
    
    static int dummy_buffer_read(uint8_t *data, uint32_t len)
    {
        app_fifo_read(&dummy_fifo, data, &len);
        dummy_fifo_bytes_used -= len;
        return len;
    }
    
    static int dummy_buffer_bytes_used(void)
    {
        return dummy_fifo_bytes_used;
    }
    
    static void init_dummy_buffer(void)
    {
        // Initialize FIFO structure
        uint32_t err_code = app_fifo_init(&dummy_fifo, dummy_buffer, DUMMY_FIFO_SIZE);
        APP_ERROR_CHECK(err_code);
    
        dummy_fifo_bytes_used = 0;
    }
    
    
    #define TEST_DUMP_SIZE  size                       // Size of the file in the SD card
    #define TEST_READ_SIZE  BLE_NUS_MAX_DATA_LEN       // BLE_NUS_MAX_DATA_LEN  =244
    #define TEST_WRITE_SIZE BLE_NUS_MAX_DATA_LEN       //m_ble_nus_max_data_len
    uint32_t dummy_test_remaining_bytes_read = 0;
    uint32_t dummy_test_remaining_bytes_write = 0;
    
    /**@brief Application main function.
     */
    int main(void)
    {
        bool erase_bonds;
        uint32_t err_code;
    
        // Initialize.
        uart_init();
        log_init();
        timers_init();
        buttons_leds_init(&erase_bonds);
        power_management_init();
         fatfs_init();
        ble_stack_init();
        gap_params_init();
        gatt_init();
        services_init();
        advertising_init();
        conn_params_init();
    
        init_dummy_buffer();
    
        // Start execution.
        //printf("\r\NRF_ERROR_RESOURCES test started.\r\n");
        NRF_LOG_INFO("Debug logging for UART over RTT started.");
        advertising_start();
         timers_start();
    
        static uint8_t tmp_read_buffer[TEST_READ_SIZE];
        static uint8_t tmp_write_buffer[BLE_NUS_MAX_DATA_LEN];
        uint16_t write_length;
        int bt_written;
    
    
         uint32_t timestamp_test_start, timestamp_test_stop;
        // Enter main loop.
        for (;;)
        {
            // If the run_dummy_test flag is set, start a new data dump test
            if(run_dummy_test) 
            {
                run_dummy_test = false;
                bt_written = 0;
                dummy_test_remaining_bytes_read = TEST_DUMP_SIZE;
                timestamp_test_start = app_timer_cnt_get();
            }
    
            // As long as there are bytes left to read, and there is room in the dummy buffer, read out a single packet
            if(dummy_test_remaining_bytes_read && dummy_buffer_bytes_free() >= TEST_READ_SIZE)
            {
                // Set the read_length to the minimum of the remaining bytes to read and the test read size
                uint32_t read_length = (dummy_test_remaining_bytes_read > TEST_READ_SIZE) ? TEST_READ_SIZE : dummy_test_remaining_bytes_read;
    
                // Read SD card data into a temporary array
                ff_result = f_read(&file, tmp_read_buffer, sizeof(tmp_read_buffer), (UINT *) &bytes_read);
    
                // Move the SD card data from the temporary array into our FIFO buffer
                dummy_buffer_write(tmp_read_buffer, read_length);
    
                // Reduce the remaining number of bytes to read for the test
                dummy_test_remaining_bytes_read -= read_length;
    
                NRF_LOG_DEBUG("Dummy data produced %i, remaining %i", read_length, dummy_test_remaining_bytes_read);
            }
    
            // As long as there is data in the dummy buffer, and the Bluetooth stack has free buffers, upload a packet to the Bluetooth stack
            // If retransmit_previous_buffer is set it means the packet stored in tmp_write_buffer still hasn't been successfully
            // sent (because of NRF_ERROR_RESOURCES). In this case we try to send it again, without reading new data from the FIFO
           if(ble_buffers_available && (dummy_buffer_bytes_used() >= TEST_WRITE_SIZE || retransmit_previous_buffer || \
                                        (dummy_buffer_bytes_used() > 0 && dummy_test_remaining_bytes_read == 0)))
            {
                //bsp_board_led_off(0);
    
                // Read new data from the FIFO unless retransmit_previous_buffer is set, in which case we have to retransmit the last packet
                if(!retransmit_previous_buffer)
                {
                    // Move a new packet from the FIFO to the temporary write buffer
                    write_length = dummy_buffer_read(tmp_write_buffer, TEST_WRITE_SIZE);
    
                    NRF_LOG_DEBUG(" Fifo to tmp %i bytes", write_length);
                } 
    
                // Forward a single packet to the Bluetooth stack
                // NOTE: This code assumes that write_length will not be changed by ble_nus_data_send(..) (this could happen if you try to send a packet
                //       longer than the negotiated MTU size). If this could happen then the code needs to be changed to ensure that the remaining data gets
                //       written afterwards. 
                err_code = ble_nus_data_send(&m_nus, tmp_write_buffer, &write_length, m_conn_handle);
                if(err_code == NRF_SUCCESS)
                {
                    // In case of success, update the bt_written parameter (please note the data will still take some time to reach the Bluetooth client)
                    bt_written += write_length;
                    m_total_num_bytes+= write_length;
                    //NRF_LOG_INFO("   Tmp to BT total: %i (+%i)", bt_written, write_length);
                }
                else if(err_code == NRF_ERROR_RESOURCES) 
                {
                    // In case of NRF_ERROR_RESOURCES, clear the ble_buffers_available flag to avoid trying to send more data until the Bluetooth buffers
                    // clear up. This flag will be set by the BLE_GATTS_EVT_HVN_TX_COMPLETE event in the ble_evt_handler function in main.c
                    ble_buffers_available = false;
    
                    //bsp_board_led_on(0);
    
                    //NRF_LOG_INFO("     ERROR RESOURCES");
                }
                else if(err_code != NRF_ERROR_INVALID_STATE && err_code != NRF_ERROR_NOT_FOUND)
                {
                    APP_ERROR_CHECK(err_code);
                }
                
                // The retransmit_previous_buffer should be cleared for any upload. 
                // If NRF_ERROR_RESOURCES were to happen again it should be set again in the ble_evt_handler
                retransmit_previous_buffer = false;
    
                if(bt_written == size) 
                {
                     timestamp_test_stop = app_timer_cnt_get();
                    uint32_t diff = app_timer_cnt_diff_compute(timestamp_test_stop, timestamp_test_start);
                    uint32_t kbps = (uint32_t)((float)(TEST_DUMP_SIZE * 8) / ((float)diff / 16384.0f) / 1024.0f);
                    // Any other code that should be added upon completion can be added here. 
                    NRF_LOG_INFO("Test complete! Time passed: %i, ~%i kbps", diff, kbps);
                }
            }
            while(NRF_LOG_PROCESS());
            //idle_state_handle();
        }
    }

    The throughput achieved by this project is the following:

     

    The Sniffer trace is the following: 

    Torbjørn_Provided_Project_Sniffer_trace.pcapng

    The throughput is very stable in this case as you can see in the attached screenshot.

    Questions: 

    1) Why does the throughput not reach ~1200-1300 kbps in this case as it does in the first scenario? I want the throughput to be ~1200 kbps or greater; how can this be achieved?

    2) If you look at the above sniffer trace, there are a lot of empty packets, which I think limit the throughput to ~1000-1060 kbps. How can these empty packets be reduced or eliminated?

    3) The throughput in this case is at its maximum at a connection interval of 15 (min = max = 15). When I increase the connection interval the throughput decreases, and when I decrease it there are more empty packets, which also decreases the throughput. Can you please explain this behavior?

    Best Regards,

    Sami

     

Reply
  • Hey Torbjørn Sir, I hope you will be fine.

    ** UPDATE

    Sir, I have updated the project with the changes you have made, Now I have achieved a decent throughout of ~1063kbps with a very good phone which is capable of achieving a throughput of >1300kbps.

    Now I have some questions, but first, let me explain the different scenarios, and then I will come to my questions. 

    First Scenario: 

    I have another project which is doing the same task (Reading a file from an SD card and sending it to ble) but the logic is different and simple( Reading data from an SD card and send directly to Ble without the use of temporary storage or TX complete event), see the below code snippet.

    #define MIN_CONN_INTERVAL               MSEC_TO_UNITS(20, UNIT_1_25_MS)             /**< Minimum acceptable connection interval (20 ms), Connection interval uses 1.25 ms units. */
    #define MAX_CONN_INTERVAL               MSEC_TO_UNITS(75, UNIT_1_25_MS)             /**< Maximum acceptable connection interval (75 ms), Connection interval uses 1.25 ms units. */
    
    
    static uint8_t file_found_on_sdcard = false;
    static uint8_t file_buffer[FILE_SIZE_MAX];
    static uint32_t file_actual_read_size = 0;
    static uint8_t file_send_to_peripheral = false;
    
    uint32_t timestamp_test_start, timestamp_test_stop;
        // Enter main loop.
        for (;;)
        {
            if(file_send_to_peripheral)
            {
             timestamp_test_start = app_timer_cnt_get();
                if(size > 0)   // size of file
                {
                uint32_t remaining_bytes = size;  
                uint16_t chunk_length = BLE_NUS_MAX_DATA_LEN;
                ret_code_t err_code;
    
                while(remaining_bytes > 0)
                  {
    
                   ff_result = f_read(&file, file_buffer, sizeof(file_buffer), (UINT *) &bytes_read);
                    do
                     {
                     err_code= ble_nus_data_send(&m_nus, file_buffer, &chunk_length, m_conn_handle);
    
                    if ((err_code != NRF_ERROR_INVALID_STATE) &&
                        (err_code != NRF_ERROR_RESOURCES) &&
                        (err_code != NRF_ERROR_NOT_FOUND))
                      {
                          APP_ERROR_CHECK(err_code);
                          remaining_bytes -= chunk_length;
                           m_total_num_bytes += chunk_length;
                       if(remaining_bytes < chunk_length)
                          {
                          chunk_length = remaining_bytes;
                          }
                        //NRF_LOG_INFO("Remaining bytes to send: %d", remaining_bytes);
    
                      }
                     }while((err_code == NRF_ERROR_RESOURCES));
                  }
                }
    
                file_send_to_peripheral = false;
                timestamp_test_stop = app_timer_cnt_get();
                uint32_t diff = app_timer_cnt_diff_compute(timestamp_test_stop, timestamp_test_start);
                uint32_t kbps = (uint32_t)((float)(size * 8) / ((float)diff / 16384.0f) / 1024.0f);
                NRF_LOG_INFO("Test complete! Time passed: %i, ~%i kbps", diff, kbps);
            }
            //idle_state_handle();
            while(NRF_LOG_PROCESS());
        }
    }
    

     

    The throughput achieved by this project is the following.

    The Sniffer trace is the following: 

    MyProject_Sniffer_trace.pcapng

    Questions Related to this Scenario:

    1) If you see the throughput in the above-attached screenshot you will see that sometime the throughput goes to >1200kbps and even >1300kpbs, but the throughput is not stable means the throughput is varying and sometimes goes very low, and sometimes very high. I don't find the reason for this behavior, Can you please explain this? 

    2) The throughput in this case is increasing when I increase the connection intervals and vice versa. I expect to have a high throughput on small connection intervals but it is just the opposite of my expectation. What do you think about this? 

    2) Second Scenario:

    I have made some necessary changes to the project you have provided, see the below code snippet:

    #define MIN_CONN_INTERVAL               MSEC_TO_UNITS(15, UNIT_1_25_MS)             /**< Minimum acceptable connection interval (20 ms), Connection interval uses 1.25 ms units. */
    #define MAX_CONN_INTERVAL               MSEC_TO_UNITS(15, UNIT_1_25_MS)             /**< Maximum acceptable connection interval (75 ms), Connection interval uses 1.25 ms units. */
    
    
    // Create a FIFO structure
    app_fifo_t dummy_fifo;
    
    uint32_t dummy_fifo_bytes_used;
    
    // Create a buffer for the FIFO
    #define DUMMY_FIFO_SIZE 512
    uint8_t dummy_buffer[DUMMY_FIFO_SIZE];
    
    static int dummy_buffer_write(uint8_t *data, uint32_t len)
    {
        app_fifo_write(&dummy_fifo, data, &len);
        dummy_fifo_bytes_used += len;
        return len;
    }
    
    static int dummy_buffer_bytes_free(void)
    {
        return DUMMY_FIFO_SIZE - dummy_fifo_bytes_used;
    }
    
    static int dummy_buffer_read(uint8_t *data, uint32_t len)
    {
        app_fifo_read(&dummy_fifo, data, &len);
        dummy_fifo_bytes_used -= len;
        return len;
    }
    
    static int dummy_buffer_bytes_used(void)
    {
        return dummy_fifo_bytes_used;
    }
    
    static void init_dummy_buffer(void)
    {
        // Initialize FIFO structure
        uint32_t err_code = app_fifo_init(&dummy_fifo, dummy_buffer, DUMMY_FIFO_SIZE);
        APP_ERROR_CHECK(err_code);
    
        dummy_fifo_bytes_used = 0;
    }
    
    
    #define TEST_DUMP_SIZE  size                       // Size of the file in the SD card
    #define TEST_READ_SIZE  BLE_NUS_MAX_DATA_LEN       // BLE_NUS_MAX_DATA_LEN  =244
    #define TEST_WRITE_SIZE BLE_NUS_MAX_DATA_LEN       //m_ble_nus_max_data_len
    uint32_t dummy_test_remaining_bytes_read = 0;
    uint32_t dummy_test_remaining_bytes_write = 0;
    
    /**@brief Application main function.
     */
    int main(void)
    {
        bool erase_bonds;
        uint32_t err_code;
    
        // Initialize.
        uart_init();
        log_init();
        timers_init();
        buttons_leds_init(&erase_bonds);
        power_management_init();
         fatfs_init();
        ble_stack_init();
        gap_params_init();
        gatt_init();
        services_init();
        advertising_init();
        conn_params_init();
    
        init_dummy_buffer();
    
        // Start execution.
        //printf("\r\NRF_ERROR_RESOURCES test started.\r\n");
        NRF_LOG_INFO("Debug logging for UART over RTT started.");
        advertising_start();
         timers_start();
    
        static uint8_t tmp_read_buffer[TEST_READ_SIZE];
        static uint8_t tmp_write_buffer[BLE_NUS_MAX_DATA_LEN];
        uint16_t write_length;
        int bt_written;
    
    
         uint32_t timestamp_test_start, timestamp_test_stop;
        // Enter main loop.
        for (;;)
        {
            // If the run_dummy_test flag is set, start a new data dump test
            if(run_dummy_test) 
            {
                run_dummy_test = false;
                bt_written = 0;
                dummy_test_remaining_bytes_read = TEST_DUMP_SIZE;
                timestamp_test_start = app_timer_cnt_get();
            }
    
            // As long as there are bytes left to read, and there is room in the dummy buffer, read out a single packet
            if(dummy_test_remaining_bytes_read && dummy_buffer_bytes_free() >= TEST_READ_SIZE)
            {
                // Set the read_length to the minimum of the remaining bytes to read and the test read size
                uint32_t read_length = (dummy_test_remaining_bytes_read > TEST_READ_SIZE) ? TEST_READ_SIZE : dummy_test_remaining_bytes_read;
    
                // Read SD card data into a temporary array
                ff_result = f_read(&file, tmp_read_buffer, sizeof(tmp_read_buffer), (UINT *) &bytes_read);
    
                // Move the SD card data from the temporary array into our FIFO buffer
                dummy_buffer_write(tmp_read_buffer, read_length);
    
                // Reduce the remaining number of bytes to read for the test
                dummy_test_remaining_bytes_read -= read_length;
    
                NRF_LOG_DEBUG("Dummy data produced %i, remaining %i", read_length, dummy_test_remaining_bytes_read);
            }
    
            // As long as there is data in the dummy buffer, and the Bluetooth stack has free buffers, upload a packet to the Bluetooth stack
            // If retransmit_previous_buffer is set it means the packet stored in tmp_write_buffer still hasn't been successfully
            // sent (because of NRF_ERROR_RESOURCES). In this case we try to send it again, without reading new data from the FIFO
           if(ble_buffers_available && (dummy_buffer_bytes_used() >= TEST_WRITE_SIZE || retransmit_previous_buffer || \
                                        (dummy_buffer_bytes_used() > 0 && dummy_test_remaining_bytes_read == 0)))
            {
                //bsp_board_led_off(0);
    
                // Read new data from the FIFO unless retransmit_previous_buffer is set, in which case we have to retransmit the last packet
                if(!retransmit_previous_buffer)
                {
                    // Move a new packet from the FIFO to the temporary write buffer
                    write_length = dummy_buffer_read(tmp_write_buffer, TEST_WRITE_SIZE);
    
                    NRF_LOG_DEBUG(" Fifo to tmp %i bytes", write_length);
                } 
    
                // Forward a single packet to the Bluetooth stack
                // NOTE: This code assumes that write_length will not be changed by ble_nus_data_send(..) (this could happen if you try to send a packet
                //       longer than the negotiated MTU size). If this could happen then the code needs to be changed to ensure that the remaining data gets
                //       written afterwards. 
                err_code = ble_nus_data_send(&m_nus, tmp_write_buffer, &write_length, m_conn_handle);
                if(err_code == NRF_SUCCESS)
                {
                    // In case of success, update the bt_written parameter (please note the data will still take some time to reach the Bluetooth client)
                    bt_written += write_length;
                    m_total_num_bytes+= write_length;
                    //NRF_LOG_INFO("   Tmp to BT total: %i (+%i)", bt_written, write_length);
                }
                else if(err_code == NRF_ERROR_RESOURCES) 
                {
                    // In case of NRF_ERROR_RESOURCES, clear the ble_buffers_available flag to avoid trying to send more data until the Bluetooth buffers
                    // clear up. This flag will be set by the BLE_GATTS_EVT_HVN_TX_COMPLETE event in the ble_evt_handler function in main.c
                    ble_buffers_available = false;
    
                    //bsp_board_led_on(0);
    
                    //NRF_LOG_INFO("     ERROR RESOURCES");
                }
                else if(err_code != NRF_ERROR_INVALID_STATE && err_code != NRF_ERROR_NOT_FOUND)
                {
                    APP_ERROR_CHECK(err_code);
                }
                
                // The retransmit_previous_buffer should be cleared for any upload. 
                // If NRF_ERROR_RESOURCES were to happen again it should be set again in the ble_evt_handler
                retransmit_previous_buffer = false;
    
                if(bt_written == size) 
                {
                    timestamp_test_stop = app_timer_cnt_get();
                    uint32_t diff = app_timer_cnt_diff_compute(timestamp_test_stop, timestamp_test_start);
                    uint32_t kbps = (uint32_t)((float)(TEST_DUMP_SIZE * 8) / ((float)diff / 16384.0f) / 1024.0f);
                    // Any other code that should be added upon completion can be added here. 
                    NRF_LOG_INFO("Test complete! Time passed: %i, ~%i kbps", diff, kbps);
                }
            }
            while(NRF_LOG_PROCESS());
            //idle_state_handle();
        }
    }
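    The loop above drains a FIFO in TEST_WRITE_SIZE pieces; the SD-card side has to feed that FIFO the same way, since a 234MB file cannot be buffered in the nRF52840's 256kB of RAM. A minimal sketch of the chunked-read pattern, using the standard C fread() as a stand-in for the FatFs f_read() call (the 4096-byte chunk size and the dummy_buffer_write() hand-off are assumptions, not values from the project):

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Assumed chunk size: small enough to leave RAM for the SoftDevice and stack. */
#define SD_CHUNK_SIZE 4096

/* Read an already-opened file in SD_CHUNK_SIZE pieces instead of all at once.
 * On the nRF52840, f_read(&file, chunk, SD_CHUNK_SIZE, &bytes_read) from FatFs
 * plays the role of fread() here. Returns the total number of bytes read and,
 * optionally, the number of chunks. */
size_t read_in_chunks(FILE *file, size_t *chunks_out)
{
    static uint8_t chunk[SD_CHUNK_SIZE];
    size_t total = 0, chunks = 0, bytes_read;

    while ((bytes_read = fread(chunk, 1, SD_CHUNK_SIZE, file)) > 0)
    {
        /* Here each chunk would be pushed into the FIFO that feeds
         * ble_nus_data_send(), e.g. dummy_buffer_write(chunk, bytes_read),
         * pausing whenever the FIFO is full. */
        total += bytes_read;
        chunks++;
    }
    if (chunks_out != NULL)
    {
        *chunks_out = chunks;
    }
    return total;
}
```

    Only SD_CHUNK_SIZE bytes of buffer are ever needed, regardless of the file size.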

    The throughput achieved by this project is shown in the attached screenshot.

    The Sniffer trace is the following: 

    Torbjørn_Provided_Project_Sniffer_trace.pcapng

    The throughput is very stable in this case as you can see in the attached screenshot.

    Questions: 

    1) Why doesn't the throughput reach ~1200kbps or 1300kbps in this case, as it does in the 1st scenario? I want the throughput to be ~1200kbps or greater. How can this be achieved?

    2) If you look at the above sniffer trace, there are a lot of empty packets, which I think limit the throughput to ~1000-1060kbps. How can these empty packets be reduced or eliminated?

    3) The throughput in this case is at its maximum at a connection interval of 15ms (Min & Max = 15). When I increase the connection interval the throughput decreases, and when I decrease it I get more empty packets, which also decreases the throughput. Can you please explain this behavior?

    Best Regards,

    Sami

     

Children
  • Hi Sami

    Samiulhaq said:
    1) If you look at the throughput in the above-attached screenshot you will see that it sometimes goes above 1200kbps and even above 1300kbps, but it is not stable: it varies, sometimes going very low and sometimes very high. I cannot find the reason for this behavior. Can you please explain it?

    This makes sense. 

    Looking at your traces I see that your code ends up with a connection interval of 45ms, while my code gets an interval of 15ms. 

    A long connection interval will give you a higher peak throughput, even though the SoftDevice cannot send packets for the entirety of the connection event:
    there is a gap at the end of every event equal to the full time it takes to send and receive a single full packet, plus some time needed for data processing, and a longer interval makes this fixed gap a smaller fraction of the event.
    With 251 byte data length and 2M PHY this gap seems to be around 2.4ms long, based on the traces you shared. 

    The percentage of time you can send data is then given by the following formula: (CI - gap) / CI

    For a connection interval of 15ms this equals around 84%, or 1140kbps

    For a connection interval of 45ms we get around 95%, or 1285kbps
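    Plugging in numbers, the formula above can be sketched in code (the ~1360kbps raw rate is an assumption, back-calculated from the 84%/1140kbps and 95%/1285kbps figures; it is not a documented SoftDevice constant):

```c
/* Assumed sustained on-air rate with 2M PHY and 251-byte data length, with no
 * gaps (back-calculated from the figures above, roughly 1360 kbps). */
#define RAW_KBPS 1360.0
#define GAP_MS   2.4    /* SoftDevice dead time at the end of each event */

/* Effective throughput for a given connection interval: RAW * (CI - gap) / CI */
double effective_kbps(double ci_ms)
{
    return RAW_KBPS * (ci_ms - GAP_MS) / ci_ms;
}
```

    effective_kbps(15.0) gives about 1142 and effective_kbps(45.0) about 1288, matching the two estimates above.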

    Based on this you would think that using a longer connection interval is then better for throughput, but this is unfortunately not the case, since a single lost packet in a connection event cancels the entire event. 
    If you are unlucky and drop a packet early in the event you lose all the time left over in the event, which is higher the larger your connection interval is. 

    This is the reason why longer connection intervals give you a lot more variation in the throughput: you are a lot more susceptible to packet loss caused by interference or other factors. 

    So to summarize, in ideal conditions a higher connection interval will give you higher throughput, but if you are affected by packet loss the throughput can drop significantly. Just like your traces show, where 15ms gives you a lower peak but much more stable performance. 

    It seems like you have a significant amount of packet loss in your test setup. What is the distance between the nRF device and the phone in your testing?

    Samiulhaq said:
    1) Why doesn't the throughput reach ~1200kbps or 1300kbps in this case, as it does in the 1st scenario? I want the throughput to be ~1200kbps or greater. How can this be achieved?

    I guess I explained the reason for this in my previous comment. The peak throughput when using a 15ms connection interval is about 1140kbps. 

    I will check with one of the SoftDevice experts if there is some tweaking that can be done to increase this a little, but I wouldn't bet on it. 

    Anyway, how much difference will it make for the user if you can squeeze out a couple of extra percent of throughput?
    Personally, I would be more concerned about phones that have significantly lower throughput than this, because they have their own more stringent restrictions on how often they can send data. 

    Samiulhaq said:
    2) If you look at the above sniffer trace, there are a lot of empty packets, which I think limit the throughput to ~1000-1060kbps. How can these empty packets be reduced or eliminated?

    Where do you see these empty packets? 

    From the point the peripheral starts sending notifications, at packet number 1936, to the moment it stops, at packet number 21353, I don't see a lot of empty packets (the central keeps sending empty packets, but this is to be expected since the phone doesn't send anything to you). 

    Samiulhaq said:
    3) The throughput in this case is at its maximum at a connection interval of 15ms (Min & Max = 15), and when I try to increase or decrease the connection interval the throughput decreases. Why?

    A connection interval of 15ms seems to be the sweet spot in your particular test environment. Any lower than this and the 2.4ms gap starts to make a bigger impact on the throughput, and any higher and the effect of dropped packets leading to long gaps of no activity starts making a bigger impact. 

    As I mentioned earlier I have often considered 15ms to be a good compromise between reliability and peak throughput, so I am not surprised by your results. I am only a little surprised about the amount of packet loss you get, unless you are keeping the phone and nRF device far apart, or you have a significant amount of interference in the area (from Wi-Fi routers for instance). 

    As a side note, if you find your simplified algorithm to work equally well as my example I see no reason not to go for that implementation instead. 

    As long as you are fine to stay in that loop while sending data, and don't need to run any other operation from your main loop, then it should work fine. 

    I think all the differences you have measured between my implementation and yours is just down to the connection interval used. 

    Best regards
    Torbjørn

  • Hi Torbjørn, thank you so much for this amazing explanation. 

    I understand the logic behind it because of your thorough and well explanation. However, I have some questions related to your explanation and some answers to your questions: 

    With 251 byte data length and 2M PHY this gap seems to be around 2.4ms long, based on the traces you shared. 

    How do you find this gap time(2.4ms) from the trace? 

    For a connection interval of 15ms this equals around 84%, or 1140kbps

    For a connection interval of 45ms we get around 95%, or 1285kbps

    As you said, for a connection interval of 15ms the max throughput should be 1140kbps, but I hardly reach 1090-1098kbps in this case. Why does it not reach 1140kbps?

    And in the 45ms case the max throughput is 1285kbps, but sometimes it goes up to 1350kbps. How can it go beyond the max throughput?


    It seems like you have a significant amount of packet loss in your test setup. What is the distance between the nRF device and the phone in your testing?

    Both (phone and nRF) are close to each other (on the same working table/desk).

    Anyway, how much difference will it make for the user if you can squeeze out a couple of extra percent of throughput?

    The file size is very large (about 234MB), which is why I want to maximize the throughput as much as possible, to reduce the time it takes to send the file to the mobile app. 

    Where do you see these empty packets? 

    From the point the peripheral starts sending notifications, at packet number 1936, to the moment it stops, at packet number 21353, I don't see a lot of empty packets (the central keeps sending empty packets, but this is to be expected since the phone doesn't send anything to you). 

    If you observe both traces, one of them (Torbjorn_Provided_Project_Sniffer_trace) has more empty packets sent by the central (mobile app) to the peripheral (nRF). I know that one empty packet sent by the central is part of the Bluetooth protocol, but in this trace there are two empty packets sent (not regularly, but from time to time) by the central to the peripheral, which happens rarely or not at all in the other trace (My_project_Sniffer_trace). See the attached screenshot:

    In the trace I mentioned there are many more of these patterns than in the other trace. What do you think is the reason for this behavior, and why are these patterns less frequent in the other trace? I think this is due to the small conn. interval (15ms), am I right? 

    I am only a little surprised about the amount of packet loss you get, unless you are keeping the phone and nRF device far apart, or you have a significant amount of interference in the area (from Wi-Fi routers for instance). 

    In which project is this packet loss? If it is in my project then I am not surprised, because I did not add the logic for the BLE_GATTS_EVT_HVN_TX_COMPLETE case and I did not handle NRF_ERROR_RESOURCES (please have a look at my logic in the previous reply). My logic is very simple; could that be the reason for the packet drops? I have turned off Wi-Fi on the test phone, and the phone and the nRF are on the same table. 

    Thank you so much and Best Regards,

    Sami

  • Hi Sami

    Samiulhaq said:
    How do you find this gap time(2.4ms) from the trace? 

    Whenever you see two empty packets from the master, where one has a short time delta and the other a longer one, this means that the first packet from the master was not answered by the slave (because of the gap imposed by the SoftDevice), and becomes the last packet of the current connection event. 

    Since the connection event is over the master will then wait for the next connection event before sending a new packet, and the delay between the first empty packet and the second will then signify the 'dead time' at the end of the connection event. 

    Sometimes this delay will be very large because of packet loss, but I noticed that many times the dead time was around 2446us. I never noticed the dead time being smaller than this, and for this reason I am assuming that the SoftDevice gap is around this number. 

    Samiulhaq said:
    As you said, for a connection interval of 15ms the max throughput should be 1140kbps, but I hardly reach 1090-1098kbps in this case. Why does it not reach 1140kbps? 

    That is a good question. According to the sniffer trace there is a lot of packet loss over the air, which causes reduced throughput both when using your code and mine. Why you would have so much packet loss when keeping the devices close together I don't know. 
    Are you using custom hardware on the nRF side or a standard DK?
    I would have to assume that the hardware design of the nRF device, the phone or both is not ideal. 
    Also, if you are testing in an office environment you might want to test again at home, to see if the issues are caused by local interference. 

    Samiulhaq said:
    And in the 45ms case the max throughput is 1285kbps, but sometimes it goes up to 1350kbps. How can it go beyond the max throughput? 

    Either my math is wrong or the measurement is wrong ;)

    If the timer's clock source is the external crystal (rather than the internal RC oscillator) then the measurement should be pretty accurate. I will think through the math one more time later on, to see if I have made a miscalculation somewhere. 

    Samiulhaq said:
    The file size is very large (about 234MB), which is why I want to maximize the throughput as much as possible, to reduce the time it takes to send the file to the mobile app. 

    There is an old engineering saying: "Premature optimization is the root of all evil". This article summarizes the idea pretty well. 

    If the file is so large it will take time to transfer no matter what. If people get tired waiting for 30 minutes, they will get tired waiting for 25 minutes as well. 

    What about people using iPhones, which never go much beyond 600-700kbps? Or various Android phones that might be all over the place?

    Rather than spending a lot of time reducing the transfer time by a couple of percent on the fastest phones, you might be better off designing the app so that it explains to the user why they need to wait so long, and gives good progress reports along the way. 

    Or have you looked into ways of compressing or optimizing the file, which would benefit all the users regardless of which phone they have?

    To summarize, unless your project is otherwise ready and this is the only remaining optimization you want to do before release, your time might be spent more efficiently elsewhere. 

    Samiulhaq said:
    In the trace I mentioned there are many more of these patterns than in the other trace. What do you think is the reason for this behavior, and why are these patterns less frequent in the other trace? I think this is due to the small conn. interval (15ms), am I right? 

    Yes, this happens when the nRF stops responding because of the SoftDevice imposed gap. With a long connection interval this is much less likely to happen, because there is a much higher risk the connection event will be ended by a lost packet before reaching the end. Keep in mind that just a single dropped packet is enough to end the connection event. 

    Samiulhaq said:
    In which project is this packet loss? If it is in my project then I am not surprised, because I did not add the logic for the BLE_GATTS_EVT_HVN_TX_COMPLETE case and I did not handle NRF_ERROR_RESOURCES (please have a look at my logic in the previous reply). My logic is very simple; could that be the reason for the packet drops?

    I know nothing about application data loss, by packet loss I mean RF packets lost over the air. By looking at the traces it seems that the reason you get so much variation in throughput is that you have a considerable amount of lost packets between the nRF and the phone. 

    Why this is I don't know, I discussed this in more detail earlier in this reply. 

    Best regards
    Torbjørn

  • Hello Sir, this is such a fantastic explanation. Thank you so much for your valuable time.

    Whenever you see two empty packets from the master, where one has a short time delta and the other a longer one, this means that the first packet from the master was not answered by the slave (because of the gap imposed by the SoftDevice), and becomes the last packet of the current connection event. 

    Since the connection event is over the master will then wait for the next connection event before sending a new packet, and the delay between the first empty packet and the second will then signify the 'dead time' at the end of the connection event. 

    Sometimes this delay will be very large because of packet loss, but I noticed that many times the dead time was around 2446us. I never noticed the dead time being smaller than this, and for this reason I am assuming that the SoftDevice gap is around this number. 

    From this explanation I now also understand the number of packets per connection interval, something I had read a lot of threads/blogs about. Thanks a lot. 

    That is a good question. According to the sniffer trace there is a lot of packet loss over the air, which causes reduced throughput both when using your code and mine. Why you would have so much packet loss when keeping the devices close together I don't know. 

    I don't know what you mean by losing packets, but when I compare the data (the UART Rx field in Wireshark) of the last packet with the last data in the actual file, both values are the same. 

    for example, the last data in the file is the following:

    7525,32599,34603,4338,37739,13272,36651,21077,34948,36696,22470,17848,43432,3158,19187,37859,32649,20220111121250

    In the Wireshark/sniffer trace, the last packet has the same data as the actual file. From this I conclude that all the data has been sent/received correctly and there is no packet loss. Is this the correct way of checking it? :)

    Are you using custom hardware on the nRF side or a standard DK?

    I am using standard DK(nRF52840).

    Or have you looked into ways of compressing or optimizing the file, which would benefit all the users regardless of which phone they have?

    This is a very good point, but is it possible to compress the file using the nRF DK? I have to store the data from the different sensors (connected to the nRF52) on the SD card in the form of a CSV file, and from there send that file automatically to the mobile app every 60 hours.

    I have reduced the conn. interval to 20ms in my project and can now achieve an average throughput of about 1200kbps while sending a 22MB file. The throughput no longer varies too much, and I am OK with that.

    One last question and sorry for taking your valuable time.

    How can I check whether the complete file has been received correctly or not? In my opinion I would have to make a small modification to the nRF Toolbox app, adding the functionality of storing the data received via UART in a CSV file, and then compare that file to the actual one. Is this an efficient way, or do you have any other suggestions/alternatives? 

    Once Again thank you so much for your valuable time, explanation, and suggestions. I have learned a lot from you, to be honest. 

    Best Regards,

    Sami

  • Hi Sami

    Samiulhaq said:
    In the Wireshark/sniffer trace, the last packet has the same data as the actual file. From this I conclude that all the data has been sent/received correctly and there is no packet loss. Is this the correct way of checking it? :)

    That is correct. The BLE link layer will automatically retransmit any packet that is lost due to interference, and as long as the link doesn't break down completely you are guaranteed that all the packets make it to the other side intact, and in the same order as they were uploaded. So packet loss in this case only means packets lost on air, not actual data loss.
    In a way you can think of a BLE connection as a TCP/IP connection, where data integrity is guaranteed at the cost of data throughput. 

    When this happens you can see from the trace that the next payload will have the same data as the previous, and that the SN and NESN values are the same. This is normally an indication of packet loss over the air leading to a retransmit of the same data. 

    Samiulhaq said:
    This is a very good point, but is it possible to compress the file using the nRF DK? I have to store the data from the different sensors (connected to the nRF52) on the SD card in the form of a CSV file, and from there send that file automatically to the mobile app every 60 hours.

    The nRF52 can certainly do compression, as long as you have some memory to spare for the algorithms. 

    Looking at your data from the trace it appears to be ASCII coded values, 17x 16-bit values plus one timestamp per record, if I am not mistaken?

    If this is correct you should be able to store 6 of these records in a 244 byte payload if you encode it to binary. 34 bytes for the 17 16-bit values plus 4 bytes for the timestamp multiplied by 6 equals 228 bytes. Since the binary values are fixed size you don't need to waste any bytes for delimiters or end of record symbols. 

    I think this change alone should reduce the data to transfer by around 3 times (looking at the traces it seems you get around 2 records through per packet). 
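    A sketch of that ASCII-to-binary packing (the struct layout and little-endian byte order are assumptions; note too that a 4-byte timestamp implies an epoch-style value, since the YYYYMMDDhhmmss number seen in the trace does not fit in 32 bits):

```c
#include <stdint.h>
#include <stddef.h>

#define VALUES_PER_RECORD  17
#define RECORD_SIZE        (VALUES_PER_RECORD * 2 + 4)  /* 34 + 4 = 38 bytes */
#define RECORDS_PER_PACKET 6                            /* 6 * 38 = 228 <= 244 */

/* One sensor record: 17 16-bit readings plus a 32-bit timestamp. */
typedef struct {
    uint16_t values[VALUES_PER_RECORD];
    uint32_t timestamp;
} sensor_record_t;

/* Pack records into a fixed-size binary payload, little-endian, with no
 * delimiters. Returns the number of payload bytes written. */
size_t pack_records(const sensor_record_t *recs, size_t count, uint8_t *out)
{
    size_t pos = 0;
    for (size_t r = 0; r < count; r++)
    {
        for (int i = 0; i < VALUES_PER_RECORD; i++)
        {
            out[pos++] = (uint8_t)(recs[r].values[i] & 0xFF);
            out[pos++] = (uint8_t)(recs[r].values[i] >> 8);
        }
        uint32_t t = recs[r].timestamp;
        out[pos++] = (uint8_t)(t & 0xFF);
        out[pos++] = (uint8_t)((t >> 8) & 0xFF);
        out[pos++] = (uint8_t)((t >> 16) & 0xFF);
        out[pos++] = (uint8_t)(t >> 24);
    }
    return pos;
}
```

    Six records then occupy exactly 228 bytes, which fits in a 244-byte payload.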

    Alternatively you could try to find some compression algorithm that you apply to your file before you send it, such as zip or LZ4. 
    The advantage here is that you don't have to manually parse the strings and convert them to binary, but the drawback is that you won't be able to hold the entire file (or the entire compressed file) in memory, so you probably need to break the compression into smaller steps, which will reduce its efficiency. 

    I would also expect the flash/RAM impact to be higher with such an algorithm, compared to a simple ASCII -> binary conversion. 

    Samiulhaq said:
    I have reduced the conn. interval to 20ms in my project and can now achieve an average throughput of about 1200kbps while sending a 22MB file. The throughput no longer varies too much, and I am OK with that.

    Good to hear that you found a good compromise on the connection interval. 

    Samiulhaq said:
    How can I check whether the complete file has been received correctly or not? In my opinion I would have to make a small modification to the nRF Toolbox app, adding the functionality of storing the data received via UART in a CSV file, and then compare that file to the actual one. Is this an efficient way, or do you have any other suggestions/alternatives? 

    Possibly the best solution is to calculate a checksum over the entire file as it is transmitted, and then transmit the checksum at the end. The app can do the same thing as it is receiving the file, and compare the checksum generated on the receiving end to the one sent by the nRF at the end. If the checksums match you know that the entire transaction went fine. 

    Technically this should not be necessary as the link layer will make sure all the data is received, like I mentioned earlier, but it is still possible for application errors to lead to loss of data if there is some error in the buffer handling etc. 
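    A running checksum along those lines could be kept with a plain bitwise CRC-32 (this is a generic sketch using the common reflected 0xEDB88320 polynomial, not code from the SDK; the idea is that the nRF feeds each chunk through it just before sending, and the app does the same on receive):

```c
#include <stdint.h>
#include <stddef.h>

/* Incremental CRC-32 (reflected polynomial 0xEDB88320, as used by zip and
 * Ethernet). Start with crc = 0xFFFFFFFF, call once per chunk as the file is
 * streamed out, and XOR the result with 0xFFFFFFFF at the end. */
uint32_t crc32_update(uint32_t crc, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)
    {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
        {
            /* Shift right; XOR in the polynomial only when the LSB was set. */
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1)));
        }
    }
    return crc;
}
```

    The nRF would transmit the finalized value (crc ^ 0xFFFFFFFF) as the last packet, and the app would compare it with its own result.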

    Are you only planning on using the nRF Toolbox app, or will you eventually make a custom application?

    Samiulhaq said:
    Once Again thank you so much for your valuable time, explanation, and suggestions. I have learned a lot from you, to be honest. 

    You're welcome, I am glad you found my answers helpful. That's what they pay me for ;)

    Best regards
    Torbjørn
