Note: this post relates to an SDK in maintenance mode; consider the nRF Connect SDK for new designs.
Note: this post is more than two years old and might no longer be relevant.

FDS Remaining Size

Hi guys,

I've written a wrapper around FDS (SD) on SDK 15.0.0 and am trying to calculate the remaining space as a combination of my reserved records and written records (in word sizes).

My understanding is this:

  • Page size = 1024 words
  • Page tag size = 2 words (resulting in an effective page size of 1022 words)
  • Per-record header = 3 words

From this I understand the largest record data size should be 1019 words, with each full record consuming 1022 words (header + data). However, in reality I can only store up to 1018 words of data, and fds_stat() reports a words_used change of 1019, where I would have expected a change of either 1022 (3 header + 1019 data) or, in the event I'm missing something about the overheads, 1021 (3 header + 1018 data).

I'm quite confused about how the filesystem actually works and how space is consumed; would it be possible to provide some clarity?

On a side note, when GC is run on a corrupted filesystem, should I expect the corruption flag set by fds_stat() to clear the next time it's called? I've also observed from time to time that this flag doesn't clear after a successful GC.

Many thanks

  • Hi,

    Your reasoning seems correct, and I believe the maximum record size should be 1019 words. The documentation indicates this as well, but I see the same as you: the actual limit is 1018. You can get a record size of 1019 by modifying fds.c (from SDK 15.0.0) as indicated in the diff below.

    diff --git a/components/libraries/fds/fds.c b/components/libraries/fds/fds.c
    index 99ddc66..269792a 100644
    --- a/components/libraries/fds/fds.c
    +++ b/components/libraries/fds/fds.c
    @@ -246,7 +246,7 @@ static bool page_has_space(uint16_t page, uint16_t length_words)
     {
         length_words += m_pages[page].write_offset;
         length_words += m_pages[page].words_reserved;
    -    return (length_words < FDS_PAGE_SIZE);
    +    return (length_words <= FDS_PAGE_SIZE);
     }
     
     
    @@ -367,7 +367,7 @@ static ret_code_t write_space_reserve(uint16_t length_words, uint16_t * p_page)
         bool           space_reserved  = false;
         uint16_t const total_len_words = length_words + FDS_HEADER_SIZE;
     
    -    if (total_len_words >= FDS_PAGE_SIZE - FDS_PAGE_TAG_SIZE)
    +    if (total_len_words > FDS_PAGE_SIZE - FDS_PAGE_TAG_SIZE)
         {
             return FDS_ERR_RECORD_TOO_LARGE;
         }
    

    Regarding the corruption flag I wonder if perhaps you had some records open while doing garbage collection? If so, the pages with those records would be skipped during GC, so you need to close all records first.
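    To see the off-by-one in isolation, here is a minimal stand-alone mock of page_has_space() (the real function in fds.c reads from the m_pages[] array; the parameters here are my simplification). With the original `<` comparison, a record that exactly fills the page is rejected:

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define FDS_PAGE_SIZE     1024
    #define FDS_PAGE_TAG_SIZE 2

    /* Simplified mock of page_has_space(); write_offset includes the page tag. */
    static bool page_has_space(uint16_t write_offset, uint16_t words_reserved,
                               uint16_t length_words, bool patched)
    {
        length_words += write_offset;
        length_words += words_reserved;
        return patched ? (length_words <= FDS_PAGE_SIZE)
                       : (length_words <  FDS_PAGE_SIZE);
    }

    int main(void)
    {
        /* A 1022-word record (3-word header + 1019 words of data) on an
         * empty page, where write_offset starts at the page tag (2 words). */
        assert(!page_has_space(FDS_PAGE_TAG_SIZE, 0, 1022, false)); /* original: rejected */
        assert( page_has_space(FDS_PAGE_TAG_SIZE, 0, 1022, true));  /* patched: accepted  */
        return 0;
    }
    ```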

  • Thanks Einar,

    I'll make the change. *EDIT*: This change appears to have fixed the space usage issue as well.

    I can confirm I had no open records when I had the GC issue, and I did verify this at the time by probing with fds_stat().

  • I'm curious why this change is not included in the latest SDKs.

    Does it cause other problems?
