In our code we use NV storage for system settings and calibration data.
I'm trying to understand when to open/close files and which structs to keep between reads/writes. I've looked at the example code in the Git repository, but it only briefly reads or writes files.
For our system settings, which change regularly, should we open the file on start-up and keep it open? Or just open, read/write, and then immediately close the file? If we should keep it open, which structs (fds_record_desc_t, p_flash_record, etc.) need to be kept between accesses?
Second, the calibration data file is rarely written but read often. The calibration data has a known total size and will be kept in multiple records. Why would we want to use fds_reserve()? Again, should the file be kept open, or opened and closed only as required?
In FDS, data is never copied out of flash when reading; it is read directly in place. The purpose of fds_record_open() is to ensure that the read is not disturbed by garbage collection, which erases pages and moves records around. A read and garbage collection therefore cannot happen at the same time, and fds_record_open() and fds_record_close() are how FDS knows whether garbage collection is currently allowed.
I would therefore recommend only keeping a record "open" while you actually need it.
I cannot see any specific reason to use fds_reserve().
As a suggestion, it would be useful to be able to query the number of writes in the queue. Something like uint8_t fds_get_queue_entries( void ). The only way to know if the queue is full is to try to write and see if it fails. If a function has queued a lot of writes, it would be useful to know when the queue is full (and it should wait) or is empty (and the function can exit). This can be sort-of achieved with callbacks, but it's cumbersome.
The function fds_stat() can give you some information, but not about when the queue is full. I will report this feature request internally.