I have a simple TCP client application running on the nRF9160. After connecting to the server, recv() is called in a loop and the received data is written out over SEGGER RTT channel 1; on the host, JLinkRTTLogger.exe logs the data to a file.
The server is the Unix netcat utility, sending a 256 kB binary file of random data from stdin.
If recv() is called with no added latency between calls, the file is received in its entirety and verifies against the original with the diff utility. However, if a sleep is added in the recv() loop to simulate processing each chunk of data (e.g. erasing and writing flash), the stream drops data to the application in 708-byte chunks and the received file is corrupted. The connection is monitored with Wireshark on the server side, which shows the entire file is sent, as indicated by the acknowledged byte count.
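For reference, the receive loop looks roughly like the sketch below (names, buffer sizes, and the sleep duration are placeholders, not my exact application code):

```c
#include <zephyr.h>
#include <net/socket.h>
#include <SEGGER_RTT.h>

#define RTT_CHANNEL 1
#define CHUNK_SIZE  1024            /* placeholder read size */

static uint8_t rtt_up_buf[4096];
static uint8_t rx_buf[CHUNK_SIZE];

static void rx_loop(int sock)
{
	/* Channel 1 up-buffer, drained on the host by JLinkRTTLogger.exe. */
	SEGGER_RTT_ConfigUpBuffer(RTT_CHANNEL, "data", rtt_up_buf,
				  sizeof(rtt_up_buf),
				  SEGGER_RTT_MODE_BLOCK_IF_FIFO_FULL);

	for (;;) {
		ssize_t len = recv(sock, rx_buf, sizeof(rx_buf), 0);

		if (len <= 0) {
			break;	/* peer closed the connection, or error */
		}

		SEGGER_RTT_Write(RTT_CHANNEL, rx_buf, len);

		/* Simulated per-chunk processing time (flash erase/write).
		 * With this sleep in place, 708-byte chunks go missing. */
		k_sleep(K_MSEC(100));
	}
}
```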
My expectation with TCP was that the flow-control/backpressure mechanism would throttle the connection, with the sender transmitting only when the receiver advertises space in its receive window. The modem firmware does seem to do this: I can see the client's receive window shrink and grow dynamically in Wireshark. But the current behavior through the rest of the stack, up to the application, violates TCP's guarantee of reliable delivery. I can hack around this limitation by adding complexity at the application layer ... but this problem has already been solved by the TCP standard. I would also rather the connection be closed than have data silently dropped from the TCP stream and never delivered to the application.
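The kind of application-layer hack I mean is sketched below (illustrative assumptions only: pipe size, thread wiring, and names are mine): a dedicated thread keeps recv() serviced immediately and pushes the bytes into a Zephyr k_pipe, while the slow processing runs in a second thread.

```c
#include <zephyr.h>
#include <net/socket.h>

K_PIPE_DEFINE(rx_pipe, 8192, 4);    /* 8 kB elastic buffer between threads */

static uint8_t drain_buf[1024];

/* Drain the socket as fast as the modem delivers data. */
static void socket_drain_thread(int sock)
{
	for (;;) {
		ssize_t len = recv(sock, drain_buf, sizeof(drain_buf), 0);
		size_t written;

		if (len <= 0) {
			break;
		}

		/* Block until the whole chunk fits in the pipe; the
		 * backpressure now lives in the application. */
		k_pipe_put(&rx_pipe, drain_buf, len, &written, len,
			   K_FOREVER);
	}
}

/* Consume at whatever rate the slow work allows. */
static void processing_thread(void)
{
	uint8_t chunk[1024];
	size_t len;

	for (;;) {
		/* Wait for at least one byte, take whatever is available. */
		k_pipe_get(&rx_pipe, chunk, sizeof(chunk), &len, 1,
			   K_FOREVER);

		/* Slow work (flash erase/write) goes here. */
		k_sleep(K_MSEC(100));
	}
}
```

This only buys elasticity for bursts: once the pipe fills, the drain thread blocks on k_pipe_put(), recv() is again delayed, and I am back to depending on the modem's flow control. That is why I would rather see this handled correctly at the transport layer.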
Is there an option/configuration I am missing that would fix this issue?
Modem firmware: 0.7.0-2.9alpha
SW version: see west.yml