I'm attempting to write a rudimentary file server that takes a filename from a client and responds by sending that file's data back over TCP. I have a mostly working client and server, but I'm observing some odd behavior. Consider the following:
while ((num_read = read (file_fd, file_buffer, sizeof (file_buffer))) > 0)
{
    if (num_read != write (conn_fd, file_buffer, num_read))
    {
        perror ("write");
        goto out;
    }
}

out:
close (file_fd); close (conn_fd);
file_fd is a file descriptor for the file being sent over the network, and conn_fd is a file descriptor for a connect()ed TCP socket.
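For context, the two descriptors come from something like the following (a simplified sketch, not my exact code: listen_fd, the fixed-size name buffer, and reading the whole filename in a single read() are all placeholders/simplifications; on the server side conn_fd comes from accept()):

#include <sys/socket.h>
#include <fcntl.h>
#include <unistd.h>

/* conn_fd: one accepted client connection (listen_fd is assumed to be
   already socket()ed, bind()ed, and listen()ing) */
int conn_fd = accept (listen_fd, NULL, NULL);

/* the client sends the name of the file it wants */
char filename[256];
ssize_t n = read (conn_fd, filename, sizeof (filename) - 1);
filename[n > 0 ? n : 0] = '\0';

/* file_fd: the requested file, opened read-only */
int file_fd = open (filename, O_RDONLY);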
This seems to work for small files, but when my files get larger (a megabyte or more), some inconsistent amount of data at the end of the file fails to transfer.
I suspected the immediate close() calls after the write loop might have something to do with it, so I tried a 1-second sleep() before both close() calls, and with that my client successfully received all of the data.
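Concretely, the end of the loop with the workaround looks like this (sleep() is from <unistd.h>; a single sleep() before the two closes, as described above):

out:
sleep (1);  /* crude workaround: pause before closing */
close (file_fd); close (conn_fd);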
Is there any better way to handle this than doing a sleep() on the server side?