I'm writing a C program to transfer a file of fixed size, a little over 2 MB, from a server to a client. I'm using TCP sockets on Linux and the code I wrote is the following:
Server (sender)
while (1) {
    int nread = read(file, buffer, bufsize);
    if (nread == 0) // EOF
        break;
    if (nread < 0) {
        // handle errors
    }
    char* partial = buffer;
    while (nread > 0) {
        int nwrite = write(socket, partial, nread);
        if (nwrite <= 0) {
            // handle errors
        }
        nread -= nwrite;
        partial += nwrite;
    }
}
// file sent
shutdown(socket, SHUT_WR);
Client (receiver)
while (filesize > 0) {
    nread = read(socket, buffer, bufsize);
    if (nread == 0) {
        // EOF - if we reach this point filesize is still > 0
        // so the transfer was incomplete
        break;
    }
    if (nread < 0) {
        // handle errors
    }
    char* partial = buffer;
    while (nread > 0) {
        nwrite = write(file, partial, nread);
        if (nwrite <= 0) {
            // handle errors
        }
        nread -= nwrite;
        partial += nwrite;
        filesize -= nwrite;
    }
}
if (filesize > 0) {
    // incomplete transfer
    // handle error
}
close(socket);
When testing the code on my laptop (both client and server are on localhost, so the communication happens over the loopback interface), sometimes the client exits because read received an EOF, and not because it received all filesize bytes. Since I use shutdown on the server, this should mean that there is no more data to read.
(Note that the server sent all the bytes and executed the shutdown correctly)
Can you explain why this is happening?
Where have the missing bytes gone?
-----
EDIT 1 - Clarifications
Some users asked for a couple of clarifications, so I am posting the answers here:
The program is using TCP blocking sockets
The filesize is a fixed value and is hardcoded in both client and server.
No special socket options, such as SO_LINGER, are enabled/used.
When the error occurs, the server (sender) has already sent all the data and executed the shutdown correctly.
The error, as of today, never happened when testing the application with the client and the server on different machines (transfer over a real network interface and not a loopback interface)
EDIT 2
User Cornstalks pointed me to a really interesting article about the not-always-reliable behaviours of TCP.
The article, which is well worth a read, describes a few tricks that are useful when sending an unknown amount of data between TCP sockets. The tricks described are the following:
Take advantage of the SO_LINGER option on the sender. This will help to keep the socket open, upon a call to close(2) or shutdown(2), until all the data has successfully been sent.
On the receiver, beware of pending readable data before the actual receiving loop. It could lead to an immediate reset being sent.
Take advantage of shutdown(2) to signal to the receiver that the sender is done sending data.
Let the receiver know the size of the file that will be sent before actually sending the file.
Have the receiver acknowledge to the sender that the receiving loop is over. This helps prevent the sender from closing the socket too soon.
After reading the article, I upgraded my code to implement tricks number 1 and 5.
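For completeness, trick number 1 is just a setsockopt() call on the sending socket before the transfer, roughly like this sketch (struct linger comes from <sys/socket.h>; the 10-second linger value is an arbitrary choice of mine, not something prescribed by the article):

struct linger lin;
lin.l_onoff = 1;    // enable lingering on close
lin.l_linger = 10;  // let close() block up to 10 seconds while unsent data drains
if (setsockopt(socket, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) < 0) {
    // handle errors
}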
This is how I implemented trick number 5:
Server (sender)
// sending loop ...
// file sent
shutdown(socket, SHUT_WR);
// wait acknowledgement from the client
ack = read(socket, buffer, bufsize);
if (ack < 0) {
    // handle errors
}
Client (receiver)
// receiving loop..
if (filesize > 0) {
    // incomplete transfer
    // handle error
}
// send acknowledgement to the server
// this will send a FIN and trigger a read = 0 on the server
shutdown(socket, SHUT_WR);
close(socket);
What about tricks number 2, 3 and 4?
Trick number 2 is not needed because, as soon as the server accepts the connection, the application proceeds to the file transfer. No extra messages are exchanged.
Trick number 3 is already implemented.
Trick number 4 is also already implemented. As mentioned earlier, the file size is hardcoded, so there is no need to exchange it.
Did this solve my original problem?
No, my problem was not solved. The error is still happening and, as of today, it has only happened when testing the application with both client and server on localhost.
What do you think?
Is there a way to prevent this?
You're:
1. assuming that read() fills the buffer, even though
2. you're defending magnificently against write() not writing the entire buffer.
You need to fix (1), and you don't need to do (2), because you're in blocking mode and POSIX assures that write() doesn't return until all the data is written.
A simple version of both loops:
while ((nread = read(inFD, buffer, sizeof buffer)) > 0)
{
    write(outFD, buffer, nread);
}
if (nread == -1)
    ; // error
A more correct version would check the result of write() for errors of course.
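For example, a sketch of such a version, keeping the same loop shape and still assuming blocking descriptors, might be:

while ((nread = read(inFD, buffer, sizeof buffer)) > 0)
{
    if (write(outFD, buffer, nread) == -1)
    {
        perror("write");
        break;
    }
}
if (nread == -1)
    perror("read");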
Related
So I need to recv an HTML file from the server on the client; the file is bigger than the buffer, so I make several sends. That's why I have this loop when I recv:
while (i = recv(s, buf, TAM_BUFFER, 0)) {
    if (i == -1) {
        perror(argv[0]);
        fprintf(stderr, "%s: error reading result\n", argv[0]);
        exit(1);
    }
    while (i < TAM_BUFFER) {
        j = recv(s, &buf[i], TAM_BUFFER - i, 0);
        if (j == -1) {
            perror(argv[0]);
            fprintf(stderr, "%s: error reading result\n", argv[0]);
            exit(1);
        }
        i += j;
    }
    /* Print out the file line by line. */
    printf("%s", buf);
}
The send looks something like this:
while (fgets(buf, sizeof(buf), fp)) {
    if (send(s, buf, TAM_BUFFER, 0) != TAM_BUFFER) errout(hostname);
}
The problem is the loop never ends, because it never recvs the EOF and i is never 0; it just stays blocked there.
I can't do a close to send the EOF because, after the client recvs the whole file, it will ask for another file.
I tried to send a SIGALRM if the loop stays blocked for longer than 5 seconds, but it doesn't work as expected, because the loop won't stop and it will throw an error.
Also, how can I recv less than TAM_BUFFER? (In the send, change TAM_BUFFER -> strlen(buf).) I know I need to change the inner loop, but then I'll have the same problem: j will never be 0, so I don't know how I could end it. (Or maybe I don't need the second loop in this case.)
EDIT: I can't send the length of the file because of the protocol I'm following.
TCP is a protocol used to transport a single unstructured octet stream in each direction. Shutdown of the connection (i.e. EOF) is the only way in TCP to signal to the peer that no more data will be sent in this connection. If you need a different way because you need to distinguish between multiple messages inside the same TCP connection then you need to use an application level protocol which can specify such message boundaries. This is usually done by fixed message size, prefixing the message with a length or by special boundary markers.
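As an illustration, a minimal sketch of the length-prefix approach on the sending side might look like this (send_message and write_full are hypothetical helper names, not part of any standard API):

#include <arpa/inet.h>   // htonl
#include <stdint.h>
#include <unistd.h>

// Loop until exactly len bytes have been written (handles partial writes).
static int write_full(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= n;
    }
    return 0;
}

// Prefix each message with its length so the receiver knows where it ends.
static int send_message(int fd, const void *data, uint32_t len)
{
    uint32_t hdr = htonl(len);   // length header in network byte order
    if (write_full(fd, &hdr, sizeof hdr) < 0)
        return -1;
    return write_full(fd, data, len);
}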
If you can't embed the payload size in your protocol, you have to identify EOF by closing the socket or by checking for a timeout. You can use the select function and set a timeout for it; see Using select and recv to obtain a file from a web server through a socket and https://stackoverflow.com/a/30395738/4490542
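A rough sketch of the select-with-timeout idea applied to the receiving loop above (the 5-second value is an arbitrary choice):

fd_set rfds;
struct timeval tv;
FD_ZERO(&rfds);
FD_SET(s, &rfds);
tv.tv_sec = 5;   // give the sender at most 5 seconds to produce more data
tv.tv_usec = 0;

int ready = select(s + 1, &rfds, NULL, NULL, &tv);
if (ready == -1) {
    perror("select");
} else if (ready == 0) {
    // timeout: treat it as the end of this file/message
} else {
    i = recv(s, buf, TAM_BUFFER, 0);   // will not block now
    // i == 0 would still mean the peer closed the connection
}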
Not sure how well I worded the title. I've written a Linux domain socket server and client. The client sets a timeout value on the write. If the client can't send all of its data, I don't want the server to accept the data that it has sent. Is there a way the client can indicate that it didn't send all of the data? Maybe somehow cause the server's read() to fail? The sockets are set up as stream sockets.
So basically I want to know what to do in this case:
ssize_t bytes_written = write(fd, buffer, length);
if (bytes_written == -1)
{
    result = -1;
    goto done;
}
// I think the only case where we can have write return
// a successful code but not all bytes written is when the
// timeout value has elapsed and some number of bytes have
// been written.
if (bytes_written != length)
{
    result = -1;
    errno = ETIMEDOUT;
}
.
.
.
done:
if (result == -1)
    result = errno;
if (fd != -1)
{
    shutdown(fd, SHUT_RDWR);
    close(fd);
}
return result;
}
I realize an obvious solution is for the client to send a byte count first and then send the bytes. I was wondering whether there was another way. Also, each message could be a different size.
You can prefix your data with a length header. If the received data doesn't match the length, the server can drop it.
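A sketch of what the receiving side could look like with a 4-byte length header in network byte order (read_exact and recv_message_or_drop are illustrative names, not an existing API):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

// Read exactly len bytes; return -1 if the peer closes or errors first.
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)              // 0 = peer closed early, <0 = error
            return -1;
        p += n;
        len -= n;
    }
    return 0;
}

// Receive one length-prefixed message; drop it unless it arrived in full.
static char *recv_message_or_drop(int fd, uint32_t *out_len)
{
    uint32_t hdr;
    if (read_exact(fd, &hdr, sizeof hdr) != 0)
        return NULL;
    *out_len = ntohl(hdr);
    char *msg = malloc(*out_len);
    if (msg == NULL || read_exact(fd, msg, *out_len) != 0) {
        free(msg);               // incomplete transfer: discard the partial data
        return NULL;
    }
    return msg;                  // caller processes and frees the complete message
}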
Hello, I have a server program and a client program. The server program is working fine, as in I can telnet to the server and read and write in any order (like a chat room) without any issue. However, I am now working on my client program, and when I use select and check whether the socket descriptor is set for read or write, it always goes to write and then blocks, i.e. messages do not get through until the client sends some data.
How can I fix this on my client end so I can read and write in any order?
while (quit != 1)
{
    FD_ZERO(&read_fds);
    FD_ZERO(&write_fds);
    FD_SET(client_fd, &read_fds);
    FD_SET(client_fd, &write_fds);
    if (select(client_fd+1, &read_fds, &write_fds, NULL, NULL) == -1)
    {
        perror("Error on Select");
        exit(2);
    }
    if (FD_ISSET(client_fd, &read_fds))
    {
        char newBuffer[100] = {'\0'};
        int bytesRead = read(client_fd, &newBuffer, sizeof(newBuffer));
        printf("%s",newBuffer);
    }
    if(FD_ISSET(client_fd, &write_fds))
    {
        quit = transmit(handle, buffer, client_fd);
    }
}
Here is the code for the transmit function:
int transmit(char* handle, char* buffer, int client_fd)
{
    int n;
    printf("%s", handle);
    fgets(buffer, 500, stdin);
    if (!strchr(buffer, '\n'))
    {
        while (fgetc(stdin) != '\n');
    }
    if (strcmp (buffer, "\\quit\n") == 0)
    {
        close(client_fd);
        return 1;
    }
    n = write(client_fd, buffer, strlen(buffer));
    if (n < 0)
    {
        error("ERROR writing to socket");
    }
    memset(buffer, 0, 501);
}
I think you are misinterpreting the use of the writefds parameter of select(): only set the bit when you want to write data to the socket. In other words, if there is no data, do not set the bit.
Setting the bit will check if there is room for writing, and if yes, the bit will remain on. Assuming you are not pumping megabytes of data, there will always be room, so right now you will always call transmit() which waits for input from the command line with fgets(), thus blocking the rest of the program. You have to monitor both the client socket and stdin to keep the program running.
So, check for READ action on stdin (use STDIN_FILENO to get the file descriptor for that), READ on client_fd always and just write() your data to the client_fd if the amount of data is small (if you need to write larger data chunks consider non-blocking sockets).
BTW, you forgot to return a proper value at the end of transmit().
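A sketch of what the body of the loop could look like with stdin monitored as well (only the select-related part is shown; reading and printing stay as they are):

FD_ZERO(&read_fds);
FD_SET(STDIN_FILENO, &read_fds);   // watch the keyboard
FD_SET(client_fd, &read_fds);      // watch the socket
int maxfd = client_fd > STDIN_FILENO ? client_fd : STDIN_FILENO;

if (select(maxfd + 1, &read_fds, NULL, NULL, NULL) == -1)
{
    perror("Error on Select");
    exit(2);
}
if (FD_ISSET(client_fd, &read_fds))
{
    // incoming chat data: read() and print it as before
}
if (FD_ISSET(STDIN_FILENO, &read_fds))
{
    // the user has typed a line, so fgets() inside transmit() will not block
    quit = transmit(handle, buffer, client_fd);
}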
Sockets are almost always writable, except when the socket send buffer is full, which indicates that you are sending faster than the receiver is receiving.
So your transmit() function will be entered every time around the loop, so it will read some data from stdin, which blocks until you type something, so nothing happens.
You should only select on writability when a prior send() has returned EWOULDBLOCK/EAGAIN. Otherwise you should just send, when you have something to send.
I would throw this code away and use two or three threads in blocking mode.
select is used to check whether a socket has become ready to read or write. If it is blocking for read, that indicates there is no data to read. If it is blocking for write, that indicates the TCP buffer is likely full and the remote end has to read some data so that the socket will allow more data to be written. Since select blocks until one of the socket descriptors is ready, you also need to use a timeout in select to avoid waiting for a long time.
In your specific case, if your remote/receiving end keeps reading data from the socket, then select will not block for the write on the other end. Otherwise the TCP buffer will become full on the sender side and select will block. Answers posted also indicate the importance of handling EAGAIN or EWOULDBLOCK.
Sample flow:
while (bytesleft > 0)
    nbytes = write data
    if (nbytes > 0)
        bytesleft -= nbytes;
    else
        if write returned EAGAIN or EWOULDBLOCK
            call poll or select to wait for the socket to become ready
        endif
    endif
    if poll or select times out
        handle the timeout error (e.g. the remote end did not read the
        data within the expected time interval)
    endif
end while
The code should also handle error conditions and the cases where read/write return 0. Also note that read/recv returning 0 indicates the remote end closed the socket.
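A rough C version of that flow, assuming the socket has already been put into non-blocking mode and using a 5-second poll timeout as an example:

#include <errno.h>
#include <poll.h>
#include <unistd.h>

// Write len bytes to a non-blocking socket, waiting whenever the send
// buffer is full. Returns 0 on success, -1 on error or timeout.
static int write_with_poll(int fd, const char *buf, size_t len)
{
    size_t bytesleft = len;
    while (bytesleft > 0) {
        ssize_t nbytes = write(fd, buf + (len - bytesleft), bytesleft);
        if (nbytes > 0) {
            bytesleft -= nbytes;
        } else if (nbytes == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            int ready = poll(&pfd, 1, 5000);   // wait up to 5 seconds
            if (ready <= 0)
                return -1;                     // timeout (remote end not reading) or poll error
            // socket is writable again: retry the write
        } else {
            return -1;                         // real write error
        }
    }
    return 0;
}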
To receive data from the requested web server and transmit it to the client, I am doing the following:
while(1) {
    bzero(buffer,65536); //Character buffer of 64KB
    ret_val = recv(sockfd, buffer, 65535,0); //sockfd is the socket between web server and proxy server
    if(ret_val < 0)
        error("Error Reading data from requested server");
    send_ret_val = send_all(sock, buffer, strlen(buffer), 0);//sockfd is socket between proxy server and client
    if(send_ret_val < 0)
        error("Error returning data to client");
    if(ret_val == 0)
        break;
}
The function send_all() transmits all the data in the buffer and returns 0, or else returns a negative value for an error.
The problem is that the server seems to be working fine for text data but cannot handle images and other binary data. When using Firefox, I get the error "incompatible compression technique".
Is there a problem in this code or is there a problem somewhere else?
strlen(buffer) truncates when it finds a null character in the buffer.
Image data is binary data. Binary data may contain null characters in the middle of the image.
You must use the number of bytes returned by the recv call to send bytes to the client.
Modify the following statement
send_ret_val = send_all(sock, buffer, strlen(buffer), 0);
to
send_ret_val = send_all(sock, buffer, ret_val, 0);
Handling images in a proxy server written in C
... is no different from handling any other type of data. If it doesn't work for images, it will break for other kinds of data as well.
bzero(buffer,65536); //Character buffer of 64KB
Cargo-cult programming. Remove it.
ret_val = recv(sockfd, buffer, 65535,0); //sockfd is the socket between web server and proxy server
There's no reason for the length supplied to be different from sizeof buffer here.
if(ret_val < 0)
error("Error Reading data from requested server");
This is only OK if error() prints or accesses errno prior to executing any other system calls, and if it magically causes this loop to exit. After this you need to add:
else if (ret_val == 0)
    break; // end of stream
Then:
send_ret_val = send_all(sock, buffer, strlen(buffer), 0);//sockfd is socket between proxy server and client
This assumes that the data received is null-terminated, which isn't even valid in the case of a text message. It is completely wrong in the case of an image. Change to:
send_ret_val = send_all(sock, buffer, ret_val, 0);//sockfd is socket between proxy server and client
if(send_ret_val < 0)
error("Error returning data to client");
Again this is only OK if error() prints or accesses errno prior to executing any other system calls, and if it magically causes this loop to exit.
if(ret_val == 0)
break;
You have this in the wrong place.
The function send_all() transmits all the data in the buffer and returns 0, or else returns a negative value for an error.
No. It returns -1 or the number of bytes transferred. The only way it can return zero is if you supplied a zero length, which would be completely pointless.
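Putting those corrections together, the relay loop could look something like this sketch (using perror() and break for brevity in place of the error() helper, and assuming buffer is declared as an array so sizeof gives its size):

for (;;) {
    ret_val = recv(sockfd, buffer, sizeof buffer, 0);   // from the web server
    if (ret_val < 0) {
        perror("recv from requested server");
        break;
    }
    if (ret_val == 0)
        break;                                          // end of stream
    if (send_all(sock, buffer, ret_val, 0) < 0) {       // to the client
        perror("send to client");
        break;
    }
}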
I have a server that sends data to a client every 5 seconds. I want the client to block on read() until the server sends some data and then print it. I know read() is blocking by default. My problem is that my client is not blocking on read(). This is very odd and does not seem to be a normal issue.
My code prints "Nothing came back" in an infinite loop. I am on a Linux machine, programming in C. My code snippet is below. Please advise.
while(1)
{
    n = read(sockfd, recvline, MAXLINE);
    if ( n > 0)
    {
        recvline[n] = 0;
        if (fputs(recvline, stdout) == EOF)
            printf("fputs error");
    }
    else if(n == 0)
        printf("Nothing came back");
    else if (n < 0)
        printf("read error");
}
return;
There may be several causes, and problems are possible in different places:
Check the socket where you create it:
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd == -1) {
    perror("Create socket");
}
You can also enable blocking mode explicitly before using it:
// Set the socket I/O mode: In this case FIONBIO
// enables or disables the blocking mode for the
// socket based on the numerical value of iMode.
// If iMode = 0, blocking is enabled;
// If iMode != 0, non-blocking mode is enabled.
int iMode = 0; // request blocking mode
ioctl(sockfd, FIONBIO, &iMode);
or you can use setsockopt as below:
struct timeval t;
t.tv_sec = 0;
t.tv_usec = 0; // a zero timeout means the receive never times out (stays blocking)
setsockopt(
    sockfd,      // Socket descriptor
    SOL_SOCKET,  // To manipulate options at the sockets API level
    SO_RCVTIMEO, // Specify the receiving or sending timeouts
    &t,          // option value
    sizeof(t)
);
Check the read function call (the likely reason for the bug):
n = read(sockfd, recvline, MAXLINE);
if (n < 0) {
    perror("Read Error:");
}
Also check the server code:
Your server may be sending some blank (non-printable, null, newline) character(s) without you being aware of it. Debug your server code too.
Or your server may have terminated before your client could read.
One more interesting thing to understand:
When you make N write() calls at the server, there will not necessarily be N read() calls on the other side.
What Greg Hewgill already wrote as a comment: An EOF (that is, an explicit stop of writing, be it via close() or via shutdown()) will be communicated to the receiving side by having recv() return 0. So if you get 0, you know that there won't be any data and you can terminate the reading loop.
If you have non-blocking mode enabled and there is no data, you will get -1 and errno will be set to EAGAIN or EWOULDBLOCK.
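In code, the distinction between those return values looks roughly like this:

#include <errno.h>

n = recv(sockfd, recvline, MAXLINE, 0);
if (n > 0) {
    // got n bytes of data
} else if (n == 0) {
    // the peer performed an orderly shutdown (close()/shutdown()): stop reading
} else if (errno == EAGAIN || errno == EWOULDBLOCK) {
    // only possible on a non-blocking socket: no data available right now
} else {
    perror("recv");   // a real error
}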
What is the value of MAXLINE?
If the value is 0, then it will return 0 as well.
Otherwise, as Grijesh Chauhan mentions, set it explicitly to blocking.
Or, you may also consider using recv() where blocking and non-blocking can be specified.
It has the MSG_WAITALL option, which makes it block until all requested bytes have arrived.
n = recv(sockfd, recvline, MAXLINE, MSG_WAITALL);