Not sure how well I worded the title. I've written a Linux domain socket server and client. The client sets a timeout value on the write. If the client can't send all of its data, I don't want the server to accept the data that it has sent. Is there a way the client can indicate that it didn't send all of the data? Maybe somehow cause the server's read() to fail? The sockets are set up as stream sockets.
So basically I want to know what to do in this case:
ssize_t bytes_written = write(fd, buffer, length);
if (bytes_written == -1)
{
result = -1;
goto done;
}
// I think the only case where we can have write return
// a successful code but not all bytes written is when the
// timeout value has elapsed and some number of bytes have
// been written.
if (bytes_written != length)
{
result = -1;
errno = ETIMEDOUT;
}
.
.
.
done:
if (result == -1)
result = errno;
if (fd != -1)
{
shutdown(fd, SHUT_RDWR);
close(fd);
}
return result;
}
I realize an obvious solution is for the client to send a byte count first and then send the bytes. I was wondering whether there was another way. Also, each message could be a different size.
You can frame your data with a length in a header. If the data received doesn't match the length, the server can drop it.
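For example, a minimal sketch of that kind of framing (using a hypothetical 4-byte, network-byte-order length prefix; the helper names here are made up for illustration):

#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <unistd.h>

/* Read exactly len bytes or fail. */
static int read_full(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, (char *)buf + got, len - got);
        if (n <= 0)
            return -1;          /* error or EOF before the full chunk arrived */
        got += (size_t)n;
    }
    return 0;
}

/* Sender side: length prefix first, then the body. */
static int send_framed(int fd, const void *buf, uint32_t len)
{
    uint32_t netlen = htonl(len);
    if (write(fd, &netlen, sizeof(netlen)) != (ssize_t)sizeof(netlen))
        return -1;
    if (write(fd, buf, len) != (ssize_t)len)
        return -1;              /* timeout or error: the receiver sees a short body */
    return 0;
}

/* Receiver side: read the announced length, then insist on exactly that many
 * bytes. Anything short of the announced length is treated as a failed message. */
static ssize_t recv_framed(int fd, void *buf, size_t bufsize)
{
    uint32_t netlen;
    if (read_full(fd, &netlen, sizeof(netlen)) == -1)
        return -1;
    uint32_t len = ntohl(netlen);
    if (len > bufsize)
        return -1;              /* message larger than the caller's buffer */
    if (read_full(fd, buf, len) == -1)
        return -1;              /* incomplete message: drop it */
    return (ssize_t)len;
}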
I'm testing my socket code, which is used to transfer a text-based file, and I'm writing this code by referring to the book Unix Network Programming (Chinese version). Briefly, I will paste some code below:
My serve_client function:
void serve_client(int connfd, const char *filename, size_t filesize)
{
    char header[1024];
    int fd = open(filename, O_RDONLY, 0);
    char *file_mapped;
    if (fd == -1)
    {
        char *not_found = "HTTP/1.1 404 NOT FOUND\r\n";
        send(connfd, not_found, strlen(not_found), 0);
    }
    else
    {
        // Build the whole header with one snprintf: passing header as both the
        // destination and a %s argument of sprintf is undefined behaviour.
        snprintf(header, sizeof(header),
                 "HTTP/1.1 200 OK\r\n"
                 "Content-Length: %zu\r\n"
                 "Content-Type: text/plain; charset=utf-8\r\n\r\n",
                 filesize);
        // send http response header
        send(connfd, header, strlen(header), 0);
        printf("Response headers:\n");
        printf("%s", header);
        file_mapped = (char *)mmap(0, filesize, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (file_mapped == MAP_FAILED)
        {
            perror("mmap failed!");
            _exit(1);
        }
        // send http response body
        send(connfd, file_mapped, filesize, 0);
        int unmapped = munmap(file_mapped, filesize);
        if (unmapped == -1)
        {
            perror("memory unmapped failed!");
            _exit(1);
        }
    }
}
There are several questions I would like to ask you guys:
After this serve_client() function successfully returns, the data I need should at least have been completely copied into the kernel socket buffer, to be sent in the near future. Am I right about this?
The shutdown() function is called as below:
serve_client(connfd, path, st.st_size);
shutdown(connfd, SHUT_WR);
// thread or process ends
I checked the tips mentioned in this book; it says that calling this function with the SHUT_WR option causes the data remaining in the kernel buffer to be sent first, followed by the final FIN. Is that right?
I captured the data sent and received with Wireshark, as the screenshot linked below illustrates:
https://i.imgur.com/Xu8gAgh.jpg
I saw that the RST arrived before all the data showed up, which breaks the client, e.g. wget or plain web access. Any advice would be great.
For now I have worked around this issue by letting the client close the connection while the server waits for the FIN to arrive. It works, but it is still not what I want. :(
while (1)
{
ssize_t bytes_read = recv(connfd, buf, 1024, 0);
if (bytes_read > 0)
{
continue;
}
else if (bytes_read == 0)
{
close(connfd);
break;
}
else
{
// < 0
// handle error
close(connfd);
break;
}
}
EDIT: Sorry for the misunderstanding this question caused. The dump shows the RST being sent from the server which, as I've been told, means the process exited prematurely. That's the reason the previous code didn't work. Thank you for all your explanations; they really helped me better understand what happens under the hood.
Ending a process implicitly close()s all file and socket descriptors, and this is the problem: closing right after sending may cause data loss on the receiver side (depending on the TCP stack's implementation).
You need to implement an application-level protocol in which the client acknowledges reception of all the data before the server may close the socket.
To summarise: using closure of a socket as part of the application-level protocol is not reliable. Do not do this.
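A minimal sketch of what that can look like on the server side (assuming a one-byte acknowledgement from the client; the function name is made up for illustration):

#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical server-side shutdown sequence: after everything has been sent,
 * half-close the write side and wait for the client before closing the socket. */
static void finish_connection(int connfd)
{
    char ack;

    shutdown(connfd, SHUT_WR);          /* remaining data is sent, then the FIN */

    /* Block until the client sends an explicit ack byte or closes its side
     * (recv() returns 0). Only then is it safe for the server to close(). */
    if (recv(connfd, &ack, 1, 0) < 0) {
        /* handle/log the error */
    }
    close(connfd);
}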
To receive from the requested web server and transmit the data to the client, I am doing the following:
while(1) {
bzero(buffer,65536); //Character buffer of 64KB
ret_val = recv(sockfd, buffer, 65535,0); //sockfd is the socket between web server and proxy server
if(ret_val < 0)
error("Error Reading data from requested server");
send_ret_val = send_all(sock, buffer, strlen(buffer), 0);//sockfd is socket between proxy server and client
if(send_ret_val < 0)
error("Error returning data to client");
if(ret_val == 0)
break;
}
The function send_all() transmits all the data in the buffer and returns 0, or returns a negative value on error.
The problem is that the proxy seems to work fine for text data but cannot handle images and other binary data. When using Firefox, I get the error "incompatible compression technique".
Is there a problem in this code or is there a problem somewhere else?
strlen(buffer) truncates the data at the first null character it finds in the buffer.
Image data is binary data, and binary data may contain null bytes in the middle of the image.
You must use the number of bytes returned by the recv call when sending bytes to the client.
Modify following statement
send_ret_val = send_all(sock, buffer, strlen(buffer), 0);
to
send_ret_val = send_all(sock, buffer, ret_val, 0);
Handling images in a proxy server written in C
... is no different from handling any other type of data. If it doesn't work for images, it will break for other kinds of data as well.
bzero(buffer,65536); //Character buffer of 64KB
Cargo-cult programming. Remove it.
ret_val = recv(sockfd, buffer, 65535,0); //sockfd is the socket between web server and proxy server
There's no reason for the length supplied to be different from sizeof buffer here.
if(ret_val < 0)
error("Error Reading data from requested server");
This is only OK if error() prints or accesses errno prior to executing any other system calls, and if it magically causes this loop to exit. After this you need to add:
else if (ret_val == 0)
break; // end of stream
Then:
send_ret_val = send_all(sock, buffer, strlen(buffer), 0);//sockfd is socket between proxy server and client
This assumes that the data received is null-terminated, which isn't even valid in the case of a text message. It is completely wrong in the case of an image. Change to:
send_ret_val = send_all(sock, buffer, ret_val, 0);//sock is the socket between proxy server and client
if(send_ret_val < 0)
error("Error returning data to client");
Again this is only OK if error() prints or accesses errno prior to executing any other system calls, and if it magically causes this loop to exit.
if(ret_val == 0)
break;
You have this in the wrong place.
The function send_all() transmits all the data in the buffer and returns 0, or returns a negative value on error.
No. It returns -1 or the number of bytes transferred. The only way it can return zero is if you supplied a zero length, which would be completely pointless.
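For reference, a corrected relay loop might look roughly like this (a sketch that assumes send_all() takes the same arguments as in the question and that error() does not itself terminate the loop):

char buffer[65536];
ssize_t ret_val;

while (1) {
    ret_val = recv(sockfd, buffer, sizeof buffer, 0);   /* from the web server */
    if (ret_val < 0) {
        error("Error reading data from requested server");
        break;
    }
    if (ret_val == 0)
        break;                                          /* end of stream */

    /* Forward exactly the bytes received; binary data is not null-terminated. */
    if (send_all(sock, buffer, ret_val, 0) < 0) {
        error("Error returning data to client");
        break;
    }
}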
I'm writing a C program to transfer a file of fixed size, a little over 2Mb, from a server to a client. I'm using TCP sockets on Linux and the code I wrote is the following:
Server (sender)
while (1) {
int nread = read(file, buffer, bufsize);
if (nread == 0) // EOF
break;
if (nread < 0) {
// handle errors
}
char* partial = buffer;
while (nread > 0) {
int nwrite = write(socket, partial, nread);
if (nwrite <= 0) {
// handle errors
}
nread -= nwrite;
partial += nwrite;
}
}
// file sent
shutdown(socket, SHUT_WR);
Client (receiver)
while (filesize > 0) {
nread = read(socket, buffer, bufsize);
if (nread == 0) {
// EOF - if we reach this point filesize is still > 0
// so the transfer was incomplete
break;
}
if (nread < 0) {
// handle errors
}
char* partial = buffer;
while (nread > 0) {
nwrite = write(file, partial, nread);
if (nwrite <= 0) {
// handle errors
}
nread -= nwrite;
partial += nwrite;
filesize -= nwrite;
}
}
if (filesize > 0) {
// incomplete transfer
// handle error
}
close(socket);
When testing the code on my laptop (both client and server run on localhost and the communication happens over the loopback interface), sometimes the client exits because read() received an EOF, and not because it received all filesize bytes. Since I use shutdown() on the server, this should mean that there is no other data to read.
(Note that the server sent all the bytes and executed the shutdown correctly.)
Can you explain why this is happening?
Where have the missing bytes gone?
-----
EDIT 1 - Clarifications
Some users asked for a couple of clarifications, so I am posting the answers here:
The program uses blocking TCP sockets.
The filesize is a fixed value and is hardcoded in both client and server.
No special socket options, such as SO_LINGER, are enabled/used.
When the error occurs, the server (sender) has already sent all the data and executed the shutdown correctly.
The error, as of today, has never happened when testing the application with the client and the server on different machines (i.e. transferring over a real network interface rather than the loopback interface).
EDIT 2
User Cornstalks pointed me to a really interesting article about the not-always-reliable behaviours of TCP.
The article, which is well worth a read, describes a few tricks that are useful when sending an unknown amount of data between TCP sockets. The tricks described are the following:
Take advantage of the SO_LINGER option on the sender. This helps keep the socket open, upon a call to close(2) or shutdown(2), until all the data has successfully been sent (a sketch of this follows the list).
On the receiver, beware of pending readable data left unread before the actual receiving loop; it could lead to an immediate reset being sent.
Take advantage of shutdown(2) to signal the receiver that the sender is done sending data.
Let the receiver know the size of the file that will be sent before actually sending the file.
Let the receiver acknowledge to the sender that the receiving loop is over. This helps prevent the sender from closing the socket too soon.
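For reference, setting up trick number 1 on the sender can look like this (a minimal sketch; the 10-second linger timeout is an arbitrary choice):

/* Make close() block (for up to l_linger seconds) while unsent data
   is still being transmitted, instead of returning immediately. */
struct linger lng;
lng.l_onoff  = 1;     /* enable lingering                 */
lng.l_linger = 10;    /* wait at most 10 seconds on close */
if (setsockopt(socket, SOL_SOCKET, SO_LINGER, &lng, sizeof(lng)) < 0) {
    // handle errors
}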
After reading the article, I upgraded my code to implement tricks number 1 and 5.
This is how I implemented trick number 5:
Server (sender)
// sending loop ...
// file sent
shutdown(socket, SHUT_WR);
// wait acknowledgement from the client
ack = read(socket, buffer, bufsize);
if (ack < 0) {
// handle errors
}
Client (receiver)
// receiving loop..
if (filesize > 0) {
// incomplete transfer
// handle error
}
// send acknowledgement to the server
// this will send a FIN and trigger a read = 0 on the server
shutdown(socket, SHUT_WR);
close(socket);
What about tricks number 2, 3 and 4?
Trick number 2 is not needed, because as soon as the server accepts the connection the application proceeds to the file transfer; no extra messages are exchanged.
Trick number 3 is already implemented.
Trick number 4 is also already implemented. As mentioned earlier, the file size is hardcoded, so there is no need to exchange it.
Did this solve my original problem?
No, my problem was not solved. The error still happens and, as of today, it has only happened when testing the application with both client and server on localhost.
What do you think?
Is there a way to prevent this?
You're:
(1) assuming that read() fills the buffer, even though
(2) you're defending magnificently against write() not writing the entire buffer.
You need to handle (1), and you don't need to do (2), because you're in blocking mode and POSIX assures that write() doesn't return until all the data is written.
A simple version of both loops:
while ((nread = read(inFD, buffer, sizeof buffer)) > 0)
{
    write(outFD, buffer, nread);
}
if (nread == -1)
    ; // error
A more correct version would check the result of write() for errors of course.
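For instance, a sketch that only adds that check:

ssize_t nread;

while ((nread = read(inFD, buffer, sizeof buffer)) > 0)
{
    if (write(outFD, buffer, nread) == -1)
    {
        // handle write error (e.g. connection reset by peer)
        break;
    }
}
if (nread == -1)
    ; // handle read error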
I have a server that sends data to a client every 5 seconds. I want the client to block on read() until the server sends some data and then print it. I know read() is blocking by default. My problem is that my client is not blocking on read(). This is very odd and does not seem to be a normal issue.
My code prints "Nothing came back" in an infinite loop. I am on a Linux machine, programming in C. My code snippet is below. Please advise.
while(1)
{
n = read(sockfd, recvline, MAXLINE);
if ( n > 0)
{
recvline[n] = 0;
if (fputs(recvline, stdout) == EOF)
printf("fputs error");
}
else if(n == 0)
printf("Nothing came back");
else if (n < 0)
printf("read error");
}
return;
There may be several causes, and problems are possible at several different places:
Check the socket where you create it:
sockfd=socket(AF_INET,SOCK_STREAM,0);
if (sockfd==-1) {
perror("Create socket");
}
You can also enable blocking mode explicitly before using it:
// Set the socket I/O mode: in this case FIONBIO
// enables or disables the blocking mode for the
// socket based on the numerical value of iMode.
// If iMode = 0, blocking is enabled;
// If iMode != 0, non-blocking mode is enabled.
int iMode = 0;               // 0 = blocking
ioctl(sockfd, FIONBIO, &iMode);
Or you can use setsockopt() as below:
struct timeval t;
t.tv_sec = 0;
t.tv_usec = 0;
setsockopt(
    sockfd,            // Socket descriptor
    SOL_SOCKET,        // To manipulate options at the sockets API level
    SO_RCVTIMEO,       // Specify the receiving or sending timeouts
    (const void *)&t,  // Option value
    sizeof(t)
);
Check the read function call (the likely cause of the bug):
n = read(sockfd, recvline, MAXLINE);
if(n < 0){
perror("Read Error:");
}
Also check the server code!
Maybe your server sends some blank (non-printable, null, newline) character(s) and you are unaware of this. Debug your server code too.
Or your server terminated before your client could read.
One more interesting thing to understand:
when you call write() N times on the server, there will not necessarily be N matching read() calls on the other side.
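For example, if the receiver needs exactly N bytes (where N is whatever the protocol defines, and recvline is assumed to be at least that large), it has to loop:

size_t total = 0;
while (total < N) {
    ssize_t n = read(sockfd, recvline + total, N - total);
    if (n == 0)
        break;                 /* peer closed before sending all N bytes */
    if (n < 0) {
        perror("read error");
        break;
    }
    total += (size_t)n;        /* one write() on the server may arrive split
                                  across several read() calls here */
}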
What Greg Hewgill already wrote as a comment: an EOF (that is, an explicit stop of writing, be it via close() or via shutdown()) is communicated to the receiving side by having recv() return 0. So if you get 0, you know that there won't be any more data and you can terminate the reading loop.
If you had non-blocking mode enabled and there were no data, you would get -1 and errno would be set to EAGAIN or EWOULDBLOCK.
What is the value of MAXLINE?
If the value is 0, then it will return 0 as well.
Otherwise, as Grijesh Chauhan mentioned, set it explicitly to blocking.
Or you may also consider using recv(), where blocking and non-blocking behaviour can be specified.
It has the option MSG_WAITALL, which makes it block until all the requested bytes have arrived.
n = recv(sockfd, recvline, MAXLINE, MSG_WAITALL);
I'm trying to make a process that handles a number of requests each second; for each request a new thread is created. Each thread then opens a socket connection to an address (HTTP port), sends a HEAD request, gets the response and closes the socket.
The problem I'm having comes when I issue more than 3 requests per second: after some time I get an error in the send() part of the function, and I keep getting Connection Refused. If I issue more requests per second I get the errors earlier. If I issue only 2 requests per second I don't get errors at all. I suspect that I'm running out of some resource, but I can't find which.
Here is the basic structure of the code:
//declarations
socketfd = socket(servinfo->ai_family,servinfo->ai_socktype,servinfo->ai_protocol);
arg = fcntl(socketfd, F_GETFL, NULL);
arg |= O_NONBLOCK;
fcntl(socketfd, F_SETFL, arg);
if((conn = connect(socketfd, servinfo->ai_addr, servinfo->ai_addrlen)) < 0)
{
if(errno == EINPROGRESS)
{
do
{
tv.tv_sec = CONNECT_TIMEOUT;
tv.tv_usec = 0;
FD_ZERO(&myset);
FD_SET(socketfd, &myset);
if((res = select(socketfd+1, NULL, &myset, NULL, &tv) > 0))
{
if( (arg = fcntl(socketfd, F_GETFL, NULL)) < 0) {
perror("fcntl get 2");
}
arg &= (~O_NONBLOCK);
if( fcntl(socketfd, F_SETFL, arg) < 0) {
perror("fcntl set 2");
}
char szBuf[4096];
std::string htmlreq = "HEAD / HTTP/1.1\r\nHost:";
htmlreq += info->hostName;
htmlreq += "\r\n\r\n";
if((conn = send(socketfd,htmlreq.c_str(),htmlreq.size(),0)) == -1 && errno != EINTR)
{
perror("send");
close(socketfd);
return;
}
if((conn = recv(socketfd,szBuf,sizeof(szBuf),0)) < 0 && errno != EINTR)
{
perror("recv");
close(socketfd);
return ;
}
close(socketfd);
// do stuff with data
break;
}
else
{
//timeout
break;
}
}while(1);
}
else
{
perror("connect");
close(socketfd);
return;
}
}
I removed some error checking from the start. What I get as output is "Send: Connection Refused" after some time. I'd appreciate some pointers as to what part could be causing problems; the platform is Ubuntu Linux. I'd also be glad to post other parts of the code if needed. Thanks in advance.
The resource you're probably running out of is on the server you're connecting to. The connection is being refused by the computer you're connecting to because it's either:
configured to limit the number of connections per second (based on some criteria),
or under too much load for some reason and can't take any more connections.
Since you always get the error on the third connection, it could be that the server you're connecting to limits the number of connections on a per-IP basis.
Edit 1
You're trying to do a non-blocking connect? Now that I look at it more closely, it sounds like your problem is with the select: select is reporting the socket as ready before the connection has actually completed, and then you're calling send. One of the things to watch out for with non-blocking connects is that the socket becomes both readable and writeable on error, which means you need to check for both after select returns; otherwise you may miss whatever the actual error is and see the send error instead.
This is from Stevens UNP:
FD_ZERO(&rset);
FD_SET(sockfd, &rset);
wset = rset;
tval.tv_sec = nsec;
tval.tv_usec = 0;
if ( (n = Select(sockfd+1, &rset, &wset, NULL,
nsec ? &tval : NULL)) == 0) {
close(sockfd); /* timeout */
errno = ETIMEDOUT;
return(-1);
}
if (FD_ISSET(sockfd, &rset) || FD_ISSET(sockfd, &wset)) {
len = sizeof(error);
if (getsockopt(sockfd, SOL_SOCKET, SO_ERROR, &error, &len) < 0)
return(-1); /* Solaris pending error */
} else
err_quit("select error: sockfd not set");
done:
Fcntl(sockfd, F_SETFL, flags); /* restore file status flags */
if (error) {
close(sockfd); /* just in case */
errno = error;
return(-1);
}
return(0);
There are quite a few problems in your code.
First you set the socket to non-blocking. I don't understand why you do this. The connect function has an internal timeout and so won't block indefinitely.
Another problem with your code is that the first if statement skips the whole instruction block if the connection succeeds immediately, which may happen.
You apparently want to send the HEAD message first. There is no real need to make this non-blocking unless you expect the remote server or the network to be very slow and want a timeout on it. In that case the select with a non-blocking socket would make sense.
Once you send the HEAD message, you expect some data in response, which you collect with the recv function. Be aware that this call may return before the whole of the data sent by the server has been received. You need an independent way to determine that all the data has been received. Would the server close the connection? That would be detected by recv returning 0.
So the recv should be wrapped in a loop where you append the received data to some buffer or a file and quit when recv returns 0 (see the sketch below). Use a non-blocking socket if you want to add a timeout on this recv operation, which may indeed block.
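A rough sketch of such a loop, reusing the buffer and descriptor names from the question and assuming the response fits in the fixed-size buffer:

char szBuf[4096];
size_t total = 0;
while (1) {
    ssize_t n = recv(socketfd, szBuf + total, sizeof(szBuf) - total, 0);
    if (n == 0)
        break;                 /* server closed the connection: response complete */
    if (n < 0) {
        if (errno == EINTR)
            continue;          /* interrupted by a signal, just retry */
        perror("recv");
        break;
    }
    total += (size_t)n;
    if (total == sizeof(szBuf))
        break;                 /* buffer full; a real client would grow it or parse as it goes */
}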
But first try it without timeouts, to be sure it works at full speed without blocking, like your current version.
I suspect the initial connect is slow because of name and IP address resolution, and gets faster in subsequent calls because the data is cached.