Confused about the behavior of function shutdown(fd, options) - c

I'm testing socket code that transfers a text-based file, written while following the book Unix Network Programming (Chinese version). Briefly, I will paste some of the code below:
My serve_client function:
void serve_client(int connfd, const char *filename, size_t filesize)
{
    char header[1024];
    int fd = open(filename, O_RDONLY, 0);
    char *file_mapped;
    if (fd == -1)
    {
        char *not_found = "HTTP/1.1 404 NOT FOUND\r\n";
        send(connfd, not_found, strlen(not_found), 0);
    }
    else
    {
        // build the header at an explicit write offset (sprintf's output
        // buffer must not also be one of its source arguments)
        int off = sprintf(header, "HTTP/1.1 200 OK\r\n");
        off += sprintf(header + off, "Content-Length: %zu\r\n", filesize);
        sprintf(header + off, "Content-Type: text/plain; charset=utf-8\r\n\r\n");
        // send http response header
        send(connfd, header, strlen(header), 0);
        printf("Response headers:\n");
        printf("%s", header);
        file_mapped = (char *)mmap(0, filesize, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (file_mapped == MAP_FAILED)
        {
            perror("mmap failed");
            _exit(1);
        }
        // send http response body
        send(connfd, file_mapped, filesize, 0);
        if (munmap(file_mapped, filesize) == -1)
        {
            perror("munmap failed");
            _exit(1);
        }
    }
}
There are several questions I would like to ask you guys:
After serve_client() returns successfully, the data should at least have been copied completely into the kernel's socket send buffer, to be transmitted in the near future. Am I right about this?
shutdown() function is called as below:
serve_client(connfd, path, st.st_size);
shutdown(connfd, SHUT_WR);
// thread or process ends
I checked the tips mentioned in this book; it says that calling this function with the SHUT_WR option will cause any data remaining in the kernel buffer to be sent first, followed by the final FIN. Is that right?
I captured the data sent and received with Wireshark, as the screenshot below illustrates:
https://i.imgur.com/Xu8gAgh.jpg
I saw that the RST arrived before all the data showed up, which breaks the client (e.g. wget or plain browser access). Any advice would be great.
For now I have worked around the issue as follows, letting the client close the connection while the server waits for the FIN to arrive. It works, but it's still not what I want. :(
while (1)
{
    ssize_t bytes_read = recv(connfd, buf, 1024, 0);
    if (bytes_read > 0)
    {
        continue;
    }
    else if (bytes_read == 0)
    {
        close(connfd);
        break;
    }
    else
    {
        // bytes_read < 0: handle error
        close(connfd);
        break;
    }
}
EDIT Sorry for the misunderstanding this question caused: the dump shows the RST being sent from the server, which matches what I've been told, i.e. the process exited prematurely. That's why the previous code didn't work. Thank you for all your explanations, they really helped me better understand the process under the hood.

Ending a process implicitly close()s all file/socket descriptors, and this is the problem: closing right after sending may cause data loss on the receiver side (depending on the TCP stack's implementation).
You need to implement an application-level protocol in which the client acknowledges reception of all data before the server may close the socket.
To summarise: using closure of a socket as part of the application-level protocol is not reliable. Do not do this.
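A minimal sketch of such an acknowledgement protocol (the single ack byte and the function names are assumptions made for this illustration, not something from the question):

#include <sys/socket.h>
#include <unistd.h>

/* Server side: half-close after sending, then wait for the client's
 * ack byte (or its FIN) before close(), so no data can be discarded. */
void server_finish(int connfd)
{
    shutdown(connfd, SHUT_WR);            /* queued data goes out, then FIN */
    char ack;
    while (recv(connfd, &ack, 1, 0) > 0)  /* block until ack or EOF (0) */
        ;
    close(connfd);                        /* client has confirmed receipt */
}

/* Client side: read until EOF (the server's FIN), then acknowledge. */
void client_finish(int sockfd)
{
    char buf[4096];
    while (recv(sockfd, buf, sizeof buf, 0) > 0)
        ;                                 /* consume the remaining data */
    send(sockfd, "K", 1, 0);              /* tell the server everything arrived */
    close(sockfd);
}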

Related

How to destructively close a socket when my write() times out?

Not sure how well I worded the title. I've written a Linux domain-socket server and client. The client sets a timeout value on the write. If the client can't send all of its data, I don't want the server to accept the data that it has sent. Is there a way the client can indicate that it didn't send all of the data? Maybe somehow cause the server's read() to fail? The sockets are set up as stream sockets.
So basically I want to know what to do in this case:
ssize_t bytes_written = write(fd, buffer, length);
if (bytes_written == -1)
{
    result = -1;
    goto done;
}
// I think the only case where write can return success
// without all bytes written is when the timeout value has
// elapsed after some bytes have been written.
if (bytes_written != length)
{
    result = -1;
    errno = ETIMEDOUT;
}
.
.
.
done:
if (result == -1)
    result = errno;
if (fd != -1)
{
    shutdown(fd, SHUT_RDWR);
    close(fd);
}
return result;
}
I realize an obvious solution is for the client to send a byte count first and then send the bytes. I was wondering whether there was another way. Also, each message could be a different size.
You can prefix your data with a length header. If the data received doesn't match the length, the server can drop it.
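A sketch of that framing, assuming a 4-byte network-order length prefix (the field width, byte order, and function names are choices made for this example):

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Sender: prefix the payload with its length so the receiver can tell a
 * complete message from a truncated one. */
int send_framed(int fd, const char *payload, uint32_t len)
{
    uint32_t be_len = htonl(len);
    if (send(fd, &be_len, sizeof be_len, 0) != (ssize_t)sizeof be_len)
        return -1;
    /* a production sender would also loop here on partial sends */
    return send(fd, payload, len, 0) == (ssize_t)len ? 0 : -1;
}

/* Receiver: read the header, then exactly len bytes; hitting EOF first
 * means the message was truncated, so it can be dropped. */
ssize_t recv_framed(int fd, char *buf, size_t bufsize)
{
    uint32_t be_len;
    if (recv(fd, &be_len, sizeof be_len, MSG_WAITALL) != (ssize_t)sizeof be_len)
        return -1;
    uint32_t len = ntohl(be_len);
    if (len > bufsize)
        return -1;                        /* message too large for the buffer */
    if (recv(fd, buf, len, MSG_WAITALL) != (ssize_t)len)
        return -1;                        /* truncated: drop it */
    return (ssize_t)len;
}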

C sockets, Incomplete file transfer

I'm writing a C program to transfer a file of fixed size, a little over 2 MB, from a server to a client. I'm using TCP sockets on Linux, and the code I wrote is the following:
Server (sender)
while (1) {
    int nread = read(file, buffer, bufsize);
    if (nread == 0) // EOF
        break;
    if (nread < 0) {
        // handle errors
    }
    char* partial = buffer;
    while (nread > 0) {
        int nwrite = write(socket, partial, nread);
        if (nwrite <= 0) {
            // handle errors
        }
        nread -= nwrite;
        partial += nwrite;
    }
}
// file sent
shutdown(socket, SHUT_WR);
Client (receiver)
while (filesize > 0) {
    nread = read(socket, buffer, bufsize);
    if (nread == 0) {
        // EOF - if we reach this point filesize is still > 0
        // so the transfer was incomplete
        break;
    }
    if (nread < 0) {
        // handle errors
    }
    char* partial = buffer;
    while (nread > 0) {
        nwrite = write(file, partial, nread);
        if (nwrite <= 0) {
            // handle errors
        }
        nread -= nwrite;
        partial += nwrite;
        filesize -= nwrite;
    }
}
if (filesize > 0) {
    // incomplete transfer
    // handle error
}
close(socket);
When testing the code on my laptop (both client and server run on localhost, and the communication happens over the loopback interface), sometimes the client exits because read() received an EOF, and not because it received all filesize bytes. Since I use a shutdown() on the server, this should mean that there is no more data to read.
(Note that the server sent all the bytes and executed the shutdown correctly.)
Can you explain why this is happening?
Where have the missing bytes gone?
-----
EDIT 1 - Clarifications
Some users asked for a couple of clarifications, so I am posting the answers here:
The program is using blocking TCP sockets.
The file size is a fixed value and is hardcoded in both client and server.
No special socket options, such as SO_LINGER, are enabled/used.
When the error occurs, the server (sender) has already sent all the data and executed the shutdown correctly.
The error, as of today, has never happened when testing the application with the client and the server on different machines (transfer over a real network interface rather than the loopback interface).
EDIT 2
User Cornstalks pointed me to a really interesting article about the not-always-reliable behaviours of TCP.
The article, which is well worth a read, describes a few tricks that are useful when sending an unknown amount of data between TCP sockets. The tricks described are the following:
Take advantage of the SO_LINGER option on the sender. This will help to keep the socket open, upon a call to close(2) or shutdown(2), until all the data has successfully been sent.
On the receiver, beware of pending readable data before the actual receiving loop; it could lead to an immediate reset being sent.
Take advantage of shutdown(2) to signal the receiver that the sender is done sending data.
Let the receiver know the size of the file that will be sent before actually sending the file.
Have the receiver acknowledge to the sender that the receiving loop is over. This will help to prevent the sender from closing the socket too soon.
After reading the article, I upgraded my code to implement tricks number 1 and 5.
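The question doesn't show the code for trick number 1, but enabling SO_LINGER on the sender typically looks something like this sketch (the 10-second linger value is an arbitrary choice for illustration):

struct linger l;
l.l_onoff = 1;    /* enable lingering on close(2)/shutdown(2) */
l.l_linger = 10;  /* block close() for up to 10 seconds while data drains */
if (setsockopt(socket, SOL_SOCKET, SO_LINGER, &l, sizeof(l)) < 0) {
    // handle errors
}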
This is how I implemented trick number 5:
Server (sender)
// sending loop ...
// file sent
shutdown(socket, SHUT_WR);
// wait for acknowledgement from the client
ack = read(socket, buffer, bufsize);
if (ack < 0) {
    // handle errors
}
Client (receiver)
// receiving loop..
if (filesize > 0) {
    // incomplete transfer
    // handle error
}
// send acknowledgement to the server
// this will send a FIN and trigger a read = 0 on the server
shutdown(socket, SHUT_WR);
close(socket);
What about tricks number 2, 3 and 4?
Trick number 2 is not needed because as soon as the server accepts the connection, the application proceeds to the file transfer; no extra messages are exchanged.
Trick number 3 is already implemented.
Trick number 4 is also already implemented: as mentioned earlier, the file size is hardcoded, so there is no need to exchange it.
Did this solve my original problem?
No, my problem was not solved. The error is still happening and, as of today, it has only ever happened when testing the application with both client and server on localhost.
What do you think?
Is there a way to prevent this?
You're:
1. assuming that read() fills the buffer, even though
2. you're defending magnificently against write() not writing the entire buffer.
You need to do (1), and you don't need to do (2), because you're in blocking mode and POSIX assures that write() doesn't return until all the data is written.
A simple version of both loops:
while ((nread = read(inFD, buffer, sizeof buffer)) > 0)
{
    write(outFD, buffer, nread);
}
if (nread == -1)
    ; // error
A more correct version would check the result of write() for errors of course.
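Such a version might look like the sketch below; the inner partial-write loop is kept for safety on platforms where a blocking write() may still return short, even though, as argued above, that is not expected here:

#include <unistd.h>

/* Copy inFD to outFD until EOF; returns 0 on success, -1 on error. */
int copy_fd(int inFD, int outFD)
{
    char buffer[8192];
    ssize_t nread;
    while ((nread = read(inFD, buffer, sizeof buffer)) > 0) {
        char *p = buffer;
        while (nread > 0) {               /* tolerate partial writes */
            ssize_t nwritten = write(outFD, p, (size_t)nread);
            if (nwritten <= 0)
                return -1;                /* write error */
            nread -= nwritten;
            p += nwritten;
        }
    }
    return nread == 0 ? 0 : -1;           /* 0 = EOF, -1 = read error */
}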

close() is not closing socket properly

I have a multi-threaded server (thread pool) that is handling a large number of requests (up to 500/sec for one node) using 20 threads. There's a listener thread that accepts incoming connections and queues them for the handler threads to process. Once the response is ready, a thread writes out to the client and closes the socket. All seemed to be fine until recently, when a test client program started hanging randomly after reading the response. After a lot of digging, it seems that the close() from the server is not actually disconnecting the socket. I've added some debugging prints to the code with the file descriptor number, and I get this type of output:
Processing request for 21
Writing to 21
Closing 21
The return value of close() is 0, or there would be another debug statement printed. After this output with a client that hangs, lsof is showing an established connection.
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (ESTABLISHED)
CLIENT 17747 root 12u IPv4 32754228 TCP localhost:47530->localhost:9980 (ESTABLISHED)
It's as if the server never sends the shutdown sequence to the client, and this state persists until the client is killed, leaving the server side in a CLOSE_WAIT state:
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (CLOSE_WAIT)
Also, if the client has a timeout specified, it will time out instead of hanging. I can also manually run
call close(21)
in the server from gdb, and the client will then disconnect. This happens maybe once in 50,000 requests, but might not happen for extended periods.
Linux version: 2.6.21.7-2.fc8xen
Centos version: 5.4 (Final)
The socket actions are as follows.
SERVER:
int client_socket;
struct sockaddr_in client_addr;
socklen_t client_len = sizeof(client_addr);

while(true) {
    client_socket = accept(incoming_socket, (struct sockaddr *)&client_addr, &client_len);
    if (client_socket == -1)
        continue;
    /* insert into queue here for threads to process */
}
Then the thread picks up the socket and builds the response.
/* get client_socket from queue */
/* processing request here */
/* now set back to blocking for the write; it was previously
 * set to non-blocking for reading */
int flags = fcntl(client_socket, F_GETFL);
if (flags < 0)
    abort();
if (fcntl(client_socket, F_SETFL, flags & ~O_NONBLOCK) < 0) /* clear the flag to restore blocking mode */
    abort();
server_write(client_socket, response_buf, response_length);
server_close(client_socket);
server_write and server_close.
void server_write(int fd, char const *buf, ssize_t len) {
    printf("Writing to %d\n", fd);
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n <= 0)
            return; // I don't really care what error happened, we'll just drop the connection
        len -= n;
        buf += n;
    }
}

void server_close(int fd) {
    for (uint32_t i = 0; i < 10; i++) {
        int n = close(fd);
        if (!n) { // closed successfully
            return;
        }
        usleep(100);
    }
    printf("Close failed for %d\n", fd);
}
CLIENT:
Client side is using libcurl v 7.27.0
CURL *curl = curl_easy_init();
CURLcode res;
curl_easy_setopt( curl, CURLOPT_URL, url);
curl_easy_setopt( curl, CURLOPT_WRITEFUNCTION, write_callback );
curl_easy_setopt( curl, CURLOPT_WRITEDATA, write_tag );
res = curl_easy_perform(curl);
Nothing fancy, just a basic curl connection. The client hangs in transfer.c (in libcurl) because the socket is not perceived as being closed; it's waiting for more data from the server.
Things I've tried so far:
Shutdown before close
shutdown(fd, SHUT_WR);
char buf[64];
while(read(fd, buf, 64) > 0);
/* then close */
Setting SO_LINGER to close forcibly in 1 second
struct linger l;
l.l_onoff = 1;
l.l_linger = 1;
if (setsockopt(client_socket, SOL_SOCKET, SO_LINGER, &l, sizeof(l)) == -1)
    abort();
These have made no difference. Any ideas would be greatly appreciated.
EDIT -- This ended up being a thread-safety issue inside a queue library causing the socket to be handled inappropriately by multiple threads.
Here is some code I've used on many Unix-like systems (e.g. SunOS 4, SGI IRIX, HPUX 10.20, CentOS 5, Cygwin) to close a socket:
int getSO_ERROR(int fd) {
    int err = 1;
    socklen_t len = sizeof err;
    if (-1 == getsockopt(fd, SOL_SOCKET, SO_ERROR, (char *)&err, &len))
        FatalError("getSO_ERROR");
    if (err)
        errno = err;  // set errno to the socket SO_ERROR
    return err;
}

void closeSocket(int fd) {  // *not* the Windows closesocket()
    if (fd >= 0) {
        getSO_ERROR(fd);  // first clear any errors, which can cause close to fail
        if (shutdown(fd, SHUT_RDWR) < 0)  // secondly, terminate the 'reliable' delivery
            if (errno != ENOTCONN && errno != EINVAL)  // SGI causes EINVAL
                Perror("shutdown");
        if (close(fd) < 0)  // finally call close()
            Perror("close");
    }
}
But the above does not guarantee that any buffered writes are sent.
Graceful close: It took me about 10 years to figure out how to close a socket. But for another 10 years I just lazily called usleep(20000) for a slight delay to 'ensure' that the write buffer was flushed before the close. This obviously is not very clever, because:
The delay was too long most of the time.
The delay was too short some of the time--maybe!
A signal such as SIGCHLD could occur and end usleep() (but I usually called usleep() twice to handle this case--a hack).
There was no indication whether this worked. But this is perhaps not important if a) hard resets are perfectly OK, and/or b) you have control over both sides of the link.
But doing a proper flush is surprisingly hard. Using SO_LINGER is apparently not the way to go; see for example:
http://msdn.microsoft.com/en-us/library/ms740481%28v=vs.85%29.aspx
https://www.google.ca/#q=the-ultimate-so_linger-page
And SIOCOUTQ appears to be Linux-specific.
Note that shutdown(fd, SHUT_WR) doesn't stop writing, contrary to its name, and maybe contrary to man 2 shutdown.
This code flushSocketBeforeClose() waits until a read of zero bytes, or until the timer expires. The function haveInput() is a simple wrapper for select(2), and is set to block for up to 1/100th of a second.
bool haveInput(int fd, double timeout) {
    int status;
    fd_set fds;
    struct timeval tv;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    tv.tv_sec = (long)timeout;  // cast needed for C++
    tv.tv_usec = (long)((timeout - tv.tv_sec) * 1000000);  // 'suseconds_t'
    while (1) {
        if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
            return FALSE;
        else if (status > 0 && FD_ISSET(fd, &fds))
            return TRUE;
        else if (status > 0)
            FatalError("I am confused");
        else if (errno != EINTR)
            FatalError("select");  // tbd EBADF: man page "an error has occurred"
    }
}
bool flushSocketBeforeClose(int fd, double timeout) {
    const double start = getWallTimeEpoch();
    char discard[99];
    ASSERT(SHUT_WR == 1);
    if (shutdown(fd, 1) != -1)
        while (getWallTimeEpoch() < start + timeout)
            while (haveInput(fd, 0.01))  // can block for 0.01 secs
                if (!read(fd, discard, sizeof discard))
                    return TRUE;  // success!
    return FALSE;
}
Example of use:
if (!flushSocketBeforeClose(fd, 2.0))  // can block for 2s
    printf("Warning: Cannot gracefully close socket\n");
closeSocket(fd);
In the above, my getWallTimeEpoch() is similar to time(), and Perror() is a wrapper for perror().
Edit: Some comments:
My first admission is a bit embarrassing. The OP and Nemo challenged the need to clear the internal so_error before close, but I cannot now find any reference for this. The system in question was HPUX 10.20. After a failed connect(), just calling close() did not release the file descriptor, because the system wished to deliver an outstanding error to me. But I, like most people, never bothered to check the return value of close. So I eventually ran out of file descriptors (ulimit -n), which finally got my attention.
(very minor point) One commentator objected to the hard-coded numerical arguments to shutdown(), rather than e.g. SHUT_WR for 1. The simplest answer is that Windows uses different #defines/enums e.g. SD_SEND. And many other writers (e.g. Beej) use constants, as do many legacy systems.
Also, I always, always, set FD_CLOEXEC on all my sockets, since in my applications I never want them passed to a child and, more importantly, I don't want a hung child to impact me.
Sample code to set CLOEXEC:
static void setFD_CLOEXEC(int fd) {
    int status = fcntl(fd, F_GETFD, 0);
    if (status >= 0)
        status = fcntl(fd, F_SETFD, status | FD_CLOEXEC);
    if (status < 0)
        Perror("Error getting/setting socket FD_CLOEXEC flags");
}
Great answer from Joseph Quinsey. I have comments on the haveInput function. I wonder how likely it is that select returns an fd you did not include in your set; that would be a major OS bug, IMHO. It's the kind of thing I would check if I wrote unit tests for the select function, not in an ordinary app.
if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
    return FALSE;
else if (status > 0 && FD_ISSET(fd, &fds))
    return TRUE;
else if (status > 0)
    FatalError("I am confused");  // <--- fd unknown to function
My other comment pertains to the handling of EINTR. In theory, you could get stuck in an infinite loop if select kept returning EINTR, as this error lets the loop start over. Given the very short timeout (0.01), that appears highly unlikely to happen. However, I think the appropriate way of dealing with this would be to return errors to the caller (flushSocketBeforeClose). The caller can keep calling haveInput as long as its timeout hasn't expired, and declare failure for other errors.
ADDITION #1
flushSocketBeforeClose will not exit quickly if read returns an error; it will keep looping until the timeout expires. You can't rely on the select inside haveInput to anticipate all errors, since read has errors of its own (e.g. EIO).
while (haveInput(fd, 0.01))
    if (!read(fd, discard, sizeof discard))  // <-- -1 does not end the loop
        return TRUE;
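One way to address both comments, sketched against the answer's code (the tri-state return convention and the names haveInput2/flushSocketBeforeClose2 are this sketch's own; getWallTimeEpoch is the answer's clock helper):

#include <errno.h>
#include <stdbool.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

double getWallTimeEpoch(void);  /* the answer's time() analogue */

/* -1 = select error (e.g. EBADF), 0 = nothing readable before the
 * timeout, 1 = fd is readable. Errors go back to the caller. */
static int haveInput2(int fd, double timeout)
{
    fd_set fds;
    struct timeval tv;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    tv.tv_sec = (long)timeout;
    tv.tv_usec = (long)((timeout - tv.tv_sec) * 1000000);
    for (;;) {
        int status = select(fd + 1, &fds, 0, 0, &tv);
        if (status == 0) return 0;
        if (status > 0) return 1;       /* fd is the only member of the set */
        if (errno != EINTR) return -1;  /* propagate instead of aborting */
        /* EINTR: retry; the short timeout bounds the extra wait */
    }
}

/* Same contract as the answer's function, but a select() or read()
 * failure ends the wait immediately instead of spinning to the timeout. */
bool flushSocketBeforeClose2(int fd, double timeout)
{
    const double start = getWallTimeEpoch();
    char discard[99];
    if (shutdown(fd, SHUT_WR) == -1)
        return false;
    while (getWallTimeEpoch() < start + timeout) {
        int status = haveInput2(fd, 0.01);
        if (status < 0)
            return false;            /* select() failed: give up now */
        if (status > 0) {
            ssize_t n = read(fd, discard, sizeof discard);
            if (n == 0) return true;  /* peer's FIN: output was flushed */
            if (n < 0) return false;  /* read error (e.g. EIO): exit early */
        }
    }
    return false;                    /* timed out */
}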
This sounds to me like a bug in your Linux distribution.
The GNU C library documentation says:
"When you have finished using a socket, you can simply close its file descriptor with close."
Nothing about clearing any error flags or waiting for the data to be flushed or any such thing.
Your code is fine; your O/S has a bug.
Include:
#include <unistd.h>
This should help solve the close() problem.

Basic http proxy in c, problems

I am building an HTTP proxy in C.
The proxy is supposed to filter certain keywords in the URL and in the HTML content.
The first problem I have is with the send() function. When I load a page for the first time, all is fine and dandy, and if I let the page finish loading, the next request is also fine. But if I open www.google.com and start to type, the "instant" feature makes a new request before the last one is complete, and I get the following error:
Program received signal SIGPIPE, Broken pipe.
0x00007ffff7b2efc2 in send () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) up
#1 0x0000000000401f1a in main () at net-ninny2.c:232
232 bytes_sent += send(i, buffer+bytes_sent, buffer_size-bytes_sent, 0);
The code-block that generates the error looks like this:
while (bytes_sent < buffer_size) {
    bytes_sent += send(i, buffer + bytes_sent, buffer_size - bytes_sent, 0);
    printf("* Bytes sent to Client: %d/%d\n", bytes_sent, buffer_size);
}
If you think it's relevant, I'll be happy to provide more code.
My second problem is related to HTTP headers. Since I want to filter keywords in the HTML content, I don't want the content to be encoded. Google doesn't seem to agree with that, and no matter what I put in the Accept-Encoding header, I always get the content back encoded as gzip. Any ideas how to get rid of that?
EDIT:
I am also trying to use fork() to create child processes for the new connections, but that just throws a nasty error:
select: Interrupted system call
I have put it where I create a new file descriptor from an incoming connection:
if (i == listener) {
    // New connection
    remote_addr_len = sizeof remote_addr;
    newfd = accept(listener, (struct sockaddr *)&remote_addr, &remote_addr_len);
    if (newfd == -1) {
        perror("accept");
    }
    else {
        FD_SET(newfd, &master); // Add new connection to master set
        if (newfd > fdmax) {
            fdmax = newfd;
        }
        printf("* New connection from %s on socket %d\n",
               inet_ntop(remote_addr.ss_family,
                         get_in_addr((struct sockaddr *)&remote_addr),
                         remoteIP, INET6_ADDRSTRLEN), newfd);
        if (!fork()) {
            fprintf(stderr, "!fork()\n");
            close(newfd);
            exit(5);
        }
    }
}
But I'm guessing I am doing it all wrong.
Cheers!
For your first question, you will want to ignore the SIGPIPE signal:
signal(SIGPIPE, SIG_IGN);
See How to prevent SIGPIPEs (or handle them properly) for more detail. If you ignore the signal and the socket connection is reset, you will also want to handle the -1 error return value from send() appropriately.
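If you do ignore the signal (or, on Linux, pass MSG_NOSIGNAL per call), the send loop then has to check for -1 so a reset connection is dropped instead of corrupting bytes_sent. A sketch of such a loop (error handling kept minimal):

#include <errno.h>
#include <sys/socket.h>

/* Send the whole buffer; returns 0 on success, -1 if the peer went away. */
int send_all(int fd, const char *buffer, size_t buffer_size)
{
    size_t bytes_sent = 0;
    while (bytes_sent < buffer_size) {
        ssize_t n = send(fd, buffer + bytes_sent, buffer_size - bytes_sent,
                         MSG_NOSIGNAL);  /* Linux: EPIPE error instead of SIGPIPE */
        if (n < 0) {
            if (errno == EINTR)
                continue;                /* interrupted: retry */
            return -1;                   /* EPIPE, ECONNRESET, ...: drop client */
        }
        bytes_sent += (size_t)n;
    }
    return 0;
}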
For your second question, you may not be able to force Google to send data uncompressed, since Google may assume that all browsers can handle compressed data. You will probably need to embed a gzip decompressor in your proxy. It's certainly not fair to increase the bandwidth requirements of both ends just because you want to filter some keywords.
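If you do end up decompressing, zlib can inflate gzip streams directly by widening the window bits. A minimal whole-buffer sketch (assuming zlib is available; a real proxy would feed inflate() incrementally as response chunks arrive):

#include <string.h>
#include <zlib.h>

/* Decompress a complete gzip-encoded body; returns the decompressed size
 * or -1 on error. */
long gunzip_buffer(const unsigned char *in, size_t in_len,
                   unsigned char *out, size_t out_len)
{
    z_stream zs;
    memset(&zs, 0, sizeof zs);
    /* 16 + MAX_WBITS tells zlib to expect a gzip wrapper, not raw deflate */
    if (inflateInit2(&zs, 16 + MAX_WBITS) != Z_OK)
        return -1;
    zs.next_in = (unsigned char *)in;
    zs.avail_in = (uInt)in_len;
    zs.next_out = out;
    zs.avail_out = (uInt)out_len;
    int rc = inflate(&zs, Z_FINISH);
    long result = (rc == Z_STREAM_END) ? (long)zs.total_out : -1;
    inflateEnd(&zs);
    return result;
}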

Sending while receiving in C

I've written a piece of code that runs on my server in multiple threads.
The problem is that it doesn't send data while I'm receiving on the other socket.
So if I send something from client 1 to client 2, client 2 only receives it once it sends something itself (and thereby jumps out of the recv function). How can I solve this?
/* Thread */
while (!stop_received) {
    nr_bytes_recv = recv(s, buffer, BUFFSIZE, 0);
    if (strncmp(buffer, "SEND", 4) == 0) {
        char *message = "Text asads \n";
        rv = send(users[0].s, message, strlen(message), 0);
        rv = send(users[1].s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    } else {
        char *message = "Unknown command \n";
        rv = send(s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    }
}
To be a little more specific, there are a few types of I/O. What you're doing currently is called blocking I/O. In general, that means that when you call send or recv, the operation will "block" until it has completed.
In contrast to that, there is what is known as non-blocking I/O. In this I/O model, an operation will return immediately if it's unable to complete. Typically the select function is used with this I/O model.
You can see an example program at the Select Tutorial. The full source code is at the bottom of the page.
As others have noted, your other option is to use threads.
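For a concrete shape of the select() approach, here is a stripped-down sketch; socks and count stand in for however the server tracks its clients (e.g. the users[] array in the question):

#include <sys/select.h>
#include <sys/socket.h>

/* One thread, no per-client blocking: wait until some socket is
 * readable, then recv() only from those. */
void serve_ready_sockets(int *socks, int count)
{
    fd_set readfds;
    for (;;) {
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int i = 0; i < count; i++) {
            FD_SET(socks[i], &readfds);
            if (socks[i] > maxfd)
                maxfd = socks[i];
        }
        if (select(maxfd + 1, &readfds, 0, 0, 0) < 0)
            break;  /* handle errors */
        for (int i = 0; i < count; i++)
            if (FD_ISSET(socks[i], &readfds)) {
                /* recv(socks[i], ...) will return without blocking */
            }
    }
}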
Your code will block on the recv() call. Either write a multi-threaded application, or investigate the use of the select() function.
Put send and receive in separate threads.
I notice that you are using perror() (the POSIX error function), which leads me to believe you are using a POSIX operating system, which in turn makes me suspect it's GNU/Linux.
select() is portable, poll() is POSIX-centric, and epoll() is Linux-centric. If using GNU/Linux, I strongly suggest avoiding select() and using:
poll() if you are polling only a few dozen file descriptors;
epoll() if you need to scale to thousands of connections, and it's available.
If your application need not be portable, and no requirement prohibits using extensions, use poll() or epoll(). Once you learn how select() works, you'll be very happy to get rid of it, especially for anything that has to scale to serve many clients.
If portability is a requirement, see whether poll() or epoll() exists during your build configuration and use whichever is available in favor of select().
Note that epoll() did not appear until Linux 2.5(something), so it's best to get used to using both.
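For comparison, a poll()-based wait for read-readiness is compact; the two-socket shape below mirrors the users[0]/users[1] pair from the question and is only illustrative:

#include <poll.h>

void wait_for_readable(int fd_a, int fd_b)
{
    struct pollfd fds[2];
    fds[0].fd = fd_a;
    fds[0].events = POLLIN;
    fds[1].fd = fd_b;
    fds[1].events = POLLIN;

    if (poll(fds, 2, -1) > 0) {  /* -1: block with no timeout */
        for (int i = 0; i < 2; i++)
            if (fds[i].revents & POLLIN) {
                /* recv() on fds[i].fd will return without blocking */
            }
    }
}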
You should separate the code into two threads, one transmitter and one receiver.
Something like this:
/* 1st thread: receiver */
while (!stop_received) {
    nr_bytes_recv = recv(s, buffer, BUFFSIZE, 0);
}

/* 2nd thread: transmitter */
while (!stop_received) {
    if (strncmp(buffer, "SEND", 4) == 0) {
        char *message = "Text asads \n";
        rv = send(users[0].s, message, strlen(message), 0);
        rv = send(users[1].s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    } else {
        char *message = "Unknown command \n";
        rv = send(s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    }
}
The concurrency will bring some issues, like access to the buffer variable.
There are two ways of achieving the goal you want:
1.) Implement the sending and receiving code in different threads. There will be some issues, though: an increasing number of clients might give you trouble managing the code, and there will be some concurrency problems (as mentioned by pcent). You could go for non-blocking sockets instead, but I suggest not doing so, as I hope you don't want a CPU hog.
2.) The other way is to use the select() function, which will let you monitor multiple sockets of different types at the same time. For more on select(), you can google it. :)
