Determining the moment a TCP connection has really been closed, in C

I need to perform some operations only after a TCP connection is fully closed, that is: all data segments, as well as the closing handshake (FIN/ACK in both directions, or RST), have completed, and no further packets will be sent on the wire.
Since closesocket() is not synchronous and can return before the connection and socket are fully closed, I used the SO_LINGER option to detect the moment of closing.
According to the MSDN documentation for closesocket(), and given that my socket is non-blocking (and thus asynchronous), I wrote this code:
int ret;

/* enable linger with a 2-second timeout (l_onoff = 1, l_linger = 2) */
struct linger lng = {1, 2};
setsockopt(s, SOL_SOCKET, SO_LINGER, (const char *)&lng, sizeof(lng));

/* graceful close of TCP (FIN/ACK for both sides) */
shutdown(s, SD_BOTH);

/* linger loop for asynchronous sockets, per MSDN */
do {
    ret = closesocket(s);
} while (ret == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK);

/* my code to be run only after all the traffic has finished */
printf("code to run after closing\n");
However, the closesocket() call returns zero (success) right away instead of entering the loop, and I can see in Wireshark that my final printf runs before all the packets have been sent. So it looks like the linger isn't working.
By the way, the functions I used to open and connect the asynchronous socket were socket() and the ConnectEx() function pointer (LPFN_CONNECTEX) obtained through WSAIoctl().
Why does the lingered closesocket() return before the TCP connection has fully finished? Is there a solution?

one thread to exit them all

I have a main program that spawns a few threads (using a while loop with accept() to receive clients), plus one thread whose only job is to listen to the keyboard; when the user enters the word exit, it should shut down the entire program.
First, the main program creates the listening thread, then it enters a while loop that accepts clients. Even when the keyboard thread gets the exit input, the loop is still stuck in accept().
I don't have to use a separate thread to listen to the keyboard, but I couldn't find a non-blocking way that would work.
The keyboard-listening thread:
DWORD WINAPI ListenService(LPVOID lpParam)
{
    char buffer[5];

    /* limit the read to 4 characters so "exit" plus the
       terminating '\0' fits in the buffer */
    if (EOF == scanf("%4s", buffer))
    {
        printf("failed to get word from keyboard\n");
        return -1;
    }
    if (STRINGS_ARE_EQUAL(buffer, "exit"))
    {
        return 999;
    }
    return -1;
}
In the main code:
ThreadListen = CreateThread(NULL, 0, ListenService, NULL, 0, &ThreadId);
while (1)
{
    SOCKET AcceptSocket = accept(MainSocket, NULL, NULL);
    if (AcceptSocket == INVALID_SOCKET)
    {
        printf("Accepting connection with client failed, error %d\n", WSAGetLastError());
        CleanupWorkerThreads();
        WSACleanup();
        break; /* stop looping once the listening socket is unusable */
    }
    printf("Client Connected.\n");
}
There are many different ways you can handle this.
You can abort a blocked accept() by simply closing the listening socket.
Or, you can use select() with a short timeout to detect when a new client is waiting before then calling accept(), checking your exit condition in between calls to select() (see the sketch after this list). Just be aware that there is a small race condition: a client may disconnect between the time select() and accept() are called, so accept() may still block if there are no more clients waiting.
Or, you can get rid of your threads and just use non-blocking sockets in a single thread, checking your exit condition periodically in between socket operations.
Or, you can use asynchronous sockets, using WSACreateEvent(), WSAEventSelect(), and WSAWaitForMultipleEvents() to detect socket activity. Then you can create an additional event to wait on for when the exit condition happens.
Or, you can use an I/O Completion Port to handle socket activity, and then you can post a custom exit packet into the IOCP queue using PostQueuedCompletionStatus() to "wake up" any waiting threads.
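As a sketch of the select()-with-timeout option above (assuming a Winsock listening socket MainSocket and an exit flag g_exit set by the keyboard thread; both names are invented for this example):

/* poll for pending connections with a 500 ms timeout,
 * checking the exit flag between calls */
volatile LONG g_exit = 0; /* set to 1 by the keyboard thread */

while (!g_exit)
{
    fd_set readfds;
    struct timeval tv = {0, 500000}; /* 500 ms */
    int n;

    FD_ZERO(&readfds);
    FD_SET(MainSocket, &readfds);

    /* the first argument to select() is ignored by Winsock */
    n = select(0, &readfds, NULL, NULL, &tv);
    if (n == SOCKET_ERROR)
        break; /* select() failed; handle the error as needed */
    if (n > 0 && FD_ISSET(MainSocket, &readfds))
    {
        SOCKET AcceptSocket = accept(MainSocket, NULL, NULL);
        if (AcceptSocket != INVALID_SOCKET)
            printf("Client Connected.\n");
    }
}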

Socket programming in C does not raise any error if the server is disconnected

I've written a socket server program that sends a message to the client every 200 ms in one process (a fork()ed child) and waits for data from the client in another. The problem is that if the server's connection is disconnected, no error is raised. I've tried using signals (SIGPIPE) and checking the errno variable.
When I disconnect my server hotspot, the data is still sent to the socket and no error is displayed.
Here's the code:
int ListeningForConnection(int Sockfd)
{
    socklen_t clilen = sizeof(Cli_addr);
    int SocketId;
    while (1)
    {
        printf("waiting for new client...\n");
        if ((SocketId = accept(Sockfd, (struct sockaddr *)&Cli_addr, &clilen)) < 0)
        {
            printf("ERROR on accept. errno:%d : %s\n", errno, strerror(errno));
            close(Sockfd);
            return -1;
        }
        printf("opened new communication with client\n");
        if (fork() == 0)
            SendDataToClient(SocketId);
        else
            GetDataFromClient(SocketId);
        /* if any error happens, go back to waiting for a new client */
    }
    return 0;
}
SendDataToClient:
while (1)
{
    int n;
    if ((n = send(socket, SendData, strlen(SendData), MSG_CONFIRM)) < 0)
    {
        printf("%s\n", "ERROR writing to socket");
        return;
    }
    printf("%s No:%d %s\n", SendData, n, strerror(errno));
    delay(200);
}
When I disconnect my server hotspot, the data is still sent to the socket and no error is displayed.
The fault is not your program but your expectations.
TCP is robust against temporary disruptions of connectivity between the client and server applications, and that is what you actually do when disconnecting the hotspot. TCP will recover from this once connectivity is established again, i.e. it will buffer unacknowledged data locally and retry sending it for some time.
I've tried using signals (SIGPIPE) and checking the errno variable.
Errors and/or SIGPIPE will happen if the TCP connection is actually closed, but not on temporary disruptions. Closing can be done explicitly by either client or server, or it can happen implicitly if unrecoverable delivery problems are detected, for example due to a timeout or to TCP keep-alive. If this is not (yet) the case, a write to the socket will either succeed directly, or it will block (if the socket is blocking) when there is no more space in the socket send buffer.
For early detection of disrupted connectivity on idle connections, use TCP keep-alive. For detecting problems delivering data, use a timeout for unacknowledged data.
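A minimal sketch of the keep-alive suggestion, using the Linux-specific TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options (the threshold values are arbitrary examples):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* probe an idle connection after 30 s, every 5 s, give up after 3 failed probes */
int enable_keepalive(int fd)
{
    int on = 1, idle = 30, intvl = 5, cnt = 3;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
}

With this in place, a peer that has silently disappeared makes a blocked recv() fail with ETIMEDOUT after roughly idle + intvl * cnt seconds, instead of hanging indefinitely.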

Understanding gap between socket creation and select() system call

I am aware that select() will be triggered whenever there is data in the registered socket buffer.
What will happen if there is a delay between these two statements?
FD_SET(listener, &read_fds); // &
(select(fdmax+1, &read_fds, NULL, NULL, NULL) == -1)
What will happen if a packet arrives between these two statements?
//create socket and listen for packets &
FD_SET(listener, &read_fds);
Assume that recv() is called once select() is triggered.
What will happen if a packet arrives before the select() call is made?
Does FD_ISSET still detect a packet that is already in the socket buffer, or will it be detected only if a new packet arrives and select() gets triggered?
Sample code:
// add the listener to the master set
FD_SET(listener, &master);
// keep track of the biggest file descriptor
fdmax = listener; // so far, it's this one

// main loop
for (;;) {
    read_fds = master; // copy it
    if (select(fdmax+1, &read_fds, NULL, NULL, NULL) == -1) {
        perror("select");
        exit(4);
    }
    // run through the existing connections looking for data to read
    for (i = 0; i <= fdmax; i++) {
        if (FD_ISSET(i, &read_fds)) { // we got one!!
Understanding gap between socket creation and select() system call
There is no gap between socket creation and select() in your question.
I am aware that select() will be triggered whenever there is data in the registered socket buffer.
That's true for read events and it applies to the socket receive buffer of connected sockets. It also triggers when there is an inbound connection on a listening socket, or room in the socket send buffer for send events.
What will happen if there is a delay between these two statements?
FD_SET(listener, &read_fds); // &
(select(fdmax+1, &read_fds, NULL, NULL, NULL) == -1)
Nothing bad. Any event that occurs between them will still be signalled. But the first statement isn't a socket creation, contrary to your title.
What will happen if a packet arrives between these two statements?
//create socket and listen for packets &
FD_SET(listener, &read_fds);
The socket receive buffer exists from the moment the socket is created, so the data will go into the buffer, and when select() runs it will see that and signal a read event.
Assume that recv() is called once select() is triggered.
What will happen if a packet arrives before the select() call is made?
The socket receive buffer exists from the moment the socket is created, so the data will go into the buffer, and when select() runs it will see that and signal a read event.
Does FD_ISSET still detect the packet which is already in the socket buffer,
Yes.
or will it be detected only if a new packet arrives and select() gets triggered?
It will always be detected.
If data is waiting to be read, select will return immediately, and FD_ISSET will return true for the file descriptor that the data arrived on. It doesn't matter if data arrived before or after select was called.
select() completes immediately if one or more of the watched conditions is already active; otherwise it blocks until one or more of the watched conditions becomes active (or the timeout, if specified, expires).
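Since readiness is level-triggered, here is a rough sketch of how the truncated sample loop above typically continues (Beej-style; newfd and buf are names invented for this example). Data that arrived before select() was called simply makes select() return immediately:

if (FD_ISSET(i, &read_fds)) { // we got one!!
    if (i == listener) {
        // new inbound connection: accept() will not block here
        int newfd = accept(listener, NULL, NULL);
        if (newfd != -1) {
            FD_SET(newfd, &master);
            if (newfd > fdmax)
                fdmax = newfd;
        }
    } else {
        // data already queued in the receive buffer: recv() returns at once
        char buf[256];
        int nbytes = recv(i, buf, sizeof(buf), 0);
        if (nbytes <= 0) { // connection closed or error
            close(i);
            FD_CLR(i, &master);
        }
    }
}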

Listen to multiple ports from one server

Is it possible to bind and listen to multiple ports in Linux in one application?
For each port that you want to listen to, you:
Create a separate socket with socket.
Bind it to the appropriate port with bind.
Call listen on the socket so that it's set up with a listen queue.
At that point, your program is listening on multiple sockets. In order to accept connections on those sockets, you need to know which socket a client is connecting to. That's where select comes in. As it happens, I have code that does exactly this sitting around, so here's a complete tested example of waiting for connections on multiple sockets and returning the file descriptor of a connection. The remote address is returned in additional parameters (the buffer must be provided by the caller, just like accept).
(socket_type here is a typedef for int on Linux systems, and INVALID_SOCKET is -1. Those are there because this code has been ported to Windows as well.)
socket_type
network_accept_any(socket_type fds[], unsigned int count,
                   struct sockaddr *addr, socklen_t *addrlen)
{
    fd_set readfds;
    socket_type maxfd, fd;
    unsigned int i;
    int status;

    FD_ZERO(&readfds);
    maxfd = -1;
    for (i = 0; i < count; i++) {
        FD_SET(fds[i], &readfds);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    status = select(maxfd + 1, &readfds, NULL, NULL, NULL);
    if (status < 0)
        return INVALID_SOCKET;
    fd = INVALID_SOCKET;
    for (i = 0; i < count; i++)
        if (FD_ISSET(fds[i], &readfds)) {
            fd = fds[i];
            break;
        }
    if (fd == INVALID_SOCKET)
        return INVALID_SOCKET;
    else
        return accept(fd, addr, addrlen);
}
This code doesn't tell the caller which port the client connected to, but you could easily add an int * parameter that would get the file descriptor that saw the incoming connection.
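For illustration, a hedged usage sketch (make_listener() is a hypothetical helper wrapping the socket()/bind()/listen() steps listed earlier):

socket_type fds[2];
struct sockaddr_storage ss;
socklen_t sslen = sizeof(ss);

fds[0] = make_listener(8080); /* hypothetical: socket() + bind() + listen() */
fds[1] = make_listener(8081);

for (;;) {
    socket_type client = network_accept_any(fds, 2,
                                            (struct sockaddr *)&ss, &sslen);
    if (client == INVALID_SOCKET)
        continue; /* select() or accept() failed; retry */
    /* ... serve the client, then close it ... */
    sslen = sizeof(ss); /* reset before the next call */
}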
You only bind() to a single socket, then listen() and accept() -- the socket for the bind is for the server, the fd from the accept() is for the client. You do your select on the latter looking for any client socket that has data pending on the input.
In such a situation, you may be interested by libevent. It will do the work of the select() for you, probably using a much better interface such as epoll().
The huge drawback of select() is the use of the FD_* macros, which limit the number of usable descriptors to the size of the fd_set bitmask (FD_SETSIZE, commonly 1024 on Linux and as low as 64 on some other platforms). If you have a small server with 2 or 3 connections, you'll be fine. If you intend to work on a much larger server, the fd_set can easily overflow.
Also, the use of select() or poll() allows you to avoid threads in the server (i.e. you can poll() your sockets and know whether you can accept(), read(), or write() to them.)
But if you really want to do it the Unix way, you want to consider fork()-ing before you call accept(). In this case you do not absolutely need select() or poll(), unless you are listening on many IPs/ports and want all children to be capable of answering any incoming connection. There are drawbacks with that too: the kernel may send you another request while you are already handling one, whereas with just an accept() the kernel knows that you are busy if you are not in the accept() call itself. (It does not work exactly like that internally, but as a user, that's the way it behaves for you.)
With fork() you prepare the socket in the main process and then call handle_request() in a child process, which calls the accept() function. That way you may have any number of ports and one or more children listening on each. That's the best way to respond really quickly to any incoming connection under Linux (i.e. as long as you have child processes waiting for a client, the response is practically instantaneous.)
void handle_request(int server_socket);

void init_server(int port)
{
    struct sockaddr_in addr = {0};
    int server_socket = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(server_socket, (struct sockaddr *)&addr, sizeof(addr));
    listen(server_socket, SOMAXCONN);

    for (int c = 0; c < 10; ++c)
    {
        pid_t child_pid = fork();
        if (child_pid == 0)
        {
            // here we are in a child
            handle_request(server_socket);
        }
    }
    // WARNING: this loop cannot be here, since it is blocking...
    // you will want to wait and see which child died and
    // create a new child for the same `server_socket`...
    // but this loop should get you started
    for (;;)
    {
        // wait on children's death (you'll need to handle SIGCHLD too)
        // and create new children as they die...
        wait(NULL);
        pid_t child_pid = fork();
        if (child_pid == 0)
        {
            handle_request(server_socket);
        }
    }
}

void handle_request(int server_socket)
{
    // here the child blocks until a connection arrives on `server_socket`
    int client_socket = accept(server_socket, NULL, NULL);
    ...handle the request...
    exit(0);
}

void create_servers()
{
    init_server(80);  // create a listener on port 80
    init_server(443); // create a listener on port 443
}
Note that the handle_request() function is shown here as handling one request. The advantage of handling a single request is that you can do it the Unix way: allocate resources as required and once the request is answered, exit(0). The exit(0) will call the necessary close(), free(), etc. for you.
In contrast, if you want to handle multiple requests in a row, you want to make sure that resources get deallocated before you loop back to the accept() call. Also, the sbrk() function is pretty much never going to be called to reduce the memory footprint of your child, which means it will tend to grow a little every now and then. This is why a server such as Apache2 is set up to answer a certain number of requests per child before starting a new child (by default it is between 100 and 1,000 these days.)

How to set socket timeout in C when making multiple connections?

I'm writing a simple program that makes multiple connections to different servers for status checks. All these connections are constructed on demand; up to 10 connections can exist simultaneously. I don't like the idea of one thread per socket, so I made all these client sockets non-blocking and threw them into a select() pool.
It worked great, until my client complained that it takes too long to get an error report when a target server stops responding.
I've checked several topics in the forum. Some suggested using the alarm() signal or setting a timeout in the select() call. But I'm dealing with multiple connections, not one: when a process-wide timeout signal fires, I have no way to tell which of the connections timed out.
Is there any way to change the system-default timeout duration?
You can use the SO_RCVTIMEO and SO_SNDTIMEO socket options to set timeouts for any socket operations, like so:
struct timeval timeout;
timeout.tv_sec = 10;
timeout.tv_usec = 0;

if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof timeout) < 0)
    error("setsockopt failed\n");

if (setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof timeout) < 0)
    error("setsockopt failed\n");
Edit: from the setsockopt man page:
SO_SNDTIMEO is an option to set a timeout value for output operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. In the current implementation, this timer is restarted each time additional data are delivered to the protocol, implying that the limit applies to output portions ranging in size from the low-water mark to the high-water mark for output.
SO_RCVTIMEO is an option to set a timeout value for input operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for input operations to complete. In the current implementation, this timer is restarted each time additional data are received by the protocol, and thus the limit is in effect an inactivity timer. If a receive operation has been blocked for this much time without receiving additional data, it returns with a short count or with the error EWOULDBLOCK if no data were received. The struct timeval parameter must represent a positive time interval; otherwise, setsockopt() returns with the error EDOM.
I am not sure I fully understand the issue, but I guess it's related to one I had. I am using Qt with TCP socket communication, all non-blocking, on both Windows and Linux.
I wanted to get a quick notification when an already-connected client failed or completely disappeared, instead of waiting the default 900+ seconds until the disconnected signal was raised. The trick to get this working was to set the TCP_USER_TIMEOUT socket option of the SOL_TCP level to the required value, given in milliseconds.
This is a comparatively new option (see https://www.rfc-editor.org/rfc/rfc5482), but apparently it works fine; I tried it with WinXP, Win7/x64 and Kubuntu 12.04/x64. My choice of 10 s turned out to be a bit long, but it was much better than anything else I had tried before ;-)
The only issue I came across was finding the proper includes, as apparently this option hasn't been added to the standard socket includes (yet..), so finally I defined them myself as follows:
#ifdef WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#endif
#ifndef SOL_TCP
#define SOL_TCP 6 // socket options TCP level
#endif
#ifndef TCP_USER_TIMEOUT
#define TCP_USER_TIMEOUT 18 // how long for loss retry before timeout [ms]
#endif
Setting this socket option only works when the client is already connected. The lines of code look like:
int timeout = 10000; // user timeout in milliseconds [ms]
setsockopt (fd, SOL_TCP, TCP_USER_TIMEOUT, (char*) &timeout, sizeof (timeout));
The failure of an initial connect() is caught by a timer started when calling connect(), as Qt gives no signal for this case: the connected signal will not be raised, as there is no connection, and the disconnected signal will also not be raised, as there never was a connection.
Can't you implement your own timeout system?
Keep a sorted list, or better yet a priority heap as Heath suggests, of timeout events. In your select or poll calls use the timeout value from the top of the timeout list. When that timeout arrives, do that action attached to that timeout.
That action could be closing a socket that hasn't connected yet.
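A rough sketch of that idea, assuming the caller keeps one absolute deadline per socket (next_timeout() and the deadlines array are invented for this example):

#include <sys/select.h>
#include <time.h>

/* turn the soonest per-socket deadline into a select() timeout */
struct timeval next_timeout(const time_t deadlines[], int count)
{
    time_t now = time(NULL), soonest = deadlines[0];
    struct timeval tv = {0, 0};
    int i;

    for (i = 1; i < count; i++)
        if (deadlines[i] < soonest)
            soonest = deadlines[i];
    if (soonest > now)
        tv.tv_sec = soonest - now; /* otherwise zero: already expired */
    return tv;
}

When select() returns 0, walk your sockets, close any whose deadline has passed (for example, connects that never completed), and recompute the timeout for the next call.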
A connect timeout has to be handled with a non-blocking socket (see the GNU libc documentation on connect()). connect() returns immediately, and you then use select() to wait, with a timeout, for the connection to complete.
This is also explained here: Operation now in progress error on connect() function.
int wait_on_sock(int sock, long timeout, int r, int w)
{
    struct timeval tv = {0, 0};
    fd_set fdset;
    fd_set *rfds, *wfds;
    int n, so_error;
    socklen_t so_len;

    FD_ZERO(&fdset);
    FD_SET(sock, &fdset);
    tv.tv_sec = timeout;
    tv.tv_usec = 0;
    TRACES("wait in progress tv={%ld,%ld} ...\n", tv.tv_sec, tv.tv_usec);

    if (r) rfds = &fdset; else rfds = NULL;
    if (w) wfds = &fdset; else wfds = NULL;

    TEMP_FAILURE_RETRY(n = select(sock + 1, rfds, wfds, NULL, &tv));
    switch (n) {
    case 0:
        ERROR("wait timed out\n");
        errno = ETIMEDOUT; /* select() does not set errno on timeout */
        return -errno;
    case -1:
        ERROR_SYS("error during wait\n");
        return -errno;
    default:
        /* select() says the socket is ready; check SO_ERROR to be sure */
        so_len = sizeof(so_error);
        so_error = 0;
        getsockopt(sock, SOL_SOCKET, SO_ERROR, &so_error, &so_len);
        if (so_error == 0)
            return 0;
        errno = so_error;
        ERROR_SYS("wait failed\n");
        return -errno;
    }
}
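For completeness, a hedged sketch of how wait_on_sock() is typically combined with a non-blocking connect() (connect_with_timeout() is a wrapper name invented here):

#include <errno.h>
#include <fcntl.h>

/* hypothetical wrapper: connect() with a timeout, using wait_on_sock() */
int connect_with_timeout(int sock, const struct sockaddr *addr,
                         socklen_t addrlen, long timeout)
{
    /* switch the socket to non-blocking mode */
    int flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);

    if (connect(sock, addr, addrlen) == 0)
        return 0;              /* connected immediately */
    if (errno != EINPROGRESS)
        return -errno;         /* immediate failure */

    /* wait for writability; wait_on_sock() then checks SO_ERROR */
    return wait_on_sock(sock, timeout, 0, 1);
}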
