Disconnect idle client in C

I have a list of clients and their descriptors.
I would like to start a timer when each client connects to my server, and disconnect clients that stay inactive for x seconds (for example 120 seconds).
I would just like an idea of how to proceed (or a code sample).

A method that works regardless of the mechanism you use to serve each client (fork, pthread, select) is to use poll() with a timeout. The example below uses stdin as the file descriptor; to adapt it to your environment, you basically just change:
struct pollfd pfd = {.fd = STDIN_FILENO, .events = POLLIN};
to
struct pollfd pfd = {.fd = fd, .events = POLLIN};
#include <stdio.h>
#include <stdlib.h>
#include <sys/poll.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd = {.fd = STDIN_FILENO, .events = POLLIN};

    /**
     * poll()
     * Waits for one of a set of file descriptors to become ready to perform I/O.
     * --------------------------------------------------------------------------
     * Arguments:
     *   1) Pointer to pollfd
     *   2) Number of pollfds
     *   3) Timeout in milliseconds
     * --------------------------------------------------------------------------
     * Returns:
     *   -1 on error
     *    0 on timeout
     *   Another value if "ready"
     */
    int ready = poll(&pfd, 1, 120000); // 120 seconds

    if (ready == -1)
    {
        perror("poll");
        exit(EXIT_FAILURE);
    }
    if (ready == 0)
    {
        // close(fd);
        puts("Timeout");
        // return from your pthread handler or exit your forked process here
    }
    if (pfd.revents & POLLIN)
    {
        // Handle client here
        // ssize_t size = recv(...);
    }
    return 0;
}
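For context, a minimal sketch of how this could look inside a per-client handler (handle_client() is a hypothetical name; it assumes fd is the already-accepted client socket and that each client is served by its own thread or forked child):
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical per-client handler: `fd` is assumed to be the connected socket.
void handle_client(int fd)
{
    char buf[4096];
    for (;;)
    {
        struct pollfd pfd = {.fd = fd, .events = POLLIN};
        int ready = poll(&pfd, 1, 120000);   // 120-second inactivity limit
        if (ready <= 0)                      // timeout (0) or error (-1): drop the client
            break;
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n <= 0)                          // peer closed the connection or recv failed
            break;
        // process the n bytes received in buf
    }
    close(fd);                               // disconnect the idle (or closed) client
}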

In each client structure you need to keep track of the disconnect time.
In your main loop (I assume you are using poll or select or similar) you need to check the earliest disconnect time, calculate how far that is from now, and use that as the timeout. If the earliest disconnect time is 5 seconds after now, then the timeout should be 5 seconds.
If you get a timeout, then disconnect that client.
Optionally you may put the clients in a sorted list based on their timeout so it's easy to find the next one; optionally you may check if more than one client times out at the same time, etc; that's out of scope.
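As an illustration of that timeout calculation, a minimal sketch, assuming whole-second resolution is enough (the struct client layout, IDLE_LIMIT and next_timeout_ms() are hypothetical names):
#include <poll.h>
#include <time.h>

#define IDLE_LIMIT 120 /* seconds */

struct client {
    int    fd;         /* -1 if the slot is unused */
    time_t deadline;   /* time of last activity + IDLE_LIMIT */
};

/* Returns the poll() timeout in milliseconds until the earliest deadline. */
int next_timeout_ms(const struct client *clients, int nclients)
{
    time_t now = time(NULL);
    time_t earliest = 0;
    for (int i = 0; i < nclients; i++)
    {
        if (clients[i].fd == -1)
            continue;
        if (earliest == 0 || clients[i].deadline < earliest)
            earliest = clients[i].deadline;
    }
    if (earliest == 0)
        return -1;                          /* no clients: block indefinitely */
    if (earliest <= now)
        return 0;                           /* already overdue: poll returns at once */
    return (int)(earliest - now) * 1000;
}
When poll() returns 0, walk the list, close every descriptor whose deadline has passed, and refresh a client's deadline to now + IDLE_LIMIT whenever it sends data.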

How to use timerfd properly?

I use timerfd with zmq.
How can I use timerfd_create and timerfd_settime to wait one second for the timer (https://man7.org/linux/man-pages/man2/timerfd_create.2.html)?
I have looked through the link, but I still do not get how I can initialize a timer that waits one second per tick with create and set. This is exactly my task:
We start a timer with timerfd_create(), which ticks once per second. When setting a timer with timer_set_(..), a
counter is simply incremented, and it is decremented with every tick. When the counter reaches 0, the timer
has expired.
In this project we have a function timer_set_(), where the timer is set with the functions timerfd_create() and timerfd_settime(). I hope you can help me.
This is my progress (part of my code):
struct itimerspec timerValue;

g_items[n].socket = nullptr;
g_items[n].events = ZMQ_POLLIN;
g_items[n].fd = timerfd_create(CLOCK_REALTIME, 0);
if (g_items[n].fd == -1)
{
    printf("timerfd_create() failed: errno=%d\n", errno);
    return -1;
}

timerValue.it_value.tv_sec = 1;
timerValue.it_value.tv_nsec = 0;
timerValue.it_interval.tv_sec = 1;
timerValue.it_interval.tv_nsec = 0;

timerfd_settime(g_items[n].fd, 0, &timerValue, NULL);
The question appears to be about setting the timer's timeouts correctly.
With the settings
timerValue.it_value.tv_sec = 1;
timerValue.it_value.tv_nsec = 0;
timerValue.it_interval.tv_sec = 1;
timerValue.it_interval.tv_nsec = 0;
you are correctly setting the initial timeout to 1 s (field timerValue.it_value), but you are also setting a periodic interval of 1 s, which you didn't mention wanting.
About the timeouts
This behavior is described by the following passage of the manual:
int timerfd_settime(int fd, int flags, const struct itimerspec *new_value, struct itimerspec *old_value);
new_value.it_value specifies the initial expiration of the timer, in seconds and nanoseconds. Setting either field of new_value.it_value to a nonzero value arms the timer. Setting both fields of new_value.it_value to zero disarms the timer.
Setting one or both fields of new_value.it_interval to nonzero values specifies the period, in seconds and nanoseconds, for repeated timer expirations after the initial expiration. If both fields of new_value.it_interval are zero, the timer expires just once, at the time specified by new_value.it_value.
The last paragraph is the relevant part: it shows what to do in order to have a single-shot timer.
The benefits of timerfd. How to detect timer expiration?
The main advantage provided by timerfd is that the timer is associated to a file descriptor, and this means that it
may be monitored by select(2), poll(2), and epoll(7).
The information in the other answer about read() is valid as well: even when using functions such as select(), read() will still be required in order to consume the data in the file descriptor.
A complete example
In the following demonstration program, an initial timeout of 4 seconds is set, followed by a periodic interval of 5 seconds.
The good old select() is used to wait for timer expiration, and read() is used to consume the data (that is, the number of expired timeouts; we will ignore it).
#include <stdio.h>
#include <unistd.h>        /* read() */
#include <sys/timerfd.h>
#include <sys/select.h>
#include <time.h>

int main()
{
    int tfd = timerfd_create(CLOCK_REALTIME, 0);
    printf("Starting at (%d)...\n", (int)time(NULL));
    if (tfd > 0)
    {
        char dummybuf[8];
        struct itimerspec spec =
        {
            { 5, 0 }, // it_interval: set to {0, 0} if you need a one-shot timer
            { 4, 0 }  // it_value: initial expiration
        };
        timerfd_settime(tfd, 0, &spec, NULL);

        /* Wait: watch the timerfd file descriptor */
        fd_set rfds;
        int retval;
        FD_ZERO(&rfds);
        FD_SET(tfd, &rfds);

        /* Let's wait for the initial timer expiration */
        retval = select(tfd + 1, &rfds, NULL, NULL, NULL); /* Last parameter = NULL --> wait forever */
        printf("Expired at %d! (%d) (%d)\n", (int)time(NULL), retval, (int)read(tfd, dummybuf, 8));

        /* Let's wait (twice) for the periodic timer expiration.
           select() modifies rfds, so re-add the descriptor before each call. */
        FD_ZERO(&rfds);
        FD_SET(tfd, &rfds);
        retval = select(tfd + 1, &rfds, NULL, NULL, NULL);
        printf("Expired at %d! (%d) (%d)\n", (int)time(NULL), retval, (int)read(tfd, dummybuf, 8));

        FD_ZERO(&rfds);
        FD_SET(tfd, &rfds);
        retval = select(tfd + 1, &rfds, NULL, NULL, NULL);
        printf("Expired at %d! (%d) (%d)\n", (int)time(NULL), retval, (int)read(tfd, dummybuf, 8));
    }
    return 0;
}
And here is the output. Every row also contains the timestamp, so that the actual elapsed time can be checked:
Starting at (1596547762)...
Expired at 1596547766! (1) (8)
Expired at 1596547771! (1) (8)
Expired at 1596547776! (1) (8)
Please note:
We performed just 3 reads, as a test
The intervals are 4s + 5s + 5s (initial timeout + two interval timeouts)
8 bytes are returned by read(). We ignored them, but they contain the number of expired timeouts
With timerfds, the idea is that a read on the fd will return the number of times the timer has expired.
From the timerfd_settime(2) man page:
Operating on a timer file descriptor
The file descriptor returned by timerfd_create() supports the following operations:
read(2)
If the timer has already expired one or more times since its settings were last modified using
timerfd_settime(), or since the last successful read(2), then the buffer given to read(2) returns
an unsigned 8-byte integer (uint64_t) containing the number of expirations that have occurred.
If no timer expirations have occurred at the time of the read(2), then the call either blocks
until the next timer expiration, or fails with the error EAGAIN if the file descriptor has been
made nonblocking (via the use of the fcntl(2) F_SETFL operation to set the O_NONBLOCK flag).
So, basically, you create an unsigned 8-byte integer (uint64_t on Linux), and pass its address to your read call.
uint64_t buf;
ssize_t bytes = read(g_items[n].fd, &buf, sizeof(uint64_t));
if (bytes < 0)
    perror("read");
// buf now holds the number of expirations since the last read
Something like that, if you want to block until you get an expiry.
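For the specific "wait one second" case, a hedged sketch using plain poll() rather than zmq_poll() (names here are hypothetical): arm the timer with it_value = 1 s and it_interval = 0 so it fires exactly once, then wait for the descriptor and read the expiration count.
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (tfd == -1) { perror("timerfd_create"); return 1; }

    struct itimerspec spec = {0};
    spec.it_value.tv_sec = 1;     /* first (and only) expiration after 1 second */
    spec.it_interval.tv_sec = 0;  /* no repetition: one-shot timer */
    if (timerfd_settime(tfd, 0, &spec, NULL) == -1) { perror("timerfd_settime"); return 1; }

    struct pollfd pfd = {.fd = tfd, .events = POLLIN};
    if (poll(&pfd, 1, -1) == 1)   /* block until the timer fires */
    {
        uint64_t expirations;
        read(tfd, &expirations, sizeof expirations);
        printf("timer expired %llu time(s)\n", (unsigned long long)expirations);
    }
    close(tfd);
    return 0;
}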

How to change TCP Server In C from Blocking Mode to Non-Blocking Mode when it's already blocking Or How to shutdown a blocking TCP Server properly?

I have no problems running the TCP server, and I like the fact that it's blocking, which avoids useless loops, sleeping code, and wasted CPU cycles.
The problem happens when shutting it down in a Linux environment: it stays on until the connected user sends something, and only then does it turn off.
I figured it's because the calls are blocking even though the endless while loop is set to exit. But once a call is already blocking, changing the socket descriptors to non-blocking doesn't help at all; most likely they have to be set to non-blocking before the blocking call occurs.
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>   /* Added for the nonblocking socket */
#include <strings.h> /* bzero() */
#include <unistd.h>  /* close() */
#include <time.h>    /* nanosleep() */

#define LISTEN_MAX 10 /* Maximum clients that can queue up */
#define LISTEN_PORT 32000
#define MAX_COMMANDS_AT_ONCE 4000
#define NANO_SECOND_MULTIPLIER 1000000 // 1 millisecond = 1,000,000 nanoseconds

//Global so I can access these where I shut the threads off.
int listenfd, connfd; //sockets that must be set to non-blocking before exiting

int needQuit(pthread_mutex_t *mtx)
{
    switch(pthread_mutex_trylock(mtx)) {
    case 0: /* if we got the lock, unlock and return 1 (true) */
        pthread_mutex_unlock(mtx);
        return 1;
    case EBUSY: /* return 0 (false) if the mutex was locked */
        return 0;
    }
    return 1;
}

/* this is run on its own thread */
void *tcplistener(void *arg)
{
    pthread_mutex_t *mx = arg;
    //keyboard event.
    //SDLKey key_used;  /* requires the SDL headers; unused in this excerpt */
    struct timespec ts;
    //int listenfd,connfd,
    int n, i, ans;
    struct sockaddr_in servaddr, cliaddr;
    socklen_t clilen;
    pid_t childpid;
    char mesg[MAX_COMMANDS_AT_ONCE];

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(LISTEN_PORT);

    int option = 1;
    if(setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, (char*)&option, sizeof(option)) < 0)
    {
        printf("setsockopt failed\n");
        close(listenfd);
    }
    bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
    listen(listenfd, LISTEN_MAX);

    while( !needQuit(mx) )
    {
        clilen = sizeof(cliaddr);
        connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen);
        while( !needQuit(mx) )
        {
            n = recv(connfd, mesg, MAX_COMMANDS_AT_ONCE, 0);
            if(n == 0 || n == -1) break;
            //...Do Stuff here with mesg...
        }
        close(connfd);
    }
    close(listenfd);
    return NULL;
}

int main(int argc, char *argv[])
{
    /* this variable is our reference to the thread */
    pthread_t tcp_listener_thread;
    pthread_mutex_t mxq; /* mutex used as quit flag */

    /* init and lock the mutex before creating the thread. As long as the
       mutex stays locked, the thread should keep running. A pointer to the
       mutex is passed as the argument to the thread function. */
    pthread_mutex_init(&mxq, NULL);
    pthread_mutex_lock(&mxq);

    /* create a thread which executes tcplistener(&mxq) */
    if(pthread_create(&tcp_listener_thread, NULL, tcplistener, &mxq)) {
        fprintf(stderr, "Error creating TCP Listener thread\n");
        return 1;
    }
    //End of the TCP Listener thread.

    // Waits 500 milliseconds before shutting down
    struct timespec ts;
    ts.tv_sec = 0;
    ts.tv_nsec = 500 * NANO_SECOND_MULTIPLIER;
    nanosleep(&ts, NULL);

    //Forces a shutdown of the program and thread.
    //clear thread for tcp listener on exit.
    /* unlock mxq to tell the thread to terminate, then join the thread */
    fcntl(listenfd, F_SETFL, O_NONBLOCK); /* Change the socket into non-blocking state */
    fcntl(connfd, F_SETFL, O_NONBLOCK); /* Change the socket into non-blocking state */
    pthread_mutex_unlock(&mxq);
    pthread_join(tcp_listener_thread, NULL);
    pthread_cancel(tcp_listener_thread);
    pthread_exit(NULL);
    return 0;
}
I tried the fix EJP suggested, like so, but it's still hanging.
I've made connfd and listenfd both global in scope.
pthread_mutex_unlock(&mxq);
close(connfd); //<- this
close(listenfd); //<-- this
pthread_join(tcp_listener_thread,NULL);
pthread_cancel(tcp_listener_thread);
pthread_exit(NULL);
To unblock accept(), just close the listening socket. Make sure the code around accept() handles this correctly.
To unblock recv(), shut down the receiving socket for input. That will cause recv() to return zero, which again must be handled correctly. Or else just close the socket as above, which might be better if you want the receive code to know that you're closing the application.
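A minimal sketch of such a shutdown path, reusing the globals from the question (listenfd, connfd, mxq, tcp_listener_thread). One caveat: on Linux, close() alone often does not wake a thread that is already blocked in accept(), which may be why the attempt above still hangs, so the sketch calls shutdown() first:
/* Main thread, shutdown path: */
pthread_mutex_unlock(&mxq);              /* tell the listener loops to stop */
shutdown(connfd, SHUT_RD);               /* a blocked recv() now returns 0 */
shutdown(listenfd, SHUT_RDWR);           /* a blocked accept() now fails (EINVAL on Linux) */
pthread_join(tcp_listener_thread, NULL); /* wait for the listener to notice and exit */
close(connfd);
close(listenfd);
The listener thread then only has to treat recv() returning 0 and accept() returning -1 as reasons to leave its loops instead of retrying.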
Your listener thread will indeed block in the accept().
The nasty way to fix this (almost) is to send a signal to the listener thread with pthread_kill(). This causes accept() to return with errno == EINTR, which you test for and then return.
However, that has a race condition: if the signal is received between testing the while (!needQuit(mx)) condition and entering the accept() then it'll be lost and the accept() will block again.
One correct way to solve this is to use something like select() and a pipe. You select for read over the pipe and the socket. When the main thread wants the listener thread to exit, it writes a byte to the pipe. The listener thread's select() call returns when a byte is readable from the pipe (in which case it exits) or when a client can be accepted.
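A rough sketch of that pipe-based wake-up, reusing listenfd, cliaddr and clilen from the question; quit_pipe is a hypothetical int[2] created with pipe() before the thread starts:
/* Listener thread, replacing the bare accept(): wait for a connection or a quit byte. */
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(listenfd, &rfds);
FD_SET(quit_pipe[0], &rfds);
int maxfd = (listenfd > quit_pipe[0]) ? listenfd : quit_pipe[0];

if (select(maxfd + 1, &rfds, NULL, NULL, NULL) > 0)
{
    if (FD_ISSET(quit_pipe[0], &rfds))
    {
        char c;
        read(quit_pipe[0], &c, 1);   /* consume the quit byte */
        return NULL;                 /* main thread asked us to exit */
    }
    if (FD_ISSET(listenfd, &rfds))
        connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen); /* will not block now */
}

/* Main thread, when shutting down:
   write(quit_pipe[1], "x", 1); then pthread_join(...). */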
Non-blocking sockets are primarily used to multiplex lots of sockets into one event loop (i.e. thread). That's a good idea for server scalability, but not necessary here.

C: Writing a proper time-out

Upon closely scouring through resources, I'm still not entirely sure how to write a proper and usable timer function in C. I am not working with threads (or parallelizable code). I simply want to write a stopwatch function that I can use to trigger a bit of code after a small amount of time has passed.
This is a very common use of a timer, in the situation of "time-out", where I have a client-server set up where the client is sending some data (UDP style with sendto(...) and recvfrom(...)). I have written my system so that the client sends a chunk of data in a packet struct I have defined, and the server processes it via CRC then sends back an acknowledgement packet (ACK) that the msg was received uncorrupted. However, I want to implement a time-out, where if the client does not receive an ACK in a certain period of time, the client resends the data chunk (of course the server is rigged to check for duplicates). I want to nest this bit of timer code in the client, and for some reason do not think this should be so difficult.
I have dug up old signal handling code from work I did long ago, as this seems to be the only approach commonly mentioned as a solution. Can someone please guide me on how to use the following signal handling code to not just receive a timed signal but trigger an action of some sort? Conceptually, I feel it would be: "send data, start timer, after the timer expires execute a resend, reset the timer... repeat until that ACK is received". Better yet would be an easier way of writing a timer function, but it doesn't look like there's much hope for that given that C is a low-level language.
#include <sys/time.h>
#include <errno.h>
#include <stdio.h>
#include <signal.h>

extern char *strsignal(int sig);

void timer_handler(int a)
{
    // handle signal
    printf(">>>> signal caught\n");
    printf(">>>> int parameter = %s\n", (char*) strsignal(a));
}

int main(int argc, char* argv[])
{
    int retval;
    struct itimerval timerValue;
    struct itimerval oldTimerValue;
    struct sigaction action;

    action.sa_handler = &timer_handler;
    action.sa_flags = SA_NODEFER;
    sigemptyset(&action.sa_mask); // don't leave sa_mask uninitialized

    // initialize timer parameters: expires in 5 seconds, then every 5 seconds
    timerValue.it_interval.tv_sec = 5;
    timerValue.it_interval.tv_usec = 0;
    timerValue.it_value.tv_sec = 5;
    timerValue.it_value.tv_usec = 0;

    // install signal handler to catch SIGALRM
    //signal(SIGALRM, timer_handler);
    sigaction(SIGALRM, &action, NULL);

    retval = setitimer(ITIMER_REAL, &timerValue, &oldTimerValue);
    if (-1 == retval)
        perror("Could not set timer");

    while(1);
    return 0;
}
Xymostech provided the exact function I needed. After consulting the API for select(), which includes a small usage example, I modified the code there to fit what I needed and wrote a socket timer (for reads, though it's pretty simple to extend it to writes and such, as select() has parameters for enabling this kind of check). Make sure you've included the following headers, as specified by the select() API:
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <stdio.h>
The following is the waittoread(...) function I created from the API example; it works pretty well in the domain of my specific problem. However, if one is looking for a more generalized timer (i.e. not just for timing socket reads and writes, or file descriptors), please consult signal handling (somewhat in the spirit of the code I posted in my initial question).
#define S1READY 0x01 // flag set in the result when socket s1 is ready

int waittoread(int s1, int timeout_value)
{
    fd_set fds;             // set of sockets to be waited on
    struct timeval timeout; // the time-out value
    int rc;                 // # of sockets that are ready before the timer expires
    int result;

    /* Set time limit. */
    timeout.tv_sec = timeout_value;
    timeout.tv_usec = 0;

    /* Create a descriptor set containing the socket. */
    FD_ZERO(&fds);     // macro to reset the socket set so new descriptors can be added
    FD_SET(s1, &fds);  // add the socket descriptor into the set to wait on

    rc = select(sizeof(fds)*4, &fds, NULL, NULL, &timeout);
    // another way of calling select that would be a better approach:
    // rc = select(s1 + 1, &fds, NULL, NULL, &timeout);
    if (rc == -1) {
        perror("Error: Call to select failed.");
        return -1;
    }

    result = 0;
    if (rc > 0) {
        if (FD_ISSET(s1, &fds))
            result |= S1READY; // set the flag bit indicating s1 is readable
    }
    return result;
}
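As a usage example for the original stop-and-wait problem, a hedged sketch of the client-side retransmit loop (sockfd, pkt, ack and srv are placeholders standing in for the asker's own packet struct and addressing code):
#define TIMEOUT_SEC 2        /* resend if no ACK within 2 seconds */

int acked = 0;
while (!acked)
{
    /* (re)send the current chunk */
    sendto(sockfd, &pkt, sizeof pkt, 0, (struct sockaddr *)&srv, sizeof srv);

    int rc = waittoread(sockfd, TIMEOUT_SEC);
    if (rc > 0 && (rc & S1READY))
    {
        ssize_t n = recvfrom(sockfd, &ack, sizeof ack, 0, NULL, NULL);
        if (n > 0 /* && the ACK matches this packet's sequence number */)
            acked = 1;       /* ACK received: move on to the next chunk */
    }
    /* rc == 0: timed out, so loop around and resend the same packet */
}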

select for multiple non-blocking connections

I have a single-threaded program. It sends messages to four destinations every five seconds. I don't want connect() to block. So I am writing my program like this:
int j, rc, non_blocking = 1, sockets[4], max_fd = 0;
struct sockaddr server = get_server_addr();
fd_set fdset;
struct timeval conn_timeout = { 2, 0 }; /* 2 seconds; select() may modify this */

for (j = 0; j < 4; ++j)
{
    sockets[j] = socket(AF_INET, SOCK_STREAM, 0);
    ioctl(sockets[j], FIONBIO, (char *)&non_blocking);
    connect(sockets[j], &server, sizeof(server));
}

/* prepare fd_set */
FD_ZERO(&fdset);
for (j = 0; j < 4; ++j)
{
    if (sockets[j] != -1)
    {
        FD_SET(sockets[j], &fdset);
        if (sockets[j] > max_fd)
        {
            max_fd = sockets[j];
        }
    }
}

rc = select(max_fd + 1, NULL, &fdset, NULL, &conn_timeout);
if (rc > 0)
{
    for (j = 0; j < 4; ++j)
    {
        if (sockets[j] != -1 && FD_ISSET(sockets[j], &fdset))
        {
            /* send() */
        }
    }
}
/* close all valid sockets */
However, it seems select() returns immediately after ONE file descriptor is ready instead of blocking for conn_timeout (2 seconds). So in this case how can I achieve my targets?
The program continues if all sockets are ready.
The program can block there for 2 seconds if any one of sockets are not ready.
Yeah, select was designed on the assumption that you would want to service each socket as soon as it became ready.
If I understand what you're trying to do, then the simplest way to accomplish it will be to remove each socket from the fdset as it becomes ready. If there are any sockets left in the set, use gettimeofday to adjust the timeout downward, and call select again. When the set is empty, all four sockets are usable and you can proceed.
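A rough sketch of that loop, assuming the sockets[] array and 2-second budget from the question; timercmp() and timersub() come from <sys/time.h>, and the ready[] array (which must start zeroed) is a hypothetical way of remembering which sockets have already become writable:
#include <sys/select.h>
#include <sys/time.h>

/* Wait up to budget_sec seconds for all four sockets to become writable.
   ready[j] is set to 1 for each socket that made it in time. */
void wait_all_writable(int sockets[4], int ready[4], int budget_sec)
{
    struct timeval deadline, now, remaining;
    gettimeofday(&deadline, NULL);
    deadline.tv_sec += budget_sec;

    for (;;)
    {
        fd_set wfds;
        FD_ZERO(&wfds);
        int max_fd = -1, pending = 0;
        for (int j = 0; j < 4; ++j)
        {
            if (sockets[j] != -1 && !ready[j])     /* only sockets not yet ready */
            {
                FD_SET(sockets[j], &wfds);
                if (sockets[j] > max_fd) max_fd = sockets[j];
                ++pending;
            }
        }
        if (pending == 0)                          /* all four are usable */
            break;

        gettimeofday(&now, NULL);
        if (!timercmp(&now, &deadline, <))         /* overall budget exhausted */
            break;
        timersub(&deadline, &now, &remaining);     /* time left for this select() */

        int rc = select(max_fd + 1, NULL, &wfds, NULL, &remaining);
        if (rc <= 0)                               /* timeout or error */
            break;

        for (int j = 0; j < 4; ++j)
            if (sockets[j] != -1 && FD_ISSET(sockets[j], &wfds))
                ready[j] = 1;                      /* drop it from the next pass */
    }
}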
There are three basic approaches:
If you want to stay strictly portable you need to iterate:
calculate end time from current time and timeout of your choice
Cycle:
-- Create fdset with those fds not yet ready
-- calculate max time to wait
-- select()
-- remember those fds that are now ready
-- break if end time reached or all fds ready
End cycle
Now you have knowledge of the ready fds and the elapsed time
If you want to stay portable, but can use threads:
start n threads
select on one fd per thread
join all threads
If you do not need to be portable: Most OSes have a facility for such a situation, e.g. Windows/.NET has WaitAll (together with async send and an event)
I don't see the connection between your stated targets and your stated problem. You are correct in saying that select() blocks until at least one socket is ready, but according to target #2 above that is exactly what you want. There's nothing in your stated targets about blocking until all four sockets are ready at the same time.
You should also note that sockets are almost always ready for writing, unless the send buffer is full, which means the receiver's receive buffer is full, which means the receiver is slower than the sender. So using select() alone as the underlying write timer isn't a good idea.
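A related check the question's code will likely need: with a non-blocking connect(), a socket also becomes writable when the connection attempt fails, so the usual pattern is to read SO_ERROR before calling send(). A short sketch for one of the sockets from the question:
/* After select() reports sockets[j] writable, verify the connect() actually succeeded. */
int err = 0;
socklen_t len = sizeof(err);
if (getsockopt(sockets[j], SOL_SOCKET, SO_ERROR, &err, &len) == 0 && err == 0)
{
    /* connection established: safe to send() */
}
else
{
    /* connect() failed asynchronously: err holds the errno value (e.g. ECONNREFUSED) */
}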

Running a simple TCP server with poll(), how do I trigger events "artificially"?

I have a fairly basic TCP server keeping track of a couple connections and recv'ing data when it's available. However, I'd like to artificially trigger an event from within the program itself, so I can send my TCP server data as if it came from sock1 or sock2, but in reality came from somewhere else. Is this possible, or at all clear?
struct pollfd fds[2];
fds[0].fd = sock1;
fds[1].fd = sock2;

while (true) {
    int res = poll(fds, 2, timeout);

    if ((fds[0].revents & POLLIN)) {
        //ready to recv data from sock1
    }
    if ((fds[1].revents & POLLIN)) {
        //ready to recv data from sock2
    }
}
Create a pair of connected sockets (see socketpair(2)), and wait for events on one of the sockets in your poll loop. When you want to wake up the poll thread, write a single byte to the other socket. When the polling loop wakes up, read the byte, do whatever was required and continue.
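A minimal sketch of that, assuming the poll loop from the question gains a third pollfd entry (the wakeup_fds name is hypothetical):
#include <sys/socket.h>
#include <unistd.h>

int wakeup_fds[2];                               /* [0] watched by poll, [1] written to wake it */
socketpair(AF_UNIX, SOCK_STREAM, 0, wakeup_fds);

struct pollfd fds[3];
fds[0].fd = sock1;         fds[0].events = POLLIN;
fds[1].fd = sock2;         fds[1].events = POLLIN;
fds[2].fd = wakeup_fds[0]; fds[2].events = POLLIN;

/* ... inside the poll loop ... */
if (fds[2].revents & POLLIN) {
    char cmd;
    read(wakeup_fds[0], &cmd, 1);                /* consume the wake-up byte */
    /* handle the "artificial" event here, e.g. inject data as if it came from sock1/sock2 */
}

/* From elsewhere in the program (another thread, a timer, etc.):
   write(wakeup_fds[1], "x", 1);  wakes the poll loop */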
This is more like a design question -- your polling loop should probably abstract the poll method to allow trapping on other external signals, like from kill -USR1.
If you really want to trigger port traffic, you'll likely want to use netcat to send data to the socket.
I would consider something like this:
struct pollfd fds[2];
fds[0].fd = sock1;
fds[0].events = POLLIN;
fds[1].fd = sock2;
fds[1].events = POLLIN;

for (;;) {
    int result = poll(fds, 2, timeout);
    if (result) {
        if ((fds[0].revents & POLLIN)) {
            /* Process data from sock1. */
        }
        if ((fds[1].revents & POLLIN)) {
            /* Process data from sock2. */
        }
    } else {
        /* Do anything else you like, including
           processing data that wasn't from a
           real socket. */
    }
}
Notes:
don't forget to initialise your events field
for(;;) is more idiomatic C than while(true) and doesn't require true to be defined

Resources