This is not a homework problem, I promise.
I'm writing a time series database implementation as a way to learn C.
I have a client/server pair that I've written. The server is currently an echo server listening on a socket bound to a port. The client connects to that port, uses readline to get a line of input, sends it over the socket, recvs a line back, and prints the line to the terminal. Rinse, repeat. The client returns when it gets an EOF from recv, at which point it knows the connection is closed.
The problem is that readline blocks, so if the server process is killed (e.g. I SIGINT it), the client is still blocked in readline. Not until it sends the next line, then recvs an EOF, will it know the server is gone.
What I want to happen is for the client to be signaled when there's an EOF on recv and exit immediately.
What I think I need to do is create 3 pthreads: 2 for a network client (send and recv) and 1 for the terminal. The terminal thread calls readline and blocks. When it accepts input, it uses a pthread_cond_t to signal the waiting network send thread to send. The network recv thread is constantly recving, which blocks. If it gets an EOF, it raises SIGINT, whose handler pthread_kills all 3 threads, fprintfs something like "Connection closed by server.", and calls exit (yes, I know exit will terminate all threads anyway - it's an exercise in cleanliness and in seeing whether I understand C).
Is this the appropriate approach? Obviously network client terminals do this all the time. What's the right approach?
If I'm understanding you correctly, you would like to exit while blocked in readline, i.e. before your write()/send(), when the server dies. Your approach is fine. Note that if the peer goes away silently, the only way you will catch it is when you try to write()/send(): you'll get a SIGPIPE, which will cause an exit right after it. By default, no packets are sent on the connection unless there is data to send or acknowledge, so if you are simply waiting for data from the peer, there is no way to tell whether the peer has silently gone away or just isn't ready to send/receive any more data yet. So you would need some poller for that condition in a separate thread and exit when you detect it. You could use this readline function to help you get started with the checking:
#include <unistd.h>     /* for read() */

/**
 * Simple utility function that reads a line from a file descriptor fd,
 * up to maxlen bytes -- ripped from Unix Network Programming, Stevens.
 */
int
readline(int fd, char *buf, int maxlen)
{
    int n, rc;
    char c;

    for (n = 1; n < maxlen; n++) {
        if ((rc = read(fd, &c, 1)) == 1) {
            *buf++ = c;
            if (c == '\n')
                break;
        } else if (rc == 0) {
            if (n == 1)
                return 0;   // EOF, no data read
            else
                break;      // EOF, read some data
        } else
            return -1;      // error
    }
    *buf = '\0';            // null-terminate
    return n;
}
This question has been answered before. I got Stevens, and chapter 5 suggests interleaving the readline call with a socket read to look for EOF. I did a Google search for libreadline and select/poll and found this:
Interrupting c/c++ readline with signals
There's a way to interleave I/O so libreadline doesn't just block:
http://www.delorie.com/gnu/docs/readline/rlman_41.html
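For reference, readline's alternate ("callback") interface can be combined with select(2) so the client watches stdin and the socket at the same time and notices the server's EOF immediately. The sketch below is only an illustration, not code from this question: the port, the prompt, and the on_line handler are made-up names, and it needs -lreadline to link.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <readline/readline.h>

static int sockfd;                    /* connected to the echo server (see main) */
static int done = 0;

static void on_line(char *line)       /* readline calls this with a complete line */
{
    if (line == NULL) {               /* EOF on stdin (Ctrl-D) */
        done = 1;
        return;
    }
    send(sockfd, line, strlen(line), 0);
    send(sockfd, "\n", 1, 0);
    free(line);
}

int main(void)
{
    struct sockaddr_in addr = { 0 };

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);      /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(sockfd, (struct sockaddr *)&addr, sizeof addr) == -1) {
        perror("connect");
        return 1;
    }

    rl_callback_handler_install("> ", on_line);   /* readline no longer blocks */

    while (!done) {
        fd_set rfds;
        int maxfd = sockfd > STDIN_FILENO ? sockfd : STDIN_FILENO;

        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);
        FD_SET(sockfd, &rfds);
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) == -1)
            break;

        if (FD_ISSET(STDIN_FILENO, &rfds))
            rl_callback_read_char();              /* feed readline one character */

        if (FD_ISSET(sockfd, &rfds)) {
            char buf[512];
            ssize_t n = recv(sockfd, buf, sizeof buf, 0);
            if (n <= 0) {                         /* 0 means the server closed */
                fprintf(stderr, "\nConnection closed by server.\n");
                done = 1;
            } else {
                fwrite(buf, 1, (size_t)n, stdout);
            }
        }
    }

    rl_callback_handler_remove();
    return 0;
}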
Thanks #arayq2 and #G-- for suggesting Stevens!
Related
We don't want anything to be printed after the user interrupts via CTRL-C. We have tried adding __fpurge as well as fflush inside the SIGINT signal handler, but it is not working.
How can I clear buffered stdout values immediately? I have come across a few similar threads, but nowhere could I find a working solution.
A few additional details:
Inside the SIGINT signal handler, even after adding exit(0), the buffer contents still get printed, though the process is killed.
exit(0) was added only to narrow down the issue; I don't want to kill the process.
I know the above is expected behavior; I'm just not sure how to avoid it.
Consider this example (edited so that it no longer exits the process):
#define _POSIX_C_SOURCE 200809L /* For nanosleep() */
#include <unistd.h>
#include <stdlib.h>
#include <termios.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <errno.h>
#include <time.h>
#include <stdio.h>
static void exit_handler(int signum)
{
    int fd, result;

    /* If the standard streams are connected to a tty,
     * tell the kernel to discard already buffered data.
     * (That is, in kernel buffers. Not C library buffers.)
     */
    if (isatty(STDIN_FILENO))
        tcflush(STDIN_FILENO, TCIOFLUSH);
    if (isatty(STDOUT_FILENO))
        tcflush(STDOUT_FILENO, TCIOFLUSH);
    if (isatty(STDERR_FILENO))
        tcflush(STDERR_FILENO, TCIOFLUSH);

    /* Redirect standard streams to /dev/null,
     * so that nothing further is output.
     * This is a nasty thing to do, and a code analysis program
     * may complain about this; it is suspicious behaviour.
     */
    do {
        fd = open("/dev/null", O_RDWR);
    } while (fd == -1 && errno == EINTR);

    if (fd != -1) {
        if (fd != STDIN_FILENO)
            do {
                result = dup2(fd, STDIN_FILENO);
            } while (result == -1 && (errno == EINTR || errno == EBUSY));
        if (fd != STDOUT_FILENO)
            do {
                result = dup2(fd, STDOUT_FILENO);
            } while (result == -1 && (errno == EINTR || errno == EBUSY));
        if (fd != STDERR_FILENO)
            do {
                result = dup2(fd, STDERR_FILENO);
            } while (result == -1 && (errno == EINTR || errno == EBUSY));
        if (fd != STDIN_FILENO && fd != STDOUT_FILENO && fd != STDERR_FILENO)
            close(fd);
    }
}

static int install_exit_handler(const int signum)
{
    struct sigaction act;

    memset(&act, 0, sizeof act);
    sigemptyset(&act.sa_mask);
    act.sa_handler = exit_handler;
    act.sa_flags = 0;

    if (sigaction(signum, &act, NULL) == -1)
        return errno;

    return 0;
}

int main(void)
{
    if (install_exit_handler(SIGINT)) {
        fprintf(stderr, "Cannot install signal handler: %s.\n", strerror(errno));
        return EXIT_FAILURE;
    }

    while (1) {
        struct timespec t = { .tv_sec = 0, .tv_nsec = 200000000L };
        printf("Output\n");
        fflush(stdout);
        nanosleep(&t, NULL);
    }

    /* Never reached. */
    return EXIT_SUCCESS;
}
When the process receives a SIGINT signal, it first discards whatever is already in the kernel terminal buffers, then redirects the standard streams to /dev/null (i.e., nowhere).
Note that you'll then need to kill the process by sending it the TERM or KILL signal (e.g. killall yourprogname in another terminal).
When you are running the verbose process over a remote connection, quite a lot of information may be in flight at all times. Both the local machine and the remote machine running the process will have their socket buffers nearly full, so the latency may be much larger than ordinarily -- I've seen several second latencies in this case even on fast (GbE) local networks.
This means that propagating the signal from the local machine to the remote machine will take a measurable time; in worst cases on the order of seconds. Only then will the remote process stop outputting data. All pending data will still have to be transmitted from the remote machine to the local machine, and that may take quite a long time. (Typically, the bottleneck is the terminal itself; in most cases it is faster to minimize the terminal, so that it does not try to render any of the text it receives, only buffers it internally.)
This is why Ctrl+C does not, and cannot, stop remote output instantaneously.
In most cases, you'll be using an SSH connection to the remote machine. The protocol does not have a "purge" feature either that might help here. Many, myself included, have thought about it. At least my sausage fingers have accidentally tab-completed to the executable file instead of the similarly named output file, and not only filled the terminal with garbage, but the special characters in binary files sometimes set the terminal state (see e.g. xterm control sequences, ANSI escape codes) to something unrecoverable (i.e., Ctrl+Z followed by reset Enter does not reset the terminal back to a working state; if it did, kill -KILL %- ; fg would stop the errant command in Bash and get you your terminal back), so you need to break the connection, which also terminates all processes started from the same terminal that are running remotely in the background.
The solution here is to use a terminal multiplexer, like GNU screen, which allows you to connect to and disconnect from the remote machine, without interrupting an existing terminal connection. (To put it simply, screen is your terminal avatar on the remote machine.)
First up, a quote from the C11 standard, emphasis mine:
7.14.1.1 The signal function
5 If the signal occurs other than as the result of calling the abort or raise function, the behaviour is undefined if [...] the signal handler calls any function in the standard library other than the abort function, the _Exit function, the quick_exit function, or the signal function with the first argument equal to the signal number corresponding to the signal that caused the invocation of the handler.
This means calling fflush is undefined behaviour.
Looking at the functions you may call, abort and _Exit both leave the flushing of buffers implementation-defined, and quick_exit calls _Exit, so you are out of luck as far as the standard is concerned; I could not find the implementation's definition of their behaviour for Linux. (Surprise. Not.)
The only other "terminating" function, exit, does flush the buffers, and you may not call it from the handler in the first place.
So you have to look at Linux-specific functionality. The man page to _exit makes no statement on buffers. The close man page warns against closing file descriptors that may be in use by system calls from other threads, and states that "it is not common for a filesystem to flush the buffers when the stream is closed", meaning that it could happen (i.e. close not guaranteeing that unwritten buffer contents are actually discarded).
At this point, if I were you, I would ask myself "is this such a good idea after all"...
The problem is that neither POSIX nor the Linux library declares fpurge or __fpurge to be safe in a signal handler. As DevSolar explained, the C language itself declares very few standard library functions safe (at least _Exit is), but POSIX explicitly allows close and write. So you can always close the underlying file descriptor, which should be 1:
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void handler(int sig) {
    static char msg[] = "Interrupted";
    write(2, msg, sizeof(msg) - 1); // carefully use stderr here
    close(1);                       // "bar" is displayed if this line is commented out
    _Exit(1);
}

int main() {
    signal(SIGINT, handler);
    printf("bar");
    sleep(15);
    return 0;
}
When I type Ctrl-C during the sleep it gives as expected:
$ ./foo
^CInterrupted
$
The close system call should be enough, because it closes the underlying file descriptor: even if there are later attempts to flush the stdout buffer, they will write to a closed file descriptor and as such have no effect at all. The downside is that if stdout has been redirected, the underlying descriptor may not be 1, so the program should store the actual value of the underlying file descriptor in a global variable.
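To illustrate that caveat, here is a small variation of the example above (my own sketch, not from the original answer): the underlying descriptor of stdout is captured once in main, and the handler closes that instead of a hard-coded 1.
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int stdout_fd = -1;          /* filled in before any output happens */

static void handler(int sig)
{
    static const char msg[] = "Interrupted\n";
    (void)sig;
    write(STDERR_FILENO, msg, sizeof msg - 1);
    if (stdout_fd != -1)
        close(stdout_fd);           /* later flushes of stdout now go nowhere */
    _Exit(1);
}

int main(void)
{
    stdout_fd = fileno(stdout);     /* usually 1, but not if stdout was reopened */
    signal(SIGINT, handler);
    printf("bar");
    sleep(15);
    return 0;
}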
If you do kill(getpid(), SIGKILL); within the signal handler (kill is async-signal-safe), you get killed immediately by the OS (and you wanted to exit(0) anyway). No further output is to be expected.
Only problem: you won't be able to clean up properly or exit gracefully afterwards in the main thread. If you can afford that...
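For completeness, a minimal sketch of that idea (mine, not from the answer above): the handler simply SIGKILLs its own process, so no stdio flushing ever gets a chance to run.
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void die_hard(int sig)
{
    (void)sig;
    kill(getpid(), SIGKILL);        /* kill() and getpid() are async-signal-safe */
}

int main(void)
{
    signal(SIGINT, die_hard);
    for (;;) {
        printf("Output\n");
        fflush(stdout);
        sleep(1);
    }
}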
I have a client/server program, and now I want to handle signals. When the client closes the connection (for example if I close its terminal), the server has to handle a SIGPIPE, am I right? I'd like to implement something like the following. Is it possible?
server.c:
void function(){
    printf("...");
    read(socket, buff, size);
    // IF THE CLIENT CLOSES, THE SERVER RECEIVES A SIGPIPE
    ...the remaining part of the scheduled code should be ignored if a SIGPIPE is received, and the program should resume from where I wrote in the handler of the SIGPIPE...
    printf("not working"); // this should be ignored, but it's printed 2 times immediately, and when I've finished the actions indicated in the handler, it prints it another time, because the program counter restarts from here...
}

void sigpipehandler(){
    close(socket);
    main(); // I'd like the program to restart from main when I've received a SIGPIPE. It does restart from main, but only after having printed "not working" two times...
}

int main(){
    sigPipe.sa_sigaction = &sigpipehandler;
    sigPipe.sa_flags = SA_SIGINFO;
    sigaction(SIGPIPE, &sigpipehandler, NULL);
    ...code...
}
Converting comments into an answer.
Note that you only get SIGPIPE when you write to a pipe where there is no process with the read end of the pipe open. You get EOF (zero bytes read) when you read from a pipe that has no process with the write end of the pipe open.
So, if I replace the read() with a write() in the example, how can I handle the SIGPIPE?
Simplest is to ignore SIGPIPE (signal(SIGPIPE, SIG_IGN)) and then monitor the return value from write(). If it comes back with -1 and errno set to EPIPE, the read end of the pipe (or the peer's socket) has gone away, which is exactly the condition that would otherwise have raised SIGPIPE. Of course, you should be looking at the return value from write() — and read() — anyway.
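A minimal sketch of that approach (the send_all helper is my own name, and fd is assumed to be an already connected socket or pipe descriptor):
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write the whole message, reporting a vanished peer via EPIPE. */
static int send_all(int fd, const char *msg)
{
    size_t len = strlen(msg);
    while (len > 0) {
        ssize_t n = write(fd, msg, len);
        if (n == -1) {
            if (errno == EINTR)
                continue;           /* interrupted by some other signal: retry */
            if (errno == EPIPE)
                fprintf(stderr, "peer closed the connection\n");
            return -1;
        }
        msg += n;
        len -= (size_t)n;
    }
    return 0;
}

int main(void)
{
    signal(SIGPIPE, SIG_IGN);       /* no signal; write() reports EPIPE instead */
    /* ... open or accept a socket as fd, then: send_all(fd, "hello\n"); ... */
    return 0;
}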
Alternatively, if you want an explicit SIGPIPE handler, then you definitely do not want to recursively call main() from your signal handler. You can write a loop in main(), and have the signal handler set a flag which you test in the loop. Per Standard C, about the only thing you can do in a signal handler is modify a variable of type volatile sig_atomic_t or exit.
static volatile sig_atomic_t sig_recvd = 0;
static int sock_fd = -1;

void sigpipehandler(int signum)
{
    close(sock_fd);
    sock_fd = -1;
    sig_recvd = signum;
}

int main(void)
{
    struct sigaction sigPipe = { 0 };
    sigPipe.sa_handler = sigpipehandler;
    sigPipe.sa_flags = 0;
    sigemptyset(&sigPipe.sa_mask);
    sigaction(SIGPIPE, &sigPipe, NULL);

    int done = 0;
    while (!done)
    {
        if (sock_fd == -1)
        {
            if (sig_recvd != 0)
            {
                ...report signal received...
                sig_recvd = 0;
            }
            ...(re)open socket on sock_fd...
        }
        ...code as before - sets done = 1 when loop should terminate...
    }
    return 0;
}
Note that naming a variable the same as a system call (socket in your code) is treading on thin ice; hence, I renamed it sock_fd. A global variable called socket would be a really bad idea.
When the reading process quits, how do I detect that in the writing process before the write call blocks? Normally, when the read side closes, a write call on the write side should return an error, right?
client
while(!timeout)
{
    read(fd, message, BUFFER_SIZE);
}
server
while(1)
{
    length = write(fd, message, strlen(message));
    if(length <= 0)
    {
        break;
    }
}
Read carefully fifo(7):
When a process tries to write to a FIFO that is not opened for read
on the other side, the process is sent a SIGPIPE signal.
You could -and probably should- use poll(2) to test dynamic readability or writability of a fifo or pipe or socket file descriptor (see this answer about a simplistic event loop using poll). See also write(2) & Advanced Linux Programming.
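As a rough illustration of that suggestion (the helper name and timeout are mine), a writer could check the descriptor with poll(2) before each write:
#include <poll.h>
#include <stdio.h>

/* Returns 1 if fd is writable, 0 on timeout, -1 if the reader went away or poll failed. */
static int wait_writable(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    int n = poll(&pfd, 1, timeout_ms);
    if (n == -1) {
        perror("poll");
        return -1;
    }
    if (n == 0)
        return 0;                          /* timed out, not writable yet */
    if (pfd.revents & (POLLERR | POLLHUP))
        return -1;                         /* reader closed its end */
    return (pfd.revents & POLLOUT) ? 1 : 0;
}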
I have a tcp chat program: server.c and client.c.
The server is in a while(1) loop and uses select to detect clients wanting to connect on its socket. A new thread is then created for the accepted client, and its socket descriptor is given as an argument to the thread: pthread_create (&thread, NULL, do_something, (void *) &socket_descriptor);
When receiving a message from a client, the server should send this message to all connected clients. (Not implemented yet.)
Now I'm wondering how to do this. I absolutely need each accepted connection to be in a thread.
I was thinking of using a select inside the do_something as well; will select detect if data is incoming on the socket descriptor? Or would you do it another way?
edit: added code
my code:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include "tcp_comm.h"
#include <sys/time.h>
#include <sys/types.h>
#define BUFSIZE 1024
#define PORT 1234
void *do_something(void *a);
int main (void){
    Socket server = tcp_passive_open( PORT );
    MySocket *s = (MySocket *)server;
    printf("Server socket_id (main): %i", s->sd);

    pthread_t thread;
    fd_set active_socketDescriptors, read_socketDescriptors;
    FD_ZERO(&active_socketDescriptors);
    FD_SET(s->sd, &active_socketDescriptors);

    while (1){
        read_socketDescriptors = active_socketDescriptors;
        if (select (FD_SETSIZE, &read_socketDescriptors, NULL, NULL, NULL) < 0){
            perror ("select");
            exit (EXIT_FAILURE);
        }
        int i;
        for (i = 0; i < FD_SETSIZE; ++i){
            if (FD_ISSET (i, &read_socketDescriptors)){
                if (i == s->sd){
                    Socket client = tcp_wait_for_connection( server );
                    pthread_create (&thread, NULL, do_something, (void *)client);
                    FD_SET (s->sd, &active_socketDescriptors);
                } else {
                }
            }
        }
    }

    tcp_close( server );
    return 0;
}
void *do_something(void *client){
    unsigned char input[BUFSIZE];
    pthread_detach(pthread_self());
    MySocket *s = (MySocket *)client;
    printf("Client socket_id (thread): %i", s->sd);
    int j;
    while (1){
        int nbytes = tcp_receive(client, input, BUFSIZE );
        if (nbytes <= 0){
            if (nbytes == 0){
                /* connection closed by client */
                printf("Client closed connection");
            } else {
                /* other error */
                perror("tcp_receive");
            }
            tcp_close(client);
            /* remove the socket descriptor from set in the main BRAINSTORM ABOUT THIS */
            break;
        } else {
            /* data incoming */
            printf("\nMessage from client: %s", input);
        }
    }
    return 0;
}
edit 2: reformulation of problem
I have to use threads (not because of the system, which is Linux) because it's mandatory in the assignment to have a thread for each client.
The specific problem I have is that only the main thread can send the data received in each thread from each client to all clients, because only the main thread has access to the set which contains the socket descriptors.
edit 3: what I need to add in each thread, but can't, because s.thread and s.main live in different places and the thread doesn't know the main thread's set:
for (j = 0; j < FD_SETSIZE; j++){
    if (FD_ISSET(j, &active_socketDescriptors)){
        if (j != s.thread && j != s.main){
            tcp_send(j, (void*)input, nbytes);
        }
    }
}
edit 4: I solved it this way:
I have a dynamic array list holding the connected clients and their socket descriptors. Inside the server thread (do_something), the receive blocks until it gets input; this input is then sent to all connected clients by looping through their socket descriptors in the list. Inside the clients there is a thread listening and a thread sending.
If the client connection sockets are non-blocking, then using e.g. select to wait for the socket to receive data is a possible way. However, since you already have the connected sockets in threads, you can keep them blocking and just do a read call on them. The call to read will block until you receive data, which can then be spread to the other threads.
Edit
After better understanding your requirements: you should probably make the sockets non-blocking and use a loop with select with a short timeout. When select times out (i.e. returns 0), check whether there is data to send. If there is, send the data, and go back to the select call.
Given your description it might be worth rethinking the architecture of your application. (Unless this has been dictated by limitations on your system). Let me explain this a little more...
By your description, if I understood you correctly, after a client has connected to the server, any messages it (the client) sends will be relayed (by the server) to all other clients. So, rather than creating a new thread, why not simply add the newly connected socket to the fd_set of the select? Then when a message comes in you can simply relay it to the others.
If you expect a large number of clients for a single server you should see if the poll system call is available on your system (it's just like select but supports monitoring more clients). A good poll/select version ought to out-perform your threaded version.
If you really want to continue with your threaded version here's one way to accomplish what you are trying to do. When you create the thread for each accepted client you also create a pipe back to the server thread (and you add this to the server select/poll set.) and pass that to the client thread. So your server thread now not only receives new connections but relays the messages too.
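A rough sketch of that pipe-back idea, using plain read/write rather than the question's tcp_comm wrappers; the names (client_ctx, client_thread) are illustrative only:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct client_ctx {
    int sock_fd;      /* connected client socket */
    int pipe_wr;      /* write end of the pipe back to the server thread */
};

static void *client_thread(void *arg)
{
    struct client_ctx *ctx = arg;
    char buf[1024];
    ssize_t n;

    while ((n = read(ctx->sock_fd, buf, sizeof buf)) > 0)
        write(ctx->pipe_wr, buf, (size_t)n);   /* hand the message to the server thread */

    close(ctx->pipe_wr);      /* the server thread sees EOF on the pipe's read end */
    close(ctx->sock_fd);
    free(ctx);
    return NULL;
}

/* In the server thread's accept path (sketch):
 *
 *   int pfd[2];
 *   pipe(pfd);
 *   struct client_ctx *ctx = malloc(sizeof *ctx);
 *   ctx->sock_fd = accepted_fd;
 *   ctx->pipe_wr = pfd[1];
 *   pthread_create(&tid, NULL, client_thread, ctx);
 *   FD_SET(pfd[0], &active_socketDescriptors);   // select now watches the pipe too
 *
 * When select reports pfd[0] readable, the server thread read()s the message
 * and relays it to every connected client socket it knows about.
 */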
Although you said that you absolutely must deal with each client in a separate thread, unless you are using a real time operating system, you will probably find that the thread context-switch/synchronization you need to do will soon dominate over the multiplexing overhead of the first solution I suggested. (But since you did not mention an OS I am guessing!)
This is related to your design.
If you only need to implement one or two features for each connected client, then I suggest you use only one thread for your server.
If you have to implement lots of features for each connected client, then a multi-threaded design is okay.
However, the question you should be asking is how to pass the data from the receiving thread to all the others. My suggestion is either:
a) use a message queue to pass inter-thread data: each thread has one message queue and listens to both its own socket and this message queue. When it receives data from its socket, the thread sends that data to all the other message queues (see the sketch after this answer); or
b) use a single global buffer: whenever data comes in from a socket, put it into this global buffer and add a tag indicating which socket it came from.
my 2 cents.
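As a sketch of option (a): a per-thread message queue protected by a mutex and condition variable. The names and the string payload are my own assumptions; a receiving thread would queue_push() into every other thread's queue, and each thread still needs some way to wait on both its socket and its queue (for example, the pipe trick from the earlier answer).
#define _POSIX_C_SOURCE 200809L   /* for strdup() */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct msg {
    struct msg *next;
    char *text;
};

struct msg_queue {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    struct msg *head, *tail;
};

static void queue_init(struct msg_queue *q)
{
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
    q->head = q->tail = NULL;
}

static void queue_push(struct msg_queue *q, const char *text)
{
    struct msg *m = malloc(sizeof *m);
    m->next = NULL;
    m->text = strdup(text);

    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = m;
    else
        q->head = m;
    q->tail = m;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Blocks until a message is available; the caller frees the returned string. */
static char *queue_pop(struct msg_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->nonempty, &q->lock);
    struct msg *m = q->head;
    q->head = m->next;
    if (q->head == NULL)
        q->tail = NULL;
    pthread_mutex_unlock(&q->lock);

    char *text = m->text;
    free(m);
    return text;
}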
I am trying to make a simple client-server chat program. On the client side I spin off another thread to read any incoming data from the server. The problem is, I want to gracefully terminate that second thread when a person logs out from the main thread. I was trying to use a shared variable 'running' to terminate; the problem is, the socket read() command is a blocking command, so if I do while(running == 1), the server has to send something before the read returns and the while condition can be checked again. I am looking for a method (with common unix sockets only) to do a non-blocking read; basically some form of peek() would work, so I can continually check the loop to see if I'm done.
The reading thread loop is below; right now it does not have any mutexes for the shared variables, but I plan to add that later, don't worry! ;)
void *serverlisten(void *vargp)
{
    while (running == 1)
    {
        read(socket, readbuffer, sizeof(readbuffer));
        printf("CLIENT RECEIVED: %s\n", readbuffer);
    }
    pthread_exit(NULL);
}
You can make the socket non-blocking, as suggested in another post, plus use select to wait for input with a timeout, like this:
fd_set input;
FD_ZERO(&input);
FD_SET(sd, &input);

struct timeval timeout;
timeout.tv_sec = sec;
timeout.tv_usec = msec * 1000;

int n = select(sd + 1, &input, NULL, NULL, &timeout);
if (n == -1) {
    // something wrong
} else if (n == 0)
    continue; // timeout
if (!FD_ISSET(sd, &input))
    ; // again something wrong
// here we can call a read that will not block
fcntl(socket, F_SETFL, O_NONBLOCK);
or, if you have other flags:
int x;
x = fcntl(socket, F_GETFL, 0);
fcntl(socket, F_SETFL, x | O_NONBLOCK);
then check the return value of read to see whether there was data available.
note: a bit of googling will yield you lots of full examples.
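For instance, a sketch of that return-value check once O_NONBLOCK has been set (try_read is my own helper name):
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Returns bytes read (>0), 0 on EOF (peer closed), -1 if no data yet, -2 on error. */
static ssize_t try_read(int fd, char *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n >= 0)
        return n;
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return -1;                  /* nothing available right now */
    perror("read");
    return -2;
}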
You can also use blocking sockets and "peek" with select with a timeout. It seems more appropriate here, so you don't busy-wait.
The best thing is likely to get rid of the extra thread and use select() or poll() to handle everything in one thread.
If you want to keep the thread, one thing you can do is call shutdown() on the socket with SHUT_RDWR, which will shut down the connection, wake up all threads blocked on it but keep the file descriptor valid. After you have joined the reader thread, you can then close the socket. Note that this only works on sockets, not on other types of file descriptor.
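A minimal sketch of that shutdown-then-join sequence (the function and variable names are mine):
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* Called from the main thread on logout. */
void stop_reader(int sock, pthread_t reader_tid)
{
    shutdown(sock, SHUT_RDWR);      /* read() in the reader thread now returns 0 */
    pthread_join(reader_tid, NULL);
    close(sock);                    /* the descriptor stays valid until here */
}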
Look at the setsockopt function with the SO_RCVTIMEO option.
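For example (a sketch with an assumed 1-second timeout): after this call, read()/recv() on the socket returns -1 with errno set to EAGAIN or EWOULDBLOCK when no data arrives in time, so the loop can re-check the running flag between attempts.
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

static int set_recv_timeout(int sock)
{
    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1-second receive timeout */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) == -1) {
        perror("setsockopt(SO_RCVTIMEO)");
        return -1;
    }
    return 0;
}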