I'm having trouble with a client-to-client application. Everything in this question relates to a Unix network programming environment.
This is my situation:
I have a client (called C1 from now on) that calls a listen() on a socket.
C1 puts the listenfd socket associated with the previous listen() call in an appropriate fd_set variable and calls a select() on it.
Once it receives a new incoming connection from another client (called C2 from now on), the select() procs, the connection is successfully created with accept() and the clients C1-C2 start communicating.
Let's call the int returned by accept() accfd and the int returned by connect() connfd.
Once they are done, C1 and C2 close their respective sockets with close(accfd) and close(connfd).
After that, both clients can choose whether to send/receive data again or not. If C1 chooses to restart its send/receive routine, the fd_set is zeroed (using the FD_ZERO() macro) and listenfd is put back into the fd_set variable passed to select(). The problem is that if C2 then tries to establish a new connection with C1, the second connect() does not make select() fire in C1, even though the connect() call made by C2 succeeds. This doesn't happen if a third client (C3) tries to connect() to C1.
What I'm trying to understand, is how can I close a connection with a client and open a new connection with the same client at a different time.
Note that I don't want the clients to keep the firstly created connection after their send/receive routine is done. I want to create a new connection with both clients.
Here's the client code, note that I omitted obvious or useless parts of the code:
int nwrite,nread,servsock,listenfd,clsock[10],mastfd,maxfd,s=0,t=0,i=0,j=0;
int act,count=0;

for (i = 0; i < 10; i++)
    clsock[i] = socket(PF_INET, SOCK_STREAM, 0); //clsock is an array of sockets. Each time C2 tries to connect to a generic C1 client, it uses the clsock[count] int. count is incremented everytime a connection is closed.

for (i = 0; i < 10; i++)
    if (setsockopt(clsock[i], SOL_SOCKET, SO_REUSEADDR, (char *)&opt2, sizeof(opt2)) < 0)
    {
        perror("setsockopt");
        exit(-1);
    }

listenfd = socket(PF_INET, SOCK_STREAM, 0); //this is the listening socket
if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, (char *)&opt, sizeof(opt)) < 0)
{
    perror("setsockopt");
    exit(-1);
}
if (listenfd < 0)
{
    perror("Listenfd: ");
    exit(-1);
}
if (bind(listenfd, (struct sockaddr *)&cl2addr, sizeof(cl2addr)) < 0)
{
    perror("Binding: ");
    exit(-1);
}
if (listen(listenfd, 100) < 0)
{
    perror("Listening: ");
    exit(-1);
}
do
{
    do
    {
        FD_ZERO(&readfd);
        FD_SET(STDIN_FILENO, &readfd);
        FD_SET(listenfd, &readfd); //the listenfd socket is added
        FD_SET(servsock, &readfd);
        [... Maxfd and the elaps structure are set....]
        act = select(maxfd, &readfd, NULL, NULL, &elaps); //maxfd is calculated in another piece of code. I'm sure it is right.
        system("reset");
        if (FD_ISSET(listenfd, &readfd)) //when the listen procs, the loop ends for C1.
        {
            [...exits from the first do-while loop...]
        }
        if (FD_ISSET(STDIN_FILENO, &readfd)) //this is where C2 exits from the loop
        {
            s = 1;
            do some things here.
        }
        [.....some useless code here .....]
    }
    while (s != 1); //this is the condition used by C1/C2 to exit the loop
    if (t == 1) //this is what C1 runs, having t=1.
    {
        if ((mastfd = accept(listenfd, (struct sockaddr *)&cl2addr, &cllen)) < 0) //C1 accepts a generic connection
        {
            perror("Accept: ");
            exit(-1);
        }
        [....do some things...]
        if (close(mastfd) < 0) //Once done, it closes the currently connected socket
        {
            perror("Error closing mastfd");
            _exit(-1);
        }
    }
    else //this is what C2 runs
    {
        claddr.sin_addr.s_addr = inet_addr(ipbuff); //ipbuff is C1's IP address
        claddr.sin_port = htons(sprt); //sprt is C1's port
        if (connect(clsock[count], (struct sockaddr *)&claddr, sizeof(claddr)) < 0) //create a connection between C1 and C2
        {
            perror("Connect: ");
            printf("ERROR: %s", strerror(errno));
            exit(-1);
        }
        [....do some things...]
        if (close(clsock[count]) < 0)
        {
            perror("Error closing socket!");
            _exit(-1);
        }
        count++; //increment count to be able to create a new connection and not to re-use the same socket in the clsock[] array.
    }
    if (menu == 1)
    {
        memset(&claddr, 0, sizeof(claddr)); //this was when my brain was about to pop off
        fflush(stdin);
        fflush(stdout);
        t = 0;
        s = 0;
        num_usr = 0;
        system("reset");
        FD_ZERO(&readfd); //this is when my brain totally popped off
        FD_CLR(listenfd, &readfd);
        FD_CLR(servsock, &readfd);
        FD_CLR(STDIN_FILENO, &readfd);
        FD_SET(listenfd, &readfd);
        FD_SET(servsock, &readfd);
        FD_SET(STDIN_FILENO, &readfd);
    }
} while (menu == 1);
Thank you all. If the question isn't well posed or well written, please let me know. I'm sorry for my inexperience and for my English; I'm just getting started with network programming. Thank you so much in advance for your help.
I don't see any code calculating a new maxfd value when preparing readfd for select(). The first argument to select() must be the highest descriptor present in any of the provided fd_sets, plus 1, and it has to be recalculated each time readfd is reset; there is no code shown that does that. You need to do something like this:
FD_ZERO(&readfd);
FD_SET(STDIN_FILENO, &readfd);
maxfd = STDIN_FILENO;
FD_SET(listenfd, &readfd);
maxfd = max(listenfd, maxfd);
FD_SET(servsock, &readfd);
maxfd = max(servsock, maxfd);
act = select(maxfd+1, &readfd, NULL, NULL, &elaps);
Also keep in mind that on some systems, select() modifies the timeval structure to report the amount of time remaining after select() exits, so you should reset elaps every time you call select().
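For example, a minimal sketch of how the inner loop could reset both maxfd and the timeout on every iteration (the 5-second timeout is only illustrative, and listenfd/servsock are the descriptors from the question):

struct timeval elaps;
fd_set readfd;
int maxfd, act;

for (;;)
{
    FD_ZERO(&readfd);
    FD_SET(STDIN_FILENO, &readfd);
    maxfd = STDIN_FILENO;
    FD_SET(listenfd, &readfd);
    if (listenfd > maxfd) maxfd = listenfd;
    FD_SET(servsock, &readfd);
    if (servsock > maxfd) maxfd = servsock;

    elaps.tv_sec = 5;   /* reset the timeout every time: select() may modify it */
    elaps.tv_usec = 0;

    act = select(maxfd + 1, &readfd, NULL, NULL, &elaps);
    if (act < 0) { perror("select"); break; }
    /* ... handle whichever descriptors are ready ... */
}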
I solved my issue by removing the line at the end of the code:
memset(&claddr,0,sizeof(claddr));
Even though this solved my problem, I don't really know why that line was stopping the code from working. If someone could explain that, it would be great.
I've been busy with this for 2 days now and still don't understand it. What does select() do in this code?
I know that if there is an incoming connection that can be accepted, copy.fd_array[] will contain ListenSocket, but when the while loop repeats it's still there. So how do we know if a client has disconnected? What does the fd_set copy contain after the select() call?
fd_set master;
FD_ZERO(&master);
FD_SET(ListenSocket, &master);
while (1)
{
    fd_set copy = master;
    select(FD_SETSIZE, &copy, NULL, NULL, NULL);
    for (int i = 0; i < FD_SETSIZE; i++)
    {
        // If new connection
        if (FD_ISSET(ListenSocket, &copy))
        {
            printf("[+] New connection\n");
            // Accept connection
            SOCKET AcceptedClient = accept(ListenSocket, NULL, NULL);
            FD_SET(AcceptedClient, &master);
            // Send welcome message to client
            char buff[128] = "Hello Client!";
            send(AcceptedClient, buff, sizeof(buff), 0);
        }
    }
}
I've been busy with this for 2 days now and still don't understand it.
It's no wonder that you don't understand the code: The code in the example is nonsense.
Checking the ListenSocket should be done outside the for loop. And FD_ISSET must also be checked for the connections accepted using accept.
The correct code inside the while loop would look like this:
fd_set copy = master;
select(FD_SETSIZE, &copy, NULL, NULL, NULL);

// If new connection
if (FD_ISSET(ListenSocket, &copy))
{
    ...
}

for (int i = 0; i < FD_SETSIZE; i++)
{
    // If an existing connection has data
    // or the connection has been closed
    if ((i != ListenSocket) && FD_ISSET(i, &copy))
    {
        nBytes = recv(i, buffer, maxBytes, 0);
        // Connection dropped
        if (nBytes < 1)
        {
            close(i);        // other OSs (Linux, MacOS ...)
            // closesocket(i); // Windows
            FD_CLR(i, &master);
        }
        // Data received
        else
        {
            ...
        }
    }
}
I know that if there is an incoming connection that can be accepted, the copy.fd_array[] will contain ListenSocket but when the while loop repeats it's still there.
What does fd_set copy contain after the select() call?
First of all: Before calling select(), copy.fd_array[] must contain all socket handles that you are interested in. This means it must contain ListenSocket and all handles returned by accept().
master.fd_array[] contains all these handles, so fd_set copy = master; will ensure that copy.fd_array[] also contains all these handles.
select() (with NULL as last argument) will wait until at least one socket becomes "available". This means that it will wait until at least one of the following conditions is true:
A connection accepted using accept() is closed by the other side
a connection accepted using accept() has data that can be received
there is a new connection that can be accepted using accept(ListenSocket...)
As soon as one condition is fulfilled, select() removes all other handles from copy.fd_array[]:
ListenSocket is removed from copy.fd_array[] if there is no incoming connection
A handle returned by accept() is removed from that array if the connection has neither been closed nor new data is available
If two events happen at the same time, copy.fd_array[] will contain more than one handle.
You use FD_ISSET() to check if some handle is still in the array.
So how do we know if a client is disconnected?
When you detect FD_ISSET(i, &copy) for a value i that has been returned by accept(), you must call recv() (under Linux read() would also work):
If recv() returns 0 (or negative in the case of errors), the other computer has dropped the connection. You must call close() (closesocket() on Windows) and remove the handle from copy.fd_array[] (this means: you must remove it from master.fd_array[] because of the line fd_set copy = master;).
If recv() returns a positive value, this is the number of bytes that have been received.
I am facing one of the strangest programming problems in my life.
I've built a few servers in the past and the clients would connect normally, without any problems.
Now I'm creating one which is basically a web server. However, I'm facing a VERY strange situation (at least to me).
Suppose you connect to localhost:8080, accept() accepts your connection, and the code then processes your request in a separate thread (the idea is to have multiple forks and threads across each child; that's implemented in another file temporarily, but I'm facing this issue on that setup as well, so better make it simple first). Your request gets processed, but after it has been processed, the socket has been closed, AND you see the output in your browser, accept() accepts a connection again - but of course no one connects, because only one connection was created.
errno = 0 (Success) after recv (that's where the program blows up)
recv returns 0 though - so no bytes read (of course, because the connection was not supposed to exist)
int main(int argc, char *argv[])
{
    int sock;
    int fd_list[2];
    int fork_id;

    /* Socket */
    sock = create_socket(PORT);

    int i, active_n = 0;
    pthread_t tvec;
    char address[BUFFSIZE];
    thread_buffer t_buffer;
    int msgsock;

    conf = read_config("./www.config");
    if (conf == NULL)
    {
        conf = (config *)malloc(sizeof(config));
        if (conf == NULL)
        {
            perror("\nError allocating configuration:");
            exit(-1);
        }
        // Set defaults
        sprintf(conf->httpdocs, DOCUMENT_ROOT);
        sprintf(conf->cgibin, CGI_ROOT);
    }

    while (cicle)
    {
        printf("\tWaiting for connections\n");
        // Waits for a client
        msgsock = wait_connection(sock, address);
        printf("\nSocket: %d\n", msgsock);
        t_buffer.msg = &address;
        t_buffer.sock = msgsock;
        t_buffer.conf = conf;
        /* Send socket to thread */
        if (pthread_create(&tvec, NULL, thread_func, (void *)&t_buffer) != 0)
        {
            perror("Error creating thread: ");
            exit(-1);
        }
    }
    free(conf);
    return 0;
}
Here are two important functions used:
int create_socket(int port)
{
    struct sockaddr_in server, remote;
    char buffer[BUFF];
    int sock;

    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        perror("opening stream socket");
        exit(1);
    }

    server.sin_family = AF_INET;
    server.sin_port = htons(port);
    server.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(sock, (struct sockaddr *)&server, sizeof(struct sockaddr_in))) {
        perror("binding stream socket");
        exit(1);
    }

    gethostname(buffer, BUFF);
    printf("\n\tServer waiting for connections.\n");
    printf("\tUse the address %s:%d\n\n", buffer, port);

    if (listen(sock, MAXPENDING) < 0) {
        perror("Unable to create the socket. The server will exit.\n");
        exit(1);
    }
    return sock;
}
int wait_connection(int serversock, char *remote_address)
{
    int clientlen;
    int clientsock;
    struct sockaddr_in echoclient;

    clientlen = sizeof(echoclient);
    /* Wait for client connection */
    if ((clientsock = accept(serversock, (struct sockaddr *)&echoclient, &clientlen)) < 0)
    {
        perror("Unable to establish a connection to the client. The server will exit.\n");
        exit(-1);
    }
    printf("\n11111111111111Received request - %d\n", clientsock);
    sprintf(remote_address, "%s", inet_ntoa(echoclient.sin_addr));
    return clientsock;
}
So basically you'd see:
11111111111111Received request - D
D is different both times, so the fd is definitely different.
Twice! One after the other has been processed, and then it blows up after recv in the thread function. Sometimes it takes a bit for the second one to be processed and show up, but it does after a few seconds. Now, this doesn't always happen. Sometimes it does, sometimes it doesn't.
It's so weird...
I've ruled out the possibility of an addon causing it to reconnect or something, because Apache's ab tool causes the same issue after a few requests.
I'd like to note that even if I don't run a thread for the client and simply close the socket, it happens as well! I've considered the possibility of the headers not being fully read and therefore the browser sending another request. But the browser receives the data back properly, otherwise it wouldn't show the result fine, and if it shows the result fine, the connection must have been closed properly; otherwise a connection reset would appear.
Any tips? I appreciate your help.
EDIT:
If I take out the start thread part of the code, sometimes the connection is accepted 4, 5, 6 times...
EDIT 2: Note that I know the program blows up after recv fails; I exit on purpose.
This is certainly a bug waiting to happen:
pthread_create(&tvec, NULL, thread_func, (void*)&t_buffer);
You're passing t_buffer, a local variable, to your thread. The next time you accept a client, which can happen before another client has finished, you'll pass the same variable to that thread too, leading to a lot of very nondeterministic behavior (e.g. two threads reading from the same connection, double close() on a descriptor, and other oddities).
Instead of passing the same local variable to every thread, dynamically allocate a new t_buffer for each new client.
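A minimal sketch of that fix, assuming the thread_buffer type from the question has a msg array of BUFFSIZE chars (that layout is an assumption) and that thread_func() frees its argument when it is done:

/* Inside the accept loop: allocate a fresh thread_buffer per client
   (hypothetical msg-as-array layout; adapt to your actual struct). */
thread_buffer *tb = malloc(sizeof *tb);
if (tb == NULL) {
    perror("malloc");
    close(msgsock);
} else {
    snprintf(tb->msg, BUFFSIZE, "%s", address);
    tb->sock = msgsock;
    tb->conf = conf;
    if (pthread_create(&tvec, NULL, thread_func, tb) != 0) {
        perror("Error creating thread: ");
        free(tb);
        close(msgsock);
    } else {
        pthread_detach(tvec); /* so finished threads don't have to be joined */
    }
    /* thread_func() is then responsible for free(tb) and close(tb->sock). */
}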
Suppose ... after being processed and the socket being closed AND you see the output on your browser, accept() accepts a connection again - but no one connects of course because only one connection was created.
So if no-one connects, there is nothing to accept(), so this never happens.
So whatever you're seeing, that isn't it.
I'm trying to implement a C socket server in Linux using the code from Beej's sockets guide, which is here:
http://beej.us/guide/bgnet/examples/server.c
This works, and I've written a Windows client in C# to communicate with it. Once the client connects, I have it send a byte array to the server, the server reads it, then sends back a byte array. This works.
However, after this, if I have the client try to send another byte array, I get a Windows popup saying "An established connection was aborted by the software in your host machine." Then I have to re-connect with the client again. I want to keep the connection open indefinitely, until the client sends a disconnect command, but despite reading through Beej's guide, I just don't seem to get it. I'm not even trying to implement the disconnect command at present, I'm just trying to keep the connection open until I close the server.
I've tried removing the close() calls in Beej's code:
while(1) {  // main accept() loop
    sin_size = sizeof their_addr;
    new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
    if (new_fd == -1) {
        perror("accept");
        continue;
    }
    inet_ntop(their_addr.ss_family,
        get_in_addr((struct sockaddr *)&their_addr),
        s, sizeof s);
    printf("server: got connection from %s\n", s);
    if (!fork()) { // this is the child process
        close(sockfd); // child doesn't need the listener
        ProcessRequest(new_fd); // this is not Beej's code; I've replaced his code here (which was a simple string send()) with a function call that does a read() call, processes some data, then sends back a byte array to the client using send().
        close(new_fd);
        exit(0);
    }
    close(new_fd); // parent doesn't need this
}
But that just gets me an infinite loop of "socket accept: bad file descriptor" (I tried removing both close(new_fd) lines, together and separately, and the close(sockfd) as well).
Can anyone more versed with C socket programming give me a hint where I should be looking? Thank you.
The reason for the accept() problem is that sockfd isn't valid. You must have closed it somewhere. NB if you get such an error you shouldn't just keep retrying as though it hadn't happened.
The reason for the client problem is that you're only processing one request in ProcessRequest(), as its name suggests, and as you describe in your comment. Use a loop, reading requests until recv() returns zero or an error occurs.
Cause
The reason the client sees the error is the close(new_fd) call, made either by the server parent or the server child.
Solution
At any point of time, a server may get two kind of events:
Connection request from a new client
Data from an existing client
The server has to honor both of them. There are two (major) ways to handle this.
Solution Approach 1
Design the server as a concurrent server. In Beej's guide it is
7.2. select()—Synchronous I/O Multiplexing
http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html#select
Since OP's approach is not this one, we do not explore it further.
Solution Approach 2
At server, fork() a process per client. This is the approach OP has taken and we explore here. Essentially, it is fine tuning the ProcessRequest() function in OP's code. Here is a sketch.
void ProcessRequest( int new_fd ) {
char buffer[ N ];
for( ; ; ) { // infinite loop until client disconnects or some error
int const recvLen = recv( new_fd, buffer, sizeof buffer, 0 );
if( recvLen == 0 ) { break; } // client disconnected
else if( recvLen == -1 ) { perror( "recv" ); break; }
int const sendLen = send( new_fd, buffer, recvLen, 0 );
if( sendLen == -1 ) { perror( "send" ); break; }
// TODO if( sendLen < recvLen ) then send() in loop
}
}
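For the TODO in the loop above, a rough sketch of a send-all loop that could replace the single send() call (this is a common pattern, not part of the original answer; it reuses buffer, recvLen and new_fd from the function above):

int sendTotal = 0;
while (sendTotal < recvLen) {
    int const sendLen = send(new_fd, buffer + sendTotal, recvLen - sendTotal, 0);
    if (sendLen == -1) { perror("send"); break; }
    sendTotal += sendLen;
}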
Note
I am sorry for leaving the half-baked solution up for a few hours. While I was editing the answer, I lost connectivity to stackoverflow.com, which lasted for a couple of hours.
This is a question about socket programming for multiple clients.
While I was thinking about how to turn my single-client client and server programs into multi-client ones, I got stuck on how to implement it.
Even after searching everywhere, I'm still confused.
I was thinking of implementing it with select(), because it is less heavyweight than fork().
But I have many global variables that must not be shared, so I haven't considered using threads.
To use select(), I have general knowledge of the FD_* functions to use, but here is my question: the examples on websites generally only show multi-client server programs...
I use sequential recv() and send() in the client and also in the server program, which works really well with a single client and server, but I have no idea how it must be changed for multiple clients.
Must the client also be non-blocking?
What are all the requirements for select()?
The things I did on my server program to make it multi-client:
1) I set my socket option for reuse address, with SO_REUSEADDR
2) and set my server as non-blocking mode with O_NONBLOCK using fcntl().
3) and put the timeout argument as zero.
and proper use of the FD_* functions after the above.
But when I run my client program once and then more times, from the second client on, the client program blocks and is not accepted by the server.
I guess the reason is that I put my server program's main logic inside the 'recv was > 0' case.
For example, with my server code
(I'm using temp and read as fd_set variables, with read as the master set in this case):
int main(void)
{
    int conn_sock, listen_sock;
    struct sockaddr_in s_addr, c_addr;
    int rq, ack;
    char path[100];
    int pre, change, c;
    int conn, page_num, x;
    int c_len = sizeof(c_addr);
    int fd;
    int flags;
    int opt = 1;
    int nbytes;
    fd_set read, temp;

    if ((listen_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
    {
        perror("socket error!");
        return 1;
    }

    memset(&s_addr, 0, sizeof(s_addr));
    s_addr.sin_family = AF_INET;
    s_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    s_addr.sin_port = htons(3500);

    if (setsockopt(listen_sock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(int)) == -1)
    {
        perror("Server-setsockopt() error ");
        exit(1);
    }

    flags = fcntl(listen_sock, F_GETFL, 0);
    fcntl(listen_sock, F_SETFL, flags | O_NONBLOCK);
    //fcntl(listen_sock, F_SETOWN, getpid());

    bind(listen_sock, (struct sockaddr*) &s_addr, sizeof(s_addr));
    listen(listen_sock, 8);

    FD_ZERO(&read);
    FD_ZERO(&temp);
    FD_SET(listen_sock, &read);
    while (1)
    {
        temp = read;
        if (select(FD_SETSIZE, &temp, (fd_set *) 0, (fd_set *) 0,
                   (struct timeval *) 0) < 1)
        {
            perror("select error:");
            exit(1);
        }
        for (fd = 0; fd < FD_SETSIZE; fd++)
        {
            //CHECK all file descriptors
            if (FD_ISSET(fd, &temp))
            {
                if (fd == listen_sock)
                {
                    conn_sock = accept(listen_sock, (struct sockaddr *) &c_addr, &c_len);
                    FD_SET(conn_sock, &read);
                    printf("new client got session: %d\n", conn_sock);
                }
                else
                {
                    nbytes = recv(fd, &conn, 4, 0);
                    if (nbytes <= 0)
                    {
                        close(fd);
                        FD_CLR(fd, &read);
                    }
                    else
                    {
                        if (conn == Session_Rq)
                        {
                            ack = Session_Ack;
                            send(fd, &ack, sizeof(ack), 0);
                            root_setting();
                            c = 0;
                            while (1)
                            {
                                c++;
                                printf("in while loop\n");
                                recv(fd, &page_num, 4, 0);
                                if (c > 1)
                                {
                                    change = compare_with_pre_page(pre, page_num);
                                    if (change == 1)
                                    {
                                        page_stack[stack_count] = page_num;
                                        stack_count++;
                                    }
                                    else
                                    {
                                        printf("same as before page\n");
                                    }
                                } //end of if
                                else if (c == 1)
                                {
                                    page_stack[stack_count] = page_num;
                                    stack_count++;
                                }
                                printf("stack count:%d\n", stack_count);
                                printf("in page stack: <");
                                for (x = 0; x < stack_count; x++)
                                {
                                    printf(" %d ", page_stack[x]);
                                }
                                printf(">\n");
                                rq_handler(fd);
                                if (logged_in == 1)
                                {
                                    printf("You are logged in state now, user: %s\n",
                                           curr_user.ID);
                                }
                                else
                                {
                                    printf("not logged in.\n");
                                    c = 0;
                                }
                                pre = page_num;
                            } //end of while
                        } //end of if
                    }
                } //end of else
            } //end of fd_isset
        } //end of for loop
    } //end of outermost while
}
If a code explanation is needed: what I was trying to build with this code was a kind of web-page 'browser' for the server.
I wanted every client to get a session with the server, to get a login page and so on.
But the execution result is as I described above.
Why is that?
Must the socket in the client program also be in non-blocking mode
to be used with a non-blocking server program that uses select()?
Or should I use fork or threads to handle multiple clients and manage them with select()?
The reason I say this is that, after thinking a lot about this problem,
select() seems suited only to a multi-client chat program... one that many
'forked' or 'threaded' clients can wait on, such as a chat room.
What do you think?
Is select() also a possible or proper thing to use for a normal multi-client program?
If there is something I missed that would let my multi-client program work fine,
please share your knowledge or the requirements for the proper use of select().
I didn't know before that multi-client communication was this hard :)
I also considered using epoll, but I think I first need to understand select() well.
Thanks for reading.
Besides the fact you want to go from single-client to multi-client, it's not very clear what's blocking you here.
Are you sure you fully understand how select() is supposed to work? The manual (man 2 select on Linux) may be helpful, as it provides a simple example. You can also check Wikipedia.
To answer your questions :
First of all, are you sure you need non-blocking mode for your sockets ? Unless you have a good reason to do so, blocking sockets are also fine for multi-client networking.
Usually, there are basically two ways to deal with multiple clients in C: fork, or select. The two aren't really used together (or I don't know how :-)). Models using lightweight threads are essentially asynchronous programming (did I mention it also depends on what you mean by 'asynchronous'?) and may be a bit overkill for what you seem to do (a good example in C++ is Boost.Asio).
As you probably already know, the main problem when dealing with more than one client is that I/O operations, like a read, are blocking, not letting us know when there's a new client, or when a client has said something.
The fork way is pretty straightforward: the server socket (the one which accepts the connections) lives in the main process, and each time it accepts a new client, it forks a whole new process just to monitor this new client: this new process will be dedicated to it. Since there's one process per client, we don't care if I/O operations are blocking or not.
The select way allows us to monitor multiple clients in the same process: it is a multiplexer telling us when something happens on the sockets we give it. The basic idea, on the server side, is first to put the server socket in the read fd_set given to select. Each time select returns, you need to do a special check for it: if the server socket is set in the read set (using FD_ISSET(...)), it means you have a new client connecting: you can then call accept on your server socket to create the connection.
Then you have to put all your client sockets in the fd_sets you give to select in order to monitor any change on them (e.g., incoming messages).
I'm not really sure what you don't understand about select, so that's the big explanation. But long story short, select is a clean and neat way to do single-threaded, synchronous networking, and it can absolutely manage multiple clients at the same time without using any fork or threads. Be aware though that if you absolutely want to use non-blocking sockets with select, you have to handle extra error conditions that wouldn't occur in a blocking approach (the Wikipedia example shows it well, as they have to check whether errno is EWOULDBLOCK). But that's another story.
EDIT : Okay, with a little more code it's easier to know what's wrong.
select's first parameter should be nfds+1, i.e. "the highest-numbered file descriptor in any of the three sets, plus 1" (cf. manual), not FD_SETSIZE, which is the maximum size of an FD_SET. Usually it is the last accept-ed client socket (or the server socket at the beginning) that has it.
You shouldn't do the "CHECK all file descriptors" for loop like that. FD_SETSIZE is, e.g. on my machine, equal to 1024. That means once select returns, even if you have just one client you would go through the loop 1024 times! You can set fd to 0 (like in the Wikipedia example), but since 0 is stdin, 1 stdout and 2 stderr, unless you're monitoring one of those, you can directly set it to your server socket's fd (since it is probably the first of the monitored sockets, given that socket numbers always increase), and iterate until it is equal to "nfds" (the currently highest fd).
Not sure that it is mandatory, but before each call to select, you should clear (with FD_ZERO for example) and re-populate your read fd_set with all the sockets you want to monitor (i.e. your server socket and all your client sockets). Once again, take inspiration from the Wikipedia example.
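Putting those three points together, a minimal sketch of the corrected server loop (a sketch only, reusing the identifiers from the question and assuming listen_sock is already bound and listening):

fd_set read, temp;
int maxfd = listen_sock;

FD_ZERO(&read);
FD_SET(listen_sock, &read);

while (1)
{
    temp = read; /* re-populate the working set from the master set every iteration */
    if (select(maxfd + 1, &temp, NULL, NULL, NULL) < 1)
    {
        perror("select error:");
        exit(1);
    }
    for (fd = 0; fd <= maxfd; fd++) /* only iterate up to the highest monitored fd */
    {
        if (!FD_ISSET(fd, &temp))
            continue;
        if (fd == listen_sock)
        {
            conn_sock = accept(listen_sock, (struct sockaddr *) &c_addr, &c_len);
            FD_SET(conn_sock, &read);
            if (conn_sock > maxfd)
                maxfd = conn_sock; /* track the highest descriptor for select() */
        }
        else
        {
            nbytes = recv(fd, &conn, sizeof(conn), 0);
            if (nbytes <= 0) /* peer closed the connection, or an error occurred */
            {
                close(fd);
                FD_CLR(fd, &read);
            }
            else
            {
                /* handle this client's request, without looping on further blocking recv() calls */
            }
        }
    }
}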
I have a TCP connection. The server just reads data from the client. Now, if the connection is lost, the client will get an error while writing data to the pipe (broken pipe), but the server still listens on that pipe. Is there any way I can find out whether the connection is UP or not?
You could call getsockopt just like the following:
int error = 0;
socklen_t len = sizeof (error);
int retval = getsockopt (socket_fd, SOL_SOCKET, SO_ERROR, &error, &len);
To test if the socket is up:
if (retval != 0) {
    /* there was a problem getting the error code */
    fprintf(stderr, "error getting socket error code: %s\n", strerror(retval));
    return;
}

if (error != 0) {
    /* socket has a non zero error status */
    fprintf(stderr, "socket error: %s\n", strerror(error));
}
The only way to reliably detect if a socket is still connected is to periodically try to send data. It's usually more convenient to define an application-level 'ping' packet that the clients ignore, but if the protocol is already specced out without such a capability you should be able to configure TCP sockets to do this by setting the SO_KEEPALIVE socket option. I've linked to the winsock documentation, but the same functionality should be available on all BSD-like socket stacks.
TCP keepalive socket option (SO_KEEPALIVE) would help in this scenario and close server socket in case of connection loss.
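As an illustration, a hedged sketch of enabling keepalive on a connected socket fd (the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs are Linux-specific, and the values shown are only examples):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int enable = 1;
if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enable, sizeof(enable)) < 0)
    perror("setsockopt(SO_KEEPALIVE)");

/* Linux-specific tuning: start probing after 60 s of idle time,
   probe every 10 s, give up after 5 unanswered probes. */
int idle = 60, intvl = 10, cnt = 5;
setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));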
There is an easy way to check the socket connection state via a poll() call. First, poll the socket and check whether it has the POLLIN event.
If the socket is not closed and there is data to read, then read will return more than zero.
If there is no new data on the socket, then POLLIN will be set to 0 in revents.
If the socket is closed, then the POLLIN flag will be set and read will return 0.
Here is a small code snippet:
int client_socket_1, client_socket_2;

if ((client_socket_1 = accept(listen_socket, NULL, NULL)) < 0)
{
    perror("Unable to accept s1");
    abort();
}
if ((client_socket_2 = accept(listen_socket, NULL, NULL)) < 0)
{
    perror("Unable to accept s2");
    abort();
}

pollfd pfd[] = {{client_socket_1, POLLIN, 0}, {client_socket_2, POLLIN, 0}};
char sock_buf[1024];

while (true)
{
    poll(pfd, 2, 5);
    if (pfd[0].revents & POLLIN)
    {
        int sock_readden = read(client_socket_1, sock_buf, sizeof(sock_buf));
        if (sock_readden == 0)
            break;
        if (sock_readden > 0)
            write(client_socket_2, sock_buf, sock_readden);
    }
    if (pfd[1].revents & POLLIN)
    {
        int sock_readden = read(client_socket_2, sock_buf, sizeof(sock_buf));
        if (sock_readden == 0)
            break;
        if (sock_readden > 0)
            write(client_socket_1, sock_buf, sock_readden);
    }
}
Very simple, as described in the recv() man page.
To check this, you will want to read 1 byte from the socket with MSG_PEEK and MSG_DONTWAIT. This will not dequeue data (MSG_PEEK) and the operation is non-blocking (MSG_DONTWAIT).
char peek;
while (recv(client->socket, &peek, 1, MSG_PEEK | MSG_DONTWAIT) != 0) {
    sleep(rand() % 2); // Sleep for a bit to avoid spam
    fflush(stdout);
    printf("I am alive: %d\n", client->socket);
}
// When the client has disconnected, this line will execute
printf("Client %d went away :(\n", client->socket);
Found the example here.
I had a similar problem: I wanted to know whether the server is still connected to the client, or the client to the server. In such circumstances the return value of the recv() function can come in handy: if the socket is not connected it will return 0 bytes. Using this, I broke out of the loop and did not have to use any extra threads or functions. You might use the same approach if experts feel this is the correct method.
getsockopt may be somewhat useful; however, another way would be to install a signal handler for SIGPIPE. Basically, whenever you write to a socket whose connection has broken, the kernel will send a SIGPIPE signal to the process, and then you can do what is needed. But this still does not provide a way of knowing the status of the connection before you write. Hope this helps.
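A minimal sketch of that approach (the handler body is only illustrative; many programs instead ignore SIGPIPE entirely and check for EPIPE from send()/write()):

#include <signal.h>
#include <string.h>

static volatile sig_atomic_t got_sigpipe = 0;

static void on_sigpipe(int signo)
{
    (void)signo;
    got_sigpipe = 1; /* a write on a broken connection triggered this signal */
}

...

struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = on_sigpipe;  /* or SIG_IGN to ignore SIGPIPE entirely */
sigaction(SIGPIPE, &sa, NULL);
/* After this, a send() on a broken connection raises SIGPIPE and also
   fails with errno == EPIPE, which the caller can check. */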
You should try using the getpeername() function.
When the connection is down, you will get in errno:
ENOTCONN - The socket is not connected.
which for you means DOWN.
Otherwise (if there are no other failures) the return code will be 0 --> which means UP.
resources:
man page: http://man7.org/linux/man-pages/man2/getpeername.2.html
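A short sketch of that check, assuming sock_fd is the descriptor of the connection in question:

#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>

struct sockaddr_storage peer;
socklen_t peer_len = sizeof(peer);

if (getpeername(sock_fd, (struct sockaddr *)&peer, &peer_len) == 0)
    printf("connection appears UP\n");
else if (errno == ENOTCONN)
    printf("connection is DOWN (socket not connected)\n");
else
    perror("getpeername");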
On Windows you can query the precise state of any port on any network-adapter using:
GetExtendedTcpTable
You can filter it to only those related to your process, etc and do as you wish periodically monitoring as needed. This is "an alternative" approach.
You could also duplicate the socket handle and set up an IOCP/Overlapped i/o wait on the socket and monitor it that way as well.
#include <sys/socket.h>
#include <poll.h>
...
int client = accept(sock_fd, (struct sockaddr*)&address, (socklen_t*)&addrlen);
pollfd pfd = {client, POLLERR, 0}; // monitor errors occurring on client fd
...
while (true)
{
    ...
    if (not check_connection(pfd, 5))
    {
        close(client);
        close(sock[1]);
        if (reconnect(HOST, PORT, reconnect_function))
            printf("Reconnected.\n");
        pfd = {client, POLLERR, 0};
    }
    ...
}
...
bool check_connection(pollfd &pfd, int poll_timeout)
{
    poll(&pfd, 1, poll_timeout);
    return not (pfd.revents & POLLERR);
}
You can use the SS_ISCONNECTED macro with the getsockopt() function.
SS_ISCONNECTED is defined in socketvar.h.
For BSD sockets I'd check out Beej's guide. When recv returns 0 you know the other side disconnected.
Now you might actually be asking, what is the easiest way to detect the other side disconnecting? One way of doing it is to have a thread always doing a recv. That thread will be able to instantly tell when the client disconnects.
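A hedged sketch of that idea, assuming a connected descriptor fd and POSIX threads (handle_data() is a hypothetical placeholder for your own processing):

#include <pthread.h>
#include <sys/socket.h>
#include <stdio.h>

/* Reader thread: blocks in recv(); a return value of 0 means the peer disconnected. */
static void *reader_thread(void *arg)
{
    int fd = *(int *)arg;
    char buf[1024];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
        /* handle_data(buf, n);  hypothetical processing of the received bytes */
    }
    if (n == 0)
        printf("peer disconnected\n");   /* orderly shutdown by the other side */
    else
        perror("recv");                  /* error, e.g. connection reset */
    return NULL;
}

/* It would be started with: pthread_create(&tid, NULL, reader_thread, &fd); */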