I have a program that works as a simple TCP server; it accepts and serves as many as 4 clients. I wrote the following code snippet to achieve this goal. For clarity, I have replaced some macros with "magic numbers".
while (1) {
    int ret = 0;
    ///
    // sock_ is the listening socket; rset_conn_ only contains sock_
    //
    if (select(sock_ + 1, &rset_conn_, NULL, NULL, &timeo_) <= 0) {
        ret = 0;
    }
    if (FD_ISSET(sock_, &rset_conn_)) {
        ret = 1;
    }
    ///
    // because the rules by which select() clears file descriptors and
    // updates the timeout are complex, for simplicity, reset sock_ and
    // the timeout
    //
    FD_SET(sock_, &rset_conn_);
    timeo_.tv_sec = 0;
    timeo_.tv_usec = 500000;
    if (!ret) {
        continue;
    }
    ///
    // csocks_ is a 4-integer array, used for storing clients' sockets
    //
    for (int i = 0; i < 4; ++i) {
        if ((-1 == csocks_[i]) && (-1 != (csocks_[i] = accept(sock_, NULL, NULL)))) {
            break;
        }
    }
}
Now, when fewer than 4 clients connect, this code snippet works normally. But when the 5th client connects, as you can see, it is never accepted in the for loop, which results in the following problem: on every subsequent iteration of the while loop, select() keeps returning with the sock_ bit set in rset_conn_, just because I don't accept the connection!
I realize that I should not just ignore it (not accept it) in the for loop, but how can I handle it properly, by telling the kernel "I don't want to accept this client, so don't tell me this client wants to connect to me any more"?
Any help will be appreciated.
Update:
I cannot stop calling select(), because the clients already connected may disconnect, and then my program will have room for new clients. I need to keep listening on the socket to accept connections, not stop listening once it reaches a certain point.
Why are you asking a question you don't want the answer to? Stop calling select on the listening socket if you don't want to be told that there's an unaccepted connection.
You can stop calling select now. When a client disconnects, then start calling select again. Only ask questions you want the answers to.
There is no socket API to tell the kernel to ignore pending connections (well, on Windows, there is a callback in WSAAccept() for that, but that is a Microsoft-specific extension).
On most platforms, you have to do one of the following:
close() the listening socket when your array fills up, then reopen it when a client disconnects.
accept() the client but immediately close() it if the array is full (a sketch of this option follows the list).
stop calling select() and accept() while your array is full. Any clients that are in the backlog will time out eventually, unless a slot frees up and you accept() a pending client before it times out.
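A minimal sketch of the second option, reusing sock_ and csocks_ from the question (error handling kept minimal): accept the connection whenever select() reports the listening socket readable, and close it right away if no slot is free, so the kernel stops reporting it.

int fd = accept(sock_, NULL, NULL);
if (fd != -1) {
    int slot = -1;
    for (int i = 0; i < 4; ++i) {
        if (csocks_[i] == -1) { slot = i; break; }
    }
    if (slot != -1)
        csocks_[slot] = fd;   // room available: keep the client
    else
        close(fd);            // array full: drop the connection immediately
}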
You can remove the listening socket from the readFDs while there are four existing clients, and put it back when one drops off, but I don't see the point. Why only four clients? Why not remove the limit and eliminate the whole problem? Real servers are not built this way.
Related
I am working on a TCP client-server application with threads, where clients take turns messaging the server (similar to a turn-based game).
The server sends a message to a client, but the client has n seconds to respond. If the client doesn't respond in time, the server should go on to the next client without closing the connection. Is there a way to skip the read() call from the server?
It's not exactly clear what you mean about the server skipping read() calls, but I think you're talking about implementing the response timeout you describe. A common way to do that would be to use the select() function, which allows you to wait up to a specified amount of time for a file descriptor to become ready. You would then choose whether to read() or to move on to the next client based on whether select() signals that the wanted file descriptor is ready.
Very roughly, that might be something along these lines:
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>
#include <assert.h>
// ...
fd_set read_fds;
FD_ZERO(&read_fds);
FD_SET(client_fd, &read_fds);
struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };
int num_ready = select(client_fd + 1, &read_fds, NULL, NULL, &timeout);
if (num_ready == -1) {
    // handle error ...
} else if (num_ready == 0) {
    // handle timeout ...
} else if (num_ready == 1) {
    // handle client response ...
} else {
    // should not happen
    assert(0);
}
You could also consider alternatives to select() with similar behavior, such as poll() and epoll_wait(). Also, you will probably find it advantageous in connection with this to configure the client sockets for non-blocking I/O, though this is not a technical requirement for use of the select(), etc. functions.
Do note that it's more complicated than that, however. There are at least these additional considerations:
You will need to be prepared for cases where the client disconnects.
If a client's response arrives too late, then you will need to read that response and (presumably) discard it before you can handle any subsequent response from the same client.
Responses might be split into multiple pieces on the wire, so the beginning of a response might arrive within the timeout, yet the end not, and maybe not at all.
For robustness, you'll need to handle cases where the wait is interrupted by a signal before the time expires. Presumably you would want in that case to resume waiting, but not restart the timeout (see the sketch after this list).
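Very roughly, handling that last point might look like the following sketch, which tracks a deadline with CLOCK_MONOTONIC and recomputes the remaining timeout after each interruption (client_fd as above; the 5-second window is just an example):

#include <errno.h>
#include <time.h>

struct timespec deadline;
clock_gettime(CLOCK_MONOTONIC, &deadline);
deadline.tv_sec += 5;  // response window

for (;;) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long remaining_us = (deadline.tv_sec - now.tv_sec) * 1000000L
                      + (deadline.tv_nsec - now.tv_nsec) / 1000L;
    if (remaining_us <= 0) {
        // handle timeout ...
        break;
    }

    fd_set read_fds;
    FD_ZERO(&read_fds);
    FD_SET(client_fd, &read_fds);
    struct timeval tv = { remaining_us / 1000000L, remaining_us % 1000000L };

    int num_ready = select(client_fd + 1, &read_fds, NULL, NULL, &tv);
    if (num_ready == -1 && errno == EINTR)
        continue;  // interrupted by a signal: wait again with the time left
    // otherwise handle error / timeout / readable as above, then:
    break;
}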
I need to know whether a client connected/disconnected and handle it.
This was my only idea:
while (!serverStop)
{
    fd_set rfds, wfdsBefore, wfdsAfter;
    FD_ZERO(&rfds);
    FD_SET(serverFd, &rfds);
    FD_ZERO(&wfdsBefore);
    fillWithClientFds(&wfdsBefore); // clients only listen for messages
    wfdsAfter = wfdsBefore;

    while (1)
    {
        select(notimportant, &rfds, &wfdsAfter, NULL, NULL);
        if (FD_ISSET(serverFd, &rfds)) // new client appeared
            break;
        if (doSetsDiffer(&wfdsBefore, &wfdsAfter)) // some client disconnected (doesn't work)
            break;
    }

    // inform connected clients about disconnected ones
}
Not only does busy waiting occur, but this approach doesn't even work (wfdsAfter doesn't change even though the client has closed the socket).
Is there any way to do it? The only requirement is to not use multithreading.
serverFd was made with PF_UNIX and SOCK_STREAM flags.
You should place each client file descriptor in the read descriptors (rfds) set after it connects, and, when the file descriptor is subsequently returned as readable, attempt to read from the socket.
First, if your client is really sending nothing (and isn't yet disconnected), its socket will never be marked as readable. That seems like it would solve your issue, since you say the client never actually sends anything: the socket won't be marked readable until the client disconnects.
But even if the client sends data, the file descriptor would only be marked readable if there were data available OR the client had disconnected. You can then easily distinguish the two by attempting to read from the socket. The return value will be either the number of bytes read (if there are data) or zero if the client has disconnected.
(Servers often add the O_NONBLOCK option to sockets to ensure they don't block waiting for data from a client while still being notified when a client has data to send. With that option, read() still returns 0 when the client has disconnected; if the client is still around but there is no data available, read() returns -1 with errno set to EAGAIN/EWOULDBLOCK.)
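For illustration, a small sketch of that pattern (clientFd is assumed to be a connected socket):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

// put the socket into non-blocking mode
fcntl(clientFd, F_SETFL, fcntl(clientFd, F_GETFL, 0) | O_NONBLOCK);

char buf[512];
ssize_t n = read(clientFd, buf, sizeof buf);
if (n > 0) {
    // data available
} else if (n == 0) {
    // client disconnected
} else if (errno == EAGAIN || errno == EWOULDBLOCK) {
    // client still connected, but no data right now
} else {
    // a real error
}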
One other nuance I haven't explained is that it is possible to close data delivery in one direction while allowing it to continue in the other (see shutdown(2) if you care about this).
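For example, a one-directional close on a connected socket fd (fd is illustrative) would look like:

shutdown(fd, SHUT_WR); // stop sending; the peer sees EOF, but we can still read() its data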
You are putting the client sockets in the write descriptor set. You need to put them in the read descriptor set instead.
When a server socket has at least one pending client connection, it is readable. You can call accept() to accept a client.
When a socket has data in its inbound buffer, or its connected peer has disconnected, it is readable, not writable. You can call read() to differentiate. read() returns > 0 on inbound data, 0 on graceful disconnect, and -1 on error.
A socket is writable when it has available space in its outbound buffer. If write() fails with an EWOULDBLOCK error, the outbound buffer has filled up, and the socket is no longer writable. When the buffer clears up some space, the socket will become writable again.
Also, select() modifies the fdsets you pass to it, so you need to reset rfds on every loop iteration. To avoid that, you can use (e)poll() instead.
So, you need something more like this instead:
fd_set rfds;

while (!serverStop)
{
    FD_ZERO(&rfds);
    FD_SET(serverFd, &rfds);
    fillWithClientFds(&rfds); // clients only listen for messages

    if (select(notimportant, &rfds, NULL, NULL, NULL) < 0)
        break;

    if (FD_ISSET(serverFd, &rfds)) // new client appeared
    {
        // call accept(), add client to connected list...
    }

    // clear disconnected list...

    for (each client in connected list)
    {
        if (FD_ISSET(clientFd, &rfds))
        {
            int nBytes = read(clientFd, ...);
            if (nBytes > 0)
            {
                // handle client data as needed ...
            }
            else if (nBytes == 0)
            {
                // add client to disconnected list
            }
            else
            {
                // handle error...
                // possibly add client to disconnected list...
            }
        }
    }

    for (each client in disconnected list)
    {
        // remove client from connected list...
    }

    for (each client in disconnected list)
    {
        // inform connected clients
    }
}
I am building a webserver which can accept and handle multiple client connections. I'm using select() for this.
Now while this is going on, if a particular connected socket hasn't had any activity (send or recv) on it, I want to close it. So if no requests come from a connected client for a period of time, I'll close the socket. There are multiple such connected sockets and I need to do this monitoring for each.
I need this functionality to create persistent connections as my webserver has to support HTTP 1.1
What is the best way to do this?
I'd suggest setting the timeout of the call to select to be the minimum time until the next socket will time out. Then, if select() returns 0 (timed out), close the idle sockets, and repeat. Something like this pseudocode:
timeout = default_timeout;
foreach(socket s)
{
    timeout = min(timeout, (s.last_send_or_recv_time + IDLE_TIMEOUT - now()));
}

result = select(..., timeout);
if (result == 0)
{
    foreach(socket s)
    {
        if (now() - s.last_send_or_recv_time >= IDLE_TIMEOUT)
        {
            close(s);
            remove_from_socket_list(s);
        }
    }
}
else
{
    // Handle received data, update last_send_or_recv_time, etc.
}
If you're using threads and blocking mode, you just need to set a suitable SO_RCVTIMEO on the socket; when read() returns -1 with errno == EAGAIN/EWOULDBLOCK, the timeout has happened, so you just close the socket and exit the thread.
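A minimal sketch of that blocking-mode approach (illustrative names; error checking omitted):

struct timeval tv = { .tv_sec = IDLE_TIMEOUT, .tv_usec = 0 };
setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

ssize_t n = read(client_fd, buf, sizeof buf);
if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    close(client_fd);  // timed out: close the socket and exit the thread
}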
If you're using select() it's more complicated. Adam Rosenfield's suggestion looks plausible on the surface, but in practice, if the other sockets are busy enough, the select() timeout might never fire at all, or at least not until many minutes after a socket actually went idle. If you want to enforce the timeout strictly, as you should in a server, you have to associate a last-read time with each socket and, at the bottom of the select() loop, iterate over the fd set, pick out those fds that were not ready, check their last-read time, and close the ones that have been idle too long.
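A rough sketch of that per-socket sweep, assuming a clients[] array holding an fd and a last_read timestamp per entry (names are illustrative), run at the bottom of each select() iteration:

time_t now = time(NULL);
for (int i = 0; i < nclients; ++i) {
    if (clients[i].fd == -1)
        continue;  // empty slot
    if (FD_ISSET(clients[i].fd, &read_fds)) {
        // socket was ready: read from it, then refresh its timestamp
        clients[i].last_read = now;
    } else if (now - clients[i].last_read >= IDLE_TIMEOUT) {
        close(clients[i].fd);  // idle too long
        clients[i].fd = -1;
    }
}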
As described in Beej's Guide to Network Programming, select() monitors a set of file descriptors for reading (using recv()), a set of file descriptors for writing (using send()), and a last one, which I don't know about. When the server socket receives a message from a client socket, the read_fds set is modified and select() returns from its blocking state. It is the same for sending messages to client sockets. For example:
for (;;) {
    read_fds = master; // copy it
    if (select(fdmax+1, &read_fds, NULL, NULL, NULL) == -1) {
        perror("select");
        exit(4);
    }
    // the rest is code for processing ready sockets
}
I guess the read_fds set will contain only the ready socket descriptors at this point (the others are removed), and a ready socket descriptor means either a new connection or a message sent from a connected socket. Is my understanding correct?
It seems the ready sockets must be handled one by one. When I ran it under gdb to understand the behavior, while the program was processing the ready socket (the code after select() returns), I tried to send some messages and connect to the server with some new clients. How can it recognize the new clients or the newly sent messages, even though select() is not being called?
As described in Beej's Guide to Network Programming, select() monitors a set of file descriptors for reading (using recv()), a set of file descriptors for writing (using send())
Yes
and a last one, which I don't know about.
The last one no longer has any useful meaning.
I guess the read_fds set will contain only the ready socket descriptors at this point (the others are removed), and a ready socket descriptor means either a new connection or a message sent from a connected socket. Is my understanding correct?
That's correct.
It seems the ready sockets must be handled one by one. When I ran it under gdb to understand the behavior, while the program was processing the ready socket (the code after select() returns), I tried to send some messages and connect to the server with some new clients. How can it recognize the new clients or the newly sent messages, even though select() is not being called?
Normally when you create a polling loop such as this, you'd add new sockets to the loop. That is, you'd add them to the appropriate fd_sets before your next call to select.
When the new socket becomes writable, you'd send on it.
When you're dealing with multiple sockets that may potentially block (in your case reading sockets), you need to determine which sockets have data in them waiting to be read. You can do this by calling select() and adding the sockets to your read_set.
For your listening socket, if you call accept() and there is no pending connection, then your accept will block until a new connection arrives. So you also want to select() this socket. Once you accept that client, you will want to add that to your read_set.
e.g. Pseudo-code
for (;;) {
    struct timeval tv = { timeout, 0 };
    fd_set read_set;
    FD_ZERO(&read_set);
    FD_SET(listen_sock, &read_set);
    max_fd = max(max_fd, listen_sock);

    /* add all your other client sockets to read_set */

    n = select(max_fd + 1, &read_set, NULL, NULL, &tv);
    if (n > 0) {
        if (FD_ISSET(listen_sock, &read_set)) {
            cli = accept(listen_sock);
            /* add to list of clients */
        }
        else {
            for (int i = 0; i < max_clients; i++) {
                if (FD_ISSET(clients[i], &read_set)) {
                    /* data is waiting: recv */
                    bytes = recv(clients[i], ..)
                    if (bytes <= 0) {
                        /* error or EOF; remove from client list so we don't select on it anymore */
                    }
                }
            }
        }
    }
}
Note that sends can also block if the other end is not actively reading and the send buffer is full. So if you're sending, you might want to check if the socket is "sendable" first.
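A small sketch of that check, reusing the illustrative names from the pseudocode above (data and len are hypothetical):

fd_set write_set;
FD_ZERO(&write_set);
FD_SET(clients[i], &write_set);

n = select(max_fd + 1, NULL, &write_set, NULL, &tv);
if (n > 0 && FD_ISSET(clients[i], &write_set)) {
    // the outbound buffer has room: send() will not block right now
    send(clients[i], data, len, 0);
}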
I'm currently working on a project which involves multiple clients connected to a server and waiting for data. I'm using select and monitoring the connection for incoming data. However, the client just continues to print nothing, acting as if select has discovered incoming data. Perhaps I'm attacking this wrong?
For the first piece of data the server does send, it is displayed correctly. However, the server then disconnects and the client continues to spew blank lines.
FD_ZERO(&readnet);
FD_SET(sockfd, &readnet);

while (1) {
    rv = select(socketdescrip, &readnet, NULL, NULL, &timeout);
    if (rv == -1) {
        perror("select"); // error occurred in select()
    } else if (rv == 0) {
        printf("Connection timeout! No data after 10 seconds.\n");
    } else {
        // one or both of the descriptors have data
        if (FD_ISSET(sockfd, &readnet)) {
            numbytes = recv(sockfd, buf, sizeof buf, 0);
            printf("Data Received\n");
            buf[numbytes] = '\0';
            printf("client: received '%s'\n", buf);
            sleep(10);
        }
    }
}
I think you need to check the result of recv. If it returns zero, I believe it means the server has closed the socket.
Also (depending on the implementation), you may need to pass socketdescrip+1 to select.
If I remember correctly, you need to initialise set of fds before each call to select() because select() corrupts it.
So move FD_ZERO() and FD_SET() inside the loop, just before select().
acting as if select has discovered incoming data. Perhaps I'm attacking this wrong?
In addition to what was said before, I'd like to note that select()/poll() tell you not that "data are there" but rather that the next corresponding system call will not block. That's it. As was said above, recv() doesn't block and properly returns 0, which means EOF: the connection was closed by the other side.
Though on most *nix systems, only the first call of recv() in this case would return 0; subsequent calls would return -1. When using async I/O, rigorous error checking is a must!
And personally, I would strongly suggest using poll() instead. Unlike select(), it doesn't destroy its arguments and works fine with high-numbered socket descriptors.
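For reference, a rough poll() version of the question's loop might look like this (same sockfd and buf; the 10-second timeout matches the original):

#include <poll.h>

struct pollfd pfd = { .fd = sockfd, .events = POLLIN };

while (1) {
    int rv = poll(&pfd, 1, 10000);  // timeout in milliseconds
    if (rv == -1) {
        perror("poll");
    } else if (rv == 0) {
        printf("Connection timeout! No data after 10 seconds.\n");
    } else if (pfd.revents & (POLLIN | POLLHUP)) {
        ssize_t numbytes = recv(sockfd, buf, sizeof buf - 1, 0);
        if (numbytes <= 0)
            break;  // disconnect (0) or error (-1)
        buf[numbytes] = '\0';
        printf("client: received '%s'\n", buf);
    }
}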
When the server closes the connection, it sends a packet with the FIN flag to the client side to announce that it will no longer send data. The packet is processed by the TCP/IP stack on the client side and carries no data for the application level. The application level is notified, which triggers select(), because something happened on the monitored file descriptor, and recv() returns 0 bytes because no data was sent by the server.
Is this true when talking about your code?
select(highest_file_descriptor+1, &readnet, NULL, NULL, &timeout);
In your simple example (with FD_ZERO and FD_SET moved inside the while(1) loop as qrdl said) it should look like this:
select(sockfd+1, &readnet, NULL, NULL, &timeout);
Also, please note that when recv returns 0 bytes read, it means the connection was closed: no more data! Your code is also buggy: when something bad happens in recv (it returns < 0 when this happens), you will have serious trouble, because something like buf[-1] may lead to unpredictable results. Please handle this case properly, for example as sketched below.
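One way to handle that spot defensively (a sketch; note the sizeof buf - 1, so the terminator always fits):

numbytes = recv(sockfd, buf, sizeof buf - 1, 0);
if (numbytes > 0) {
    buf[numbytes] = '\0';  // safe: numbytes <= sizeof buf - 1
    printf("client: received '%s'\n", buf);
} else if (numbytes == 0) {
    break;  // server closed the connection
} else {
    perror("recv");  // real error: don't touch buf[numbytes]!
    break;
}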
While I respect the fact that you are trying to use the low-level BSD sockets API, I must say that I find it awfully inefficient. That's why I recommend, if possible, using ACE, which is a very efficient and productive framework that has a lot of things already implemented when it comes to network programming (ACE_Reactor, for example, is something that makes it easier to do what you're trying to achieve here).