I want to open a port and wait for incoming connections, however I can't get select() to work. I had it working with poll() but I need select() for portability. What am I doing wrong?
Code for waiting for the connection looks like this (I need to check for interruptions every 200ms):
/* Wait for a descriptor */
int wait_for_fd(int fd){
int waitms = 200;
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = waitms * 1000;
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(fd, &rfds);
int active = 0;
while(active == 0){
active = select(fd+1, &rfds, NULL, NULL, &tv);
bail_for(active < 0, "select()");
if(pending_interrupt())
break;
}
return active;
}
And then my code to actually open a port and wait for a connection:
int open_port(int port){
// define server socket
struct sockaddr_in serv_addr;
memset(&serv_addr, 0, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(port);
//creates the listening socket
int listenfd = socket(AF_INET, SOCK_STREAM, 0);
bail_for(listenfd < 0, "socket()");
bail_for(bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr)) < 0, "bind()");
bail_for(listen(listenfd, 10) < 0, "listen()");
//each accept() is a new incoming connection
printf("Waiting for connetion on port %d...\n", port);
wait_for_fd(listenfd);
int connfd = accept(listenfd, NULL, NULL);
bail_for(connfd < 0, "accept()");
printf("Incoming connection!\n");
//do not allow additional client connections
close(listenfd);
return connfd;
}
However, wait_for_fd() never returns (because select() always returns 0), even when a client is connecting.
This must be done on every iteration:
FD_ZERO(&rfds);
FD_SET(fd, &rfds);
This is because rfds is an in/out parameter for select(): on return, it indicates which fds were actually affected.
According to the manpage of select:
On exit, the sets are modified in place to indicate which file descriptors actually changed status. Each of the three file descriptor sets may be specified as NULL if no file descriptors are to be watched for the corresponding class of events.
This means that when select() has returned and no file descriptors changed status, no file descriptors are set in rfds. Therefore you have to set them on each iteration:
while(active == 0){
FD_ZERO(&rfds);
FD_SET(fd, &rfds);
active = select(fd+1, &rfds, NULL, NULL, &tv);
bail_for(active < 0, "select()");
if(pending_interrupt())
break;
}
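One more gotcha worth noting: on Linux, select() may also modify the timeout to reflect the time not slept, so tv should be re-initialized inside the loop as well. A minimal sketch of the loop with both resets (reusing the question's bail_for() and pending_interrupt() helpers):
/* Wait for a descriptor, resetting both the fd_set and the timeout */
int wait_for_fd(int fd){
    int active = 0;
    while(active == 0){
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv;        /* select() may decrement this on Linux */
        tv.tv_sec = 0;
        tv.tv_usec = 200 * 1000;  /* 200 ms */
        active = select(fd + 1, &rfds, NULL, NULL, &tv);
        bail_for(active < 0, "select()");
        if(pending_interrupt())
            break;
    }
    return active;
}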
I would like to manage the sockets so that I accept each connection only once, then listen on each socket's file descriptor and intercept the messages sent.
After a lot of trouble, I don't understand why my code systematically falls into the FD_ISSET() branch even though I put the file descriptor in my fd_set.
From what I've seen in other threads, it seems you have to call accept() every time? Except that if I do that, my code goes into an infinite wait.
int main(int argc, char **argv) {
int socket = 0;
// My client 1 socket
int pOneSocket = 0;
// My client 2 socket
int pTwoSocket = 0;
struct sockaddr_in serv_addr;
int test = 0;
// Create the server socket to listen
createSocket(&socket,1337, &serv_addr);
fd_set readfds;
FD_ZERO(&readfds);
//Add the server socket to readfds
FD_SET(socket, &readfds);
while (1) {
// Wait for changes on the server socket
test = select(1024, &readfds, NULL, NULL, NULL);
// If the socket fd is not in readfds, then accept a new connection
if (FD_ISSET(test, &readfds) == 0)
accept_connection(pOneSocket ? &pTwoSocket : &pOneSocket,
&socket, &serv_addr, &readfds);
// Handle the connection and retrieve commands from the file descriptor
handleConnection(&test, &pOneSocket);
}
return 0;
}
In my handleConnection, I have all the code related to retrieving commands:
void accept_connection(int *new_socket, int *socket, struct sockaddr_in *address, fd_set *readfds) {
int addrlen = sizeof(*address);
if((*new_socket = accept(*socket, (struct sockaddr *) address, (socklen_t *) &addrlen)) < 0) {
fprintf(stderr, "\n Accept failed \n");
exit(84);
}
FD_SET(*new_socket, readfds);
}
Currently, my code works to accept a connection, but gets stuck on the first command sent from the client.
select() modifies the fd_set object(s) you pass to it, so each loop iteration needs to make a copy of them, not pass the original fd_sets you're using to track your connections. Something like this:
FD_SET(socket, &readfds);
while (1) {
// Wait for changes on the sockets we want to read
fd_set ready = readfds;
test = select(1024, &ready, NULL, NULL, NULL);
// If the socket fd is ready, then accept a new connection
if (FD_ISSET(socket, &ready))
accept_connection(pOneSocket ? &pTwoSocket : &pOneSocket,
&socket, &serv_addr, &readfds);
... check the other fds that have previously been accepted to see if they're ready
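The elided part might look roughly like this (a sketch reusing the question's pOneSocket variable; the second client is handled the same way):
char buf[256];
if (pOneSocket && FD_ISSET(pOneSocket, &ready)) {
    ssize_t n = recv(pOneSocket, buf, sizeof(buf), 0);
    if (n <= 0) {                      // client closed or error
        close(pOneSocket);
        FD_CLR(pOneSocket, &readfds);  // stop watching this descriptor
        pOneSocket = 0;
    } else {
        // process n bytes from client 1
    }
}
Note that FD_ISSET is tested against the copy (ready) that select() modified, while FD_CLR updates the master set (readfds) that tracks the connections.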
A server-side UDP socket was created without the O_NONBLOCK flag. Then, in a while loop, the select() call returns (no error) and the socket fd tests true with FD_ISSET. However, when I subsequently read from the socket using recvmsg(), the call blocks.
The simplified code is as follows:
int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
struct sockaddr_in sock;
sock.sin_family = AF_INET;
sock.sin_addr.s_addr = <some IP>;
sock.sin_port = htons(<some port number>);
int rc = bind(fd, (struct sockaddr *)&sock, sizeof(sock));
// no error
while (1) {
fd_set rset; // read
FD_ZERO(&rset);
FD_SET(fd, &rset);
rc = select(fd + 1, &rset, NULL, NULL, NULL); // no timeout
if (rc <= 0) {
// handles error or zero fd
continue;
}
if (FD_ISSET(fd, &rset)) {
struct msghdr msg;
// set up msg ...
ret = recvmsg(fd, &msg, 0); // <------- blocks here
// check ret
}
}
What are some of the conditions that the UDP socket is readable but reading it would block?
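The Linux select(2) man page's BUGS section describes exactly this case: a datagram arrives and wakes select(), but is then discarded (for example, because of a bad UDP checksum), so a subsequent blocking read hangs until the next datagram. A defensive fix is to make the read itself non-blocking, for example with MSG_DONTWAIT (a sketch of the recvmsg() call inside the question's loop):
ret = recvmsg(fd, &msg, MSG_DONTWAIT);  // never block, even on a spurious wakeup
if (ret < 0) {
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        continue;                       // false readiness: go back to select()
    // handle a real error here
}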
I want to take a message from one client and send it to the other, with a server in the middle.
I use the select() function to make several simultaneous connections to the server possible, but here is the problem:
I store the socket descriptor sock in Queue[pq] using dup(), and that works: sock = 4 and Queue[0] = 5. Then I take the other client's socket descriptor and store it in Queue as well. For client 2 the values are Queue[0] = 5 and Queue[1] = 7, but for the first client it is still Queue[0] = 5 and Queue[1] = 0. That means the code for the first client has no access to the second client's socket descriptor, so I can't forward anything to the other client with send(). Likewise, client 3 has the first two clients' socket descriptors, but the first two clients don't have the third one's.
I think that's because select() uses a different memory address for the values for each client, just like sock, which has a different value for each connection.
How can I solve this problem? How can the clients access each other's socket descriptors?
the code:
Global Values:
int tcpfd, udpfd, Sock, nready, maxfdp1;
int Queue[64];
int pq = -1;
int max(int x, int y) {
if (x > y)
return x;
else
return y;
}
int main(){
pid_t childpid;
fd_set rset;
ssize_t n;
socklen_t len;
const int on = 1;
struct sockaddr_in cliaddr, servaddr;
void sig_chld(int);
/* create listening TCP socket */
tcpfd = socket(AF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = inet_addr("192.168.1.5");
servaddr.sin_port = htons(PORT);
// binding server addr structure to tcpfd
bind(tcpfd, (struct sockaddr*)&servaddr, sizeof(servaddr));
listen(tcpfd, 10);
// clear the descriptor set
// get maxfd
maxfdp1 = max(tcpfd, udpfd) + 1;
for (;;) {
// set tcpfd and udpfd in readset
FD_ZERO(&rset);
FD_SET(tcpfd, &rset);
// select the ready descriptor
nready = select(maxfdp1, &rset, NULL, NULL, NULL);
// if tcp socket is readable then handle
// it by accepting the connection
if (FD_ISSET(tcpfd, &rset)){
len = sizeof(cliaddr);
Sock = accept(tcpfd, (struct sockaddr*)&cliaddr, &len);
pq++;
Queue[pq] = dup(Sock);
printf("%d\n",Sock);
if ((childpid = fork()) == 0) {
Access_Request();
while(1) { //rest of the code
}
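For what it's worth, the usual explanation here is not select() itself but fork(): each child gets a copy of Queue as it existed at fork time, so descriptors accepted later in the parent never appear in an already-running child. A common alternative is to drop fork() and multiplex all clients in a single select() loop, where one process owns every descriptor and can forward freely. A minimal sketch (hypothetical clients array, not the asker's Queue; error handling omitted):
int clients[64];
int nclients = 0;
for (;;) {
    FD_ZERO(&rset);
    FD_SET(tcpfd, &rset);
    int maxfd = tcpfd;
    for (int i = 0; i < nclients; i++) {      // watch every accepted client
        FD_SET(clients[i], &rset);
        if (clients[i] > maxfd)
            maxfd = clients[i];
    }
    select(maxfd + 1, &rset, NULL, NULL, NULL);
    if (FD_ISSET(tcpfd, &rset))               // new incoming connection
        clients[nclients++] = accept(tcpfd, NULL, NULL);
    for (int i = 0; i < nclients; i++) {
        char buf[512];
        if (!FD_ISSET(clients[i], &rset))
            continue;
        ssize_t n = recv(clients[i], buf, sizeof(buf), 0);
        if (n <= 0)
            continue;                         // closed/error handling omitted
        for (int j = 0; j < nclients; j++)    // forward to every other client
            if (j != i)
                send(clients[j], buf, n, 0);
    }
}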
I'm writing a simple socket server/client app.
I ran into an interesting problem. In my server code I call accept() on a non-blocking socket like this:
while ((res = accept(m_sd, NULL, 0)) >= 0) { // There are new clients
... // Saving res as fd etc
}
Everything works perfectly: when there is a client, accept returns a valid file descriptor. However, when the first client disconnects and a second client connects, accept returns 0, which is a valid FD, yet all operations on this descriptor fail. The same happens for the next clients: accept keeps returning 0. After a random number of clients, accept returns a "valid" (non-zero) descriptor, and then the pattern repeats.
Note: when there are no clients, accept returns -1 as expected with errno EAGAIN, which is completely fine. When accept returns zero, errno is not set.
What could cause such a weird behavior?
Here's how I create server socket:
struct sockaddr_in serv_addr;
m_sd = socket(AF_INET, SOCK_STREAM, 0);
if (m_sd < 0){}
//Handle error
bzero((char *)&serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port = htons(port);
int optval = 1;
setsockopt(m_sd, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof optval);
if (bind(m_sd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
// Handle error
}
fcntl(m_sd, F_SETFL, O_NDELAY); // Make socket non-blocking
listen(m_sd, 50);
And here's how I create client:
int rc;
struct sockaddr_in serveraddr;
struct hostent *hostp;
m_sd = socket(AF_INET, SOCK_STREAM, 0);
if (m_sd < 0)
// Handle error
memset(&serveraddr, 0, sizeof(struct sockaddr_in));
serveraddr.sin_family = AF_INET;
serveraddr.sin_port = htons(port);
hostp = gethostbyname(hostname.c_str());
if (hostp == NULL)
// Handle error
memcpy(&serveraddr.sin_addr, hostp->h_addr, sizeof(serveraddr.sin_addr));
// connect to serveraddr
rc = connect(m_sd, (struct sockaddr*)&serveraddr, sizeof(serveraddr));
if (rc < 0)
//Handle error
//set to nonblocking
fcntl(m_sd, F_SETFL, fcntl(m_sd, F_GETFL, 0) | O_NONBLOCK);
This is the code where I wait for new data from any client:
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = std::chrono::duration_cast<std::chrono::microseconds>(timeout).count();
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(m_sd, &rfds);
int end = m_sd;
for (const auto& s : m_clients) {
end = std::max(end, s.second.m_sd);
FD_SET(s.second.m_sd, &rfds);
}
int retval = select(end + 1, &rfds, NULL, NULL, &tv);
if (retval == -1) {
// Error handling
}
return retval > 0; // There is pending data from client
Problem solved! I was accidentally closing fd 0 in my code, which caused this weird behaviour. Now everything works. Thanks for helping - you've shown me the right way.
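That matches the symptom: POSIX guarantees that accept() (like open() and socket()) returns the lowest-numbered free descriptor, so after close(0) the next connection legitimately becomes fd 0, and any code treating 0 as invalid breaks. If the stray close() is hard to track down, one defensive trick (a sketch) is to keep fd 0 permanently occupied at startup:
#include <fcntl.h>
#include <unistd.h>
/* If fd 0 is free, this open() grabs it; otherwise drop the extra fd. */
int devnull = open("/dev/null", O_RDWR);
if (devnull > 0)
    close(devnull);   /* fd 0 was still in use, nothing to reserve */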
I want to implement a non-blocking socket with the select function on the client side, but it doesn't work as expected. In the code below it never runs into the else branch: rv is always 1, and when nothing is on the socket the application stalls for a while and continues when another message arrives on the socket. I don't want that behavior; I want the client to send a message back to the server when there is nothing on the socket to recvfrom.
fd_set readfds;
fcntl(sd, F_SETFL, O_NONBLOCK);
while (1) {
FD_ZERO(&readfds);
FD_SET(sd, &readfds);
rv = select(sd + 1, &readfds, NULL, NULL, NULL);
if(rv == 1){
nbytes = recvfrom(sd, buf, RW_SIZE, 0, (struct sockaddr *) &srv_addr, &addrlen);
} else {
printf("I'm never here so I can't send message back to the server!\n");
}
}
with struct timeval:
fd_set readfds;
fcntl(sd, F_SETFL, O_NONBLOCK);
struct timeval tv;
while (1) {
FD_ZERO(&readfds);
FD_SET(sd, &readfds);
tv.tv_sec = 0;
tv.tv_usec = 0;
rv = select(sd + 1, &readfds, NULL, NULL, &tv);
if(rv == 1){
nbytes = recvfrom(sd, buf, RW_SIZE, 0, (struct sockaddr *) &srv_addr, &addrlen);
} else {
printf("I'm always here like now ! \n");
}
}
You set the timeout (the last parameter of select) to NULL, which means it will only return once data are available on the socket (or a signal interrupts it). You need to pass a timeout saying how long it should wait. The timeout may be 0 if you don't want to wait at all, but a timeout of 0 means passing a struct timeval* with tv_sec = 0 and tv_usec = 0, not a NULL struct timeval* like you did.
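In practice you usually want something in between: a bounded wait rather than blocking forever (NULL) or pure busy-polling (0/0). A sketch with an assumed 100 ms timeout, reusing the question's variables (tv is reset each iteration because select() may modify it):
fd_set readfds;
struct timeval tv;
while (1) {
    FD_ZERO(&readfds);
    FD_SET(sd, &readfds);
    tv.tv_sec = 0;
    tv.tv_usec = 100 * 1000;   /* wait at most 100 ms */
    rv = select(sd + 1, &readfds, NULL, NULL, &tv);
    if (rv == 1) {
        nbytes = recvfrom(sd, buf, RW_SIZE, 0,
                          (struct sockaddr *) &srv_addr, &addrlen);
    } else if (rv == 0) {
        /* timeout: nothing to read, send the message back to the server here */
    }
}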