print value of fds in FD_SET - c

Is there any way of printing the state of a socket in an fd_set?
Say I have this code:
int main(int argc, char *argv[]) {
    int sockfd, newfd, i;
    struct sockaddr_un sv_addr, cli_addr;
    int sv_len, cli_len;
    fd_set testmask, mask;

    if ((sockfd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) {
        perror("Error creating socket");
        exit(-1);
    }
    bzero((char *)&sv_addr, sizeof(sv_addr));
    sv_addr.sun_family = AF_UNIX;
    strcpy(sv_addr.sun_path, UNIXSTR_PATH);
    sv_len = sizeof(sv_addr.sun_family) + strlen(sv_addr.sun_path);
    unlink(UNIXSTR_PATH);
    if (bind(sockfd, (struct sockaddr *)&sv_addr, sv_len) < 0) {
        perror("Error binding socket");
        exit(-1);
    }
    listen(sockfd, 15);

    FD_ZERO(&testmask);
    FD_SET(sockfd, &testmask);
    for (;;) {
        mask = testmask;
        select(MAXSOCKS, &mask, 0, 0, 0);
        if (FD_ISSET(sockfd, &mask)) {
            cli_len = sizeof(cli_addr);
            newfd = accept(sockfd, (struct sockaddr *)&cli_addr, &cli_len);
            echo(newfd);
            close(newfd);
        }
        for (i = 0; i < MAXSOCKS; i++) {
            if (FD_ISSET(i, &mask)) {
                close(i);
                FD_CLR(i, &mask);
            }
        }
    }
    close(sockfd);
    return 0;
}
Everything is working in my program (it's an echo server: the client sends a line and the server just echoes it back).
I would like to, after the select call, print in the server terminal something like:
00011011011
That is, print the sockets that are ready to be handled.
Is there any way I could do this?
Also, what should I do at the end of the for loop? I know I have to somehow clear the fd_set. Is the way I did it (the small for loop that closes each descriptor and FD_CLRs it) correct? Or should I do it another way?
PS: Sorry for my english or any mistakes. :)

[This does not answer your question, but refers to a comment to the OP and is too long for another comment]
From man select:
nfds is the highest-numbered file descriptor in any of the three sets, plus 1.
nfds is not a constant! The man page does not read:
[...] the highest-possible-numbered file descriptor [...]
nfds has to dynamically describe the fd_sets passed to select().
int nfds = sockfd + 1;

for (;;) {
    mask = testmask;
    select(nfds, &mask, 0, 0, 0);
    if (FD_ISSET(sockfd, &mask)) {
        cli_len = sizeof(cli_addr);
        newfd = accept(sockfd, (struct sockaddr *)&cli_addr, &cli_len);
        echo(newfd);
        close(newfd);
    }
    for (i = 0; i < nfds; ++i) {
        if (FD_ISSET(i, &mask)) {
            close(i);
            FD_CLR(i, &mask);
        }
    }
}
Adjust nfds for every socket descriptor added to an fd_set passed to select().

After the select call you check sockfd. If it is set, it means a client is trying to connect to your server, and you then accept the connection.
newfd = accept(sockfd, (struct sockaddr*)&cli_addr, &cli_len);
newfd is the descriptor for the connection between the client and the server. At this point you still haven't read the client fd's (newfd's) data. After the connection is accepted, you read data from the client fd like this:
read(newfd, buffer, sizeof(buffer))
The data sent from the client is now in buffer. Then you can echo it, or write() to the client fd, which is how your code sends data back to the client.
Also, if you want to keep listening to the client(s) after accepting the connection, you have to put your client fd in the read set (mask in your code) with FD_SET(newfd, &mask).
Then you can always listen to the client(s).

Related

Use of select with multiple sockets

I would like to manage the sockets so that I can only accept connections once and then listen to the file descriptor of each socket and intercept the messages sent.
After a lot of trouble, I don't understand why my code systematically goes into the FD_ISSET() branch even though I put the file descriptor in my fd_set.
From what I've seen on other topics, it seems that you have to accept the connection systematically? Except that if I do that, my code goes into an infinite wait.
int main(int argc, char **argv) {
    int socket = 0;
    // My client 1 socket
    int pOneSocket = 0;
    // My client 2 socket
    int pTwoSocket = 0;
    struct sockaddr_in serv_addr;
    int test = 0;

    // Create the server socket to listen
    createSocket(&socket, 1337, &serv_addr);
    fd_set readfds;
    FD_ZERO(&readfds);
    // Enter the server socket into readfds
    FD_SET(socket, &readfds);
    while (1) {
        // Wait for changes on the server socket
        test = select(1024, &readfds, NULL, NULL, NULL);
        // If the socket fd is not in readfds, then accept a new connection
        if (FD_ISSET(test, &readfds) == 0)
            accept_connection(pOneSocket ? &pTwoSocket : &pOneSocket,
                              &socket, &serv_addr, &readfds);
        // Handle the connection and retrieve commands from the file descriptor
        handleConnection(&test, &pOneSocket);
    }
    return 0;
}
On my handleConnection, I have all the code related to retrieving commands:
void accept_connection(int *new_socket, int *socket, struct sockaddr_in *address, fd_set *readfds) {
    int addrlen = sizeof(*address);

    if ((*new_socket = accept(*socket, (struct sockaddr *) address, (socklen_t *) &addrlen)) < 0) {
        fprintf(stderr, "\n Accept failed \n");
        exit(84);
    }
    FD_SET(*new_socket, readfds);
}
Currently, my code works to accept a connection, but gets stuck on the first command sent from the client.
select() modifies the fd_set object(s) you pass to it, so each loop iteration needs to make a copy of them, not pass the original fd_sets you're using to track your connections. Something like this:
FD_SET(socket, &readfds);
while (1) {
    // Wait for changes on the sockets we want to read
    fd_set ready = readfds;
    test = select(1024, &ready, NULL, NULL, NULL);
    // If the server socket is ready, then accept a new connection
    if (FD_ISSET(socket, &ready))
        accept_connection(pOneSocket ? &pTwoSocket : &pOneSocket,
                          &socket, &serv_addr, &readfds);
    // ... check the other fds that have previously been accepted to see if they're ready

Unix Network Programming Clarification

I was going through the classic book Unix Network Programming, when I stumbled upon this program (Section 6.8, page 179-180)
#include "unp.h"

int
main(int argc, char **argv)
{
    int i, maxi, maxfd, listenfd, connfd, sockfd;
    int nready, client[FD_SETSIZE];
    ssize_t n;
    fd_set rset, allset;
    char buf[MAXLINE];
    socklen_t clilen;
    struct sockaddr_in cliaddr, servaddr;

    listenfd = Socket(AF_INET, SOCK_STREAM, 0);

    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(SERV_PORT);

    Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));
    Listen(listenfd, LISTENQ);

    maxfd = listenfd;               /* initialize */
    maxi = -1;                      /* index into client[] array */
    for (i = 0; i < FD_SETSIZE; i++)
        client[i] = -1;             /* -1 indicates available entry */
    FD_ZERO(&allset);
    FD_SET(listenfd, &allset);

    for ( ; ; ) {
        rset = allset;              /* structure assignment */
        nready = Select(maxfd + 1, &rset, NULL, NULL, NULL);

        if (FD_ISSET(listenfd, &rset)) {    /* new client connection */
            clilen = sizeof(cliaddr);
            connfd = Accept(listenfd, (SA *) &cliaddr, &clilen);

            for (i = 0; i < FD_SETSIZE; i++)
                if (client[i] < 0) {
                    client[i] = connfd;     /* save descriptor */
                    break;
                }
            if (i == FD_SETSIZE)
                err_quit("too many clients");

            FD_SET(connfd, &allset);        /* add new descriptor to set */
            if (connfd > maxfd)
                maxfd = connfd;             /* for select */
            if (i > maxi)
                maxi = i;                   /* max index in client[] array */
            if (--nready <= 0)
                continue;                   /* no more readable descriptors */
        }

        for (i = 0; i <= maxi; i++) {       /* check all clients for data */
            if ((sockfd = client[i]) < 0)
                continue;
            if (FD_ISSET(sockfd, &rset)) {
                if ((n = Read(sockfd, buf, MAXLINE)) == 0) {
                    /* connection closed by client */
                    Close(sockfd);
                    FD_CLR(sockfd, &allset);
                    client[i] = -1;
                } else
                    Writen(sockfd, buf, n);

                if (--nready <= 0)
                    break;                  /* no more readable descriptors */
            }
        }
    }
}
The author mentions that this program is not safe against DOS attack. Quoting from the book,
"Unfortunately, there is a problem with the server that we just showed. Consider what happens if a malicious client connects to the server, sends one byte of data (other than a newline), and then goes to sleep. The server will call read (system call), which will read the single byte of data from the client and then block in the next call to read, waiting for more data from this client. The server is then blocked by this one client, and will not service any other clients until the malicious client either sends a newline or terminates."
I am not sure if I understand this correctly. Why will the read system call be called a second time for this malicious client, since it only sent 1 byte of data, which gets reported by the first call to select? The subsequent calls to select will never have this malicious file descriptor set, as there is no activity. Am I missing something here?
My guess here is that there is a typo in the code: instead of Read, it should be some version of the Readline function mentioned in other places in the book.
Note: The code contains Read and Select (with capital R and S), which are nothing but error-handled wrappers around the read and select system calls.
Yes, it seems likely that it was intended to be Readline.
In the downloadable source code that file is tcpcliserv/tcpservselect01.c and there is a corresponding .lc file (with line number annotations) which uses Readline instead of Read, and it was Readline in the second edition of the book (source code). About the only way to make sense of the parenthetic comment "(other than a newline)" is to assume that the intended read function reads up to a newline.
Oddly, it hasn't been reported in the errata. Maybe you should do so.
I think the problem he was pointing out was that, as you noted in your NOTE, this code uses Read, which is a wrapper of read. My guess, since I'm not about to dig out my copy of the book right now, is that Read will try to call read a second time to finish receiving the data that is never coming.

Implementing poll() on a TCP server's read/write

I need this server to be able to listen for and establish new connections with clients while simultaneously writing to existing connections, i.e. asynchronous non-blocking I/O. I've been told to use poll(), but after spending an inordinate amount of time simply trying to grasp socket programming, I'm still unsure how to implement the poll() function.
int sockfd;

int main(int argc, char *argv[])
{
    int newsockfd, portno;
    socklen_t clilen;
    char buffer[256];
    struct sockaddr_in serv_addr, cli_addr;
    int n;

    if (argc < 2) {
        fprintf(stderr, "ERROR, no port provided\n");
        exit(1);
    }
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0)
        error("ERROR opening socket");
    bzero((char *) &serv_addr, sizeof(serv_addr));
    portno = atoi(argv[1]);
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = INADDR_ANY;
    serv_addr.sin_port = htons(portno);
    if (bind(sockfd, (struct sockaddr *) &serv_addr,
             sizeof(serv_addr)) < 0)
        error("ERROR on binding");
    listen(sockfd, 5);
    clilen = sizeof(cli_addr);

    while (1) {
        newsockfd = accept(sockfd,
                           (struct sockaddr *) &cli_addr,
                           &clilen);
        if (newsockfd < 0)
            error("ERROR on accept");

        // READ
        bzero(buffer, 256);
        n = read(newsockfd, buffer, 255);
        if (n < 0) error("ERROR reading from socket");
        printf("Here is the message: %s\n", buffer);

        // WRITE
        n = write(newsockfd, "I got your message", 18);
        if (n < 0) error("ERROR writing to socket");
        close(newsockfd);
    }
    return 0;
}
My understanding is that I need to build something like this:
// Set up array of file descriptors for polling
struct pollfd ufds[2];
ufds[0].fd = sockfd;
ufds[0].events = POLLIN;
ufds[1].fd = newsockfd;
ufds[1].events = POLLOUT;
and use poll(ufds, 2, 2000); inside the loop to check whether sockfd or newsockfd have any activity, in which case I use the appropriate read or write. If anybody could give me some guidance, I'd be very appreciative.
The kernel will fill in the events that occurred in the revents field of your struct pollfd array.
From the manual page:
The field revents is an output parameter, filled by the kernel with the events that actually occurred. The bits returned in revents can include any of those specified in events, or one of the values POLLERR, POLLHUP, or POLLNVAL. (These three bits are meaningless in the events field, and will be set in the revents field whenever the corresponding condition is true.)
If you want event notifications for accepted connections, then you need to either reserve space in advance or resize the struct pollfd array for every connection.
You'll need some way to differentiate the listening socket. You could store it in index zero of your array.
int i, n;

n = poll(ufds, num_fds_in_array, timeout_value);
/* errors or timeout? */
if (n < 1)
    ;

for (i = 0; i < num_fds_in_array; i++) {
    /* were there any events for this socket? */
    if (!ufds[i].revents)
        continue;
    /* is it our listening socket? */
    if (!i) {
        if (ufds[0].revents & POLLIN)
            ; /* call accept() and add the new socket to ufds */
        else
            ; /* error */
        continue;
    }
    /* is there incoming data on the socket? */
    if (ufds[i].revents & POLLIN)
        ; /* call recv() on the socket and decide what to do from there */
}
The POLLOUT flag is used to signal when sending data on the socket will not block the caller.
For non-blocking I/O, I'd use a more powerful API since it requires more bookkeeping to do reliably. See the next paragraph.
Unfortunately, there's no room for auxiliary per-connection data to store state when using poll. There are alternatives available depending on your platform, e. g. epoll for Linux, kqueue for *BSD, and a handful of options for Windows. If you want to use poll with context data, you'd have to use a data structure that can be searched using the file descriptor or array index.
Why don't you use libevent? It's totally asynchronous and non-blocking.
http://libevent.org/

send() and sendto() blocking in a file transfer program

It seems that when I use the recv() function (in a TCP file transfer program) like this:
while ((count = recv(socketConnection, buff, 100000, 0)) > 0)
    myfile.write(buff, count);
recv() just waits until the data comes and exits the loop when it is no longer receiving anything. But in a similar UDP program,
while ((n = recvfrom(sockfd, mesg, 1024, 0, (struct sockaddr *)&cliaddr, &len)) > 0)
    myfile.write(mesg, n);
recvfrom() just blocks and does not exit the loop for some reason. As far as I know, both recv() and recvfrom() are blocking, right? Then why the difference? Does it have something to do with the functions themselves, or just the nature of TCP/UDP (which I guess is not the reason)?
P.S. Please help me understand this, guys; I'm a newbie to socket programming and networking.
EDIT: full server program for both TCP and UDP
UDP server (with recvfrom() )
int i=0;
int sockfd,n;
struct sockaddr_in servaddr,cliaddr;
socklen_t len;
char mesg[1024];
sockfd=socket(AF_INET,SOCK_DGRAM,0);
bzero(&servaddr,sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr=htonl(INADDR_ANY);
servaddr.sin_port=htons(32000);
bind(sockfd,(struct sockaddr *)&servaddr,sizeof(servaddr));
ofstream myfile;
// fcntl(sockfd,F_SETFL,O_NONBLOCK);
myfile.open("2gb",ios::out);
while ((n = recvfrom(sockfd, mesg, 1024, 0, (struct sockaddr *)&cliaddr, &len)) > 0)
    myfile.write(mesg, n);
TCP (recv() ) server program
struct sockaddr_in socketInfo;
char sysHost[MAXHOSTNAME+1]; // Hostname of this computer we are running on
struct hostent *hPtr;
int socketHandle;
int portNumber = 8070;
//queue<char*> my_queue;
bzero(&socketInfo, sizeof(sockaddr_in)); // Clear structure memory
gethostname(sysHost, MAXHOSTNAME); // Get the name of this computer we are running on
if ((hPtr = gethostbyname(sysHost)) == NULL)
{
    cerr << "System hostname misconfigured." << endl;
    exit(EXIT_FAILURE);
}
if ((socketHandle = socket(AF_INET, SOCK_STREAM, 0)) < 0)
{
    close(socketHandle);
    exit(EXIT_FAILURE);
}
// std::cout<<"hi starting server";
socklen_t optlen;
int rcvbuff=262144;
optlen = sizeof(rcvbuff);
socketInfo.sin_family = AF_INET;
socketInfo.sin_addr.s_addr = htonl(INADDR_ANY);
socketInfo.sin_port = htons(portNumber); // Set port number
if (bind(socketHandle, (struct sockaddr *) &socketInfo, sizeof(socketInfo)) < 0)
{
    close(socketHandle);
    perror("bind");
    exit(EXIT_FAILURE);
}
listen(socketHandle, 1);
int socketConnection;
if ((socketConnection = accept(socketHandle, NULL, NULL)) < 0)
{
    exit(EXIT_FAILURE);
}
close(socketHandle);
time_start(boost::posix_time::microsec_clock::local_time());
int rc = 0; // Actual number of bytes read
int count=0;
char *buff;
int a=100000;
buff=new char[a];
ofstream myfile;
myfile.open("345kb.doc",ios::out|ios::app);
if (myfile.is_open())
{
    long i = 0;
    while ((count = recv(socketConnection, buff, 100000, 0)) > 0)
    {
        myfile.write(buff, count);
    }
}
recv() just waits until the whole data comes and exits the loop when it is no longer receiving any data
recv() on a TCP connection returns 0 when the sending side has closed the connection and this is the condition for your loop to terminate.
for a UDP program, the recvfrom() function just blocks and does not exit the loop for some reason
Because UDP is a connectionless protocol, there is no special return code from recvfrom() for a closed UDP connection, unless someone sends you a 0-length datagram.
recv() will end the loop because the socket was closed on the other side, so recv() returns 0 (socket gracefully closed). recvfrom(), on the other hand, never gets that signal: it does not know about closing, because it is an unconnected socket. It stays there until it receives a packet or times out, so with UDP you need some other way to tell that the communication is over.

Linux server socket - Bad file descriptor

I have a problem with a server socket under Linux. For some reason unknown to me, the server socket vanishes and I get a Bad file descriptor error in the select call that waits for an incoming connection. This problem always occurs when I close an unrelated socket connection in a different thread. This happens on an embedded Linux with a 2.6.36 kernel.
Does anyone know why this would happen? Is it normal that a server socket can simply vanish resulting in Bad file descriptor?
edit:
The other socket code implements a VNC Server and runs in a completely different thread. The only thing special in that other code is the use of setjmp/longjmp but that should not be a problem.
The code that creates the server socket is the following:
int server_socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

struct sockaddr_in saddr;
memset(&saddr, 0, sizeof(saddr));
saddr.sin_family = AF_INET;
saddr.sin_addr.s_addr = htonl(INADDR_ANY);
saddr.sin_port = htons(1234);

const int optionval = 1;
setsockopt(server_socket, SOL_SOCKET, SO_REUSEADDR, &optionval, sizeof(optionval));

if (bind(server_socket, (struct sockaddr *) &saddr, sizeof(saddr)) < 0) {
    perror("bind");
    return 0;
}
if (listen(server_socket, 1) < 0) {
    perror("listen");
    return 0;
}
I wait for an incoming connection using the code below:
static int WaitForConnection(int server_socket, struct timeval *timeout)
{
    fd_set read_fds;
    FD_ZERO(&read_fds);
    int max_sd = server_socket;
    FD_SET(server_socket, &read_fds);

    // This select will result in 'EBADF' in the error case,
    // even though the server socket was not closed with 'close'.
    int res = select(max_sd + 1, &read_fds, NULL, NULL, timeout);
    if (res > 0) {
        struct sockaddr_in caddr;
        socklen_t clen = sizeof(caddr);
        return accept(server_socket, (struct sockaddr *) &caddr, &clen);
    }
    return -1;
}
edit:
When the problem case happens, I currently simply restart the server, but I don't understand why the server socket fd should suddenly become an invalid file descriptor:
int error = 0;
socklen_t len = sizeof(error);
int retval = getsockopt(server_socket, SOL_SOCKET, SO_ERROR, &error, &len);
if (retval < 0) {
    close(server_socket);
    goto server_start;
}
Sockets (file descriptors) usually suffer from the same management issues as raw pointers in C. Whenever you close a socket, do not forget to assign -1 to the variable that holds the descriptor value:
close(socket);
socket = -1;
As you would do to C pointer
free(buffer);
buffer = NULL;
If you forget to do this, you can later close the socket twice, just as you might free() memory twice if it were a pointer.
The other issue might be related to a fact that people usually forget: file descriptors in a UNIX environment start from 0. If somewhere in the code you have
struct FooData {
    int foo;
    int socket;
    ...
}
// Either
FooData my_data_1 = {0};
// Or
FooData my_data_2;
memset(&my_data_2, 0, sizeof(my_data_2));
then in both cases my_data_1 and my_data_2 have a valid descriptor (socket) value, and later some piece of code responsible for freeing the FooData structure may blindly close() this descriptor, which happens to be your server's listening socket (0).
1- close your socket:
close(sockfd);
2- clear your socket file descriptor from select set:
FD_CLR(sockfd,&master); //opposite of FD_SET
You don't distinguish the two error cases in your code; either select or accept can fail. My guess is that you just have a timeout and that select returns 0.
print retval and errno in an else branch
investigate the return value of accept separately
ensure that errno is reset to 0 before each of the system calls
In Linux, once you create a connection and it gets closed, you have to wait for some time before making a new connection, as Linux does not release the port number as soon as you close the socket (it lingers in TIME_WAIT).
OR
If you reuse the address (SO_REUSEADDR), the bad file descriptor error won't come.
