I've been working on this for two days now and I still don't understand it. What does select() do in this code?
I know that if there is an incoming connection that can be accepted, copy.fd_array[] will contain ListenSocket, but when the while loop repeats it's still there. So how do we know if a client has disconnected? What does fd_set copy contain after the select() call?
fd_set master;
FD_ZERO(&master);
FD_SET(ListenSocket, &master);
while (1)
{
    fd_set copy = master;
    select(FD_SETSIZE, &copy, NULL, NULL, NULL);
    for (int i = 0; i < FD_SETSIZE; i++)
    {
        // If new connection
        if (FD_ISSET(ListenSocket, &copy))
        {
            printf("[+] New connection\n");
            // Accept connection
            SOCKET AcceptedClient = accept(ListenSocket, NULL, NULL);
            FD_SET(AcceptedClient, &master);
            // Send welcome message to client
            char buff[128] = "Hello Client!";
            send(AcceptedClient, buff, sizeof(buff), 0);
        }
    }
}
I've been working on this for two days now and I still don't understand it.
It's no wonder that you don't understand the code: the code in the example is nonsense.
Checking ListenSocket should be done outside the for loop, and FD_ISSET must also be checked for the connections accepted using accept().
The correct code inside the while loop would look like this:
fd_set copy = master;
select(FD_SETSIZE, &copy, NULL, NULL, NULL);
// If new connection
if (FD_ISSET(ListenSocket, &copy))
{
    ...
}
for (int i = 0; i < FD_SETSIZE; i++)
{
    // If an existing connection has data
    // or the connection has been closed
    if ((i != ListenSocket) && FD_ISSET(i, &copy))
    {
        nBytes = recv(i, buffer, maxBytes, 0);
        // Connection dropped
        if (nBytes < 1)
        {
            close(i);        // other OSs (Linux, macOS ...)
            // closesocket(i); // Windows
            FD_CLR(i, &master);
        }
        // Data received
        else
        {
            ...
        }
    }
}
I know that if there is an incoming connection that can be accepted, copy.fd_array[] will contain ListenSocket, but when the while loop repeats it's still there.
What does fd_set copy contain after the select() call?
First of all: Before calling select(), copy.fd_array[] must contain all socket handles that you are interested in. This means it must contain ListenSocket and all handles returned by accept().
master.fd_array[] contains all these handles, so fd_set copy = master; will ensure that copy.fd_array[] also contains all these handles.
select() (with NULL as last argument) will wait until at least one socket becomes "available". This means that it will wait until at least one of the following conditions is true:
a connection accepted using accept() is closed by the other side
a connection accepted using accept() has data that can be received
there is a new connection that can be accepted using accept(ListenSocket...)
As soon as one condition is fulfilled, select() removes all other handles from copy.fd_array[]:
ListenSocket is removed from copy.fd_array[] if there is no incoming connection
A handle returned by accept() is removed from that array if the connection has not been closed and no new data is available
If two events happen at the same time, copy.fd_array[] will contain more than one handle.
You use FD_ISSET() to check whether a given handle is still in the array.
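As a compact illustration of that pruning, here is a sketch assuming two sockets s1 and s2 that were previously returned by accept() (the names are made up for the example):
fd_set copy = master; // contains ListenSocket, s1 and s2
// block until at least one handle becomes "available"
int nReady = select(FD_SETSIZE, &copy, NULL, NULL, NULL);
// suppose only s1 received data: select() cleared everything
// else from copy, so now nReady == 1 and
//   FD_ISSET(ListenSocket, &copy) --> 0
//   FD_ISSET(s1, &copy)           --> nonzero
//   FD_ISSET(s2, &copy)           --> 0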
So how do we know if a client is disconnected?
When you detect FD_ISSET(i, &copy) for a value i that was returned by accept(), you must call recv() (on Linux, read() would also work):
If recv() returns 0 (or a negative value in case of an error), the other computer has dropped the connection. You must call close() (closesocket() on Windows) and remove the handle from the set; because copy is re-created from master each iteration (fd_set copy = master;), that means removing it from master.fd_array[].
If recv() returns a positive value, this is the number of bytes that have been received.
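Put together, the check for a handle i returned by accept() might look like this sketch (buffer is assumed to be a char array declared elsewhere):
if (FD_ISSET(i, &copy))
{
    int nBytes = recv(i, buffer, sizeof(buffer), 0);
    if (nBytes == 0)
    {
        // orderly shutdown: the other side closed the connection
        close(i);           // closesocket(i) on Windows
        FD_CLR(i, &master); // stop watching this handle
    }
    else if (nBytes < 0)
    {
        // error: treat like a dropped connection (or inspect errno)
        close(i);
        FD_CLR(i, &master);
    }
    else
    {
        // nBytes bytes of data are now in buffer
    }
}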
I am reading UNIX Network Programming by R. Stevens, and the following snippet of code (relevant parts only) is taken from it. It is an echo server.
fd_set rset, allset;
int nready, client[FD_SETSIZE];
...
for ( ; ; ) {
    rset = allset;
    nready = select(maxfd + 1, &rset, NULL, NULL, NULL);
    ...
    for (i = 0; i <= maxi; i++) {
        if ((sockfd = client[i]) <= 0)
            continue;
        if (FD_ISSET(sockfd, &rset)) {
            if ((n = read(sockfd, buf, MAXLINE)) == 0) {
                close(sockfd);
                FD_CLR(sockfd, &allset);
                client[i] = -1;
            } else
                writen(sockfd, buf, n);
            ...
        }
    }
}
I'll briefly describe the variables: client is an array holding the file descriptors assigned to connected clients; -1 represents a free entry. nready is the number of fds ready to be read. rset is a structure holding bits that specify which fds are ready. allset is a structure of the same type representing the fds that have to be tested by select().
The inner for loop checks for incoming data from each connected client (this is tested through the FD_ISSET macro). If there's any pending data, the server writes it back to the client. If read() returns 0, that means the client has sent a FIN to the server, so it terminates the connection with close().
Now the question: the author says that there is a problem with the server I just showed. Indeed, consider a malicious client that connects to the server, sends one byte of data (other than a newline), and then goes to sleep. The server will call read(), reading that byte, and then block in the next call to read(), waiting for more data from this client, thus denying service to all other clients.

I don't get why the server should block in the next call to read(): before the next call to read(), select() will return the ready states for each connected socket, and since our client isn't sending more data (that one byte has already been consumed), the ready bit for this client isn't set. Hence, the if block for the client isn't entered and read() isn't called. This is fine, unless select() can report a socket as ready with no data queued, which I think is impossible. So where am I wrong?
I have a client-server connection where the server reads the message sent by the client every second, but I do not want the server to keep waiting for a message for too long. I tried using the select() function, but the server continues waiting for some message to read. Could anyone tell me what I am doing wrong, please?
fd_set master;
fd_set read_fds;
FD_ZERO(&master);
FD_ZERO(&read_fds);
FD_SET(sock, &master);
while (1) {
    bzero(message, 256);
    sleep(1);
    read_fds = master;
    if (select(sock+2, &read_fds, NULL, NULL, NULL) < 0)
        error("ERROR reading");
    // if there is any data to read from the socket
    else if (FD_ISSET(sock, &read_fds)) {
        n = read(sock, buffer, 256);
        c = buffer[0];
        printf("1st char is %c", c);
    } // close else if statement
    else printf("Nothing was read");
} // close while loop
A few comments that are too long to fit in the comments...
The first parameter to select really only needs to be sock+1.
By passing NULL for the timeout, select will block indefinitely (so you might as well have just called read).
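For example, to make select wait at most two seconds instead (a sketch; the two-second value is arbitrary, and since some systems modify the timeval, it should be re-initialized before every call):
struct timeval tv;
tv.tv_sec = 2; // wait at most 2 seconds
tv.tv_usec = 0;
int n = select(sock + 1, &read_fds, NULL, NULL, &tv);
if (n == 0)
{
    // timed out: no data arrived within 2 seconds
}
else if (n < 0)
{
    // error in select
}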
When select does tell you that sock is ready for reading, there may only be one byte present, even if the other end wrote more than that. You will have to loop, reading the socket until you get the number of bytes you want. You will have to decide if you want the timeout only while waiting for the first byte, or if you also want to time out in the middle (i.e. whether the select is inside or outside the loop).
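A sketch of such a read loop, assuming blocking mode and a hypothetical byte count want that you expect to receive:
size_t got = 0;
while (got < want)
{
    ssize_t n = recv(sock, buffer + got, want - got, 0);
    if (n <= 0)
        break; // connection closed (0) or error (<0)
    got += n;  // keep reading until we have all want bytes
}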
Normally, when using select, you only want to block in select, and not in read, so the socket should be put in non-blocking mode, and you should be prepared for the read to fail with EWOULDBLOCK.
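On POSIX systems, non-blocking mode can be enabled with fcntl() (a sketch requiring <fcntl.h> and <errno.h>; on Windows you would use ioctlsocket() with FIONBIO instead):
// switch the socket to non-blocking mode
int flags = fcntl(sock, F_GETFL, 0);
fcntl(sock, F_SETFL, flags | O_NONBLOCK);

// a read that would block now fails instead of hanging
ssize_t n = read(sock, buffer, sizeof(buffer));
if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
{
    // no data available right now: go back to select()
}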
If you do time out, do you want to close the connection? If you got halfway through a message, you can't just throw away that half and keep going, because you would then be expecting the next byte to be the start of another message when it is actually the middle of one.
I'm actually having trouble with a client-client application. Everything in this question relates to a Unix network programming environment.
This is my situation:
I have a client (called C1 from now on) that calls listen() on a socket.
C1 puts the listenfd socket associated with the previous listen() call in an appropriate fd_set variable and calls select() on it.
Once it receives a new incoming connection from another client (called C2 from now on), the select() procs, the connection is successfully created with accept(), and the clients C1 and C2 start communicating.
Let's call the int returned by accept() accfd and the int returned by connect() connfd.
Once they are done, both C1 and C2 close the relative sockets with close(connfd) and close(accfd).
Once done, both clients can choose whether to send/receive data again or not. If C1 chooses to restart its send/receive routine, the fd_set is zeroed (using the FD_ZERO() macro) and listenfd is put again in the fd_set variable associated with the previously called select(). The thing is, if C2 tries to establish a new connection with C1, the second connect() doesn't make the select() proc in C1, even though the connect() call made by C2 succeeds. This doesn't happen if a third client (C3) tries to connect() to C1.
What I'm trying to understand is how I can close a connection with a client and open a new connection with the same client at a later time.
Note that I don't want the clients to keep the initially created connection after their send/receive routine is done. I want to create a new connection between the two clients.
Here's the client code; note that I omitted obvious or irrelevant parts of the code:
int nwrite, nread, servsock, listenfd, clsock[10], mastfd, maxfd, s = 0, t = 0, i = 0, j = 0;
int act, count = 0;
for (i = 0; i < 10; i++)
    clsock[i] = socket(PF_INET, SOCK_STREAM, 0); // clsock is an array of sockets. Each time C2 tries to connect to a generic C1 client, it uses clsock[count]. count is incremented every time a connection is closed.
for (i = 0; i < 10; i++)
    if (setsockopt(clsock[i], SOL_SOCKET, SO_REUSEADDR, (char *)&opt2, sizeof(opt2)) < 0)
    {
        perror("setsockopt");
        exit(-1);
    }
listenfd = socket(PF_INET, SOCK_STREAM, 0); // this is the listening socket
if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, (char *)&opt, sizeof(opt)) < 0)
{
    perror("setsockopt");
    exit(-1);
}
if (listenfd < 0)
{
    perror("Listenfd: ");
    exit(-1);
}
if (bind(listenfd, (struct sockaddr *)&cl2addr, sizeof(cl2addr)) < 0)
{
    perror("Binding: ");
    exit(-1);
}
if (listen(listenfd, 100) < 0)
{
    perror("Listening: ");
    exit(-1);
}
do
{
    do
    {
        FD_ZERO(&readfd);
        FD_SET(STDIN_FILENO, &readfd);
        FD_SET(listenfd, &readfd); // the listenfd socket is added
        FD_SET(servsock, &readfd);
        [... maxfd and the elaps structure are set ...]
        act = select(maxfd, &readfd, NULL, NULL, &elaps); // maxfd is calculated in another piece of code. I'm sure it is right.
        system("reset");
        if (FD_ISSET(listenfd, &readfd)) // when the listen procs, the loop ends for C1
        {
            [... exits from the first do-while loop ...]
        }
        if (FD_ISSET(STDIN_FILENO, &readfd)) // this is where C2 exits from the loop
        {
            s = 1;
            // do some things here
        }
        [..... some useless code here .....]
    }
    while (s != 1); // this is the condition used by C1/C2 to exit the loop
    if (t == 1) // this is what C1 runs, having t = 1
    {
        if ((mastfd = accept(listenfd, (struct sockaddr *)&cl2addr, &cllen)) < 0) // C1 accepts a generic connection
        {
            perror("Accept: ");
            exit(-1);
        }
        [... do some things ...]
        if (close(mastfd) < 0) // once done, it closes the currently connected socket
        {
            perror("Error closing mastfd");
            _exit(-1);
        }
    }
    else // this is what C2 runs
    {
        claddr.sin_addr.s_addr = inet_addr(ipbuff); // ipbuff is C1's IP address
        claddr.sin_port = htons(sprt); // sprt is C1's port
        if (connect(clsock[count], (struct sockaddr *)&claddr, sizeof(claddr)) < 0) // create a connection between C1 and C2
        {
            perror("Connect: ");
            printf("ERROR: %s", strerror(errno));
            exit(-1);
        }
        [... do some things ...]
        if (close(clsock[count]) < 0)
        {
            perror("Error closing socket!");
            _exit(-1);
        }
        count++; // increment count so a new connection can be created without re-using the same socket in the clsock array
    }
    if (menu == 1)
    {
        memset(&claddr, 0, sizeof(claddr)); // this was when my brain was about to pop off
        fflush(stdin);
        fflush(stdout);
        t = 0;
        s = 0;
        num_usr = 0;
        system("reset");
        FD_ZERO(&readfd); // this is when my brain totally popped off
        FD_CLR(listenfd, &readfd);
        FD_CLR(servsock, &readfd);
        FD_CLR(STDIN_FILENO, &readfd);
        FD_SET(listenfd, &readfd);
        FD_SET(servsock, &readfd);
        FD_SET(STDIN_FILENO, &readfd);
    }
} while (menu == 1);
Thank you all; if the question isn't well posed or written, please let me know. I'm sorry for my inexperience and for my English, I'm just getting started with network programming. Thank you so much in advance for your help.
I don't see any code calculating a new maxfd value when preparing readfd for select(). When calling select(), the first parameter must be the value of the highest descriptor in all the provided fd_sets, plus 1, and it must be recalculated each time readfd is reset. You need to do something like this:
FD_ZERO(&readfd);
FD_SET(STDIN_FILENO, &readfd);
maxfd = STDIN_FILENO;
FD_SET(listenfd, &readfd);
maxfd = max(listenfd, maxfd);
FD_SET(servsock, &readfd);
maxfd = max(servsock, maxfd);
act = select(maxfd+1, &readfd, NULL, NULL, &elaps);
Also keep in mind that on some systems, select() modifies the timeval structure to report the amount of time remaining after select() exits, so you should reset elaps every time you call select().
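In other words, something along these lines just before each select() inside the loop (a sketch; the five-second timeout is an arbitrary example):
// re-initialize elaps before every select() call,
// because select() may overwrite it
elaps.tv_sec = 5;
elaps.tv_usec = 0;
act = select(maxfd + 1, &readfd, NULL, NULL, &elaps);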
I solved my issue by removing this line at the end of the code:
memset(&claddr,0,sizeof(claddr));
Even though it solved my problem, I don't really know why this line was keeping the code from working. If someone could explain that, it would be great.
So I'm building a chat server, and now I'm trying to echo all the messages from the client. Currently, as soon as I get the message, I send it back within readData(). However, as soon as I send it, select() notifies the write_fds and sendData() is called, even though I already called send().
Most of my calls to send data would be inside readData().
Is this the right way of using select() and write_fds?
How can I notify select() that I want to send data without two calls to send()?
It seems redundant to me having to deal with two calls to send().
int readData(int j){
    // get message from the client
    int nbytes = recv(j, client_buffer, 6000, 0);
    // echo message back to the client
    if (nbytes > 0)
        send(j, client_buffer, nbytes, 0);
}

int sendData(int j){
    send(j, buf, nbytes, 0);
}
for(;;){
    read_fds = master;
    write_fds = master;
    if(select(fdmax+1, &read_fds, &write_fds, NULL, NULL) == -1){
        exit(4);
    }
    for(i = 0; i <= fdmax; i++){
        if(FD_ISSET(i, &read_fds)){
            if(i == listener){
                // handle new connections
                addrlen = sizeof remoteaddr;
                newfd = accept(listener, (struct sockaddr *)&addr, &addrlen);
                FD_SET(newfd, &master);
                if(newfd > fdmax) fdmax = newfd;
            }else{
                // we got some data from a client
                readData(i);
            }
        }
        if(FD_ISSET(i, &write_fds)){
            if(i != listener){
                // send data when notified
                sendData(i);
            }
        }
    }
}
I would not suggest calling sendData() directly inside of readData(). They should be kept separate. Have readData() return the received data to the caller, and let the caller decide what to do with it. If the caller wants to send data, it can then call sendData() as needed.
To address the select() issue, you need to create a per-socket buffer for outgoing data. And make sure the socket is running in non-blocking mode.
If sendData() is called when the buffer is empty, send() as much of the caller's data as possible; send() will return how many bytes it actually accepted. If send() reports an EWOULDBLOCK or EAGAIN error, stop sending and append any unsent data to the end of the buffer.
If sendData() is called when the buffer is not empty, just append the new data to the end of the buffer and exit without calling send() at all.
Whenever select() reports a socket is writable, send() whatever is currently cached in that socket's buffer, if anything. For each successful send(), remove the reported number of bytes from the front of the buffer. Stop sending if the buffer is exhausted or send() fails.
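A minimal sketch of that idea, assuming a hypothetical per-socket structure (the names Conn, outbuf, outlen, and flushData() are illustrative, not part of any existing API):
#include <errno.h>
#include <string.h>
#include <sys/socket.h>

typedef struct {
    int fd;
    char outbuf[8192]; // pending outgoing bytes
    size_t outlen;     // how many of them are used
} Conn;

// called by the application whenever it wants to send something
void sendData(Conn *c, const char *data, size_t len)
{
    if (c->outlen == 0)
    {
        // buffer empty: try to send immediately
        ssize_t n = send(c->fd, data, len, 0);
        if (n < 0)
        {
            if (errno != EWOULDBLOCK && errno != EAGAIN)
                return; // real error: handle/close elsewhere
            n = 0;      // nothing was accepted: buffer everything
        }
        data += n;
        len -= n;
    }
    // append whatever was not accepted (bounds check omitted)
    memcpy(c->outbuf + c->outlen, data, len);
    c->outlen += len;
}

// called whenever select() reports the socket writable
void flushData(Conn *c)
{
    while (c->outlen > 0)
    {
        ssize_t n = send(c->fd, c->outbuf, c->outlen, 0);
        if (n < 0)
            break; // EWOULDBLOCK or error: try again later
        // remove the sent bytes from the front of the buffer
        memmove(c->outbuf, c->outbuf + n, c->outlen - n);
        c->outlen -= n;
    }
}
With this in place, you would only add a socket to write_fds when its outlen is nonzero, which also stops select() from reporting writability on every loop iteration when there is nothing to send.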
Scenario: when select() detects activity on one socket, the logic below runs in my code.
pseudo code:
after select() returns, I check:
if the stdin file descriptor is ready
    do something
else if the listening file descriptor is ready
    newFDescriptor = accept(sockFDescriptor, (struct sockaddr *) &clientAddress, &clientAddressSize)
    FD_SET(newFDescriptor)
    send connected response to peer
else
    // data from a connected peer
    receive data
But every time I send something from one peer to the other, it creates a new connection with a new file descriptor, i.e. it doesn't recognize data on the already-created file descriptor for this peer.
peer 1 to peer 2 (new file descriptor created)
peer 1 to peer 2 (again new connection)
It is receiving all data on the listening file descriptor.
If the peer insists on creating a new connection there's nothing you can do about it at the server end.
"It is receiving all data on the listening file descriptor" doesn't begin to make sense. It's impossible. The listening file descriptor can't do anything except accept connections.
I agree with jedwards (+1) -- you should read Beej's Guide to get started.
In the meantime, here is some quick input that might help you avoid the error you are running into. My guess is that you are mixing up the file descriptors.
You would need to add the new file descriptors (the ones returned by the accept() call) to a list and then also use them to populate the fd_set for the next select call. The listener fd (let us call it server_fd) is only for establishing new connections and the subsequent accept() -- you should not be calling recv or send on that fd.
Here is some quick example code that stores all connections in an array; you can then set the fds as follows. For indices of the array that do not hold a valid fd, it uses -1.
FD_ZERO(&read_fd_set);
/* Set the fd_set before passing it to the select call */
for (i = 0; i < MAX_CONNECTIONS; i++) {
    if (all_connections[i] >= 0) {
        FD_SET(all_connections[i], &read_fd_set);
    }
}
ret_val = select(FD_SETSIZE, &read_fd_set, NULL, NULL, NULL);
Once the select returns, you can check if the fd with the read event is the server fd; if so, you can call accept() to get the new fd and add it to the array. Something like this:
if (FD_ISSET(server_fd, &read_fd_set)) {
    new_fd = accept(server_fd, (struct sockaddr*)&new_addr, &addrlen);
    if (new_fd >= 0) {
        printf("Accepted a new connection with fd: %d\n", new_fd);
        for (i = 0; i < MAX_CONNECTIONS; i++) {
            if (all_connections[i] < 0) {
                all_connections[i] = new_fd;
                break;
            }
        }
    }
}
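To complete the picture, the same pass can then service the already-accepted connections (a sketch; buf is assumed to be a char array declared elsewhere):
/* check the previously accepted connections for incoming data */
for (i = 0; i < MAX_CONNECTIONS; i++) {
    int fd = all_connections[i];
    if (fd < 0 || fd == server_fd || !FD_ISSET(fd, &read_fd_set))
        continue;
    ret_val = recv(fd, buf, sizeof(buf), 0);
    if (ret_val <= 0) {
        /* connection closed (0) or error (<0): free the slot */
        close(fd);
        all_connections[i] = -1;
    } else {
        /* ret_val bytes were received into buf */
    }
}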