I'm making a concurrent server/client program in C using threads. Whenever a client connects, I create a new thread to handle it.
My problem is: I want to be able to close the server from the client, with the command '..' for example. When I type '..' in the client, I want the server to close immediately.
I thought about having a global variable that indicates whether the server should close or not. The problem is: when the thread is created to handle the client, the main thread goes back to accept(), and it cannot check that variable while it is blocked there. So, it will only close when a new client connects.
Any ideas on how to solve this?
Thanks!
Use select() or (e)poll() or an equivalent to wait for a client to connect BEFORE you call accept() to accept the connection. These functions let you specify a timeout, so you can stop waiting periodically to check for other conditions, like a shutdown request. On some platforms, you can even have these functions wait not only on the listening socket but also on a separate pipe that you create privately for yourself; when you want to "wake up" your waiting loop to do something, simply write a byte into that pipe, and when the loop detects that byte arriving it can act accordingly.
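As a rough sketch of that pipe trick, assuming the client-handling thread sets a flag and writes one byte when it sees '..' (wake_pipe and should_quit are invented names, not standard API):

```c
#include <signal.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int wake_pipe[2];                  /* create once with pipe(wake_pipe)    */
volatile sig_atomic_t should_quit; /* set to 1 when a client sends ".."   */

void accept_loop(int listen_fd)
{
    while (!should_quit) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        FD_SET(wake_pipe[0], &readfds);

        int maxfd = listen_fd > wake_pipe[0] ? listen_fd : wake_pipe[0];
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
            continue;                  /* interrupted: re-test the flag    */

        if (FD_ISSET(wake_pipe[0], &readfds)) {
            char c;
            read(wake_pipe[0], &c, 1); /* drain the wake-up byte           */
            continue;                  /* loop condition re-tests the flag */
        }
        if (FD_ISSET(listen_fd, &readfds)) {
            int client_fd = accept(listen_fd, NULL, NULL);
            /* hand client_fd to a new handler thread as before */
        }
    }
}
```

The handler thread that sees '..' just does should_quit = 1; write(wake_pipe[1], "x", 1); and the accept loop wakes up and exits immediately instead of waiting for the next client.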
I have a classic TCP server that accepts connections with system call accept().
In a specific situation I have n connections accepted, and so n children created.
I need to stop a connection when a generic event A occurs on the server. What is the best way to close the socket on the server and in the client?
Currently I do it this way: when event A occurs, the server sends a specific message to the client and then executes close(). The client reads the socket, and if the message is the special message, it closes too.
But if I need to do something else in the client, this approach is very bad. What can I do?
By 'n children created' I assume you mean n child processes, with each child process handling the TCP connection to one client program?
If so, one way to handle it would be to have your server's parent-process keep track of the process IDs (as returned by fork()) of the child processes it has spawned (via a list or lookup-table or similar). Then when you need a particular connection to go away, your server's parent-process can call kill(the_child_process_id, SIGTERM) or similar to get the child-process to go away ASAP. (SIGTERM asks nicely; if you want to go full-nuclear you could specify SIGKILL instead).
In any case, once the child-process on the server has received the signal and exited, the OS will make sure that the server's side of the TCP connection is closed, and the client's OS will then be notified that the TCP connection is closed, and based on that, the client (if it is coded correctly to handle remote-closed connections, i.e. to react to recv() returning 0 by closing its own TCP socket) will then do the right thing as well.
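A hypothetical sketch of that bookkeeping, assuming a fork()-per-connection server (MAX_CLIENTS, child_pids, and handle_client are invented names for illustration):

```c
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_CLIENTS 64
static pid_t child_pids[MAX_CLIENTS];   /* 0 = free slot */

void handle_client(int fd);             /* your per-connection logic (assumed) */

void serve(int listen_fd)
{
    signal(SIGCHLD, SIG_IGN);           /* let the kernel reap dead children   */
    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0)
            continue;
        pid_t pid = fork();
        if (pid == 0) {                 /* child: handle one client, then exit */
            handle_client(client_fd);
            _exit(0);
        }
        close(client_fd);               /* parent keeps only the PID           */
        for (int i = 0; i < MAX_CLIENTS; i++)
            if (child_pids[i] == 0) { child_pids[i] = pid; break; }
    }
}

void drop_connection(pid_t pid)         /* call when event A fires             */
{
    kill(pid, SIGTERM);                 /* asks nicely; SIGKILL goes nuclear   */
}
```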
To begin with, I know there are ways to handle multiple client requests by forking or threading. But I cannot understand why the server cannot accept multiple connections without forking or threading. Couldn't accept() simply accept every process that wishes to connect to it? Why can't the accept() call keep going unless a client cuts its connection?
The server does socket(), bind() and listen() in the default blocking way.
The client likewise does socket() and connect() by default.
What I think is that accept()'s return value will be for the most recent client. But in reality the server blocks until the prior client(s) cut their connection.
I wonder whether the file descriptor returned by accept() is being overwritten. If not, how does it work?
There is no overwriting; accept() creates a new connected socket and returns a new file descriptor referring to that socket - a new, distinct one each time. Of course, a server which manages all client connections without creating other threads must store all those file descriptors, e.g. in an array.
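A minimal sketch of that array-based, single-threaded approach (client_fds, nclients, and the buffer handling are illustrative):

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_CLIENTS FD_SETSIZE
int client_fds[MAX_CLIENTS];
int nclients;

void run(int listen_fd)
{
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int i = 0; i < nclients; i++) {
            FD_SET(client_fds[i], &readfds);
            if (client_fds[i] > maxfd)
                maxfd = client_fds[i];
        }
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
            continue;

        if (FD_ISSET(listen_fd, &readfds) && nclients < MAX_CLIENTS) {
            int fd = accept(listen_fd, NULL, NULL);   /* new, distinct fd */
            if (fd >= 0)
                client_fds[nclients++] = fd;
        }

        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(client_fds[i], &readfds))
                continue;
            char buf[512];
            ssize_t n = recv(client_fds[i], buf, sizeof buf, 0);
            if (n <= 0) {                        /* closed or error: drop it */
                close(client_fds[i]);
                client_fds[i--] = client_fds[--nclients]; /* swap in last fd */
            }
            /* else: process n bytes from buf */
        }
    }
}
```

Each accepted descriptor stays valid, and distinct, until the server itself close()s it.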
I have a server that is running a select() loop that sometimes continues blocking when the client closes the connection from its side. The select() loop handles all other read/write operations correctly and sets the correct file descriptor in the fd_set, leading me to believe that it is not an issue with the file descriptor setup on the server-side.
The way I planned on handling the client closing the connection was to have the select() break due to activity on the socket (closing it from the client-side), see that the fd was set for that socket, and then try to read from it - and if the read returned 0, then close the connection. However, because the select() doesn't always return when the client side closes the connection, there is no attempt to check the fd_set and subsequently try to read from the socket.
As a workaround, I implemented a "stop code" that the client writes to the server just before closing the connection, and this write causes the select() to break and the server reads the "stop code" and knows to close the socket. The only problem with this solution is the "stop code" is an arbitrary string of bytes that could potentially appear in regular traffic, as the normal data being written can contain random strings that could potentially contain the "stop code". Is there a better way to handle the client closing the connection from its end? Or is the method I described the general "best practice"?
I think my issue has something to do with OpenSSL, as the connection in question is an OpenSSL tunnel, and it is the only file descriptor in the set giving me issues.
The way I planned on handling the client closing the connection was to have the select() break due to activity on the socket (closing it from the client-side), see that the fd was set for that socket, and then try to read from it - and if the read returned 0, then close the connection. However, because the select() doesn't always return when the client side closes the connection, there is no attempt to check the fd_set and subsequently try to read from the socket.
Regardless of whether you are using SSL or not, select() can tell you when the socket is readable (has data available to read), and a graceful closure is a readable condition (a subsequent read operation reports 0 bytes read). It is only abnormal disconnects that select() can't report (unless you use the exceptfds parameter, but even that is not always guaranteed). The best way to handle abnormal disconnects is to simply use timeouts in your own code. If you don't receive data from the client for a while, just close the connection. The client will have to send data periodically, such as a small heartbeat command, if it wants to stay connected.
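A small sketch of that timeout idea (IDLE_LIMIT and last_seen are invented names; give select() a timeout of a few seconds and run this check after every round, so it fires even when no socket is ready):

```c
#include <sys/select.h>
#include <time.h>
#include <unistd.h>

#define IDLE_LIMIT 30                 /* seconds of silence = assume it died */

time_t last_seen[FD_SETSIZE];         /* update on every successful recv()   */

void check_idle(int *fds, int *nfds)
{
    time_t now = time(NULL);
    for (int i = 0; i < *nfds; i++) {
        if (now - last_seen[fds[i]] > IDLE_LIMIT) {
            close(fds[i]);                    /* abnormal disconnect         */
            fds[i--] = fds[--*nfds];          /* swap in the last descriptor */
        }
    }
}
```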
Also, when using OpenSSL, if you are using the older SSL_... API functions (SSL_new(), SSL_set_fd(), SSL_read(), SSL_write(), etc.), make sure you are NOT just blindly calling select() whenever you want; call it ONLY when OpenSSL tells you to (when an SSL read/write operation reports an SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE error). This is an area where a lot of OpenSSL newbies tend to make the same mistake. They try to use OpenSSL on top of pre-existing socket logic that waits for a readable notification before then reading data. This is the wrong way to use the SSL_... API. You are expected to ask OpenSSL to perform a read/write operation unconditionally, and then, if it needs to wait for new data to arrive or for pending data to be sent out, it will tell you so, and you can call select() accordingly before retrying the SSL read/write operation.
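A minimal sketch of that ordering, as a blocking wrapper around SSL_read() (the wrapper itself is illustrative; the SSL_* calls and error codes are real):

```c
#include <openssl/ssl.h>
#include <sys/select.h>

int read_ssl_blocking(SSL *ssl, int fd, char *buf, int len)
{
    for (;;) {
        int n = SSL_read(ssl, buf, len);       /* try the read FIRST       */
        if (n > 0)
            return n;                          /* got application data     */

        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:              /* OpenSSL needs more input */
            select(fd + 1, &fds, NULL, NULL, NULL);
            break;
        case SSL_ERROR_WANT_WRITE:             /* e.g. mid-renegotiation   */
            select(fd + 1, NULL, &fds, NULL, NULL);
            break;
        case SSL_ERROR_ZERO_RETURN:            /* clean SSL shutdown       */
            return 0;
        default:
            return -1;                         /* real error               */
        }
        /* then loop around and retry SSL_read() */
    }
}
```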
On the other hand, if you are using the BIO_... API functions (BIO_new(), BIO_read(), BIO_write(), etc.), you can take control of the underlying socket I/O and not let OpenSSL manage it for you, thus you can do whatever you want with select() (or any other socket API you want).
As a workaround, I implemented a "stop code" that the client writes to the server just before closing the connection, and this write causes the select() to break and the server reads the "stop code" and knows to close the socket.
That is a very common approach in many Internet protocols, regardless of whether SSL is used or not. It is a very distinct and explicit way for the client to say "I'm done" and both parties can then close their respective sockets.
The only problem with this solution is the "stop code" is an arbitrary string of bytes that could potentially appear in regular traffic, as the normal data being written can contain random strings that could potentially contain the "stop code".
Then either your communication protocol is not designed properly, or your code is not processing the protocol correctly. In a properly-designed and correctly-processed protocol, there will not be any such ambiguity. There needs to be a clear distinction between the various commands that your protocol defines. Your "stop code" would be one such command amongst other commands. Random data in one command should not be mistakenly treated as a different command. If you are experiencing that problem, you need to fix it.
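One common way to get that distinction is an explicit frame header, sketched here as a hypothetical wire format (CMD_DATA, CMD_STOP, and send_frame are invented for illustration):

```c
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

enum { CMD_DATA = 1, CMD_STOP = 2 };   /* made-up command codes */

/* wire format: 1-byte command, 4-byte big-endian payload length, payload */
int send_frame(int fd, uint8_t cmd, const void *payload, uint32_t len)
{
    uint8_t hdr[5];
    uint32_t be_len = htonl(len);
    hdr[0] = cmd;
    memcpy(hdr + 1, &be_len, 4);
    if (send(fd, hdr, sizeof hdr, 0) != (ssize_t)sizeof hdr)
        return -1;
    if (len > 0 && send(fd, payload, len, 0) != (ssize_t)len)
        return -1;
    return 0;
}
```

The receiver reads exactly 5 header bytes, then exactly len payload bytes; a CMD_STOP frame can then never be confused with bytes that merely happen to appear inside a CMD_DATA payload.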
Okay, I'm brand new to socket programming, and my program is not behaving like I'd expect it to. In all the examples of socket programming that I see, they use accept(), and all the code after it assumes that a connection has been made.
But my accept() is called as soon as I start the server. Is this supposed to happen? Or is the server supposed to wait for a connection before executing the rest of the program?
EDIT: Oops I forgot to mention it is a TCP connection.
I think this is what you're after.
http://www.sockets.com/winsock.htm#Accept
The main concept in Winsock programming is that you're working with either blocking or non-blocking sockets. Most of the time, if you're using blocking sockets, you can query the socket's receive set to see whether a call to a routine would block.
For starting off, UDP is easier to work with, since it's a datagram protocol, so it's easier to think in terms of discrete blocks of data being sent and received. TCP, on the other hand, is a streaming protocol.
For a server, you:
Create the socket - socket().
Bind it to an address - bind().
Mark it as listening - listen().
Enter a loop in which you accept() each incoming connection and process it.
It is not clear from your description whether you are doing all those steps.
There are multiple options for the 'process them' phase, depending on whether you plan to have a single-threaded single process handle one request before processing the next, or whether you plan to have a multi-threaded single process, with one thread accepting requests and creating other threads to do the processing (while the one thread waits for the next incoming connection), or whether you plan to have the process fork with the child processing the new request while the parent goes back to listening for the next request.
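As a sketch of the multi-threaded variant (handle_client is a placeholder for your per-connection logic):

```c
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

void handle_client(int fd);       /* your per-connection logic (assumed)     */

static void *client_thread(void *arg)
{
    int fd = *(int *)arg;
    free(arg);
    handle_client(fd);            /* should close(fd) when finished          */
    return NULL;
}

void accept_loop(int listen_fd)
{
    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;
        int *arg = malloc(sizeof *arg);
        *arg = fd;
        pthread_t tid;
        if (pthread_create(&tid, NULL, client_thread, arg) == 0)
            pthread_detach(tid);  /* no join needed; thread cleans itself up */
        else {
            free(arg);
            close(fd);
        }
    }
}
```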
You are supposed to enter your acceptance loop after you have started listening for connections. Use select() to detect when a pending client connection is ready to be accepted, then call accept() to accept it.
Is it possible for me to accept a connection and have it die without my knowing, then accept another connection on the same socket number?
I've got a thread to do protocol parsing and response creation. I've got another thread to handle all my network I/O, and one more thread to handle new incoming connection requests. That makes three threads total. Using select() in the I/O thread, I get a failure and have to search for the dead socket. I am afraid of the case where accept() might want to accept a new connection on a socket number that was previously dead.
I'd assume this can't happen until I "shutdown() || close();" the socket that may be dead on the server side. If it could happen, is the only solution to set up mutexes to halt everything while I sort out which sockets have gone bonkers?
Thanks,
Chenz
A socket descriptor won't get reused until you close it.
Assuming we're talking TCP: if the remote side closes its send side of the connection, then you'll get a recv() returning 0 bytes to tell you of this. Since TCP supports half-closed connections, you could still be able to send data to the remote side of the connection (if your application-level protocol is made that way), or you might take the fact that the remote side has closed its send side as an indication that you should do the same.
You use shutdown() to close your send side, your receive side, or both sides of the connection. You use close() to close the socket and release the handle/descriptor for reuse.
So, in answer to your question: no, you won't be able to accept another connection with the same socket descriptor until you call close() on the descriptor that you already have.
You MAY accept a connection on a new socket descriptor; but that's probably not a problem for you.
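A short sketch of that recv()-returns-0 handling (read_until_closed is an invented name):

```c
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void read_until_closed(int fd)
{
    char buf[512];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n > 0) {
            /* process n bytes from buf */
        } else if (n == 0) {          /* peer closed its send side (FIN)  */
            shutdown(fd, SHUT_WR);    /* close our send side as well      */
            close(fd);                /* release the descriptor for reuse */
            return;
        } else {
            close(fd);                /* error: just drop the connection  */
            return;
        }
    }
}
```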