Upgrade server executable without losing users' connections - C

I need to develop a mechanism to upgrade a running daemon in a production environment to a new version without losing clients' (TCP) connections, similar to what nginx does when you upgrade it to a new version. I need this for bug fixes or minor version releases, which may happen as often as once a day. The daemon is written in C for the Linux platform.
The process for the upgrade would be like this:
1. The new_daemon would be run from the command line, specifying the process id of the old_daemon.
2. The new_daemon would connect via a socket to the old_daemon to send/receive data and messages.
3. The new_daemon would send the old_daemon a message to stop listening on the PORT used to receive clients' connections. After confirming that the old_daemon has stopped listening, the new_daemon would start listening on PORT.
4. The new_daemon would send the old_daemon a message asking it to pass over the open file descriptors of the users' connections. Using the sendmsg() system call, the old_daemon would pass the new_daemon all resources it has allocated with the kernel: not only the connections, but also all open files (see the sketch after this list).
5. The new_daemon would send the old_daemon a message asking for all global memory variables, and the old_daemon would send them over the socket connection between the two processes.
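For step 4, here is a minimal sketch of descriptor passing with sendmsg() and SCM_RIGHTS over a Unix domain socket. The function name send_fd is illustrative, the one-byte payload is only there because at least one byte of real data must accompany the ancillary message, and error handling is abbreviated:

    #include <string.h>
    #include <sys/socket.h>

    /* Pass fd_to_pass to the peer of the connected AF_UNIX socket uds. */
    int send_fd(int uds, int fd_to_pass)
    {
        char dummy = 'F';             /* at least one byte must be sent */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                       /* properly aligned ancillary buffer */
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;

        struct msghdr msg = {
            .msg_iov        = &iov,
            .msg_iovlen     = 1,
            .msg_control    = u.buf,
            .msg_controllen = sizeof u.buf,
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;             /* we are passing fds */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(uds, &msg, 0) < 0 ? -1 : 0;
    }

The receiving side mirrors this with recvmsg(), reading the new descriptor number out of CMSG_DATA(); the kernel duplicates the descriptor into the receiver, so both daemons briefly share the open connection.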
This process is very complex, so I would like to ask whether someone can suggest a better process, or whether there is some methodology that does this more easily? The goal is to have the least possible downtime during the upgrade.
TIA

Another alternative is to have the old_daemon fork()/exec() the new_daemon and immediately stop accepting. The new_daemon would inherit the listening socket, existing connections, and open files (unless they are fcntl'd to FD_CLOEXEC) automagically; a sketch of the handover follows below.
That said, I don't think there is a clean way to hand over incomplete jobs (as I understand steps 4 and 5 try to accomplish). If possible, let the old_daemon complete them.
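A rough sketch of that handover, under the assumption that the new binary accepts a hypothetical --listen-fd argument telling it which inherited descriptor is the listening socket (the flag name and new_daemon_path are placeholders, not part of any real interface):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    void exec_new_daemon(int listen_fd, const char *new_daemon_path)
    {
        char fd_arg[16];
        snprintf(fd_arg, sizeof fd_arg, "%d", listen_fd);

        /* clear FD_CLOEXEC so the descriptor survives exec() */
        int flags = fcntl(listen_fd, F_GETFD);
        fcntl(listen_fd, F_SETFD, flags & ~FD_CLOEXEC);

        if (fork() == 0) {
            execl(new_daemon_path, new_daemon_path,
                  "--listen-fd", fd_arg, (char *)NULL);
            _exit(1);                 /* only reached if exec failed */
        }
        /* old daemon: stop calling accept(), drain in-flight work, exit */
    }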

One alternative is to write most of your daemon as a shared library and use dlopen() to link the new functions into the running process. This means some parts can't be changed, and you might have concurrency issues, but it removes the need for IPC. A sketch follows.
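As a sketch of what that could look like, assuming the swappable logic lives in a shared object exporting a hypothetical handle_request symbol (link with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*handler_fn)(int client_fd);

    /* Load (or reload) the request handler from a shared object. */
    handler_fn load_handler(const char *so_path)
    {
        /* RTLD_NOW resolves every symbol up front, so breakage
           surfaces here rather than mid-request */
        void *lib = dlopen(so_path, RTLD_NOW);
        if (!lib) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return NULL;
        }
        return (handler_fn)dlsym(lib, "handle_request");
    }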

Related

Handling multiple ptys from a single process without direct protocol I/O

I am considering writing a BBS-like program in C and thinking about exactly how the I/O architecture would work for such a program. I'm already familiar with sockets programming, more specifically with the master/remote model (not sure if there's a more official name for it), where a master process running as a daemon runs the vast majority of the application. When remote TTYs connect, they do so in a separate process that communicates with the main process via a Unix domain socket, and there's a thread in the main process for each remote TTY's I/O. All the modules and functionality run in the main process.
This works well for things like CLIs for some kind of process, but I don't think it's as well suited for a significantly richer/more interactive program, where I think it'd make much more sense for all the TTYs to be managed in the same process rather than communicating over a socket. For example, you can't run ncurses over a socket, since the termios that we care about is in that remote process, not in our main process or usable over the socket. So taking the master/remote model further, you'd need to move a lot of logic from the main program to the remote processes.
The problem I'm a little stuck on is exactly how you can have the main process handling all the TTYs without itself handling all of the network socket traffic. For example, say we want to allow telnet and SSH connections. With the master/remote model, it might look like this:
Telnet:
Inbound telnet connection
Telnet server launches /usr/sbin/remote_process (custom login shell)
remote_process (a C program, shell script, etc.) begins executing, communicating with main_process
SSH:
Inbound SSH connection
Authentication
SSH server launches /usr/sbin/remote_process (custom login shell)
remote_process (a C program, shell script, etc.) begins executing, communicating with main_process
Importantly, with the master/remote model we consider above, the telnet/SSH protocol is abstracted away from the program in question. It doesn't care if the incoming connection is from Telnet, SSH, a serial port, etc. We don't need to handle the details of these protocols ourselves.
Naively trying to apply this to the single-process model, handling all the TTYs directly, I would think that steps 3/4 somehow need to have the main process take over the terminal/PTY. main_process can't be invoked directly, though, since it's already running, and I'm not sure anything like that would even be possible, since it would mean moving the pty's master/slave pair between processes; but the goal would be to have main_process do everything remote_process was doing in the other model, directly handling the I/O from the Telnet server, SSH server, etc.
The standard way of doing this kind of thing seems to be having the main_process directly run its own listeners - that is, instead of listening for UNIX domain socket connections, directly accept Telnet/SSH traffic, etc. But then, the program is now responsible for handling the details of each individual protocol.
You can see an example of this with Synchronet BBS: https://github.com/SynchronetBBS/sbbs/tree/b35365c2e470bde58838cbb7445fe7e8c4bc1beb/src/syncterm
The BBS program itself has code to handle each supported protocol: SSH, TELNET, TELNETS, etc.
(I suppose there is a third model: have the main daemon process itself be quite minimal in what it does, and just have each individual TTY process contain the bulk of all the logic, and just use the daemon process for IPC between the TTYs... but then that gets tricky if you want to do stuff like dynamically loadable and unloadable modules that are really at a "system" level as opposed to per-TTY... so I'm not really considering this other extreme).
Is there any way to have the best of both worlds - be able to control all the different TTYs from a single process, but without having to directly implement protocol-specific handling? And if so, how does the TTY setup occur? I'm not looking for code examples here so much as a general high-level explanation/guidance of what this would likely look like and how the different components - processes, sockets, TTYs - would interact.

Can multiple threads of a multithreaded application open sockets to the same server?

I have a load test application that I want to have start multiple threads; each of those threads will open a socket to the same server and communicate with it. Is this possible, or must I fork() or run multiple instances of a single-threaded app?
[Update from comments:]
The problem I'm seeing is that the multiple calls to socket() all seem to return a value of 0. Therefore, when the threads try to communicate with the server, only one of them succeeds, while the rest wait for a response and time out.
Sure! The only time this would be a problem is if they were all acting as servers and trying to listen on the same port. It sounds like you're using them as clients, and in that regard you can have as many as you want (as long as the OS doesn't run out of file descriptors for your process).
Yes, you can create multiple client socket connections to the same server IP/Port, as long as you are not binding those client sockets to the same local IP/Port at the same time. By default, connect() does an implicit bind() to a random local port unless bind() was explicitly called beforehand.
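A minimal sketch of what both answers describe: each thread creates its own socket and connects independently. The address 127.0.0.1:12345 is made up; compile with -pthread:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        (void)arg;
        int fd = socket(AF_INET, SOCK_STREAM, 0);  /* one fd per thread */
        if (fd < 0) { perror("socket"); return NULL; }

        struct sockaddr_in srv = { .sin_family = AF_INET,
                                   .sin_port   = htons(12345) };
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == 0)
            write(fd, "ping\n", 5);
        close(fd);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

As to the update: 0 is a perfectly valid file descriptor, so socket() "returning 0" is only an error if the code treats it as one; the actual failure indicator is a return of -1.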

Forwarding an established TCP connection to another process on another port?

On a Linux machine, you have a daemon that listens on TCP port A. However, it is usually stopped, because it is rarely used and takes up a large amount of system resources. Instead, I want to do something like this:
Code an application that listens on port B and does the following as soon as a connection is established: if the daemon is stopped, start it and wait until it listens on port A. Now the difficult part: connect the client to the daemon in a completely transparent way, i.e. without the client having to reconnect on port A. Also (but this is irrelevant for this question) the application will shut down the daemon when there have been no connections for a certain amount of time.
Of course, I could have my application connect to the daemon and pipe all communication. I do not want that. I want some way to forward the established connection to the daemon and then get rid of the connected socket, so that the client ends up happily connected to the daemon. In some way, I want to give the daemon's process my already-connected socket. Is there any way to do something like this?
I'm running Debian, if that's important. I would want to code the application in C/C++, and OS-specific solutions (i.e. syscalls) are fine. Forgive me, though: I am not much of a Linux coder, so I am not very familiar with Linux system programming. If there is some obvious way to do it, I simply didn't know about it.
Of course, I am open to any kind of suggestion.
This problem has a pre-existing standard solution, generically known as inetd. It has been around for a long time, first on Unix systems and then on Linux.
The more modern implementation is xinetd; a hypothetical service entry is sketched below.
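For illustration, a hypothetical /etc/xinetd.d entry; the service name, port, and server path are placeholders. With wait = no, xinetd accepts the TCP connection itself and hands the already-connected socket to the spawned program on stdin/stdout:

    # /etc/xinetd.d/mydaemon (hypothetical)
    service mydaemon
    {
        type        = UNLISTED
        port        = 5000
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = nobody
        server      = /usr/local/bin/mydaemon
    }

Note that this gives the daemon the connected socket at startup rather than migrating a socket into an already-running process; for the latter you would need descriptor passing over a Unix domain socket (sendmsg() with SCM_RIGHTS), as in the first question above.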

Designing using fork() and TCP connection in C

I have a question regarding on how to design the following system:
My system is built of several clients listening to an environment. When an audio threshold is breached, they send their information to a server, which has a child listening on each connection. The server needs information from all the clients to make the necessary calculations.
Currently the server runs on UNIX and forks a child for each connection. The children work independently.
What I want to do is to tell the parent (in the server) that information has been sent and it's now time to process it. How should I do it?
I'm thinking of two possible ways to do it:
Using signal() in Unix to somehow tell the parent that something has happened
Converting to threads and using wait/notify functions
Signaling is preferable, but I cannot figure out how to do it efficiently, because the following can happen in my system:
If all the clients successfully send information to their children on the server, how can I tell the parent that I'm ready in an efficient way? I don't know / am uncertain how it will process them.
The server may not receive information from all clients, so the parent must wait a while for all the children, but not too long. So I'm guessing some sort of timer?
Don't use fork, and don't use signals. Use a thread pool.
What about a Unix domain socket for inter-process communication between the children and the parent?
http://en.wikipedia.org/wiki/Unix_domain_socket
As soon as a child receives data through the TCP connection, it forwards the same data to the parent process through the Unix domain socket, and the parent is notified instantly (see the sketch below).
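A minimal sketch of that arrangement, assuming the parent creates a socketpair() before each fork() and then select()s over all the parent-side ends (error handling omitted):

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Fork a child that relays everything it reads from client_fd to
       the parent over a Unix domain socket pair. The parent-side end
       is returned through *parent_end for use with select(). */
    pid_t spawn_child(int client_fd, int *parent_end)
    {
        int sp[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sp);

        pid_t pid = fork();
        if (pid == 0) {               /* child */
            close(sp[0]);
            char buf[512];
            ssize_t n;
            while ((n = read(client_fd, buf, sizeof buf)) > 0)
                write(sp[1], buf, n); /* forward to the parent */
            _exit(0);
        }
        close(sp[1]);                 /* parent keeps sp[0] */
        *parent_end = sp[0];
        return pid;
    }

Because the parent-side ends are ordinary descriptors, a select() (or poll()) call with a timeout also addresses the "wait for all the children, but not too long" requirement from the question.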

Distributed Networking Multiple Clients

I'm currently working on a distributed networking project for some networking practice. The idea is to send a file from my server to a few different clients (after breaking up the file), and the clients will find the frequency of a string and send the result back.
The problem I'm running into is how to identify each client and send data to each one.
The solution I've been working on is to identify each client by its port. The problem arises as to how I handle multiple connections and ports. I know I have to use send() to send the data over a connection once I open it, etc., but I have no idea how to do this across multiple connections (I can do this with a single client and server, but not with multiple clients).
Does anyone have any suggestions from a high level standpoint? I got one suggestion from a friend who said:
Open a socket
Listen for connections
When a connection request is received, spawn a new thread to handle the connection.
The main process will go back to step 2 to listen for new connections, while the new thread will handle all data flow with the associated client.
But I'm not really sure I understand this... I've also been referencing http://shoe.bocks.com/net/#socket
Thanks
Your friend is correct. Follow the first three steps (mentioned by him), and then you need to:
After spawning the thread, send data (read from the file) to the new socket.
Once the entire file is sent, you should disconnect and exit the thread. On the client side, you should handle the disconnect and probably exit as well.
NOTES:
Also, you can use sendfile() instead of send() if you wish (see the sketch below). You can use select() if you wish to handle all connections without spawning threads.
Refer http://beej.us/guide/bgnet/ for details.
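For reference, a minimal sketch of the sendfile() variant mentioned in the notes; it is Linux-specific, and both descriptors are assumed to be open already:

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Stream an entire regular file to a connected socket. */
    int send_whole_file(int sock_fd, int file_fd)
    {
        struct stat st;
        if (fstat(file_fd, &st) < 0)
            return -1;

        off_t off = 0;
        while (off < st.st_size) {    /* kernel copies, no user buffer */
            ssize_t n = sendfile(sock_fd, file_fd, &off, st.st_size - off);
            if (n <= 0)
                return -1;
        }
        return 0;
    }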
EDIT:
How to identify each client? Ans: This is the classical port-discovery problem, but in your case it's simple. The server should be listening on a well-known port (say 12345), and all the clients will connect to it. Once they are connected, the server has all their sockfds. You use these sockfds to send data to, and to identify, each client; a sketch follows.
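A minimal sketch of that flow, one detached thread per accepted client, with the well-known port 12345 from above and a placeholder handler (compile with -pthread):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *handle_client(void *arg)
    {
        int fd = (int)(long)arg;      /* the sockfd identifies the client */
        write(fd, "chunk of file...\n", 17);   /* placeholder payload */
        close(fd);
        return NULL;
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family      = AF_INET,
            .sin_addr.s_addr = htonl(INADDR_ANY),
            .sin_port        = htons(12345),    /* well-known port */
        };
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        for (;;) {                    /* step 2: listen for connections */
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0) continue;
            pthread_t t;              /* step 3: thread per connection */
            pthread_create(&t, NULL, handle_client, (void *)(long)cfd);
            pthread_detach(t);
        }
    }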
If you check out networkComms.net, an open source network communication library, once you have created a connection with a client you can keep track of that specific client by looking at its NetworkIdentifier tag, a GUID unique to each client.
If you will be sending large files to all of your clients, also check out the included DistributedFileSystem, which is specifically designed for that purpose.
