Regarding signals in C

I understand that with signals we can deliver interrupts to a running C program and direct it to behave according to the handlers assigned. When Ctrl+C is pressed, SIGINT is delivered.
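For reference, a minimal sketch of what installing such a handler looks like in C - the handler only sets a flag, since little else is async-signal-safe:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Set from the handler; only async-signal-safe work belongs in a handler. */
    static volatile sig_atomic_t got_sigint = 0;

    static void on_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);   /* install the handler for Ctrl+C */

        while (!got_sigint)
            pause();                    /* sleep until a signal arrives */
        printf("SIGINT received, exiting cleanly\n");
        return 0;
    }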
Currently I am running a setup in which I have two systems. Both have servers (discovery servers) capable of multicast DNS (mDNS). I have made use of signals like SIGHUP and SIGKILL.
I am trying to understand what SIGKILL and SIGHUP actually do here.
In one of the systems, I have an extra server which registers with the local discovery server. This server is advertised throughout the network via mDNS.
Now, through SIGHUP, I can see that closing the server's terminal is detected and the corresponding handler is executed.
But when I shut the system down, both the server and the discovery server of that system should go down. That is detected only inconsistently (not always) through SIGHUP. I tried with SIGKILL and the response is still the same. It is very unclear why this is happening.
Is it because mDNS uses UDP, and UDP is unreliable?
E.g.: Consider A and B as two systems. Both A and B have discovery servers running (capable of mDNS, meaning they can advertise server records throughout the network), each containing the list of servers running on its own system. An extra server (server E) running on A registers with the discovery server of A. Now the registered server records are also advertised. Because of mDNS, the discovery server on B also updates its cache with all the advertisements from system A. B can be seen as a client that runs a particular API to get the list of servers running on the network. All of this works fine.
If I close the terminal of the extra server on purpose, the SIGHUP handler works smoothly. I can see from the logs of both discovery servers that the extra server has been removed.
Now if I shut down system A unexpectedly, the terminals running the server applications should close, which the SIGHUP handler should catch. What I observe from the log of system B is that sometimes the servers are removed cleanly, sometimes only partially.
It is not clear why this happens.
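For context, the handler setup described above is roughly like the sketch below; deregister_from_discovery() is only a placeholder name for whatever the discovery API actually provides, and note that SIGKILL cannot be caught at all:

    #include <signal.h>

    /* Placeholder: however the discovery API removes our advertised record. */
    extern void deregister_from_discovery(void);

    static volatile sig_atomic_t shutting_down = 0;

    static void on_hup_or_term(int signo)
    {
        (void)signo;
        shutting_down = 1;               /* defer the real work to the main loop */
    }

    static void install_handlers(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_hup_or_term;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGHUP, &sa, NULL);    /* controlling terminal went away */
        sigaction(SIGTERM, &sa, NULL);   /* polite shutdown request */
        /* SIGKILL cannot be caught, blocked, or ignored - no handler possible. */
    }

    /* main loop: if (shutting_down) { deregister_from_discovery(); _exit(0); } */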

Related

Handling multiple ptys from a single process without direct protocol I/O

I am considering writing a BBS-like program in C and thinking about exactly how the I/O architecture would work with such a program. I'm familiar with sockets programming already, more specifically the master/remote model (not sure if there's a more official name for it) where a master process running as a daemon runs the vast majority of the application in a main process. When remote TTYs connect, they do so in a separate process that communicates with the main process via a Unix domain socket, and there's a thread on the main process for each remote TTY's I/O. All the modules and functionality are running in the main process.
This works well for things like CLIs for some kind of process, but I don't think it's as well suited for a significantly richer/more interactive program, where I think it'd make much more sense for all the TTYs to be managed in the same process rather than communicating over a socket. For example, you can't run ncurses over a socket, since the termios that we care about is in that remote process, not in our main process or usable over the socket. So taking the master/remote model further, you'd need to move a lot of logic from the main program to the remote processes.
The problem I'm a little stuck on is exactly how you can have the main process handling all the TTYs without itself handling all of the network socket traffic. For example, say we want to allow telnet and SSH connections. With the master/remote model, it might look like this:
Telnet:
Inbound telnet connection
Telnet server launches /usr/sbin/remote_process (custom login shell)
remote_process (a C program, shell script, etc.) begins executing, communicating with main_process
SSH:
Inbound SSH connection
Authentication
SSH server launches /usr/sbin/remote_process (custom login shell)
remote_process (a C program, shell script, etc.) begins executing, communicating with main_process
Importantly, with the master/remote model we consider above, the telnet/SSH protocol is abstracted away from the program in question. It doesn't care if the incoming connection is from Telnet, SSH, a serial port, etc. We don't need to handle the details of these protocols ourselves.
Naively trying to apply this to the single-process model, handling all the TTYs directly, I would think that step 3/4 somehow needs to have the main process take over the terminal/PTY. main_process can't be launched directly there, though, since it's already running, and I'm not sure anything like that is possible, since it would mean moving the pty master/slave between processes. But the goal would be to have main_process do everything remote_process was doing in the other model, directly handling the I/O from the Telnet server, SSH server, etc.
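(One Unix mechanism that can move an open terminal between processes is file-descriptor passing over a Unix domain socket with SCM_RIGHTS. The sketch below assumes remote_process hands its pty fd - e.g. fd 0 - to main_process over the existing Unix socket; main_process would recvmsg() it and could then read/write and tcsetattr() the terminal itself, while sshd/telnetd keep speaking the wire protocol.)

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Sketch: hand an open fd (e.g. the pty on stdin) to another process
     * over a connected AF_UNIX socket using SCM_RIGHTS ancillary data. */
    static int send_fd(int unix_sock, int fd_to_pass)
    {
        struct msghdr msg = {0};
        struct iovec iov;
        char byte = 'F';                        /* must carry at least one byte */
        union {
            char buf[CMSG_SPACE(sizeof(int))];  /* properly aligned cmsg buffer */
            struct cmsghdr align;
        } u;
        struct cmsghdr *cmsg;

        iov.iov_base = &byte;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;           /* "this ancillary data is fds" */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(unix_sock, &msg, 0) == 1 ? 0 : -1;
    }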
The standard way of doing this kind of thing seems to be having the main_process directly run its own listeners - that is, instead of listening for UNIX domain socket connections, directly accept Telnet/SSH traffic, etc. But then, the program is now responsible for handling the details of each individual protocol.
You can see an example of this with SynchronetBBS: https://github.com/SynchronetBBS/sbbs/tree/b35365c2e470bde58838cbb7445fe7e8c4bc1beb/src/syncterm
The BBS program itself has code to handle each supported protocol: SSH, TELNET, TELNETS, etc.
(I suppose there is a third model: have the main daemon process itself be quite minimal in what it does, and just have each individual TTY process contain the bulk of all the logic, and just use the daemon process for IPC between the TTYs... but then that gets tricky if you want to do stuff like dynamically loadable and unloadable modules that are really at a "system" level as opposed to per-TTY... so I'm not really considering this other extreme).
Is there any way to have the best of both worlds - be able to control all the different TTYs from a single process, but without having to directly implement protocol-specific handling? And if so, how does the TTY setup occur? I'm not looking for code examples here so much as a general high-level explanation/guidance of what this would likely look like and how the different components - processes, sockets, TTYs - would interact.

VxWorks: initiating connect() fails at startup, but succeeds later

I'm using VxWorks 5.4 and attempting to connect via TCP to a server that I will be sending logs to. For some reason, at boot the connection attempt either fails or takes up to 6 seconds, and it blocks the task it was made in, which obviously is a big no-no.
I checked whether the problem is on the server side by writing a simple C program on Windows that connects to that server, and it connects in no time at all (milliseconds).
I have "solved" the problem by making a task that attempts connectWithTimeout() every 1-2 seconds, and it does work (the connection is established after around 2 failures, in around 20 ms), but I don't really like this approach. I would rather initiate the actual connection once whatever I'm missing is up and running, instead of checking whether I can connect every time.
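For reference, the retry workaround is roughly the following (a generic BSD-sockets sketch, not VxWorks-specific; in the real task it is connectWithTimeout() plus taskDelay() rather than connect() plus sleep()):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Keep retrying the connection until it succeeds or we give up. */
    static int connect_with_retry(const struct sockaddr_in *srv, int max_tries)
    {
        for (int i = 0; i < max_tries; i++) {
            int sock = socket(AF_INET, SOCK_STREAM, 0);
            if (sock < 0)
                return -1;
            if (connect(sock, (const struct sockaddr *)srv, sizeof(*srv)) == 0)
                return sock;            /* connected */
            close(sock);
            sleep(2);                   /* wait before the next attempt */
        }
        return -1;
    }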
After investigating what the issue could be, it turned out the problem was how the session between my system and the server was being closed.
When you have a client running in some app on Windows (or whatever other system) and you shut it down, it goes through steps that close the session properly.
That is not the case on my system, where to shut it down I essentially unplug the wire - so my system never goes through a shutdown process that properly closes the session.
After the system is up again, the connect cannot complete because my system tries to establish the same session as the "dead" one, which the server still thinks is running.
Solving the problem was easy from the server side: just add keepalive functionality - if the client doesn't respond within a period you decide on, close the session.
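A minimal sketch of what that keepalive looks like on the server's accepted socket (POSIX-style; the idle/interval tuning options such as TCP_KEEPIDLE vary by OS):

    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_KEEPIDLE on Linux; availability varies */
    #include <sys/socket.h>

    /* Enable TCP keepalive on an accepted connection so a client that vanished
     * without closing the session is eventually detected and dropped. */
    static int enable_keepalive(int sock)
    {
        int on = 1;
        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
    #ifdef TCP_KEEPIDLE
        int idle = 30;   /* seconds of idle time before probes start (Linux) */
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    #endif
        return 0;
    }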

Running two socket applications within one program

I have two C/C++ socket programs, say server and client, which communicate with each other through read and write. The entire flow (communication, read, write) works fine when I run the two programs in two separate terminals on localhost. To avoid manually starting the client program, I use system(exec_cmd_to_run_client_program) in my server program. However, doing so doesn't give me the same result as running them in two separate terminals. I do see server and client running in the job monitor, but the communication between them never seems to happen. What could be the problem?
I also tried using the SSH library libssh in the server program to open a new SSH session and send an execution command to run the client program. Again I see the same result as with the system() call: both programs show up in the job monitor but the communication never happens. Did I miss something?
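One thing worth checking: system() blocks until the launched command exits, so if the server calls it before entering its accept/read loop, neither side can make progress. A common alternative is to fork() and exec() the client instead - a sketch, where "./client" is a placeholder path:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Launch the client without blocking the server. */
    static pid_t launch_client(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: become the client program ("./client" is a placeholder) */
            execl("./client", "client", (char *)NULL);
            perror("execl");             /* only reached if exec failed */
            _exit(127);
        }
        return pid;                      /* parent: keep serving; waitpid() later */
    }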

Why does my Perl TCP server script hang with many TCP connections?

I've got a strange issue with a server accepting TCP connections. Even though there are normally some processes waiting, at some volume of connections it hangs.
Long version:
The server is written in Perl and binds a $srv socket with the reuse flag and listen == 5. Afterwards, it forks into 10 processes with a loop of $clt=$srv->accept(); do_processing($clt); $clt->shutdown(2);
The client written in C is also very simple - it sends some lines, then receives all lines available and does a shutdown(sockfd, 2); There's nothing async going on and at the end both send and receive queues are empty (as reported by netstat).
Connections last only ~20ms. All clients behave the same way, are the same implementation, etc. Now let's say I'm accepting X connections from client 1 and another X from client 2. Processes still report that they're idle all the time. If I add another X connections from client 3, suddenly the server processes start hanging just after accepting. The first blocking thing they do after accept(); is while (<$clt>) ... - but they don't get any data (on the first try already). Suddenly all 10 processes are in this state and do not stop waiting. On strace, the server processes seem to hang on read(), which makes sense.
There are loads of connections in TIME_WAIT state belonging to that server (~100 when the problem starts to manifest), but this might be a red herring.
What could be happening here?
After some more analysis: It turned out that the client was at fault, not closing previous connections properly before trying the next one. The servers at the beginning of the load-balancing list were left with stale connections.
This probably isn't the solution to your problem, but it might solve a problem you'll experience in the future: don't forget to close() the sockets when you're done! shutdown() will disconnect the stream, but it'll still eat a file descriptor.
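On the C client side the teardown would look roughly like this (sketch):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Finish a client connection: stop both directions, then release the fd. */
    static void finish_connection(int sockfd)
    {
        shutdown(sockfd, SHUT_RDWR);   /* same as shutdown(sockfd, 2) */
        close(sockfd);                 /* without this, the descriptor leaks */
    }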
Since you said strace shows processes stuck in read(), your problem seems to be that the client isn't sending the data you expect it to be sending. You should either fix your client, or add an alarm() to your server processes so that they can survive dead clients.
Does it surge and then pause for a long time (circa two minutes or so) and then surge again? If so, you may not have your system's max-open-files limit set high enough.

Cleanest way to stop a process on Win32?

While implementing an applicative server and its client-side libraries in C++, I am having trouble finding a clean and reliable way to stop client processes on server shutdown on Windows.
Assuming the server and its clients run under the same user, the requirements are:
the solution should work in the following cases:
clients may each feature either a console or a GUI.
user may be unprivileged.
clients may be or become unresponsive (infinite loop, deadlock).
clients may or may not be children of the server (direct or indirect).
unless prevented by a client-side defect, clients shall be allowed the opportunity to exit cleanly (free their resources, sync some data to disk...) and some reasonable time to do so.
all client return codes shall be made available (if possible) to the server during the shutdown procedure.
server shall wait until all clients are gone.
As of this edit, the majority of the answers below advocate the use of a shared memory (or another IPC mechanism) between the server and its clients to convey shutdown orders and client status. These solutions would work, but require that clients successfully initialize the library.
What I did not say is that the server is also used to start the clients and, in some cases, other programs/scripts which don't use the client library at all. A solution that does not rely on graceful communication between server and clients would be nicer (if possible).
Some time ago, I stumbled upon a C snippet (in the MSDN I believe) that did the following:
start a thread via CreateRemoteThread in the process to shutdown.
had that thread directly call ExitProcess.
Unfortunately, now that I'm looking for it, I'm unable to find it, and the search results seem to imply that this trick does not work anymore on Vista. Any expert input on this?
If you use a thread, a simple solution is to use a named system event: the thread sleeps on the event waiting for it to be signaled, and the controlling application signals the event when it wants the client applications to quit.
For a UI application, the thread can post a message to the main window (WM_CLOSE or WM_QUIT, I forget which); in a console application it can issue a Ctrl-C, or, if the main console code loops, that loop can check some exit condition set by the thread.
Either way, rather than finding the client applications and telling them to quit, use the OS to signal that they should quit. The sleeping thread will use virtually no CPU, provided it sleeps with WaitForSingleObject.
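A rough sketch of that watcher thread in Win32 C; the event name "Local\MyAppShutdown" and the window handle are placeholders:

    #include <windows.h>

    /* Client-side watcher thread: sleep on a named event until the controlling
     * application signals it, then ask our own main window to close. */
    static DWORD WINAPI ShutdownWatcher(LPVOID param)
    {
        HWND hMainWnd = (HWND)param;
        HANDLE hEvt = CreateEventA(NULL, TRUE, FALSE, "Local\\MyAppShutdown");
        if (hEvt == NULL)
            return 1;
        WaitForSingleObject(hEvt, INFINITE);       /* blocks, essentially no CPU */
        PostMessageA(hMainWnd, WM_CLOSE, 0, 0);    /* let the UI exit cleanly */
        CloseHandle(hEvt);
        return 0;
    }

    /* Controlling application, at shutdown time:
     *   HANDLE h = CreateEventA(NULL, TRUE, FALSE, "Local\\MyAppShutdown");
     *   SetEvent(h);
     */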
You want some sort of IPC between clients and servers. If all clients were children, I think pipes would have been easiest; since they're not, I guess a server-operated shared-memory segment can be used to register clients, issue the shutdown command, and collect return codes posted there by clients successfully shutting down.
In this shared-memory area, clients put their process IDs, so that the server can forcefully kill any unresponsive clients (modulo server privileges), using TerminateProcess().
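A sketch of such a registration area using a named file mapping (Win32 C; the name, size, and layout are illustrative only):

    #include <windows.h>

    #define MAX_CLIENTS 64

    /* Illustrative layout of the server-operated registration segment. */
    typedef struct {
        DWORD pid[MAX_CLIENTS];        /* client process IDs, 0 = free slot */
        DWORD exitCode[MAX_CLIENTS];   /* posted by clients on clean shutdown */
    } ClientTable;

    /* Both sides map the same named section; the server creates it first. */
    static ClientTable *OpenClientTable(void)
    {
        HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                         PAGE_READWRITE, 0,
                                         sizeof(ClientTable),
                                         "Local\\MyServerClientTable");
        if (hMap == NULL)
            return NULL;
        return (ClientTable *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                            0, 0, sizeof(ClientTable));
    }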
If you are willing to go the IPC route, make the normal communication between client and server bi-directional so the server can ask the clients to shut down. Or, failing that, have the clients poll. Or, as a last resort, the clients can be instructed to exit when they make a request to the server. You can let the library user register an exit callback, but the best way I know of is to simply call exit() in the client library when the client is told to shut down. If the client gets stuck in shutdown code, the server needs to be able to work around it by ignoring that client's data structures and connection.
Use PostMessage or a named event.
Re: PostMessage -- applications other than GUIs, as well as threads other than the GUI thread, can have message loops and it's very useful for stuff like this. (In fact COM uses message loops under the hood.) I've done it before with ATL but am a little rusty with that.
If you want to be robust to malicious attacks from "bad" processes, include a private key shared by client/server as one of the parameters in the message.
The named event approach is probably simpler; use CreateEvent with a name that is a secret shared by the client/server, and have the appropriate app check the status of the event (e.g. WaitForSingleObject with a timeout of 0) within its main loop to determine whether to shut down.
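The polling variant is only a few lines in the client's main loop (sketch; the event name stands in for the shared secret):

    #include <windows.h>

    /* In the client's main loop: a zero-timeout wait just checks the state. */
    static BOOL ShouldShutDown(HANDLE hShutdownEvent)
    {
        return WaitForSingleObject(hShutdownEvent, 0) == WAIT_OBJECT_0;
    }

    /* hShutdownEvent = CreateEventA(NULL, TRUE, FALSE, "Local\\SharedSecretName");
     * while (!ShouldShutDown(hShutdownEvent)) { do_work(); }
     */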
That's a very general question, and there are some inconsistencies.
While it is not a 100% rule, most console applications run to completion, whereas GUI applications run until the user terminates them (and services run until stopped via the SCM). Hence, it's easier to request a GUI app to close: you send it the equivalent of Alt-F4. But for a console program, you have to send it the equivalent of Ctrl-C and hope it handles it. In both cases, you simply wait. If the process sticks around, you then shoot it down (TerminateProcess) and pray that the damage is limited. But your HDD can fill up with temporary files.
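For a GUI client, that "ask nicely, wait, then terminate" sequence could look roughly like this (sketch; the 5-second grace period is arbitrary, and locating hWnd from the PID is left out):

    #include <windows.h>

    /* Ask a GUI client to close; if it hasn't exited after a grace period,
     * terminate it (and accept that the exit code is then meaningless). */
    static DWORD StopGuiClient(HANDLE hProcess, HWND hWnd)
    {
        DWORD code = (DWORD)-1;
        PostMessageA(hWnd, WM_CLOSE, 0, 0);             /* the polite request */
        if (WaitForSingleObject(hProcess, 5000) != WAIT_OBJECT_0) {
            TerminateProcess(hProcess, 1);              /* the impolite fallback */
            WaitForSingleObject(hProcess, INFINITE);    /* termination is async */
        }
        GetExitCodeProcess(hProcess, &code);
        return code;
    }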
GUI applications in general do not have exit codes - where would they go? And a console process that is forcefully terminated by definition does not exit, so it has no exit code. So, in a server shutdown scenario, don't expect exit codes.
If you've got a debugger attached, you generally can't shut down the process from another application - otherwise it would be impossible for debuggers to debug exit code!
