Is a server an infinite loop running as a background process? - c

Is a server essentially a background process running an infinite loop listening on a port? For example:
while(1){
    command = read(127.0.0.1:xxxx);
    if(command){
        execute(command);
    }
}
When I say server, I obviously am not referring to a physical server (computer). I am referring to a MySQL server, or Apache, etc.
Full disclosure - I haven't had time to poke through any source code. Actual code examples would be great!

That's more or less what server software generally does.
Usually it gets more complicated: the infinite loop typically only accepts the connection, and each accepted connection can then carry multiple "commands" (or whatever they are called in the protocol being used), but the basic idea is roughly this.
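For a concrete example, here is a minimal TCP echo server in C that shows the shape of that loop; the port number and the echo behaviour are arbitrary placeholders, and error handling is stripped for brevity:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1 */
    addr.sin_port = htons(5000);                    /* arbitrary port */

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 16);

    for (;;) {                                      /* the "infinite loop" */
        int client = accept(listener, NULL, NULL);  /* blocks until a connection arrives */
        char buf[1024];
        ssize_t n;
        while ((n = read(client, buf, sizeof(buf))) > 0) {
            /* parse and act on the received "command" here; */
            write(client, buf, n);                  /* this sketch just echoes it back */
        }
        close(client);
    }
}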

There are three kinds of 'servers' - forking, threading and single-threaded (non-blocking). All of them generally loop the way you show; the difference is what happens when there is something to be serviced.
A forking service is just that. For every request, fork() is invoked, creating a new child process that handles the request and then exits (or remains alive to handle subsequent requests, depending on the design).
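A minimal sketch of that pattern in C, assuming `listener` is a socket that has already been bound and put into listening mode, and handle_connection() is a placeholder for whatever per-request work your server does:

#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

void handle_connection(int client);   /* hypothetical per-connection handler */

void serve_forking(int listener)      /* listener: already bound and listening */
{
    signal(SIGCHLD, SIG_IGN);         /* let the kernel reap exited children */
    for (;;) {
        int client = accept(listener, NULL, NULL);
        if (fork() == 0) {            /* child: handle this one connection */
            close(listener);          /* the child does not need the listening socket */
            handle_connection(client);
            close(client);
            _exit(0);
        }
        close(client);                /* parent: the child owns the connection now */
    }
}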
A threading service is like a forking service, but instead of a whole new process, a new thread is created to serve the request. Like forked children, threads sometimes stay around to handle subsequent requests. The difference in performance and footprint is simply the difference between threads and forks: if the process carries a lot of memory that isn't needed to service a client (and is prone to changing), it's usually better not to clone the entire address space. The only added complexity here is synchronization.
A single-process (aka single-threaded) server will fork only once, to daemonize. It will not spawn new threads and it will not spawn child processes. It will continue to poll() the socket to find out when a file descriptor is ready to receive data, or has data available to be processed. Data for each connection is kept in its own structure, identified by various states (writing, waiting for ACK, reading, closing, etc.). This can be an extremely efficient design, if done properly. Instead of having multiple children or threads blocking while waiting to do work, you have a single process and event loop servicing requests as they are ready.
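A stripped-down sketch of that event loop using poll(), again assuming `listener` is already listening; a real server would also track per-connection state and write buffers rather than just reading:

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 1024

void serve_single_threaded(int listener)
{
    struct pollfd fds[MAX_FDS];
    int nfds = 1;
    fds[0].fd = listener;
    fds[0].events = POLLIN;

    for (;;) {
        poll(fds, nfds, -1);                       /* block until something is ready */

        if ((fds[0].revents & POLLIN) && nfds < MAX_FDS) {
            fds[nfds].fd = accept(listener, NULL, NULL);
            fds[nfds].events = POLLIN;
            fds[nfds].revents = 0;                 /* not ready until the next poll() */
            nfds++;
        }

        for (int i = 1; i < nfds; i++) {
            if (!(fds[i].revents & POLLIN))
                continue;
            char buf[512];
            ssize_t n = read(fds[i].fd, buf, sizeof(buf));
            if (n <= 0) {                          /* peer closed or error: drop it */
                close(fds[i].fd);
                fds[i--] = fds[--nfds];            /* compact the array, re-check this slot */
            } else {
                /* feed buf into this connection's state machine here */
            }
        }
    }
}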
There are instances where single-threaded services spawn multiple threads; however, the additional threads aren't servicing incoming requests. One might, for instance, set up a local socket in a separate thread that allows an administrator to obtain the status of all connections.
A little googling for non-blocking HTTP servers will yield some interesting hand-rolled web servers written as code golf challenges.
In short, the difference is what happens once the endless loop is entered, not just the endless loop :)

In a manner of speaking, yes. A server is simply something that "loops forever" and serves. However, typically you'll find that "daemons" also do things like redirect STDOUT and STDERR onto file handles or /dev/null and double-fork, among other things. Your code is a very simplistic "server" in a sense.
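A rough sketch of those classic daemonization steps (double fork, new session, standard streams pointed at /dev/null); error handling is omitted and real daemons often do more, such as writing a pid file:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

void daemonize(void)
{
    if (fork() > 0) exit(0);          /* first fork: give control back to the shell */
    setsid();                         /* new session: detach from the controlling tty */
    if (fork() > 0) exit(0);          /* second fork: can never re-acquire a tty */

    int devnull = open("/dev/null", O_RDWR);
    dup2(devnull, STDIN_FILENO);      /* point stdin, stdout and stderr at /dev/null */
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > STDERR_FILENO)
        close(devnull);

    chdir("/");                       /* don't keep an arbitrary directory pinned */
    umask(0);
}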


How to synchronize read and multiple writes on sockets

I am asking some theoretical questions, since it would be really hard for me to post the code of the project involved, which is spread across too many files.
I am writing code for a server program, which has to communicate with several clients that send a variety of different requests and expect answers for each of them.
The server is multi-threaded: each worker thread accesses a shared list of client connections and performs all the operations required by a request.
The two parts communicate via AF_UNIX sockets, and for mutual exclusion I have used locks and condition variables.
Now my issue is this: with certain interleavings of execution, the server side ends up making two simultaneous writes to the client (actually, two worker threads within the server both send a message to the same client), which is expecting just one. With unlucky interleavings the client only gets to read one of the messages sent by the server, but I have noticed that sometimes everything works fine; this is because the two writes to the client happen to be spaced out in time.
The issue happens even when one server thread does a write before the client calls read, and in between those two events, another server thread calls another write to the same client. In this case, only the most recent write is received by the client.
From what I have understood about blocking mode (which is what I am using), read() and write() should block when no one is receiving on the other side. So I don't understand why the second write from the server worker gets completely lost. Shouldn't it block if no one is receiving, and then resume when the client calls read?
Should I use mutual exclusion on the socket, so that the second write waits until the previous one has completely finished?
I hope that my issue is clear even though I am not showing any code; if necessary please tell me and I will try to post some pieces of it. I think the issue might just be a conceptual thing regarding my understanding of read(), write() and mutual exclusion, but I understand that it might be somewhere else and that without code it would be hard to figure out. Thank you!

c linux multithreading networking

I have a network application on a gateway. It receives and sends packets. For most of them my gateway acts as a router, but in some cases the packets are addressed to the gateway itself.
Should I have:
only one main thread
a main thread + a dispatch thread in charge of handing each packet to the correct flow handler
as many threads as there are flows
something else?
Doing multithreading correctly is no simple matter; in many cases a solution based on select() and friends will be a whole lot easier to create.
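For instance, a minimal select()-based loop over a couple of sockets might look like the sketch below; sock_a and sock_b are just placeholders for whatever descriptors carry your flows:

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* sketch: multiplex two already-open sockets from a single thread */
void run_select_loop(int sock_a, int sock_b)
{
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(sock_a, &readable);
        FD_SET(sock_b, &readable);
        int maxfd = (sock_a > sock_b) ? sock_a : sock_b;

        select(maxfd + 1, &readable, NULL, NULL, NULL);  /* block until a packet arrives */

        char buf[2048];
        if (FD_ISSET(sock_a, &readable)) {
            ssize_t n = recv(sock_a, buf, sizeof(buf), 0);
            (void)n;   /* hand the packet to the flow handler for sock_a */
        }
        if (FD_ISSET(sock_b, &readable)) {
            ssize_t n = recv(sock_b, buf, sizeof(buf), 0);
            (void)n;   /* hand the packet to the flow handler for sock_b */
        }
    }
}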
Your case sounds a lot like a typical Unix service daemon. The popular solution to your problem is not to use threads, but forks.
The idea is that your program listens on the socket and waits for connections. As soon as a connection arrives, it forks. The child process then continues to process the connection, while the parent process itself just continues in the loop and waits for incoming connections.
Advantages over threading:
Very simple program design
No problems with concurrency
Established method for Unix/Linux systems
Disadvantages:
Things get complicated when several connections interact with each other (your use case doesn't sound like they would)
Performance penalty on Windows systems (not on Unix systems!)
You can find many code examples online.
I don't know much about networking applications, but I think it's like this:
If you have the ability to react asynchronously to the requests, you would probably use just a single thread (as in Node.js). If you can't react asynchronously, the main thread would always block the other actions.
If you are not able to react asynchronously to your requests, you have to use more than one thread. But you could achieve that in many different ways: you could create a thread for every request, or create a limited number of threads and then assign requests to them.
My personal preference is to use one main thread and one worker thread per connection, with no cap whatsoever. I am assuming that your server will be stateless, like an HTTP server.
For stateful servers you will have to figure out some way to control the number of threads.

Is there any benefit to using epoll with a very small number of file descriptors?

Would the following single threaded UDP client application see a performance benefit from using epoll over simply calling recvfrom/sendto on non-blocking sockets?
Let me explain the client.
I am writing a single-threaded, UDP-based client (custom protocol) that both sends and receives data using non-blocking I/O, and my colleague suggested I use epoll for this.
The client sends and receives multiple packets of information that are all associated with a unique session id and multiple sessions can be run simultaneously.
If I use epoll, there will be a limited number of maybe 10-20 file descriptors which epoll_wait could wait on. Each file descriptor would be associated with one session. So that's a maximum of 10-20 sessions, and this limit will be enforced.
Each session has its own state machine. From a single thread I need to run each state machine reasonably frequently and poll the associated socket as well.
In my case, I'd have to use epoll_wait with a timeout of zero or some very small value so that I can give CPU time to run the state machines for each session.
If there is data for a session then it needs to be directed to the associated state machine.
However, I can't really see much benefit of this design with such a small number of file descriptors.
The way I see it is I have two design options:
1. In my main loop using epoll I can poll the descriptors using epoll_wait with either a small timeout or no timeout.
How it handles data at this point is where I'm getting a bit stuck... either I read it right away and then throw it into a queue for each state machine to pick up when it's run, or I set a flag on the state machine to tell it that data is waiting and when the state machine runs it'll pick it up with a call to recvfrom. Or, I read the data and handle it right away and run the state machine for it.
Or...
2. Just run each state machine from the main loop and call recvfrom. If I get some data, handle it. If I don't, then do whatever else the state machine requires. Is there a huge overhead in calling recvfrom when there is no data?
Going the epoll route means coding in some extra complexity. If there is a strong likelihood of it being faster in my case, then I will start doing it. However, if the second way, which is really simple, works just as well, then I would not use epoll.
Any thoughts?
No, and in fact performance will be much worse using epoll if adding and removing file descriptors from the set to poll is anything but an extremely rare event. With poll, a single syscall performs the entire operation. With epoll, you need multiple syscalls to modify the set and then wait on it.
Unless you're writing a server that's intended to scale to tens of thousands, hundreds of thousands, or more long-term persistent connections, epoll is not only premature optimization, but actually a pessimization. It's also completely nonstandard and non-portable.
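To illustrate the simpler option 2 from the question: with the sockets set non-blocking, a recvfrom() that finds nothing just returns immediately with EAGAIN/EWOULDBLOCK, which costs one cheap syscall per pass. A sketch, assuming the non-blocking sockets and the session array are set up elsewhere (struct session and its fields are hypothetical):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

struct session {
    int fd;                 /* non-blocking UDP socket for this session */
    /* ... state machine fields ... */
};

void run_sessions(struct session *sessions, int nsessions)
{
    char buf[2048];
    for (;;) {              /* note: this spins; sleep or poll briefly if you must yield the CPU */
        for (int i = 0; i < nsessions; i++) {
            ssize_t n = recvfrom(sessions[i].fd, buf, sizeof(buf), 0, NULL, NULL);
            if (n > 0) {
                /* feed the datagram into this session's state machine */
            } else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
                /* real error: handle it or tear the session down */
            }
            /* run this session's state machine whether or not data arrived */
        }
    }
}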

Is it okay to create threads instead of forking process in server in socket connections?

I am making a simple program which is supposed to accept text file data from clients (not very big files).
I was wondering if it is okay to create threads using pthreads instead of creating another process using fork.
My plan is to create a thread for every connection that I get, to process the data and store it in a shared queue, which I can protect with a mutex or semaphore against race conditions.
Yes, it is perfectly fine to use separate threads to process client connections. It will be faster compared to forking new processes. There is a drawback, though, in terms of isolation: because all clients are handled within the same process, if this process goes down the server will not be able to serve any of its clients.
As long as you keep in mind the limitations of threads, it's fine. In particular, you're going to need appropriate locking, and you'll have relatively small stacks to work with.
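A sketch of that pattern: one detached thread per accepted connection, each pushing received data onto a queue protected by a mutex. enqueue() is a placeholder for however you store the data, and listener is assumed to be bound and listening already:

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

void enqueue(const char *data, size_t len);   /* hypothetical: append to the shared queue */

static void *client_thread(void *arg)
{
    int client = *(int *)arg;
    free(arg);

    char buf[1024];
    ssize_t n;
    while ((n = read(client, buf, sizeof(buf))) > 0) {
        pthread_mutex_lock(&queue_lock);      /* protect the shared queue */
        enqueue(buf, (size_t)n);
        pthread_mutex_unlock(&queue_lock);
    }
    close(client);
    return NULL;
}

void serve_threaded(int listener)             /* listener: already bound and listening */
{
    for (;;) {
        int *client = malloc(sizeof *client);
        *client = accept(listener, NULL, NULL);
        pthread_t tid;
        pthread_create(&tid, NULL, client_thread, client);
        pthread_detach(tid);                  /* detached: no join, the thread cleans up itself */
    }
}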
You might also want to consider an event-driven architecture using e.g. libevent; this avoids the locking and stack size issues and reduces context switch overhead, but can result in more complex code.

Cleanest way to stop a process on Win32?

While implementing an application server and its client-side libraries in C++, I am having trouble finding a clean and reliable way to stop client processes on server shutdown on Windows.
Assuming the server and its clients run under the same user, the requirements are:
the solution should work in the following cases:
clients may each have either a console or a GUI.
user may be unprivileged.
clients may be or become unresponsive (infinite loop, deadlock).
clients may or may not be children of the server (direct or indirect).
unless prevented by a client-side defect, clients shall be given the opportunity to exit cleanly (free their resources, sync some data to disk...) and some reasonable time to do so.
all client return codes shall be made available (if possible) to the server during the shutdown procedure.
server shall wait until all clients are gone.
As of this edit, the majority of the answers below advocate using shared memory (or another IPC mechanism) between the server and its clients to convey shutdown orders and client status. These solutions would work, but they require that clients successfully initialize the library.
What I did not say, is that the server is also used to start the clients and in some cases other programs/scripts which don't use the client library at all. A solution that did not rely on a graceful communication between server and clients would be nicer (if possible).
Some time ago, I stumbled upon a C snippet (in the MSDN I believe) that did the following:
start a thread via CreateRemoteThread in the process to shut down.
have that thread directly call ExitProcess.
Unfortunately, now that I'm looking for it, I'm unable to find it, and the search results seem to imply that this trick no longer works on Vista. Any expert input on this?
If you use a thread, a simple solution is to use a named system event: the thread sleeps on the event waiting for it to be signaled, and the controlling application signals the event when it wants the client applications to quit.
For a UI application, the thread can post a message to the main window (WM_CLOSE or WM_QUIT, I forget which); in a console application it can issue a Ctrl-C, or if the main console code loops it can check some exit condition set by the thread.
Either way, rather than finding the client applications and telling them to quit, use the OS to signal that they should quit. The sleeping thread will use virtually no CPU provided it sleeps via WaitForSingleObject.
You want some sort of IPC between clients and servers. If all clients were children, I think pipes would have been easiest; since they're not, I guess a server-operated shared-memory segment can be used to register clients, issue the shutdown command, and collect return codes posted there by clients successfully shutting down.
In this shared-memory area, clients put their process IDs, so that the server can forcefully kill any unresponsive clients (modulo server privileges), using TerminateProcess().
If you are willing to go the IPC route, make the normal communication between client and server bi-directional so that the server can ask the clients to shut down. Or, failing that, have the clients poll. Or, as a last resort, the clients should be instructed to exit when they make a request to the server. You can let the library user register an exit callback, but the best way I know of is to simply call "exit" in the client library when the client is told to shut down. If the client gets stuck in its shutdown code, the server needs to be able to work around it by ignoring that client's data structures and connection.
Use PostMessage or a named event.
Re: PostMessage -- applications other than GUIs, as well as threads other than the GUI thread, can have message loops and it's very useful for stuff like this. (In fact COM uses message loops under the hood.) I've done it before with ATL but am a little rusty with that.
If you want to be robust to malicious attacks from "bad" processes, include a private key shared by client/server as one of the parameters in the message.
The named event approach is probably simpler; use CreateEvent with a name that is a secret shared by the client/server, and have the appropriate app check the status of the event (e.g. WaitForSingleObject with a timeout of 0) within its main loop to determine whether to shut down.
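A sketch of that named-event check in plain Win32 C; the event name here is made up, and real code would pick something less guessable and handle failures:

#include <windows.h>

#define SHUTDOWN_EVENT_NAME "Local\\MyAppShutdownEvent"   /* hypothetical shared secret name */

/* server side: create the event at startup, signal it at shutdown */
HANDLE create_shutdown_event(void)
{
    return CreateEventA(NULL, TRUE, FALSE, SHUTDOWN_EVENT_NAME);  /* manual-reset, unsignaled */
}

void request_shutdown(HANDLE evt)
{
    SetEvent(evt);                    /* every client checking the name now sees it signaled */
}

/* client side: call this once per pass through the main loop */
int shutdown_requested(void)
{
    static HANDLE evt = NULL;
    if (evt == NULL)
        evt = OpenEventA(SYNCHRONIZE, FALSE, SHUTDOWN_EVENT_NAME);
    if (evt == NULL)
        return 0;                     /* event not created yet (server not running) */
    return WaitForSingleObject(evt, 0) == WAIT_OBJECT_0;          /* timeout 0: just peek */
}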
That's a very general question, and there are some inconsistencies.
While it is not a 100% rule, most console applications run to completion, whereas GUI applications run until the user terminates them (and services run until stopped via the SCM). Hence, it's easier to request a GUI to close: you send it the equivalent of Alt-F4. But for a console program, you have to send it the equivalent of Ctrl-C and hope it handles it. In both cases, you simply wait. If the process sticks around, you then shoot it down (TerminateProcess) and pray that the damage is limited. But your HDD can fill up with temporary files.
GUI applications in general do not have exit codes - where would they go? And a console process that is forcefully terminated by definition does not exit, so it has no exit code. So, in a server shutdown scenario, don't expect exit codes.
If you've got a debugger attached, you generally can't shut down the process from another application. That would make it impossible for debuggers to debug exit code!
