Hi all, I am new to C, so sorry if I am very lost. I am having trouble with a multi-threaded web server I am trying to create. I am attempting to:
have a thread create a new thread
have that new thread execute execvp() to call a different C program on my machine
have that new thread return streams of data from the execvp()
I was thinking about using pthreads to spawn a new process to run execvp() and have it return the data through a pipe. But is that even necessary? Don't pthreads share memory?
Also, I was thinking about maybe using fork() instead of a pthread and having the child send data back to the parent through a pipe.
Can you please help guide me in the right direction?
What you're looking for is a combination of fork(), one of the exec functions, and pipe() (or maybe socketpair() or something, but pipes work too).
Threads share memory, but execvp() would load a completely new program, replacing the caller process -- and even if this process shared memory with its parent (which I'm not sure it does!), the newly run program wouldn't know how to use that memory.
The proper way is to open a pipe when you still have one process, fork() into two processes (parent and child), and have the child call execvp(). The child can now write into its end of the pipe, and the parent can read from the other end.
Remember to wait() for the child to end.
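To make that concrete, here is a minimal sketch of that pipe/fork/execvp/wait sequence. The "ls -l" command is just a stand-in for whatever program you actually want to run, and error handling is kept deliberately short:

/* Sketch: pipe(), fork(), execvp() in the child, read + wait in the parent. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                  /* child */
        close(fds[0]);               /* child only writes */
        dup2(fds[1], STDOUT_FILENO); /* send the new program's stdout into the pipe */
        close(fds[1]);
        char *argv[] = { "ls", "-l", NULL };   /* placeholder program */
        execvp(argv[0], argv);       /* only returns on failure */
        perror("execvp");
        _exit(127);
    }

    /* parent */
    close(fds[1]);                   /* parent only reads */
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);
    waitpid(pid, NULL, 0);           /* reap the child so it doesn't become a zombie */
    return 0;
}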
Have you written your non-blocking, single-threaded web-server yet? How would you expect to measure the benefits of multithreading if you don't have something to compare it against? It's far easier to determine where the best performance gains are if you expose a single-threaded project to concurrency, than it is to guess and suffer with a poor framework for the rest of the project's life.
Creating threads is easy, but you really need to read the pthread_create manual first. How else can you trust that your project is handling errors correctly? I also suggest reading about the other pthread functionality. I'm happy to help you resolve issues if you show me that you're trying to resolve them yourself, by the way. I won't bother spoonfeeding you.
As mentioned by aaaaaa123456789, you wouldn't want to spawn using pthread_create/execvp as this would replace your entire program environment (including all of your threads) with the new process.
Related
This may seem to be a dumb question, but I don't really have a good understanding of fork() other than knowing that it is about multi-threading and that a child process is like a thread. If a task needs to be processed via fork(), how do I correctly assign tasks to the parent process and the child process?
Check the return value of fork. The child process will receive the value of 0. The parent will receive the value of the process id of the child.
Read Advanced Linux Programming, which has an entire chapter dedicated to processes (because fork is difficult to explain);
then read the documentation of fork(2). fork is not about multi-threading but about creating processes. Threads are generally created with pthread_create(3) (which is implemented above clone(2), a Linux-specific syscall). Read some pthreads tutorial to learn more about threads.
PS. fork is difficult to understand (you'll need hours of reading, some experimentation, perhaps using strace(1), until you reach the "AhAh" insight moment when you have understood it) since it returns twice on success. You need to keep its result, and you need to test that result for the three cases: <0 (failure), ==0 (child), >0 (parent). Don't forget to later call waitpid(2) (or something similar) in the parent, to avoid having zombie processes.
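A minimal sketch of those three cases, plus the waitpid call, might look like this:

/* Sketch: handling fork's three possible results as described above. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                      /* failure: no child was created */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {              /* child: fork returned 0 */
        printf("child: my pid is %d\n", (int)getpid());
        _exit(EXIT_SUCCESS);
    } else {                            /* parent: fork returned the child's pid */
        int status;
        waitpid(pid, &status, 0);       /* reap the child; no zombie left behind */
        printf("parent: child %d finished\n", (int)pid);
    }
    return 0;
}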
My question is more philosophical than technical.
The objective is to write a multiprocess (not multithreaded) program with one "master" process and N "worker" processes. The program is a Linux-only, async, event-based web server, like nginx. So the main problem is how to spawn the "worker" processes.
In the Linux world there are two ways:
1). fork()
2). fork() + exec*() family
Here is a short description of each way and what confuses me about each of them.
The first way, with fork(), is dirty, because the forked process has a copy (...on-write, I know) of the parent's memory: signal handlers, variables, file/socket descriptors, environment and more, e.g. stack and heap. In conclusion, after fork I need to... hmm... "clear memory", for example disable signal handlers, socket connections and other horrible things inherited from the parent, because the child gets a lot of data it was never intended to have - this breaks encapsulation, and many side effects are possible.
The general approach in this case is to run an infinite loop in the forked process to handle some data, and to do some magic with a socket pair, pipes or shared memory to create a communication channel between parent and child before and after fork(), because the socket descriptors are duplicated into the child and refer to the same sockets as the parent's.
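For reference, a minimal sketch of that pre-fork communication channel (not nginx's actual code, just the general socketpair-before-fork pattern) could look like this:

/* Sketch: one AF_UNIX socketpair created before fork() as a parent<->worker channel. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];                               /* sv[0] kept by the parent, sv[1] by the worker */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) { perror("socketpair"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                          /* worker */
        close(sv[0]);
        const char msg[] = "hello from worker";
        write(sv[1], msg, sizeof msg);
        close(sv[1]);
        _exit(0);
    }

    close(sv[1]);                            /* parent */
    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof buf);
    if (n > 0)
        printf("parent got: %s\n", buf);
    close(sv[0]);
    waitpid(pid, NULL, 0);
    return 0;
}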
Also, this is the nginx way: it has one executable binary that uses fork() to spawn child processes.
The second way is similar to the first, with the difference that one of the exec*() functions is used in the child process after fork() to run an external binary. One important thing is that exec*() loads the binary into the current (forked) process's memory, automatically clears the stack and heap and does all the other nasty work, so the fork ends up looking like a genuinely new instance of a program, without a copy of the parent's memory or other leftover junk.
There is another problem with establishing communication between parent and child: because the forked process loses, after exec*(), all the data inherited from the parent, I need to somehow create a socket pair between parent and child. For example, create an additional listening socket (a Unix-domain socket, or another port) in the parent and wait for child connections; the child should then connect to the parent after initialization.
The first way is simple, but it confuses me that the child is not a clean process, just a copy of the parent's memory, with many possible side effects and leftovers, and I need to keep in mind that the forked process has many dependencies on the parent's code. The second way takes more effort to maintain two binaries, and is not as elegant as a single-file solution. Maybe the best way is to use fork() to create the process and then something to clear its memory without an exec*() call, but I can't find any solution for that second step.
In conclusion, I need help deciding which way to use: create one executable file, like nginx, and use fork(), or create two separate files, one the "server" and one the "worker", and use fork() + exec*(worker) N times from the "server". I would like to know the pros and cons of each way; maybe I have missed something.
For a multiprocess solution both options, fork and fork+exec, are almost equivalent; the choice depends on the child and parent process context. If the child process executes the parent's code (the same binary) and needs all or part of the parent's stuff (descriptors, signals, etc.), that is a sign to use fork. If the child should execute a new binary and needs nothing from the parent's stuff, fork+exec seems much more suitable.
There is also a good function in the pthread library - pthread_atfork().
It allows you to register handlers that will be called before and after fork.
These handlers may perform all the necessary work (closing file descriptors, for example).
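A minimal sketch of pthread_atfork() follows: the prepare handler runs in the parent just before fork, and the parent/child handlers run just after fork in the respective process. The handlers here only print messages; real ones would take/release locks or close descriptors the child must not inherit. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static void before_fork(void) { printf("about to fork (pid %d)\n", (int)getpid()); }
static void in_parent(void)   { printf("after fork, in parent\n"); }
static void in_child(void)    { printf("after fork, in child\n"); }

int main(void)
{
    /* register prepare / parent / child handlers */
    if (pthread_atfork(before_fork, in_parent, in_child) != 0) {
        fprintf(stderr, "pthread_atfork failed\n");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0)
        _exit(0);                 /* child exits right away in this sketch */
    waitpid(pid, NULL, 0);
    return 0;
}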
As a Linux programmer, you have a rich library of multithreading capabilities at your disposal. Look at pthread and friends.
If you need a process per request, then fork and friends have been the most widely used since time immemorial.
I'm going to write a program in which the main thread creates a new thread and then that new thread creates a child process. Since I have a hard time keeping track of the new thread and the forked process, I'd like to get a wise answer from someone.
My question is
1. Does a process created inside a thread start executing the code right after the pthread_create call?
2. If not, where does the forked process start from when fork() is called inside a thread?
Thank you for reading my question.
Some of this is a bit OS-dependent, as different systems have different POSIX thread implementations and this can expose internals.
POSIX offers pthread_atfork as a somewhat blunt instrument for dealing with some of the issues, but it still looks pretty messy to me.
If your system uses a one-to-one map between "user land thread" and "kernel thread" using clone or rfork to achieve proper user-space sharing of data between threads, then fork will merely duplicate the (single) thread that calls it. However, if your system has a many-to-many style mapping (so that one user process is handling multiple threads, at least before they enter into blocking syscalls), fork may internally duplicate multiple threads. POSIX says it should look like it only duplicated one thread, so that's not supposed to be visible, but I'm not sure how well all systems implement this.
There's some general advice at http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them (Linux-centric, obviously, but still useful).
Is there some particular reason you want to fork inside a thread but not exec? In general, if you just want to run more code in parallel, you just spin off yet another thread (i.e., once you choose to run any threads, you do everything in threads, except if you have to fork for exec; if the exec fails, just _exit).
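As a rough sketch of that last piece of advice (fork only in order to exec, and _exit immediately if the exec fails), something like the following would apply; "sleep 1" is just a placeholder program, and the code should be compiled with -pthread:

/* Sketch: a thread that forks only to exec, and _exit()s if the exec fails,
   so the child never keeps running as a half-copied multithreaded program. */
#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>

static void *runner(void *arg)
{
    (void)arg;
    pid_t pid = fork();                      /* only the calling thread is duplicated */
    if (pid == 0) {
        char *argv[] = { "sleep", "1", NULL };   /* placeholder program */
        execvp(argv[0], argv);
        _exit(127);                          /* exec failed: leave immediately */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, runner, NULL);
    pthread_join(t, NULL);
    return 0;
}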
As you might know, all threads in the application die in a forked process, other than the thread doing the fork. However, I plan to resurrect those threads in the forked process by calling pthread_create and using pthread_attr_setstack, so as to assign the newly created threads the same stacks as the dead threads. Something like the following:
// stackAddr and stacksize taken from the dead thread
pthread_attr_init(&attr);                            // attr must be initialized before use
pthread_attr_setstack(&attr, stackAddr, stacksize);
rc = pthread_create(&thread, &attr, threadRoutine, NULL);
However, I would still need to get the CPU register values, such as stack pointer, base pointer, instruction pointer etc, to restart threads from the same point. How can I do that? And what else do I need to do to successfully achieve my goal?
Also note that I'm using a 64-bit architecture. What additional difficulties would that pose compared to a 32-bit one?
I see two possible ways to shoot yourself in the foot and lose hair^W^W^W^W^W^W^W^Wtry to do this:
Try to force each thread into calling getcontext() before the fork(), and then restore the context of each thread via setcontext(). Probably won't work, but you can try for fun.
Save the register state with ptrace(PTRACE_GETREGS) and ptrace(PTRACE_GETFPREGS), and restore it with ptrace(PTRACE_SETREGS) and ptrace(PTRACE_SETFPREGS).
The other threads in the current process aren't killed by a fork -- they're still there and running in the parent. The problem you seem to have is that fork only forks a SINGLE thread in the current process, creating a new process running one thread with a copy of all non-thread resources in the parent.
What you apparently want is a way of duplicating an entire multithreaded task, forking all the threads in it and creating a new process/task with the same number of threads.
In order to do THAT, you would need to find and pause all the other threads in the process, dump their current state (including all locks they hold), fork a new process, and then (re)create each of those other threads in the child, rewiring the lock state to refer to the new child threads where needed.
Unfortunately, the POSIX pthread interface is hopelessly underspecified, and provides no way of doing that. In particular, it lacks any sort of reflective interface allowing you to figure out what threads are actually running.
If you want to try to do this anyway, I can see two ways of trying to approach this:
poke around in /proc/self/task to figure out what threads are running in your process, effectively getting that reflective interface in a highly non-portable way (a sketch of just the enumeration part is shown below). You'll likely end up having to ptrace(2) the other threads to get their internal state. This will be very difficult.
wrap the pthreads library -- instead of using the library directly, intercept every call and keep track of all the threads/mutexes/locks that get created, so that you have that information available when you want to fork. This will work fine as long as you don't want to use any third-party libraries that use pthreads.
The second option is much easier (and somewhat portable), but only works well if you have access to all the source code of your entire application, and can modify it to use your wrappers properly.
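For the first, /proc-based approach, just the thread-enumeration part might look like this (Linux-only sketch; pausing the threads and extracting their state via ptrace is the genuinely hard part and is not shown):

/* Each entry in /proc/self/task is the TID of a thread in this process. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *d = opendir("/proc/self/task");
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                         /* skip "." and ".." */
        printf("thread id: %s\n", e->d_name);
    }
    closedir(d);
    return 0;
}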
Just googling around, I found that Solaris has a forkall() call that does exactly what you want; see the documentation here:
http://download.oracle.com/docs/cd/E19963-01/html/821-1601/gen-1.html
I assume you're running on Linux, but it is possible to run Solaris on x86 hardware, so maybe that is an option for you.
Just wondering whether it's possible to execute another program in a thread and send information to / get information from it. It's essentially the same concept as with a child process and using pipes to communicate - however, I don't want to use fork.
I can't seem to find whether it's possible to do this, any help would be appreciated.
Thanks
You cannot use the exec family of functions to load another executable file within a thread; the exec functions replace the entire process with the process started from the executable. Thus fork() is necessary if you want your original process to keep running.
In theory you could replicate most of the behaviour of the exec system call in userspace, and run an executable within a thread - but as the thread would share the open file table, signal handlers and so on with the rest of the process, it would likely destructively interfere with the main process. It would also be a lot of work.
If you're not using fork (directly or indirectly), then it's not really another process. Of course, you can communicate between threads within a process. That's essential to most multithreading.
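As a tiny illustration of that last point (nothing specific to your external-program question, just the general idea of inter-thread communication), a thread can receive its input through its argument and hand a result back through pthread_join. Compile with -pthread:

/* Sketch: passing data into a thread via its argument, getting a result back via pthread_join. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *square(void *arg)
{
    int n = *(int *)arg;
    int *result = malloc(sizeof *result);
    *result = n * n;
    return result;                    /* picked up by pthread_join below */
}

int main(void)
{
    pthread_t t;
    int input = 7;
    void *out;

    pthread_create(&t, NULL, square, &input);
    pthread_join(t, &out);
    printf("7 squared is %d\n", *(int *)out);
    free(out);
    return 0;
}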