How to make processes alternate? - c

As for threads, I have mutexes and condition variables, so I can coordinate them easily.
However, if I create two processes with fork(), how can I make them alternate?
Or, is there any way to create a "critical section" for processes?
I intended to write a program that prints "r" and "w" alternately; here is the code.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int pipe_1[2];
int flag = 0;

void r();
void w();

int main() {
    pipe(pipe_1);
    if (fork())
        r();
    else
        w();
}

void r() {
    int count = 0;
    while (1) {
        printf("%d \n", flag);
        if (count == 10)
            exit(0);
        if (flag == 0) {
            puts("r");
            flag = 1;
            count++;
            while (flag == 1)
                ;
        }
    }
}

void w() {
    while (1) {
        if (flag == 1) {
            puts("w");
            flag = 0;
            while (flag == 0)
                ;
        }
    }
}
The output is only:
0
r
Then it seems to enter an infinite loop.
What's the problem?
And what's the right way to make alternating processes?
Thanks.

This may be overwhelming, but there are TONS of primitives you could use. See here for a list.
http://beej.us/guide/bgipc/output/html/singlepage/bgipc.html
Glancing at the list, just about all of those could be used. Some are more like traditional pthread synchronization primitives, others are higher-level, but can still be used for synchronization.
For example, you could just open a TCP socket between the two and send messages when it's the other side's turn. Maybe with an incrementing number.
Something perhaps more traditional would be semaphores:
http://beej.us/guide/bgipc/output/html/singlepage/bgipc.html#semaphores
Also, this assumes a modern unix-like platform. Windows is likely very different.
It looks like you have a pipe already, so you can use that to have each side send a message to the other after it has done its print. The other side would do a blocking read that returns when the message arrives, do its print, send a message back, and go back to a blocking read.
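To make that concrete, here is a minimal sketch of the pipe approach, assuming a POSIX system. Two pipes are used, one per direction, and the byte passed is just a "your turn" token; error handling on read()/write() and closing of unused pipe ends are omitted for brevity.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int to_r[2], to_w[2];                    /* one pipe per direction */
    char token = 't';

    if (pipe(to_r) == -1 || pipe(to_w) == -1) {
        perror("pipe");
        return 1;
    }

    if (fork() != 0) {                       /* parent: prints "r" */
        for (int i = 0; i < 10; i++) {
            puts("r");
            write(to_w[1], &token, 1);       /* tell the child it is its turn */
            read(to_r[0], &token, 1);        /* block until the child hands it back */
        }
    } else {                                 /* child: prints "w" */
        for (int i = 0; i < 10; i++) {
            read(to_w[0], &token, 1);        /* block until the parent is done */
            puts("w");
            write(to_r[1], &token, 1);       /* hand the turn back */
        }
        exit(0);
    }
    return 0;
}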

They are separate processes, so each has its own copy of flag; r changing its copy doesn't affect w's.
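If you do want the parent and child to share one flag, the memory has to be shared explicitly, e.g. with mmap(MAP_SHARED | MAP_ANONYMOUS) before the fork. A hedged sketch, assuming a Linux/BSD-style mmap; the _Atomic qualifier makes the compiler actually re-read the flag, and the busy-waiting still burns CPU, so the pipe or semaphore approaches are usually preferable:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* one int shared between parent and child */
    _Atomic int *flag = mmap(NULL, sizeof *flag, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (flag == MAP_FAILED) { perror("mmap"); return 1; }
    *flag = 0;

    if (fork() != 0) {                           /* parent: prints "r" when flag == 0 */
        for (int count = 0; count < 10; count++) {
            while (atomic_load(flag) != 0)       /* spin until it's our turn */
                ;
            puts("r");
            atomic_store(flag, 1);
        }
    } else {                                     /* child: prints "w" when flag == 1 */
        for (int count = 0; count < 10; count++) {
            while (atomic_load(flag) != 1)
                ;
            puts("w");
            atomic_store(flag, 0);
        }
        exit(0);
    }
    return 0;
}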

In order for two processes to communicate with each other without sharing the same address space (like threads do), they must use Inter-Process Communication means (aka IPC). Some of the IPC mechanisms are: shared memory, semaphores, pipes, sockets, message queues and more. Most of the time, IPC mechanisms are operating-system specific. However, many of the ideas are general enough that it is possible to come up with portable implementations, which the Boost project did as part of the Boost.Interprocess library. What I think you should take a look at first is the Synchronization Mechanisms section. Note, however, that this is a C++ library. I am not aware of any C library that is as good as Boost.
Hope it helps. Good Luck!
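Since Boost.Interprocess is C++, a plain-C alternative worth mentioning is POSIX named semaphores (sem_open / sem_wait / sem_post), which also work across fork(). A minimal sketch of the alternating "r"/"w" program using them; the semaphore names are arbitrary, and on Linux you typically link with -pthread:

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* remove stale semaphores from a previous run (errors ignored on purpose) */
    sem_unlink("/demo_r");
    sem_unlink("/demo_w");

    /* two named semaphores: one "turn" token per process, "r" goes first */
    sem_t *r_turn = sem_open("/demo_r", O_CREAT, 0600, 1);
    sem_t *w_turn = sem_open("/demo_w", O_CREAT, 0600, 0);
    if (r_turn == SEM_FAILED || w_turn == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    if (fork() != 0) {                  /* parent prints "r" */
        for (int i = 0; i < 10; i++) {
            sem_wait(r_turn);
            puts("r");
            sem_post(w_turn);           /* pass the turn to the child */
        }
        sem_unlink("/demo_r");          /* clean up the names */
        sem_unlink("/demo_w");
    } else {                            /* child prints "w" */
        for (int i = 0; i < 10; i++) {
            sem_wait(w_turn);
            puts("w");
            sem_post(r_turn);           /* pass the turn back */
        }
    }
    return 0;
}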

Related

Use while loop to make a thread wait till the lock variable is set to avoid race condition in C programming

#include <stdio.h>
#include <pthread.h>

long mails = 0;
int lock = 0;

void *routine()
{
    printf("Thread Start\n");
    for (long i = 0; i < 100000; i++)
    {
        while (lock)
        {
        }
        lock = 1;
        mails++;
        lock = 0;
    }
    printf("Thread End\n");
}

int main(int argc, char *argv[])
{
    pthread_t p1, p2;
    if (pthread_create(&p1, NULL, &routine, NULL) != 0)
    {
        return 1;
    }
    if (pthread_create(&p2, NULL, &routine, NULL) != 0)
    {
        return 2;
    }
    if (pthread_join(p1, NULL) != 0)
    {
        return 3;
    }
    if (pthread_join(p2, NULL) != 0)
    {
        return 4;
    }
    printf("Number of mails: %ld \n", mails);
    return 0;
}
In the above code each thread runs a for loop to increase the value of mails by 100000. To avoid a race condition, a lock variable is used along with a while loop. However, using the while loop in the routine function does not avoid the race condition, and the program does not give the correct output for the mails variable.
In C, the compiler can safely assume a (global) variable is not modified by other threads except in a few cases (e.g. volatile variables, atomic accesses). This means the compiler can assume lock is not modified, so while (lock) {} can be replaced with an infinite loop. In fact, this kind of loop causes undefined behaviour since it does not have any visible effect, which means the compiler can remove it (or generate wrong code). The compiler can also remove the lock = 1 statement since it is followed by lock = 0. The resulting code is bogus. Note that even if the compiler generated correct code, some processors (e.g. AFAIK ARM and PowerPC) can reorder instructions, resulting in bogus behaviour.
To make sure accesses between multiple threads are correct, you need at least atomic accesses on lock, and relaxed atomic accesses need to be combined with proper memory barriers. The thing is, while (lock) {} will result in a spin lock. Spin locks are known to be a pretty bad solution in many cases unless you really know what you are doing and understand all the consequences (when in doubt, don't use them).
Generally, it is better to use mutexes, semaphores and wait conditions in this case. Mutexes are generally implemented using an atomic boolean flag internally (with the right memory barriers, so you do not need to care about that). When the flag is marked as locked, an OS sleeping function is called; it wakes up when the lock has been released by another thread, which is possible because the thread releasing a lock can send a wake-up signal. For more information about this, please read this. In old C, you can use pthread for that. Since C11, you can do that directly using the standard API. For pthread, it is here (do not forget the initialization).
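For example, a minimal sketch of the counter protected with a pthread mutex (only the routine changes; main() can stay as it is, and the proper thread-function signature is used here):

#include <pthread.h>
#include <stdio.h>

long mails = 0;
pthread_mutex_t mails_mutex = PTHREAD_MUTEX_INITIALIZER;

void *routine(void *arg)
{
    (void)arg;
    printf("Thread Start\n");
    for (long i = 0; i < 100000; i++)
    {
        pthread_mutex_lock(&mails_mutex);   /* sleeps in the OS if another thread holds it */
        mails++;
        pthread_mutex_unlock(&mails_mutex);
    }
    printf("Thread End\n");
    return NULL;
}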
If you really want a spinlock, you need something like:
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void *routine()
{
    printf("Thread Start\n");
    for (long i = 0; i < 100000; i++)
    {
        while (atomic_flag_test_and_set(&lock)) {}
        mails++;
        atomic_flag_clear(&lock);
    }
    printf("Thread End\n");
}
However, since you are already using pthreads, you're better off using a pthread_mutex.
Jérôme Richard told you about ways in which the compiler could optimize the sense out of your code, but even if you turned all the optimizations off, you still would be left with a race condition. You wrote
while (lock) { }
lock=1;
...critical section...
lock=0;
The problem with that is, suppose lock==0. Two threads racing toward that critical section at the same time could both test lock, and they could both find that lock==0. Then they both would set lock=1, and they both would enter the critical section...
...at the same time.
In order to implement a spin lock,* you need some way for one thread to prevent other threads from accessing the lock variable in between when the first thread tests it, and when the first thread sets it. You need an atomic (i.e., indivisible) "test and set" operation.
Most computer architectures have some kind of specialized op-code that does what you want. It has names like "test and set," "compare and exchange," "load-linked and store-conditional," etc. Chris Dodd's answer shows you how to use a standard C library function that does the right thing on whatever CPU you happen to be using...
...But don't forget what Jérôme said.*
* Jérôme told you that spin locks are a bad idea.
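For illustration only, here is roughly what an atomic "compare and exchange" entry looks like in portable C11; it is equivalent in spirit to the atomic_flag version above, the function names are made up for the example, and the spin-lock caveat still applies:

#include <stdatomic.h>

atomic_int lock = 0;

void lock_acquire(void)
{
    int expected = 0;
    /* atomically: if lock == 0, set it to 1; otherwise retry */
    while (!atomic_compare_exchange_weak(&lock, &expected, 1)) {
        expected = 0;   /* the call overwrote expected with the current value */
    }
}

void lock_release(void)
{
    atomic_store(&lock, 0);
}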

Non-deterministic C behavior? "Fork bomb"

So I created a "fork bomb", so to speak. However, when I run it on my computer it kills everything on my computer, goes to a black screen, then restores itself.
On my friend's computer, running the exact same code, it actually fork-bombs but never makes it to the kill loop.
Any reason why?
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>

int main(){
    int pc = 0;
    int* pids = calloc(1025, sizeof(int));
L1:
    while(1){
        int pid = fork();
        if(pid != 0)
        {
            pc++;
            pids[pc] = pid;
        }
        if(pc == 1024)
        {
            goto L2;
            break;
        }
    }
L2:
    while(1){
        if(pids[pc] != 0) {
            kill(pids[pc], SIGKILL);
        }
        if(pc == 0)
        {
            goto L1;
            break;
        }
        pc--;
    }
    free(pids);
}
Note this code is just for funsies.
Update:
Putting pc++ outside of the if statement caused a kernel panic. Could someone explain to me why? In theory this code doesn't even work.
The reason you're probably crashing is that fork() can fail, in which case it returns -1 and that -1 gets stored in your pids array. When you then call kill(-1, SIGKILL), it sends SIGKILL to every process you have permission to signal. If you're running as a privileged user, the reason this is terrible should be obvious.
Side notes:
The return type of fork() is pid_t, not int. In most cases, pid_t happens to fit in an int, but you should use the proper types.
It's pointless to have a break statement after a goto statement. The break can never be reached.
If you enabled warnings on your compiler, it probably would have told you about both of those.
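A hedged sketch of the fix implied above, i.e. the fork() call inside the first loop with its return value checked (still a fork bomb, just one that never stores -1 in the table):

pid_t pid = fork();
if (pid == -1) {
    /* fork failed: do NOT store -1, or kill(-1, SIGKILL) will follow later */
    perror("fork");
} else if (pid != 0) {
    pc++;
    pids[pc] = pid;      /* only real child PIDs end up in the table */
}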
"fork bomb", by its nature, can't have any deterministic behaviour. In theory, a computer with infinite resources can keep on forking without any problem.
But in practice, we know computers don't have infinite resources. So, different operating systems might handle the resource drain in different ways.
Typically, when the operating system can't spawn further processes, the kernel might kill the "offending" process(es) in order to free up resources or crash or get into a limbo state. The exponential growth of processes is generally hard to handle for the kernel even if it recognizes it.
So, you just can't expect anything deterministic or repeatable behaviour.

Get User Input without Blocking an Endless Loop

I have written a simple C program that basically consists of an endless loop that counts upwards. During the loop, the user is asked for input, and here comes the tricky part: the loop should NOT be blocked while waiting for the user, but should display the input as soon as it is entered:
#include <stdio.h>
#include <unistd.h>

int main(void){
    int i = 0;
    char dec;
    for(;; i++){
        printf("%d\n", i);
        sleep(5);
        if(i == 4 || i == 8){
            printf("Please enter Y or N\n");
            dec = fgetc(stdin);
            printf("%c\n", dec);
        }
    }
    return 0;
}
I found a similar question for Python here: Python. So do I need to push the user interaction into a new thread with pthread, or is there an easier option?
Thanks!
EDIT
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void){
    int i = 0;
    char dec;
    fd_set input_set;
    for(;; i++){
        printf("%d\n", i);
        sleep(2);
        if(i == 4 || i == 8){
            FD_ZERO(&input_set);   /* Empty the FD Set */
            FD_SET(0, &input_set); /* Listen to the input descriptor */
            dec = select(1, &input_set, NULL, NULL, 0);
        }
    }
    return 0;
}
What you want to do is only possible with system-dependent libraries. For instance, on Unix you would typically use ncurses to find out whether the user has pressed a key.
The reason it is system dependent is that asynchronous I/O is not available for all file system streams. In particular, user I/O blocks, and that blocking is unavoidable.
If you are committed to having a multi-threaded program that still uses read/write system calls, you would need two threads: one for I/O and one for everything else. The everything-else thread could periodically check a shared memory area to see whether the I/O thread has written new data to it.
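A sketch of that two-thread layout, with hypothetical names like input_thread and input_ready: the I/O thread blocks in fgets(), and the counting loop only peeks at a mutex-protected flag, so it is never blocked.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static pthread_mutex_t buf_mtx = PTHREAD_MUTEX_INITIALIZER;
static char input_buf[64];
static int input_ready = 0;

static void *input_thread(void *arg)
{
    char line[64];
    (void)arg;
    while (fgets(line, sizeof line, stdin) != NULL) {  /* blocks here, not in main */
        pthread_mutex_lock(&buf_mtx);
        strcpy(input_buf, line);
        input_ready = 1;
        pthread_mutex_unlock(&buf_mtx);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, input_thread, NULL) != 0)
        return 1;

    for (int i = 0; ; i++) {
        printf("%d\n", i);
        sleep(1);
        pthread_mutex_lock(&buf_mtx);      /* non-blocking peek at the shared area */
        if (input_ready) {
            printf("you typed: %s", input_buf);
            input_ready = 0;
        }
        pthread_mutex_unlock(&buf_mtx);
    }
}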
If you are on linux only, check out this SO post : What are the differences between poll and select?
If you are on both and/or you already have pthreads, then use a separate thread.
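Building on the select() idea, here is a minimal sketch that polls stdin with a zero timeout, so the counting loop is never blocked; select() only reports that data is ready, the data itself is then read with a normal read():

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; ; i++) {
        printf("%d\n", i);
        sleep(1);

        fd_set input_set;
        FD_ZERO(&input_set);
        FD_SET(STDIN_FILENO, &input_set);
        struct timeval timeout = { 0, 0 };          /* poll: return immediately */

        if (select(STDIN_FILENO + 1, &input_set, NULL, NULL, &timeout) > 0) {
            char buf[64];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("you entered: %s", buf);
            }
        }
    }
}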
If you are using Windows, maybe you can try to use keyboard hooks. See SetWindowsHookEx.
It will capture all the keyboard clicks with callback.
If you are using Linux, maybe you can use this: Non-blocking keyboard read - C/C++

4 Process 4 way synchronization using semaphores (In a C Programming, UNIX environment)

I have a question about synchronizing 4 processes in a UNIX environment. It is very important that no process runs its main functionality without first waiting for the others to "be on the same page", so to speak.
Specifically, they should all not go into their loops without first synchronizing with each other. How do I synchronize 4 processes in a 4 way situation, so that none of them get into their first while loop without first waiting for the others? Note that this is mainly a logic problem, not a coding problem.
To keep things consistent between environments let's just say we have a pseudocode semaphore library with the operations semaphore_create(int systemID), semaphore_open(int semaID), semaphore_wait(int semaID), and semaphore_signal(int semaID).
Here is my attempt and subsequent thoughts:
Process1.c:
int main() {
    //Synchronization area (relevant stuff):
    int sem1 = semaphore_create(123456); //123456 is an arbitrary ID for the semaphore.
    int sem2 = semaphore_create(78901);  //78901 is an arbitrary ID for the semaphore.
    semaphore_signal(sem1);
    semaphore_wait(sem2);
    while(true) {
        //...do main functionality of process, etc (not really relevant)...
    }
}
Process2.c:
int main() {
    //Synchronization area (relevant stuff):
    int sem1 = semaphore_open(123456);
    int sem2 = semaphore_open(78901);
    semaphore_signal(sem1);
    semaphore_wait(sem2);
    while(true) {
        //...do main functionality of process etc...
    }
}
Process3.c:
int main() {
    //Synchronization area (relevant stuff):
    int sem1 = semaphore_open(123456);
    int sem2 = semaphore_open(78901);
    semaphore_signal(sem1);
    semaphore_wait(sem2);
    while(true) {
        //...do main functionality of process etc...
    }
}
Process4.c:
int main() {
    //Synchronization area (relevant stuff):
    int sem1 = semaphore_open(123456);
    int sem2 = semaphore_open(78901);
    semaphore_signal(sem2);
    semaphore_signal(sem2);
    semaphore_signal(sem2);
    semaphore_wait(sem1);
    semaphore_wait(sem1);
    semaphore_wait(sem1);
    while(true) {
        //...do main functionality of process etc...
    }
}
We run Process1 first, and it creates all of the semaphores in system memory that are used in the other processes (the other processes simply call semaphore_open to gain access to those semaphores). Then, all 4 processes have a signal operation, and then a wait. The signal operation causes process1, process2, and process3 to increment the value of sem1 by 1, so its resulting maximum value is 3 (depending on what order the operating system decides to run these processes in). Process1, 2, and 3 are all then waiting on sem2, and process4 is waiting on sem1 as well. Process4 then signals sem2 3 times to bring its value back up to 0, and waits on sem1 3 times. Since sem1 was at a maximum of 3 from the signalling in the other processes (depending on what order they ran in, again), this brings its value back down to 0, and process4 continues running. Thus, all processes will be synchronized.
So yeah, I'm not super confident in my answer. I feel that it depends heavily on what order the processes run in, which is the whole point of synchronization: it shouldn't matter what order they run in, they should all synchronize correctly. Also, I am doing a lot of work in Process4. Maybe it would be better to solve this using more than 2 semaphores? Wouldn't this also allow for more flexibility within the loops in each process, if I want to do further synchronization?
My question: please explain why the above logic will or will not work, and/or give a solution on how to solve this problem of 4-way synchronization. I'd imagine this is a very common thing to have to think about depending on the industry (e.g. banking and syncing up bank accounts). I know it is not very difficult, but I have never worked with semaphores before, so I'm kind of confused about how they work.
The precise semantics of your model semaphore library are not clear enough to answer your question definitively. However, if the difference between semaphore_create() and semaphore_open() is that the latter requires the specified semaphore to already exist, whereas the former requires it to not exist, then yes, the whole thing will fall down if process1 does not manage to create the needed semaphores before any of the other processes attempt to open them. (Probably it falls down in different ways if other semantics hold.)
That sort of issue can be avoided in a threading scenario because with threads there is necessarily an initial single-threaded segment wherein the synchronization structures can be initialized. There is also shared memory by which the various threads can communicate with one another. The answer #Dark referred to depends on those characteristics.
The essential problem with a barrier for multiple independent processes -- or for threads that cannot communicate via shared memory and that are not initially synchronized -- is that you cannot know which process needs to erect the barrier. It follows that each one needs to be prepared to do so. That can work in your model library if semaphore_create() can indicate to the caller which result was achieved, one of
semaphore successfully created
semaphore already exists
(or error)
In that case, all participating processes (whose number you must know) can execute the same procedure, maybe something like this:
void process_barrier(int process_count) {
    sem_t *sem1, *sem2, *sem3;
    int result = semaphore_create(123456, &sem1);
    int counter;

    switch (result) {
    case SEM_SUCCESS:
        /* I am the controlling process */
        /* Finish setting up the barrier */
        semaphore_create(78901, &sem2);
        semaphore_create(23432, &sem3);
        /* let (n - 1) other processes enter the barrier... */
        for (counter = 1; counter < process_count; counter += 1) {
            semaphore_signal(sem1);
        }
        /* ... and wait for those (n - 1) processes to do so */
        for (counter = 1; counter < process_count; counter += 1) {
            semaphore_wait(sem2);
        }
        /* let all the (n - 1) waiting processes loose */
        for (counter = 1; counter < process_count; counter += 1) {
            semaphore_signal(sem3);
        }
        /* and I get to continue, too */
        break;
    case SEM_EXISTS_ERROR:
        /* I am NOT the controlling process */
        semaphore_open(123456, &sem1);
        /* wait, if necessary, for the barrier to be initialized */
        semaphore_wait(sem1);
        semaphore_open(78901, &sem2);
        semaphore_open(23432, &sem3);
        /* signal the controlling process that I have reached the barrier */
        semaphore_signal(sem2);
        /* wait for the controlling process to allow me to continue */
        semaphore_wait(sem3);
        break;
    }
}
Obviously, I have taken some minor liberties with your library interface, and I have omitted error checks except where they bear directly on the barrier's operation.
The three semaphores involved in that example serve distinct, well-defined purposes. sem1 guards the initialization of the synchronization constructs and allows the processes to choose which among them takes responsibility for controlling the barrier. sem2 serves to count how many processes have reached the barrier. sem3 blocks the non-controlling processes that have reached the barrier until the controlling process releases them all.
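As a hypothetical usage example (with the same caveats about the model library), every one of the four programs would then simply call the function before entering its loop:

int main() {
    process_barrier(4);   /* no process gets past this call until all four have reached it */
    while (true) {
        //...do main functionality of process etc...
    }
}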

Any problems in this rwlock implementation?

I just implemented a reader-writer lock in C. I want to limit the number of readers, so I use 'num' to count them. I'm not sure whether this implementation has potential data races or deadlock conditions, so could you help me figure them out, please?
Another question: can I remove the 'spin_lock' in struct _rwlock in some way? Thanks!
#define MAX_READER 16

typedef struct _rwlock *rwlock;
struct _rwlock {
    spin_lock lk;
    uint32_t num;
};

void wr_lock(rwlock lock) {
    while (1) {
        if (lock->num > 0) continue;
        lock(lock->lk);
        lock->num += MAX_READER;
        return;
    }
}

void wr_unlock(rwlock lock) {
    lock->num -= MAX_READER;
    unlock(lock->lk);
}

void rd_lock(rwlock lock) {
    while (1) {
        if (lock->num >= MAX_READER) continue;
        atom_inc(num);
        return;
    }
}

void rd_unlock(rwlock lock) {
    atom_dec(num);
}
Short answer: Yes, there are severe issues here. I don't know what synchronization library you are using, but you are not protecting access to shared data and you will waste tons of CPU cycles on your loops in rd_lock() and wr_lock(). Spin locks should be avoided in virtually all cases (there are exceptions though).
In wr_lock (and similar in rd_lock):
while (1){
if (lock->num > 0) continue;
This is wrong. If you don't somehow synchronize, you aren't guaranteed to see changes from other threads. If this were the only problem you could perhaps acquire the lock and then check the count.
In rd_lock:
atom_inc(num);
This doesn't play well with the non-atomic += and -= in the writer functions, because it can interrupt them. Same for the decrement in rd_unlock.
rd_lock can return while a thread holds the lock as writer -- this isn't the usual semantics of a reader-writer lock, and it means that whatever your rw-lock is supposed to protect, it will not protect it.
If you are using pthreads, then it already has a rwlock. On Windows consider SRWlocks (never used 'em myself). For portable code, build your rwlock using a condition variable (or maybe two -- one for readers and one for writers). That is, insofar as multi-threaded code in C can be portable. C11 has a condition variable, and if there's a pre-C11 threads implementation out there that doesn't, I don't want to have to use it ;-)
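If pthreads is an option, the ready-made lock mentioned above takes only a few lines. A minimal sketch (error checking omitted; shared_value and the two functions are just illustrative):

#include <pthread.h>
#include <stdio.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int shared_value = 0;

void *reader(void *arg)
{
    (void)arg;
    pthread_rwlock_rdlock(&rwlock);      /* many readers may hold this at once */
    printf("read %d\n", shared_value);
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

void *writer(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&rwlock);      /* writers get exclusive access */
    shared_value++;
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}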
