Parallel Threads in C

I have two threads in my application. Is it possible to execute both threads simultaneously without putting either thread to sleep?

You can run the threads in parallel in your application, especially if they are not waiting on each other for input or for some condition. For example, one thread may be parsing a file while another plays a song in your application.
Generally the OS takes care of time slicing between threads, so at the application level it looks as if the threads are running in parallel, while the OS actually slices time and gives each thread a certain amount of execution time.
With multi-core processors it is possible to run threads in parallel in real time; however, the OS decides which threads run where unless you specifically code at a lower level (e.g. by setting thread affinity) to control which threads run in parallel.
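As a minimal sketch of that example (the function names and workloads below are just illustrative), two pthreads doing independent work can be created like this:

    #include <pthread.h>
    #include <stdio.h>

    /* Two independent workers, standing in for "parse a file" and "play a
       song". Neither waits on the other; the OS schedules both, in parallel
       on a multi-core machine or time-sliced on a single core. */

    static void *parse_file(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++)
            printf("parsing chunk %d\n", i);
        return NULL;
    }

    static void *play_song(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++)
            printf("playing frame %d\n", i);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, parse_file, NULL);
        pthread_create(&t2, NULL, play_song, NULL);
        pthread_join(t1, NULL);   /* wait for both to finish */
        pthread_join(t2, NULL);
        return 0;
    }

Compile with the pthread flag, e.g. gcc -pthread example.c. On a multi-core machine the two loops can genuinely overlap; on a single core the OS interleaves them.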

As others have mentioned, with multiple cores it is possible, but it depends on how the OS decides to distribute the threads. As far as I have seen, you don't have any control over dictating where each thread runs.
For a really good tutorial, with some nice explanations and pictures, you can look at this page, which includes code showing how to do multi-threading using the POSIX library:
http://www.pathcom.com/~vadco/parallel.html
The time slicing is hard to observe directly, so your best bet is to test it: for example, have your two threads each count every millisecond and check whether the two counts stay identical. If they do not, then at least one thread is being put to sleep by the scheduler.
Most likely both will sleep at some point; the test shows how large the difference between the two threads gets.
Once a thread blocks, whether waiting to send data or waiting to receive it, it is put to sleep so that other threads can run and the OS can keep everything running properly.

Standard C (prior to C11) does not, itself, have any means of writing multi-threaded code.
However, POSIX provides a library, pthreads, that allows you to work with threads in C.
One good article on this topic is How to write multi-threaded software in C and C++.

Yes, if you have multiple processors or a multi-core processor. Each thread can then run on its own core.

Related

How to cause arbitrary sleeps for C program at runtime

Is there a way to make the threads of a C program sleep at arbitrary points of execution?
I'm interested in testing the robustness of an implementation of a distributed algorithm, and I'd like to run different scenarios repeatedly so that threads are suspended randomly.
(Adding sleeps to the code is not what I'm looking for.)
Thanks!
I'd like to run different scenarios repeatedly so that threads would be suspended randomly.
One way to achieve this (on a UNIX system) is to run the program under a debugger, which can suspend and resume threads at arbitrary points in time.
Debugger here doesn't mean GDB -- it could be any program utilizing ptrace to control the target.
Unfortunately writing such a control program / debugger from scratch is quite involved -- you'll need to implement a lot of low-level thread tracking.
Since you control the target program, you can cheat by making it create all the threads, then block them all before doing any work. Now the debugger program can attach all the threads, unblock them, and start suspending them at random.
Another alternative is to modify an existing debugger to do your bidding. This would be hard to do with GDB, but is probably easier with LLDB.
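If writing a ptrace-based controller is too involved, one in-process alternative (a different technique from the debugger approach above, sketched here under the assumption that POSIX signals are acceptable for test purposes) is a controller thread that suspends random worker threads with a signal whose handler blocks in sigsuspend until a resume signal arrives:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define NTHREADS 4

    static pthread_t workers[NTHREADS];

    /* SIGUSR1 handler: block in sigsuspend until SIGUSR2 (the resume signal)
       arrives. sigfillset/sigdelset/sigsuspend are async-signal-safe. */
    static void suspend_handler(int sig)
    {
        (void)sig;
        sigset_t mask;
        sigfillset(&mask);
        sigdelset(&mask, SIGUSR2);
        sigsuspend(&mask);
    }

    static void resume_handler(int sig) { (void)sig; }  /* only wakes sigsuspend */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (unsigned long i = 0; ; i++)
            if (i % 500000000UL == 0)
                printf("thread %ld alive\n", id);
        return NULL;
    }

    int main(void)
    {
        /* Keep SIGUSR2 blocked normally so a resume sent early stays pending
           until the victim reaches sigsuspend; threads inherit this mask. */
        sigset_t block;
        sigemptyset(&block);
        sigaddset(&block, SIGUSR2);
        pthread_sigmask(SIG_BLOCK, &block, NULL);

        struct sigaction sa = {0};
        sa.sa_handler = suspend_handler;
        sigaction(SIGUSR1, &sa, NULL);
        sa.sa_handler = resume_handler;
        sigaction(SIGUSR2, &sa, NULL);

        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&workers[i], NULL, worker, (void *)i);

        /* Controller: suspend a random thread for a random interval, forever
           (stop the test with Ctrl-C). */
        srand((unsigned)time(NULL));
        for (;;) {
            int victim = rand() % NTHREADS;
            pthread_kill(workers[victim], SIGUSR1);   /* suspend */
            usleep((rand() % 500 + 1) * 1000);        /* 1-500 ms */
            pthread_kill(workers[victim], SIGUSR2);   /* resume */
            usleep((rand() % 200) * 1000);
        }
    }

The drawback compared to ptrace is that the target must cooperate by installing the handlers, but no external debugger is needed.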

Fork: concurrency or parallelism

I was recently doing some experiments with the fork function, and I kept coming back to a "simple" (short) question:
Does fork use concurrency or parallelism (when more than one core is available)?
Or is it the OS that makes the best choice?
Thanks for your answer.
nb: Damn I fork bombed myself again!
Edit:
Concurrency: each operation runs on one core; interrupts are used to switch from one process to another (sequential computation).
Parallelism: operations run on two (or more) cores.
fork() duplicates the current process, creating another independent process. End of story.
How the kernel chooses to schedule these processes is a different, very broad question. In general the kernel will try to use all available resources (cores) to run as many tasks as possible. When there are more runnable tasks than cores, it has to start making decisions about who gets to run, and for how long.
The fork function creates a separate process; it is up to the operating system how it handles the different processes.
Of course, if only one core is available, the OS has no choice but to run all processes interleaved.
If more cores are available, every sane OS will distribute the processes across the cores, so every core runs at least one process.
However, even then more processes can be active than there are cores, so even then it is up to the OS to decide which processes run in parallel (by distributing them across cores) and which run interleaved (on a single core).
In fact, fork() is a system call (a.k.a. system service) that creates a new process from the current process (check the return value to see who you are, the parent or the child).
In the UNIX world, processes share the CPU's computing time. It works like this:
a process is running
the clock generates an interrupt, invoking the kernel and pausing the process
the kernel takes the list of runnable processes and decides which one to resume (this is called scheduling)
go back to step 1
When there are multiple processor cores, the kernel can dispatch processes across them.
Well, you can try something yourself: write a computationally expensive program, say O(n^3), so that it takes a good amount of time to run. fork() four times (if you have a quad-core machine). Now open any graphical CPU monitor. Anything cool? A sketch of the experiment follows.
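Here is one way to run that experiment (the workload is an arbitrary stand-in):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Fork four CPU-bound children; on a multi-core machine a CPU monitor
       should show them running in parallel on different cores. */
    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); exit(1); }
            if (pid == 0) {                 /* child: burn CPU for a while */
                volatile double x = 0.0;
                for (long j = 0; j < 2000000000L; j++)
                    x += j * 0.5;
                printf("child %d done (x=%f)\n", getpid(), x);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)              /* parent: reap all children */
            ;
        return 0;
    }

On a quad-core machine the monitor should show all four cores busy, i.e. the forked processes really do run in parallel.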

Threading on Mac OS X, force multiple CPUs

I have just added threading to a large application I have been developing for years. It is written in C and runs on Mac and Linux. This question is about OS X, 10.8.2 or 10.6.8.
Problem: I see the program opening two threads as I expect. However, apparently both threads are running on the same CPU, or at least, I never get more than 100% of a CPU allocated to the program. This almost defeats the entire purpose of having threads.
I use a fair number of mutexes, if that matters.
How can I force the OS to run each thread at 100% on a different CPU? (There are 8 CPUs on this machine.)
The mutexes may matter a lot here. Open up Instruments and run the time profiler instrument on your program after setting it to "record all thread states". This will let you see where your threads are blocked waiting for something (likely a mutex) instead of running.
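To illustrate why the mutexes may matter, here is a sketch (hypothetical workers, arbitrary workloads): worker_serialized holds the lock across all of its work, so four threads never total much more than ~100% CPU, while worker_parallel computes outside the lock and only locks the shared update:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static double shared_total = 0.0;

    static void *worker_serialized(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);          /* lock held for the whole loop */
        double local = 0.0;
        for (long i = 0; i < 100000000L; i++)
            local += i * 0.5;
        shared_total += local;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *worker_parallel(void *arg)
    {
        (void)arg;
        double local = 0.0;                 /* compute outside the lock */
        for (long i = 0; i < 100000000L; i++)
            local += i * 0.5;
        pthread_mutex_lock(&lock);          /* lock only the shared update */
        shared_total += local;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)         /* swap in worker_serialized to compare */
            pthread_create(&t[i], NULL, worker_parallel, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("total = %f\n", shared_total);
        return 0;
    }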
Multiple runnable threads do execute simultaneously as long as they run on different cores, since each core has its own instance of the scheduler in every Unix-like OS. Being on separate CPU dies matters little; in fact, there is a benefit to sharing resources between threads running on separate cores of the same die.

Parallel I/O with POSIX threads in C

Is there a simple way in the C language, using POSIX threads, to hand all the file output of a program (e.g. fprintf...) off to a CPU core other than the one executing the main code? I mean in such a way that the code keeps flowing and does not have to wait for the file to be written before continuing.
My program does numerical integration and writes data to a file at every integration step.
Thank you.
You can find information on how to pin a particular thread to a particular core in:
how to set CPU affinity of a particular pthread?
However, putting the two threads (the writer thread and the compute thread) on different cores does not mean you don't need to synchronize them; you still have to synchronize the two threads. Putting them on different cores may give you better throughput (though that depends on many other factors). A sketch of such a writer thread is below.
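Here is a minimal producer/consumer sketch, assuming a single compute thread; the fixed-size ring buffer, file name, and function names are illustrative, and a real implementation would need to handle the queue filling up:

    #include <pthread.h>
    #include <stdio.h>

    #define QSIZE 1024

    static double queue[QSIZE];
    static int head = 0, tail = 0, done = 0;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

    static void enqueue(double v)
    {
        pthread_mutex_lock(&qlock);
        queue[head % QSIZE] = v;   /* assumes the writer keeps up; a real
                                      queue would block or grow when full */
        head++;
        pthread_cond_signal(&qcond);
        pthread_mutex_unlock(&qlock);
    }

    static void *writer(void *arg)
    {
        FILE *f = arg;
        pthread_mutex_lock(&qlock);
        for (;;) {
            while (tail == head && !done)
                pthread_cond_wait(&qcond, &qlock);
            if (tail == head && done)
                break;
            double v = queue[tail % QSIZE];
            tail++;
            pthread_mutex_unlock(&qlock);   /* write without holding the lock */
            fprintf(f, "%f\n", v);
            pthread_mutex_lock(&qlock);
        }
        pthread_mutex_unlock(&qlock);
        return NULL;
    }

    int main(void)
    {
        FILE *f = fopen("results.dat", "w");
        if (!f) { perror("fopen"); return 1; }

        pthread_t wt;
        pthread_create(&wt, NULL, writer, f);

        for (int step = 0; step < 100000; step++)
            enqueue(step * 0.001);          /* stand-in for one integration step */

        pthread_mutex_lock(&qlock);
        done = 1;                           /* tell the writer to drain and exit */
        pthread_cond_signal(&qcond);
        pthread_mutex_unlock(&qlock);

        pthread_join(wt, NULL);
        fclose(f);
        return 0;
    }

The integration loop only pays the cost of a brief lock per step; the actual fprintf happens on the writer thread, which the OS is free to schedule on another core.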

Programming a relatively large, threaded application for old systems

Today my boss and I were having a discussion about some code I had written. My code downloads 3 files from a given HTTP/HTTPS link. I had multi-threaded the download so that all 3 files are downloading simultaneously in 3 separate threads. During this discussion, my boss tells me that the code is going to be shipped to people who will most likely be running old hardware and software (I'm talking Windows 2000).
Until this time, I had never considered how a threaded application would scale on older hardware. I realize that if the CPU has only 1 core, threads are useless and may even worsen performance. I have been wondering if this download task is an I/O operation. Meaning, if an API is blocked waiting for information from the HTTP/HTTPS server, will another thread that wants to do some calculation be scheduled meanwhile? Do older OSes do such scheduling?
Another thing he said: Since the code is going to be run on old machines, my application should not eat the CPU. He said use Sleep() calls after CPU intensive tasks to allow other programs some breathing space. Now I was always under the impression that using Sleep() is terrible in any program. Am I wrong? When is using Sleep() justified?
Thanks for looking!
I have been wondering if this download task is an I/O operation.
Meaning, if an API is blocked waiting for information from the
HTTP/HTTPS server, will another thread that wants to do some
calculation be scheduled meanwhile? Do older OSes do such scheduling?
Yes, they do. That is the whole point of blocking IO: the thread is suspended, and other computations (threads) take place until an event wakes the blocked thread up. That is why it makes complete sense to split the work into threads, even on single-core machines, instead of doing poor man's scheduling between the downloads yourself in a single thread.
Of course your downloads affect each other's bandwidth, so threading won't speed up the download itself :-)
Another thing he said: Since the code is going to be run on old
machines, my application should not eat the CPU. He said use Sleep()
calls after CPU intensive tasks to allow other programs some breathing
space.
Actually, using Sleep AFTER the task has finished won't help here. Sleeping after a certain amount of calculation (a sort of manual time slicing) before continuing could help, but that is only true on cooperatively multitasked systems (e.g. Windows 3.11). It plays no role on preemptive systems, where the scheduler uses time slicing to allocate computation time to threads. There it is more important to lower the priority of CPU-intensive tasks so that other tasks take precedence, as in the sketch below.
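For example, on Windows a CPU-bound task can lower its own priority (a sketch, with an arbitrary stand-in workload):

    #include <windows.h>
    #include <stdio.h>

    /* Instead of sprinkling Sleep() calls into the work, lower the thread's
       priority so the scheduler lets interactive programs preempt it. */
    int main(void)
    {
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
        volatile double x = 0.0;
        for (long i = 0; i < 500000000L; i++)   /* stand-in for heavy work */
            x += i * 0.5;
        printf("done (x=%f)\n", x);
        return 0;
    }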
Now I was always under the impression that using Sleep() is terrible
in any program. Am I wrong? When is using Sleep() justified?
This really depends on what you are doing. If you implement a sort of busy wait for a flag that may only be set after a few seconds, it is better to sleep for a while between checks, giving up your scheduled time slice, rather than burn CPU power rechecking a flag that has not been set yet.
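A sketch of that justified case on Windows (the flag name and timings are illustrative):

    #include <windows.h>
    #include <stdio.h>

    static volatile LONG done = 0;

    static DWORD WINAPI worker(LPVOID arg)
    {
        (void)arg;
        Sleep(2000);                        /* stand-in for a few seconds of work */
        InterlockedExchange(&done, 1);      /* set the flag */
        return 0;
    }

    int main(void)
    {
        HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        while (!InterlockedCompareExchange(&done, 0, 0))  /* atomic read */
            Sleep(10);                      /* yield instead of busy-waiting */
        printf("flag set, continuing\n");
        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
        return 0;
    }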
On modern systems, though, there is no sense in introducing Sleep into a calculation, as it will only slow the calculation down.
Scheduling is up to the OS's scheduler, which is the one with the "big picture". In my opinion, every attempt to "do it better" is only valid within the scope of a specific application, where you can see relationships that are not obvious to the scheduler.
Addendum:
I did some research and found that Windows has supported preemptive multitasking since Windows 95. The Windows NT line (to which Windows 2000 belongs) has always supported preemptive multitasking.