I have just added threading to a large application I have been developing for years. It is written in C and runs on Mac and Linux. This question is about OS X, 10.8.2 or 10.6.8.
Problem: I see the program opening two threads as I expect. However, apparently both threads are running on the same CPU, or at least, I never get more than 100% of a CPU allocated to the program. This almost defeats the entire purpose of having threads.
I use a fair number of mutexes, if that matters.
How can I force the OS to run each thread at 100% of different CPUs? (There are 8 CPUs on this machine.)
The mutexes may matter a lot here. Open up Instruments and run the time profiler instrument on your program after setting it to "record all thread states". This will let you see where your threads are blocked waiting for something (likely a mutex) instead of running.
Multiple running threads will be concurrent as long as they execute on different cores - as each core has its own instance of the scheduler in every Unix-like OS. Being on separate CPU dies matters little: in fact, there's a benefit to sharing resources between threads running on separate cores of the same die.
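If the mutexes are coarse-grained, that alone can explain the symptom. Below is a minimal, hypothetical sketch (big_lock, worker and the loop bound are made-up names, not taken from your program) where two threads exist but essentially all of the work happens while a single mutex is held, so they take turns and the process never uses much more than 100% of one CPU. If your profile looks like this, moving the real work outside the lock, or splitting the lock into finer-grained ones, lets both threads run at once.

    /* Hypothetical sketch: two workers that serialize on one coarse mutex.
     * Because almost all work happens while big_lock is held, the two
     * threads take turns and the process rarely exceeds ~100% of one CPU,
     * even on an 8-core machine. Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile double sink;      /* keeps the loop from being optimized away */

    static void *worker(void *arg)
    {
        (void)arg;
        for (long i = 0; i < 100000000L; i++) {
            pthread_mutex_lock(&big_lock);   /* contended: only one thread runs here */
            sink += i * 0.5;                 /* the "real work" is inside the lock   */
            pthread_mutex_unlock(&big_lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("done (%f)\n", sink);
        return 0;
    }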
Related
I have a C program (a graphics benchmark) that runs on a MIPS processor simulator (I'm looking to graph some performance characteristics). The processor has 8 cores, but it seems like core 0 is executing more than its fair share of instructions. The benchmark is multithreaded with the work evenly distributed between the threads. Why would core 0 run between a quarter and half of all instructions even though the benchmark is multithreaded on an 8-core processor?
What are some possible reasons this could be happening?
Most application workloads involve some number of system calls, which could block (e.g. for I/O). It's likely that your threads spend some amount of time blocked, and the scheduler simply runs them on the first available core. In an extreme case, if you have N threads but each is able to do work only 1/N of the time, a single core is sufficient to service the entire workload.
You could use pthread_setaffinity_np to assign each thread to a specific core, then see what happens.
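For reference, here is a rough sketch of what that could look like on Linux with glibc (pthread_setaffinity_np is non-portable; the worker function and NTHREADS are placeholders, not part of your benchmark):

    /* Sketch of pinning each worker thread to its own core using the
     * Linux/glibc-specific pthread_setaffinity_np. Compile with -pthread;
     * _GNU_SOURCE is required for the affinity calls and sched_getcpu. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NTHREADS 8

    static void *worker(void *arg)
    {
        long id = (long)arg;
        /* ... per-thread benchmark work would go here ... */
        printf("thread %ld running on core %d\n", id, sched_getcpu());
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];

        for (long i = 0; i < NTHREADS; i++) {
            pthread_create(&tid[i], NULL, worker, (void *)i);

            /* Set the affinity right after creation; if the thread has
             * already started elsewhere it will be migrated to core i. */
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET((int)i, &set);
            pthread_setaffinity_np(tid[i], sizeof(set), &set);
        }
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

After pinning, comparing the per-core instruction counts again should tell you whether the imbalance comes from the scheduler's placement or from the work distribution itself.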
You did not mention which OS you are using.
However, most of the code in most OSs is still written for a single-core CPU.
Therefore, the OS will not try to distribute the processes evenly over the array of cores.
When there are multiple cores available, most OSs start a process on the first core that is available (and a blocked process leaves its core available).
As an example, on my system (a 4-core AMD64) running Ubuntu Linux 14.04, the CPUs are usually less than 1 percent busy, so everything could run on a single core.
It takes a lot of running applications, such as video playback and long-running background jobs, with several windows open, before you see much real activity on anything other than the first core.
What happens when we set a different processor affinity for a process and its thread in Linux?
I am trying to start a process pinned to one core (say core 1) which has two threads, one of which needs to run on another core (say core 0).
When I tried setting the thread's affinity to a core different from the process's, the program still executed, but I want to know the hidden impacts of this approach.
Threads and processes are largely the same thing here. Whether you call pthread_setaffinity... or use the sched_setaffinity syscall, they both affect the current thread's affinity mask. This may be your "process" thread, or a thread you created.
However, note that a new thread created by pthread_create inherits a copy of its creator's CPU affinity mask [1].
That means that setting the affinity and creating a thread is not the same as creating a thread and setting the affinity. In the first case, both threads will compete over the same processor (which is most probably not what you want) and in the second case they will be bound to different processors.
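To make the ordering concrete, here is an illustrative Linux/glibc sketch (worker and the core numbers are arbitrary) of the second case: the thread is created first, so it inherits the full mask and can run on core 0 or elsewhere, and only afterwards is the creating thread pinned to core 1. Doing the sched_setaffinity call before pthread_create would instead leave both threads restricted to core 1.

    /* Illustrative sketch (compile with -pthread, _GNU_SOURCE required):
     * create the new thread first, then restrict the creator to core 1.
     * The new thread keeps the original (full) affinity mask it inherited
     * at creation time, so the two threads end up with different masks. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        (void)arg;
        sleep(1);                       /* give the scheduler time to place us */
        printf("worker is on core %d\n", sched_getcpu());
        return NULL;
    }

    int main(void)
    {
        cpu_set_t one_core;
        CPU_ZERO(&one_core);
        CPU_SET(1, &one_core);

        pthread_t t;

        /* Create first, then restrict ourselves (pid 0 == calling thread). */
        pthread_create(&t, NULL, worker, NULL);
        sched_setaffinity(0, sizeof(one_core), &one_core);

        printf("main is on core %d\n", sched_getcpu());
        pthread_join(t, NULL);
        return 0;
    }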
Also note that while binding threads to a dedicated processor (core) may have advantages in some situations, it may just as well be a very stupid thing to do. Playing with the affinity mask means you limit the scheduler in what it can do to make your program run. If the core you bound your thread to isn't available, your thread will not run, end of story.
This is a very similar reasoning/strategy as disabling swap to make the system "faster" (some users still do that!). By doing so they usually gain nothing, all they do is limit what the memory manager can do by removing one option of providing a free page once it runs out of unused physical RAM. Usually this means something more or less valuable from the buffer cache is purged when instead some private page that wasn't used in hours could have been swapped out.
Usually people use affinity because they have this idea that the scheduler will make threads bounce between processor cores all the time and this is bad. Processor migration indeed is not cheap, but the scheduler has a mechanism which makes sure it does not happen before a certain minimum amount of time (there is a /proc thingie for that too). After a longer amount of time, all advantages of staying at the old core (TLB, cache) are usually gone anyway, so running on a different core which is readily available is actually better than waiting for a particular core to maybe, eventually become available.
NUMA architectures may be a different topic, but I'd assume (though I don't know for sure) that the scheduler is smart enough not to silently migrate a thread to a different node. In general, however, I'd recommend not to play with affinity at all.
Affinity is a common first-line approach to limiting jitter in HPC. Typically, Linux processes, threads, and the like are constrained to a small but sufficient set of CPUs, and the application is constrained to the remainder of the CPUs.
Affinity is very useful with device drivers. Consider, for example, an Infiniband adapter being used by an application. The adapter will perform best if the driver thread(s) are constrained to CPUs on the same NUMA node as the adapter (or the closest one if that's not possible). Linux doesn't know about the application's threads, so it can't even consider that affinity for performance on its own.
Can setting the CPU affinity in Linux for a multithreaded program, where each thread runs on its own core, effectively block any other process from being scheduled by the OS on that core? Effectively, I want to guarantee the use of a core to my process and have all other non-critical programs bound to a minimal number of cores.
Or am I missing something about the Linux scheduler, or maybe I need my own?
Can setting the CPU affinity in Linux for a multithreaded program
where each thread runs on its own core effectively block any other
process from being scheduled by the OS on that core?
No. Setting the CPU affinity restricts which cores the scheduler may use for your threads. That is, it will only schedule your threads on certain cores - it doesn't do anything to other processes' threads.
You can probably achieve what you want using setpriority. If your requirements are that stringent, you might look into sched_setscheduler and choose SCHED_RR or SCHED_FIFO.
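As a rough sketch of the second suggestion (assuming Linux and root or CAP_SYS_NICE; make_me_realtime is just an illustrative name), giving your thread a SCHED_FIFO priority means ordinary SCHED_OTHER tasks on the same core will not preempt it, although the kernel can still place other runnable tasks there:

    /* Rough sketch: promote the calling thread to the real-time SCHED_FIFO
     * class so that normal (SCHED_OTHER) tasks sharing the core cannot
     * preempt it. Requires appropriate privileges. */
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int make_me_realtime(int priority)
    {
        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = priority;          /* 1..99 for SCHED_FIFO */

        /* pid 0 means "the calling thread" */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }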
When the normal scheduler is involved, taskset and nice only express your preferences; they do not reserve a core for you. The scheduler is still free to run other tasks on those cores and to move your threads among whichever cores their affinity mask allows, based on the workload. You can use perf to monitor context switches and CPU migrations.
You have two options:
You can force the scheduler to follow your orders through sched_setscheduler, as user417896 suggested.
You can use cgroups/cpuset to define two cpusets, say system and isolated, and isolate the target cores by moving all the system threads to the system cpuset and running your program under cgexec on the isolated cpuset. You can assign cores and memory to a cpuset, and to isolate it, set the cpu_exclusive bit and you are all set. You can also use cset (http://code.google.com/p/cpuset/) if you are using older kernels to automate this process for you.
I hope it helps.
I am in a real fix. Please help. It's urgent.
I have a host process that spawns multiple host (CPU) threads (pthreads). These threads in turn call CUDA kernels. These CUDA kernels are written by external users, so they might be bad kernels that enter an infinite loop. To overcome this, I have put in a 2-minute timeout that will kill the corresponding CPU thread.
Will killing the CPU thread also kill the kernel running on the GPU? As far as I have tested, it doesn't.
How can I kill all the threads currently running in the GPU?
Edit: The reason I am using CPU threads to call the kernels is that the server has two Tesla GPUs, so the threads schedule kernels on the two GPU devices alternately.
It doesn't seem to. I ran a broken kernel and locked up one of my devices seemingly indefinitely (until reboot). I'm not sure how to kill a running kernel. I think there is a way to limit kernel execution time via the driver, though, so that might be the way to go.
Unless there's a larger part of this I'm not really getting, you might be better off using the CUDA streams API for multi-device tasking, but YMMV.
As for the killing: if you're running the cards with a display (and X server) attached, they will automatically time out after 5 seconds (again, YMMV).
Assuming that this isn't the case, check out calling cudaDeviceReset() (see the API reference) from the 'parent' thread after your own prescribed 'kill' timeout.
I have not implemented this function in my own code yet, so honestly I have no idea if it'll work in your situation, but it's worth investigating.
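For what it's worth, a hypothetical sketch of that idea might look like the following host-side watchdog (the watchdog/watch_args names and the way the device number is passed in are assumptions, and whether cudaDeviceReset() actually tears down a runaway kernel in your setup is exactly what you would be testing):

    /* Hypothetical sketch only: a watchdog thread in the host process that,
     * after the 2-minute timeout from the question, selects the GPU the
     * worker thread was using and calls cudaDeviceReset() to destroy that
     * context (and, hopefully, the runaway kernel with it). Whether this is
     * safe depends on how other threads share that device. */
    #include <cuda_runtime.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define KERNEL_TIMEOUT_SECONDS 120

    struct watch_args {
        int device;                  /* device the suspect kernel was launched on */
    };

    static void *watchdog(void *p)
    {
        struct watch_args *w = (struct watch_args *)p;

        sleep(KERNEL_TIMEOUT_SECONDS);

        /* Attach this thread to the suspect device and tear its context down. */
        cudaSetDevice(w->device);
        cudaError_t err = cudaDeviceReset();
        fprintf(stderr, "watchdog: reset device %d -> %s\n",
                w->device, cudaGetErrorString(err));
        return NULL;
    }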
Will killing the CPU thread also kill the kernel running on the GPU? As far as I have tested, it doesn't.
Probably not. On Linux you can use cuda-gdb to figure that out.
I don't see the point of sending multiple kernels to the GPU using threads. I wonder what happens if you send multiple kernels to the GPU at a time. Will the thread scheduler of the GPU deal with that?
I have two threads in my application. Is it possible to execute both the threads simultaneously without sleeping any thread?
You can run the threads in parallel in your application, especially if they are not waiting on each other for inputs or conditions. For example, one thread may be parsing a file while the other plays a song in your application.
Generally the OS takes care of thread time slicing. So at the application level it looks like the threads are running in parallel, but the OS does the time slicing, giving each thread a certain amount of execution time.
With multi-core processors it is possible to run the threads in parallel in real time; however, the OS decides which threads run where unless you specifically code at a lower level to control which threads run in parallel.
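A minimal sketch of the file-parsing/song-playing example above might look like this (parse_file and play_song are placeholder names; the real work is elided):

    /* Two independent pthreads doing unrelated work at the same time;
     * neither thread waits on or sleeps the other. Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    static void *parse_file(void *arg)
    {
        (void)arg;
        /* ... file-parsing work would go here ... */
        puts("parser finished");
        return NULL;
    }

    static void *play_song(void *arg)
    {
        (void)arg;
        /* ... audio work would go here ... */
        puts("player finished");
        return NULL;
    }

    int main(void)
    {
        pthread_t parser, player;
        pthread_create(&parser, NULL, parse_file, NULL);
        pthread_create(&player, NULL, play_song, NULL);

        /* Both threads now run concurrently; on a multi-core machine the
         * OS is free to place them on different cores. */
        pthread_join(parser, NULL);
        pthread_join(player, NULL);
        return 0;
    }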
As others have mentioned, with multiple cores it is possible, but it depends on how the OS decides to distribute the threads. You don't have any control, that I have seen, over dictating where each thread is run.
For a really good tutorial, with some nice explanations and pictures, you can look at this page, which has code showing how to do multi-threading using the POSIX library.
http://www.pathcom.com/~vadco/parallel.html
The time slicing is hard to see directly, so your best bet is to test it: for example, have your two threads each count every millisecond and see if the two counts are identical. If they are not, then at least one of them is being put to sleep by the OS.
Most likely both will go to sleep at some point; the test is to see how much of a difference there is between the two threads.
Once a thread blocks, either waiting to send data or waiting to receive, it will be put to sleep so that other threads can run and the OS can continue to make sure everything is working properly.
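One way to run a test along those lines, as a rough sketch (the 2-second interval and the counts array are arbitrary choices): each thread increments its own counter until a deadline, and a large difference between the two totals suggests one thread was descheduled much more than the other.

    /* Rough scheduling test: two threads count as fast as they can for the
     * same wall-clock interval, then the totals are compared. Compile with
     * -pthread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define TEST_SECONDS 2

    static long counts[2];

    static void *counter(void *arg)
    {
        int idx = *(int *)arg;
        time_t end = time(NULL) + TEST_SECONDS;

        while (time(NULL) < end)
            counts[idx]++;            /* each thread touches only its own slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        int ids[2] = { 0, 1 };

        pthread_create(&t[0], NULL, counter, &ids[0]);
        pthread_create(&t[1], NULL, counter, &ids[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);

        printf("thread 0: %ld iterations, thread 1: %ld iterations\n",
               counts[0], counts[1]);
        return 0;
    }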
C itself (prior to C11) does not have any built-in means of writing multi-threaded code.
However, POSIX provides a library (pthreads) that allows you to work with threads in C.
One good article about this topic is How to write multi-threaded software in C and C++.
Yes, if you have multiple processors or a multi-core processor. Each thread can then run on its own core.