Not able to kill bad kernel running on NVIDIA GPU - c

I am in a real fix. Please help; it's urgent.
I have a host process that spawns multiple host (CPU) threads (pthreads). These threads in turn launch CUDA kernels. The kernels are written by external users, so some of them may be bad kernels that enter an infinite loop. To guard against this I have put in a two-minute timeout that kills the corresponding CPU thread.
Will killing the CPU thread also kill the kernel running on the GPU? From what I have tested, it doesn't.
How can I kill all the threads currently running in the GPU?
Edit: The reason I am using CPU threads to launch the kernels is that the server has two Tesla GPUs, so the threads schedule kernels on the two devices alternately.
Thanks,
Arvind

It doesn't seem to. I ran a broken kernel and locked up one of my devices seemingly indefinitely (until reboot). I'm not sure how to kill a running kernel. I think there is a way to limit kernel execution time via the driver, though, so that might be the way to go.

Unless there's a larger part of this I'm not getting, you might be better off using the CUDA streams API for multi-device tasking, but YMMV.
As for the killing: if you're running the cards with a display (and X server) attached, kernels will automatically time out after 5 seconds (again, YMMV).
Assuming that this isn't the case, check out calling cudaDeviceReset() (see the API Reference) from the 'parent' thread after your own prescribed 'kill' timeout.
I have not used this function in my own code yet, so I honestly have no idea whether it'll work in your situation, but it's worth investigating.
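For what it's worth, here is a rough sketch of that parent-side watchdog, assuming a hypothetical user-supplied kernel launched from a worker thread (the launch itself is only hinted at in a comment). Whether cudaDeviceReset() actually stops a kernel that is already running is exactly the open question here, so treat this as an experiment rather than a fix.

    /* Sketch only: host-side watchdog that resets the device after a timeout.
     * The kernel launch is hypothetical; cudaDeviceReset() destroys the
     * context of the device current to the calling thread, but it is not
     * guaranteed to abort a kernel stuck in an infinite loop. */
    #include <pthread.h>
    #include <unistd.h>
    #include <cuda_runtime.h>

    static volatile int worker_done = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        /* user_kernel<<<grid, block>>>(...);    hypothetical external kernel */
        cudaDeviceSynchronize();                 /* blocks forever if the kernel loops */
        worker_done = 1;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        for (int waited = 0; waited < 120 && !worker_done; ++waited)
            sleep(1);                            /* the 2-minute watchdog */

        if (!worker_done)
            cudaDeviceReset();                   /* attempt to tear the context down */

        return 0;
    }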

Will killing the CPU thread also kill the kernel running on the GPU? From what I have tested, it doesn't.
Probably not. On Linux you can use cuda-gdb to figure that out.
I don't see the point of sending multiple kernels to the GPU using threads. I wonder what happens if you send multiple kernels to the GPU at a time. Will the GPU's scheduler deal with that?

Related

Linux C: what's the common use case of the "sched_setaffinity" function? I don't find it useful

The operating system is able to determine how to arrange different processes/threads onto different CPU cores; the OS scheduler does this work well. So when do we really need to call functions like sched_setaffinity() for a process, or pthread_setaffinity_np() for a pthread?
It doesn't seem to improve performance dramatically; if it could, then I suppose we would need to rewrite the Linux process scheduler, right?
I just wish to know when I need to call these functions in my applications.
Thanks.
It's very helpful for some computationally intensive real-time processes related to DSP (Digital Signal Processing).
Let's say one real-time DSP-related process, PROCESS0, is running on core CPU0. Because of the scheduling algorithm, CPU0 may be preempted and PROCESS0 migrated to another CPU. This switching of a real-time process is an overhead. Hence affinity: we tell the kernel that PROCESS0 should stay on CPU0.
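For reference, a minimal sketch of how a process pins itself to CPU0 with sched_setaffinity(); pthread_setaffinity_np() is the per-thread equivalent.

    #define _GNU_SOURCE              /* for cpu_set_t, CPU_ZERO, CPU_SET */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);            /* allow CPU 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }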

Threading on Mac OS X, force multiple CPUs

I have just added threading to a large application I have been developing for years. It is written in C and runs on Mac and Linux. This question is about OS X, 10.8.2 or 10.6.8.
Problem: I see the program opening two threads as I expect. However, apparently both threads are running on the same CPU, or at least, I never get more than 100% of a CPU allocated to the program. This almost defeats the entire purpose of having threads.
I use a fair number of mutexes, if that matters.
How can I force the OS to run each thread at 100% of different CPUs? (There are 8 CPUs on this machine.)
The mutexes may matter a lot here. Open up Instruments and run the time profiler instrument on your program after setting it to "record all thread states". This will let you see where your threads are blocked waiting for something (likely a mutex) instead of running.
Multiple running threads will execute concurrently as long as they run on different cores, as each core has its own instance of the scheduler in every Unix-like OS. Being on separate CPU dies matters little; in fact, there's a benefit to sharing resources between threads running on separate cores of the same die.
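To make the mutex point concrete, here is an illustrative sketch (the names are made up, not from the question): if each thread spends its whole loop inside one shared lock, the two threads mostly wait on each other and the process stays near 100% of a single CPU; remove the lock (each thread has its own counter anyway) and both cores can be kept busy.

    #include <pthread.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        unsigned long *counter = arg;
        for (unsigned long i = 0; i < 500000000UL; ++i) {
            pthread_mutex_lock(&big_lock);     /* drop this lock/unlock pair and */
            ++*counter;                        /* the threads really do run in   */
            pthread_mutex_unlock(&big_lock);   /* parallel                       */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        unsigned long ca = 0, cb = 0;          /* one counter per thread, no sharing */
        pthread_create(&a, NULL, worker, &ca);
        pthread_create(&b, NULL, worker, &cb);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }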

Programming a relatively large, threaded application for old systems

Today my boss and I were having a discussion about some code I had written. My code downloads 3 files from a given HTTP/HTTPS link. I had multi-threaded the download so that all 3 files are downloading simultaneously in 3 separate threads. During this discussion, my boss tells me that the code is going to be shipped to people who will most likely be running old hardware and software (I'm talking Windows 2000).
Until this time, I had never considered how a threaded application would scale on older hardware. I realize that if the CPU has only 1 core, threads are useless and may even worsen performance. I have been wondering if this download task is an I/O operation. Meaning, if an API is blocked waiting for information from the HTTP/HTTPS server, will another thread that wants to do some calculation be scheduled meanwhile? Do older OSes do such scheduling?
Another thing he said: Since the code is going to be run on old machines, my application should not eat the CPU. He said use Sleep() calls after CPU intensive tasks to allow other programs some breathing space. Now I was always under the impression that using Sleep() is terrible in any program. Am I wrong? When is using Sleep() justified?
Thanks for looking!
I have been wondering if this download task is an I/O operation. Meaning, if an API is blocked waiting for information from the HTTP/HTTPS server, will another thread that wants to do some calculation be scheduled meanwhile? Do older OSes do such scheduling?
Yes, they do. That's the whole point of blocking I/O: the thread is suspended and other threads get to run until an event wakes the blocked thread up. That's why it makes complete sense to split the work into threads even on single-core machines, instead of doing some poor man's scheduling between the downloads yourself in a single thread.
Of course your downloads affect each other regarding bandwidth, so threading won't speed up the download itself :-)
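A minimal sketch of that point, with the blocking download simulated by sleep(): the calculation thread keeps getting CPU time while the "download" thread is blocked, even on a single core.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *download(void *arg)
    {
        (void)arg;
        sleep(5);                     /* stands in for a blocking recv()/read() */
        puts("download finished");
        return NULL;
    }

    static void *calculate(void *arg)
    {
        (void)arg;
        unsigned long sum = 0;
        for (unsigned long i = 0; i < 100000000UL; ++i)
            sum += i;                 /* CPU-bound work, runs while the other thread is blocked */
        printf("sum = %lu\n", sum);
        return NULL;
    }

    int main(void)
    {
        pthread_t d, c;
        pthread_create(&d, NULL, download, NULL);
        pthread_create(&c, NULL, calculate, NULL);
        pthread_join(d, NULL);
        pthread_join(c, NULL);
        return 0;
    }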
Another thing he said: Since the code is going to be run on old machines, my application should not eat the CPU. He said use Sleep() calls after CPU intensive tasks to allow other programs some breathing space.
Actually, using Sleep() AFTER the task has finished won't help here. Calling Sleep() after a certain amount of calculation (a sort of manual time slicing) before continuing could help, but that only applies to cooperative systems (e.g. Windows 3.11). It plays no role on preemptive systems, where the scheduler uses time slicing to allocate CPU time to threads. There it would be more important to lower the priority of CPU-intensive tasks in order to give other tasks precedence...
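A tiny sketch of that priority idea on a POSIX system; on Windows the rough equivalent would be SetPriorityClass()/SetThreadPriority().

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    void run_background_calculation(void)
    {
        errno = 0;
        if (nice(10) == -1 && errno != 0)   /* -1 can also be a valid new nice value */
            perror("nice");
        /* ... long-running, CPU-intensive calculation ... */
    }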
Now I was always under the impression that using Sleep() is terrible in any program. Am I wrong? When is using Sleep() justified?
This really depends on what you are doing. If you implement some sort of busy wait on a flag that may only be set after a few seconds, it's better to sleep for a while between checks, giving up your scheduled time slice, instead of just burning CPU power repeatedly checking a flag that isn't set yet.
On modern systems there is no sense in introducing Sleep() into a calculation, as it will only slow the calculation down.
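A sketch of the one case mentioned above where sleeping is reasonable, polling a flag; usleep() stands in for the Win32 Sleep() call, and in a real program a condition variable would usually be the better tool.

    #include <stdbool.h>
    #include <unistd.h>

    volatile bool flag_set = false;       /* set by some other thread eventually */

    void wait_for_flag(void)
    {
        while (!flag_set)
            usleep(10 * 1000);            /* give up the time slice for ~10 ms per check */
    }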
Scheduling is the OS scheduler's job; it's the one with the "big picture". In my opinion, every attempt to "do it better" is only valid within the scope of a specific application, where you have an overview of relationships that are not obvious to the scheduler.
Addendum:
I did some research and found that Windows has supported preemptive multitasking since Windows 95. The Windows NT line (to which Windows 2000 belongs) has always supported preemptive multitasking.

Whole one core dedicated to single process

Is there any way in Linux to assign one CPU core to a particular process, such that no other processes or interrupt handlers are scheduled on that core?
I have read about process affinity in Linux ("Binding Processes to CPUs" using the taskset utility), but that doesn't solve my problem: it only pins the given process to that core, while other processes may still be scheduled on the same core, and that is exactly what I want to avoid.
Should we change the kernel code for scheduling?
Yes there is. In fact, there are two separate ways to do it :-)
Right now, the best way to accomplish what you want is to do the following:
Add the parameter isolcpus=[cpu_number] to the Linux kernel command line from the boot loader during boot. This will instruct the Linux scheduler not to run any regular tasks on that CPU unless specifically requested using cpu affinity.
Use IRQ affinity to set other CPUs to handle all interrupts so that your isolated CPU will not receive any interrupts.
Use CPU affinity to fix your specific task to the isolated CPU.
This will give you the best that Linux can provide with regard to CPU isolation without out-of-tree and in-development patches.
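A small sketch of step 3, assuming (purely for illustration) that the machine was booted with isolcpus=3: the task pins its own thread onto the isolated core.

    #define _GNU_SOURCE                   /* for pthread_setaffinity_np() */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int pin_to_isolated_cpu(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);                 /* the core reserved via isolcpus=3 */

        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        return err;
    }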
Your task will still get interrupted from time to time by Linux code, including other tasks such as the timer tick interrupt and the scheduler code, IPIs from other CPUs, and work queue kernel threads, although the interruption should be quite minimal.
For an (almost) complete list of interruption sources, check out my page at https://github.com/gby/linux/wiki
The alternative method is to use cpusets, which is far more elegant and dynamic but suffers from some weaknesses at this point in time (no migration of timers, for example), which is why I recommend the old, crude but effective isolcpus parameter.
Note that work is currently being done by the Linux community to address all these issues and more to give even better isolation.
There is a Red Hat article talking about this; its approach is to modify the isolcpus boot parameter.
There is also an old article written by Robert Love, and it contains a solution:
All of a process' children receive the same CPU affinity mask as their parent. Then, all we need to do is have init bind itself to one processor. All other processes, by nature of init being the root of the process tree and thus the superparent of all processes, are then likewise bound to the one processor.
Dedicate a Whole CPU Core to a Particular Program
While taskset allows a particular program to be assigned to certain CPUs, that does not mean that no other programs or processes will be scheduled on those CPUs. If you want to prevent this and dedicate a whole CPU core to a particular program, you can use the "isolcpus" kernel parameter, which allows you to reserve the CPU core during boot.
Add the "isolcpus=" kernel parameter to the boot loader command line or the GRUB configuration file. The Linux scheduler will then not schedule any regular process on the reserved CPU core(s), unless specifically requested with taskset. For example, to reserve CPU cores 0 and 1, add the "isolcpus=0,1" kernel parameter. After booting, use taskset to assign the reserved CPU cores to your program.
Source(s)
http://xmodulo.com/2013/10/run-program-process-specific-cpu-cores-linux.html
http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re46.html
Even if you follow the steps in gby's answer, kernel tasks are still executed on the isolated CPU core. Work is underway in the Linux RT_PREEMPT real-time project to improve this. So if you are not using a bleeding-edge real-time kernel from RT_PREEMPT, it might not be possible to completely isolate a CPU core.
As per the documentation:
The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs.
There is no mention that a specific processor will be given to the process exclusively.

Parallel Threads in C

I have two threads in my application. Is it possible to execute both threads simultaneously without putting either thread to sleep?
You can run the threads in parallel in your application, especially if they are not waiting on each other for inputs or conditions. For example, one thread may be parsing a file while another is playing a song.
Generally the OS takes care of thread time slicing. At the application level it looks as if these threads are running in parallel, but the OS does the time slicing, giving each thread a certain amount of execution time.
With multi-core processors it is possible to run the threads in parallel in real time; however, the OS decides which threads run where unless you specifically code at a lower level to control which threads run in parallel.
As others have mentioned, with multiple cores it is possible, but it depends on how the OS decides to distribute the threads. You don't have any control, that I have seen, over dictating where each thread is run.
For a really good tutorial, with a nice explanation and pictures, you can look at this page, which includes code showing how to do multi-threading using the POSIX library.
http://www.pathcom.com/~vadco/parallel.html
The time slice for sleeping is hard to see, so your best bet is to test it out: for example, have your two threads each count every millisecond and see whether the two counts are identical. If they are not, then at least one thread is being put to sleep by the CPU.
Most likely both will go to sleep at some point; the test is to see how much of a difference there is between the two threads.
Once a thread blocks, either waiting to send data or waiting to receive, it will be put to sleep so that other threads can run and the OS can keep everything working properly.
C does not, itself, have any means to do multi-threaded code.
However, POSIX has libraries that allow you to work with threads in C.
One good article about this topic is How to write multi-threaded software in C and C++.
Yes, if you have multiple processors or a multi-core processor. One thread will run on one core.
