I am preparing for a certification and trying to understand the difference between processor affinity and I/O affinity. I would be thankful if someone could explain it to me in simple words. I tried to learn about it from the MS links, but got confused. Many thanks
Well, I am no DBA, but as far as I understand, SQL Server is a multi-threaded application: it spawns multiple threads to serve requests.
You can map particular threads to specific CPUs (high-end server machines may run 16 or more CPUs); that is known as processor affinity.
The affinity I/O mask (or I/O affinity) option, on the other hand, binds SQL Server disk I/O to a specified subset of CPUs.
From the MSDN documentation, a specific excerpt:
To carry out multitasking, Microsoft Windows 2000 and Windows Server
2003 sometimes move process threads among different processors.
Although efficient from an operating system point of view, this
activity can reduce Microsoft SQL Server performance under heavy
system loads, as each processor cache is repeatedly reloaded with
data. Assigning processors to specific threads can improve performance
under these conditions by eliminating processor reloads; such an
association between a thread and a processor is called processor
affinity.
SQL Server supports processor affinity by means of two affinity mask
options: affinity mask (also known as CPU affinity mask) and
affinity I/O mask
I have a C program (a graphics benchmark) that runs on a MIPS processor simulator (I'm looking to graph some performance characteristics). The processor has 8 cores, but it seems like core 0 is executing more than its fair share of instructions. The benchmark is multithreaded with the work distributed exactly evenly between the threads. Why could it be that core 0 runs between a quarter and half of the instructions, even though the program is multithreaded on an 8-core processor?
What are some possible reasons this could be happening?
Most application workloads involve some number of system calls, which could block (e.g. for I/O). It's likely that your threads spend some amount of time blocked, and the scheduler simply runs them on the first available core. In an extreme case, if you have N threads but each is able to do work only 1/N of the time, a single core is sufficient to service the entire workload.
You could use pthread_setaffinity_np to assign each thread to a specific core, then see what happens.
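For example, a minimal sketch (the thread count and worker function are placeholders, not from the original benchmark) that pins thread i to core i:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

#define NUM_THREADS 8

static void *worker(void *arg)
{
    /* ... one thread's share of the benchmark work ... */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    cpu_set_t set;

    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_create(&threads[i], NULL, worker, NULL);
        CPU_ZERO(&set);
        CPU_SET(i, &set);                /* bind thread i to core i */
        pthread_setaffinity_np(threads[i], sizeof(set), &set);
    }
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

If the per-core instruction counts even out after pinning, the skew really was just the scheduler's placement rather than uneven work distribution.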
You did not mention which OS you are using.
However, much of the code in most OSs was written with a single-core CPU in mind, so the OS will not necessarily try to distribute processes evenly over the array of cores.
When multiple cores are available, most OSs start a process on the first core that is available (and a blocked process leaves its core available).
As an example, on my system (a 4-core amd64) running Ubuntu Linux 14.04, the CPUs are usually less than 1 percent busy, so everything could run on a single core.
You need lots of applications running, like video playback and long-running background tasks, with several windows open, before you see much real activity on cores other than the first.
What happens when we set different processor affinities for a process and its threads in Linux?
I am trying to start a process affined to a core (say core 1) which has two threads, one of which needs to run on another core (say core 0).
When I tried setting the thread's affinity to a core different from the process's, the program executed fine, but I want to know the hidden impacts of this approach.
Threads and processes are largely the same thing. Whether you call pthread_setaffinity... or use the sched_setaffinity syscall, both affect the current thread's affinity mask. This may be your "process" thread, or a thread you created.
However, note that a new thread created by pthread_create inherits a copy of its creator's CPU affinity mask [1].
That means that setting the affinity and creating a thread is not the same as creating a thread and setting the affinity. In the first case, both threads will compete over the same processor (which is most probably not what you want) and in the second case they will be bound to different processors.
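A minimal sketch of the two orderings (core numbers 0 and 1 are arbitrary examples):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *worker(void *arg)
{
    /* ... */
    return NULL;
}

int main(void)
{
    pthread_t t;
    cpu_set_t set;

    /* Case 1: set affinity first, then create. The new thread inherits
       the creator's mask, so both threads are bound to core 1. */
    CPU_ZERO(&set);
    CPU_SET(1, &set);
    sched_setaffinity(0, sizeof(set), &set);   /* 0 = calling thread */
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);

    /* Case 2: create first, then set the new thread's affinity. The main
       thread stays on core 1, while the worker is bound to core 0. */
    pthread_create(&t, NULL, worker, NULL);
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);
    pthread_join(t, NULL);
    return 0;
}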
Also note that while binding threads to a dedicated processor (core) may have advantages in some situations, it may just as well be a very stupid thing to do. Playing with the affinity mask means you limit the scheduler in what it can do to make your program run. If the core you bound your thread to isn't available, your thread will not run, end of story.
This is very similar reasoning/strategy to disabling swap to make the system "faster" (some users still do that!). By doing so they usually gain nothing; all they do is limit what the memory manager can do, by removing the option of providing a free page once it runs out of unused physical RAM. Usually this means something more or less valuable from the buffer cache is purged, when instead some private page that hadn't been used in hours could have been swapped out.
Usually people use affinity because they have this idea that the scheduler will make threads bounce between processor cores all the time, and that this is bad. Processor migration indeed is not cheap, but the scheduler has a mechanism which makes sure it does not happen before a certain minimum amount of time (there is a /proc tunable for that, too). After a longer amount of time, all the advantages of staying on the old core (TLB, cache) are usually gone anyway, so running on a different core which is readily available is actually better than waiting for a particular core to maybe, eventually, become available.
NUMA architectures may be a different topic, but I'd assume (though I don't know for sure) that the scheduler is smart enough not to silently migrate a thread to a different node. In general, however, I'd recommend not to play with affinity at all.
Affinity is a common first-line approach to limiting jitter in HPC. Typically, Linux processes, threads, and the like are constrained to a small but sufficient set of CPUs, and the application is constrained to the remainder of the CPUs.
Affinity is very useful with device drivers. Consider, for example, an InfiniBand adapter being used by an application. The adapter will perform best if the driver thread(s) are constrained to CPUs on the same NUMA node as the adapter (or the closest one, if none). Linux doesn't know about the application's threads, so it can't take affinity into account for performance on its own.
I have the Linphone open-source application, which uses the x264 encoder. By default it runs on one thread:
x264_param_t *params = .....   /* encoder parameters, obtained elsewhere */
params->i_threads = 1;         /* a single encoding thread by default */
I added the ability to use all processors:
long num_cpu = 1;
SYSTEM_INFO sysinfo;
GetSystemInfo(&sysinfo);                  /* Win32: query system information */
num_cpu = sysinfo.dwNumberOfProcessors;   /* number of logical processors */
params->i_threads = num_cpu;              /* one encoding thread per processor */
The question is: how do I know that during video streaming x264 actually runs on (in my case) all 4 processors?
The Task Manager -> Performance -> CPU usage history graph doesn't make it clear to me.
I use Windows 7.
Thanks,
There are three easy-to-see indications that encoding leverages multiple cores:
Encoding runs faster
Per core CPU load indicates simultaneous load on several cores/processors
Per thread CPU load of your application shows relevant load on multiple threads
Also, you can use the processor affinity mask (programmatically, or via Task Manager) to limit the application to a single CPU. If x264 is using multiple processors, setting the mask will seriously affect application performance.
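A minimal Win32 sketch of the programmatic route (the mask value is just an example; a mask of 1 means CPU 0 only):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 1;   /* bit 0 set -> the process may only use CPU 0 */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... start the encode here and compare throughput with the
       unrestricted run ... */
    return 0;
}

If throughput drops sharply with the restricted mask, the encoder was indeed using more than one core.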
In windows task manager, be sure to select View -> CPU History -> One Graph Per CPU. If it still does not look like all processor cores are running at full speed, then possibly some resource is starving the encoding threads, and there's a bottleneck feeding data into the encoder.
Is there any way in Linux to assign one CPU core to a particular process, so that no other processes or interrupt handlers are scheduled on that core?
I have read about process affinity in Linux (Binding Processes to CPUs using the taskset utility), but that doesn't solve my problem: it only affines the given process to that core, while other processes may still be scheduled on the same core, and this is what I want to avoid.
Should we change the kernel code for scheduling?
Yes there is. In fact, there are two separate ways to do it :-)
Right now, the best way to accomplish what you want is to do the following:
Add the parameter isolcpus=[cpu_number] to the Linux kernel command line from the boot loader during boot. This will instruct the Linux scheduler not to run any regular tasks on that CPU unless specifically requested using cpu affinity.
Use IRQ affinity to set other CPUs to handle all interrupts so that your isolated CPU will not receive any interrupts.
Use CPU affinity to fix your specific task to the isolated CPU (a sketch of this step follows below).
This will give you the best that Linux can provide with regard to CPU isolation without out-of-tree and in-development patches.
Your task will still get interrupted from time to time by Linux code, including other tasks, such as the timer tick interrupt and the scheduler code, IPIs from other CPUs, and things like work queue kernel threads, although these interruptions should be quite minimal.
For an (almost) complete list of interruption sources, check out my page at https://github.com/gby/linux/wiki
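For step 3, a minimal sketch (core 3 is an assumption; it should match whatever you passed to isolcpus):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(3, &set);        /* the core isolated with isolcpus=3 */

    /* pid 0 means "the calling process". */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... the latency-sensitive work runs here, alone on core 3 ... */
    return 0;
}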
The alternative method is to use cpusets, which is far more elegant and dynamic, but which suffers from some weaknesses at this point in time (no migration of timers, for example) that make me recommend the old, crude, but effective isolcpus parameter.
Note that work is currently being done by the Linux community to address all these issues and more to give even better isolation.
There is a Red Hat article talking about it. It uses the isolcpus boot parameter.
There is also an old article written by Robert Love, which offers a solution:
All of a process' children receive the same CPU affinity mask as their
parent.
Then, all we need to do is have init bind itself to one processor.
All other processes, by nature of init being the root of the process
tree and thus the superparent of all processes, are then likewise
bound to the one processor.
Dedicate a Whole CPU Core to a Particular Program
While taskset allows a particular program to be assigned to certain CPUs, that does not mean that no other programs or processes will be scheduled on those CPUs. If you want to prevent this and dedicate a whole CPU core to a particular program, you can use the "isolcpus" kernel parameter, which allows you to reserve the CPU core during boot.
Add the "isolcpus=" kernel parameter to the boot loader during boot or to the GRUB configuration file. The Linux scheduler will then not schedule any regular process on the reserved CPU core(s), unless specifically requested with taskset. For example, to reserve CPU cores 0 and 1, add the "isolcpus=0,1" kernel parameter. Upon boot, use taskset to safely assign the reserved CPU cores to your program.
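To confirm from inside the program which cores it is actually allowed to run on (for example after "taskset -c 0,1 ./myprog"), a minimal sketch using sched_getaffinity:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    if (sched_getaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_getaffinity");
        return 1;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &set))
            printf("may run on CPU %d\n", cpu);
    return 0;
}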
Source(s)
http://xmodulo.com/2013/10/run-program-process-specific-cpu-cores-linux.html
http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re46.html
Even if you follow the steps in gby's answer, kernel tasks are still executed on the isolated CPU core. Work is underway in the Linux PREEMPT_RT real-time project to improve this. So if you are not using a bleeding-edge real-time kernel with the PREEMPT_RT patches, it might not be possible to completely isolate a CPU core.
As per the documentation:
The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs.
There is no mention that a specific processor will be given to a process exclusively.
Can setting the CPU affinity in Linux for a multithreaded program, where each thread runs on its own core, effectively block any other process from being scheduled by the OS on that core? Effectively, I want to guarantee the use of a core for my process and have all other non-critical programs bound to a minimal number of cores.
Or am I missing something about the Linux scheduler, or maybe I need my own?
Can setting the cpu affinity in linux for a multithreaded program
where each thread runs on each core effectively block any other
process from being scheduled by the os on that core
No, setting the cpu affinity prevents the scheduler from using some cores for your threads. That is, it will only schedule your threads on certain cores - it doesn't do anything to other threads.
You can probably achieve what you want using setpriority. If your requirements are that stringent, you might look into sched_setscheduler and choose SCHED_RR or SCHED_FIFO.
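A minimal sketch of the SCHED_FIFO route (the priority value 50 is an arbitrary example; this requires root or CAP_SYS_NICE, and a runaway SCHED_FIFO task can starve the system):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };   /* 1..99 for SCHED_FIFO */

    /* pid 0 means "the calling process". */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }

    /* ... time-critical work here ... */
    return 0;
}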
When the scheduler is actively involved, taskset and nice only express your preferences: nice is merely a hint, and within the allowed affinity mask the scheduler is still free to reschedule any of your threads on any of the permitted cores based on the workload. You can use perf to monitor context switches and CPU migrations.
You have two options:
You can force the scheduler to follow your orders through sched_setscheduler, as user417896 suggested.
You can use cgroups/cpusets to define two cpusets, say system and isolated, isolate the target cores by moving all the system threads into the system cpuset, and run your program with cgexec on the isolated cpuset. You can assign cores and memory to a cpuset, and to make the isolation stick you set its cpu_exclusive bit and you are all set. You can also use cset (http://code.google.com/p/cpuset/) if you are using older kernels to automate this process for you (a rough sketch of the idea follows this list).
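As a rough illustration of the cpuset approach (the mount point, cpuset name, and core/memory-node numbers are all assumptions; this uses the cgroup-v1 cpuset layout and must run as root):

#include <stdio.h>
#include <sys/stat.h>

/* Helper: write a string into a cpuset control file. */
static int write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(value, f);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Creating the directory creates the cpuset. */
    mkdir("/sys/fs/cgroup/cpuset/isolated", 0755);

    write_file("/sys/fs/cgroup/cpuset/isolated/cpuset.cpus", "3");
    write_file("/sys/fs/cgroup/cpuset/isolated/cpuset.mems", "0");
    write_file("/sys/fs/cgroup/cpuset/isolated/cpuset.cpu_exclusive", "1");

    /* Writing "0" to the tasks file moves the writing task itself
       into the cpuset. */
    write_file("/sys/fs/cgroup/cpuset/isolated/tasks", "0");

    /* ... the isolated work runs here ... */
    return 0;
}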
I hope it helps.