I have the Linphone open-source application, which uses the x264 encoder. By default it runs on one thread:
x264_param_t *params= .....
params->i_threads=1;
I added the ability to use all processors:
long num_cpu=1;
SYSTEM_INFO sysinfo;
GetSystemInfo( &sysinfo );
num_cpu = sysinfo.dwNumberOfProcessors;
params->i_threads=num_cpu;
The question is: how do I know that during video streaming x264 actually runs on (in my case) 4 processors?
It isn't clear from Task Manager -> Performance -> CPU usage history.
I use Windows 7.
Thanks,
There are three easy-to-see indications that encoding leverages multiple cores:
Encoding runs faster
Per core CPU load indicates simultaneous load on several cores/processors
Per thread CPU load of your application shows relevant load on multiple threads
Also, you can use the processor affinity mask (programmatically, or via Task Manager) to limit the application to a single CPU. If x264 is using multiple processors, setting the mask will noticeably reduce application performance.
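A minimal sketch of doing that programmatically on Windows (assuming you call this early in your own test build; the 0x1 mask pins the process to the first logical CPU):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Restrict the current process to CPU 0 only (mask bit 0 set). */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1))
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());

    /* ... start the Linphone/x264 encoding session here and compare
       the achieved frame rate against an unrestricted run ... */
    return 0;
}

If encoding speed drops sharply with the mask set, x264 really was spreading work across the other cores.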
In Windows Task Manager, be sure to select View -> CPU History -> One Graph Per CPU. If it still does not look like all processor cores are running at full speed, then possibly some resource is starving the encoding threads, and there's a bottleneck feeding data into the encoder.
The setup is as follows: a WinForms app (Visual Studio 2019) creates 16 VideoView/MediaPlayer instances, each streaming a 960 x 540, 30 fps camera stream from a multicasting camera.
CPU: i7 2.67 GHz; GPU: NVIDIA GTX 1650.
The GPU load goes up to 44% for decode and about the same for 3D. The application uses an amazing 75 to 90% of the CPU, and that figure jumps around a lot from one test run to another. The GPU load is very stable.
Here is some other interesting information: if I run a single copy of this application with one video stream, the CPU use is about 5/10% of the CPU. If I run 16 instances of the application, each instance uses about 4/10 to 8/10% of the CPU. Once I have 16 videos streaming, the GPU load is the same as above (44%) and the CPU load is nominal.
The increase in CPU usage within one instance while adding cameras is not linear; it takes a big jump after 9.
From the diagnostic image below you can see the usage is isolated almost entirely in the Native code. Other diagrams show about 2/3 in the kernel and 1/3 in system IO. The CPU is spread across all the cores pretty evenly.
code on gist
I have tried a lot of variations on this, but no matter what I try the CPU usage is pretty constant once I get up to 16 channels. I have tried running each instance within its own thread; that made no difference. I really would like to understand this and find a way to reduce CPU usage. I have an application that uses this tech and a customer that requires even more than 16 channels.
It may be a bug, which would need to be reported on trac.videolan.org with a minimal C/C++ reproduction sample for the VLC developers.
Do note that comparing 16 VLC app instances (16 processes) playing and 1 LibVLC-based app instance playing 16 streams (1 process) is not exactly a fair comparison.
The perf usage should still be linear and not exponential, though, so maybe there is a bug.
I have a C program (a graphics benchmark) that runs on a MIPS processor simulator (I'm looking to graph some performance characteristics). The processor has 8 cores, but it seems like core 0 is executing more than its fair share of instructions. The benchmark is multithreaded, with the work distributed exactly evenly between the threads. Why could it be that core 0 runs between about a quarter and half of the instructions, even though the program is multithreaded on an 8-core processor?
What are some possible reasons this could be happening?
Most application workloads involve some number of system calls, which could block (e.g. for I/O). It's likely that your threads spend some amount of time blocked, and the scheduler simply runs them on the first available core. In an extreme case, if you have N threads but each is able to do work only 1/N of the time, a single core is sufficient to service the entire workload.
You could use pthread_setaffinity_np to assign each thread to a specific core, then see what happens.
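A minimal sketch of that with pthread_setaffinity_np (Linux-specific; it assumes thread i should simply be pinned to core i, which may or may not match your simulator's numbering):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one core so the per-core instruction
   counts reflect the intended work distribution. */
static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        fprintf(stderr, "failed to pin thread to core %d\n", core);
}

static void *worker(void *arg)
{
    pin_to_core((int)(long)arg);   /* thread i -> core i */
    /* ... this thread's share of the benchmark work ... */
    return NULL;
}

int main(void)
{
    enum { NTHREADS = 8 };
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}

If the instruction counts even out after pinning, the imbalance was just the scheduler favouring the first available core.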
You did not mention which OS you are using.
However, most of the code in most OSs is still written for a single core CPU.
Therefore, the OS will not try to evenly distribute the processes over the array of cores.
When there are multiple cores available, most OSs start a process on the first core that is available (and a blocked process leaves the related core available.)
As an example, on my system (a 4-core AMD64) running Ubuntu Linux 14.04, the CPUs are usually less than 1 percent busy, so everything could run on a single core.
There would have to be lots of applications running, like videos and long-running background applications, with several windows open, to show much real activity on cores other than the first.
I am preparing for a certification and trying to understand the difference between processor affinity and I/O affinity. I would be thankful if someone could explain it to me in simple words. I tried to learn about it from the MS links, but got confused. Many thanks.
Well, I am no DBA, but as far as I understand, SQL Server is a multi-threaded application: it spawns multiple threads to serve requests.
You can specify/map particular thread(s) to work on specific CPU(s) (since high-end server machines will run 16 or more CPUs); that is known as processor affinity.
The affinity I/O mask (I/O affinity) option, on the other hand, binds SQL Server disk I/O to a specified subset of CPUs.
From the MSDN documentation, a specific excerpt:
To carry out multitasking, Microsoft Windows 2000 and Windows Server 2003 sometimes move process threads among different processors. Although efficient from an operating system point of view, this activity can reduce Microsoft SQL Server performance under heavy system loads, as each processor cache is repeatedly reloaded with data. Assigning processors to specific threads can improve performance under these conditions by eliminating processor reloads; such an association between a thread and a processor is called processor affinity.

SQL Server supports processor affinity by means of two affinity mask options: affinity mask (also known as CPU affinity mask) and affinity I/O mask.
I am thinking about an idea where a legacy application needs to run at full performance on a Core i7 CPU. Is there any Linux software/utility to combine all cores for that application, so it can process at higher performance than when using only 1 core?
The application is readpst, and it only uses 1 core for processing Outlook PST files.
It's OK if I can't use all cores; it would be fine if I could use, say, 3 cores.
Possible? Or am I drunk?
I will rewrite it to use multiple cores if my C knowledge of forking multiple processes is good enough.
Intel Nehalem-based CPUs (i7, i5, i3) already do this to an extent.
By using their Turbo Boost mode, when only a single core is being used it is automatically overclocked until the power and temperature limits are reached.
The newer versions of the i7 (the 2K chips) do this even better.
Read this, and this.
"Possible? or am i drunk?"
You're drunk! If this was easy in the general case, Intel would have built it into the processors by now!
What you're looking for is called 'Single System Image' or SSI. There is scant information on the internet about people doing such a thing, as it tends to be reserved for supercomputing (and perhaps servers).
http://en.wikipedia.org/wiki/Single_system_image
No, the application needs to be multi-threaded to use more than one core. You're of course free to write a multi-threaded version of that application if you wish, but it may not be easy to make sure the different threads don't mess each other up.
If you want it to take advantage of multiple cores, then you could write a multi-threaded version of your program, but only if the work is actually parallelizable. You said you were reading from PST files; take care not to run into I/O bottlenecks.
A great library for working with threads, mutexes, semaphores, and so on is POSIX Threads.
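A minimal sketch of that pattern with POSIX Threads, splitting a list of PST files across a few worker threads (process_file is a hypothetical placeholder for the real per-file work; this only helps if you have several independent files or chunks to process, and disk I/O may still be the limit):

#include <pthread.h>
#include <stdio.h>

/* Hypothetical per-file work; in a real port this would call into the
   readpst processing code for one file. */
static void process_file(const char *path)
{
    printf("processing %s\n", path);
}

struct job { const char **files; int begin, end; };

/* Each worker handles its own slice of the file list. */
static void *worker(void *arg)
{
    struct job *j = arg;
    for (int i = j->begin; i < j->end; i++)
        process_file(j->files[i]);
    return NULL;
}

int main(void)
{
    const char *files[] = { "a.pst", "b.pst", "c.pst", "d.pst", "e.pst", "f.pst" };
    enum { NFILES = 6, NTHREADS = 3 };
    pthread_t t[NTHREADS];
    struct job jobs[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        jobs[i].files = files;
        jobs[i].begin = i * NFILES / NTHREADS;
        jobs[i].end   = (i + 1) * NFILES / NTHREADS;
        pthread_create(&t[i], NULL, worker, &jobs[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}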
There isn't such an application available, but it is possible.
When an OS runs in a VM, the hypervisor could use a few CPUs to identify which parts of the code could run in parallel and are not required to run sequentially, and those parts could then actually be executed by a few other CPUs at once.
In the next second, when the executing CPUs are idle (because they finished their work faster than the manager can provide them with new work), they can start calculating the next second of instructions.
The reason this would need to be done at the hypervisor level, and not within the OS, is that because of memory locking it wouldn't be possible within the OS.
What would happen if four concurrent CUDA applications compete for resources on one single GPU so they can offload work to the graphics card? The CUDA Programming Guide 3.1 mentions that certain methods are asynchronous:
Kernel launches
Device-to-device memory copies
Host-to-device memory copies of a memory block of 64 KB or less
Memory copies performed by functions that are suffixed with Async
Memory set function calls
It also mentions that devices with compute capability 2.0 are able to execute multiple kernels concurrently, as long as the kernels belong to the same context.
Does this type of concurrency only apply to streams within a single CUDA application, and is it not possible when completely different applications are requesting GPU resources?
Does that mean that concurrency support is only available within 1 application (context???), and that the 4 applications will just run concurrently in the sense that their calls might be overlapped by context switching on the CPU, but each application has to wait until the GPU is freed by the others? (i.e., a kernel launch from app4 waits until a kernel launch from app1 finishes.)
If that is the case, how might these 4 applications access GPU resources without suffering long waiting times?
As you said, only one "context" can occupy each of the engines at any given time. This means that one of the copy engines can be serving a memcpy for application A, the other a memcpy for application B, and the compute engine can be executing a kernel for application C (for example).
An application can actually have multiple contexts, but no two applications can share the same context (although threads within an application can share a context).
Any application that schedules work to run on the GPU (i.e. a memcpy or a kernel launch) can schedule the work asynchronously so that the application is free to go ahead and do some other work on the CPU and it can schedule any number of tasks to run on the GPU.
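A minimal sketch of that asynchronous pattern with the CUDA runtime API (the buffer size and the placement of the CPU work are illustrative assumptions):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t n = 1 << 20;
    float *h_buf, *d_buf;
    cudaStream_t stream;

    /* Pinned host memory is needed for the copy to truly overlap with
       CPU work and with other GPU activity. */
    cudaMallocHost((void **)&h_buf, n * sizeof(float));
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaStreamCreate(&stream);

    /* Queue the copy and return immediately. */
    cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);

    /* ... do unrelated CPU work here while the copy engine runs ... */

    /* Block only when the result is actually needed. */
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}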
Note that it is also possible to put the GPUs in exclusive mode whereby only one context can operate on the GPU at any time (i.e. all the resources are reserved for the context until the context is destroyed). The default is shared mode.
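If you want to check from code which mode a device is in, the runtime exposes it through the device properties (a small sketch; the mode itself is set by the administrator, e.g. with nvidia-smi, not by this query):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    struct cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }

    /* computeMode tells you whether contexts from other applications
       can share this GPU with yours. */
    switch (prop.computeMode) {
    case cudaComputeModeDefault:    printf("shared (default) mode\n"); break;
    case cudaComputeModeExclusive:  printf("exclusive mode\n");        break;
    case cudaComputeModeProhibited: printf("prohibited mode\n");       break;
    default:                        printf("other mode\n");            break;
    }
    return 0;
}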