Performance changes significantly every time I rerun my program - C

I have Raspberry Pi-like hardware running a basic Linux operating system with only one program, called "Main_Prog", running.
Every time I run a performance test against Main_Prog I get less than 1% performance fluctuation. This is perfectly acceptable.
When I kill Main_Prog using the kill command and restart it, the performance changes by up to 8%. Further performance tests then vary by less than 1% around this new level.
Example
For example, suppose Main_Prog at first ran at 100 calls/sec and varied between 99-101 calls/sec.
I then killed Main_Prog with the "kill" command and restarted it using "./Main_Prog &". Running the performance test again, Main_Prog now runs at 105 calls/sec with a 104-106 calls/sec fluctuation. It will continue to run at 104-106 calls/sec until I kill Main_Prog and start it again.
Any idea how to prevent this fluctuation, or what is happening? Remember, it is VERY consistent. No other programs are running on the operating system.

Your temporary fluctuation might be related to the page cache. I would not bother (the change is insignificant). See also http://www.linuxatemyram.com/
You might prefill the page cache, e.g. by running something like wc Main_Prog before running ./Main_Prog.
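A minimal sketch of the same prefill idea in C, assuming the executable sits at ./Main_Prog as in the question: posix_fadvise(POSIX_FADV_WILLNEED) asks the kernel to pull the file into the page cache before the real run.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("./Main_Prog", O_RDONLY);   /* path taken from the question */
        if (fd == -1) { perror("open"); return 1; }

        /* Hint the kernel to read the whole file into the page cache now. */
        int err = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
        if (err != 0)
            fprintf(stderr, "posix_fadvise: %d\n", err);

        close(fd);
        return 0;
    }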
And you probably still do have some other executable programs & processes on your Raspberry Pi (check with top or ps auxw). I guess that /sbin/init is still running at pid 1. And probably your shell is running too.
It is quite unusual to have a Linux system with only one process. To get that, you should replace /sbin/init with your program, and I really don't recommend that, especially if you don't know Linux very well.
Since there are several processes running in your box, and because the kernel scheduler is preempting tasks at arbitrary moments, its behavior is not entirely reproducible, and that explains the observed fluctuation.
Read also about real-time scheduling, setpriority(2), sched_setscheduler(2), pthread_setschedparam(3), readahead(2), mlock(2), madvise(2), posix_fadvise(2)
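As a hedged sketch of how two of those calls could be combined (this is not from the original answer, and the priority value 50 is just illustrative): give the process a fixed real-time priority and lock its pages into RAM, so neither scheduling nor paging perturbs the measurements. This needs root or CAP_SYS_NICE, and a runaway SCHED_FIFO task can freeze the box.

    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };   /* 1..99 for SCHED_FIFO */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
            perror("sched_setscheduler");                   /* fall back to normal scheduling */

        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
            perror("mlockall");

        /* ... run the actual workload here ... */
        return 0;
    }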
If you are mostly interested in benchmarking, the sensible way is to repeat the same benchmark several times (e.g. 4 to 15 times) and either take the minimum, or the maximum, or the average.
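If it helps, a minimal timing harness along those lines might look like the following, where workload() is a hypothetical stand-in for one test run; taking the minimum of several runs filters out scheduler and cache noise.

    #include <stdio.h>
    #include <time.h>

    static void workload(void) { /* ... the code being benchmarked ... */ }

    int main(void)
    {
        double best = 1e9;
        for (int i = 0; i < 10; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            workload();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            if (s < best)
                best = s;
        }
        printf("best of 10 runs: %.6f s\n", best);
        return 0;
    }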

Related

Why does one core execute more than its share of instructions?

I have a C program (a graphics benchmark) that runs on a MIPS processor simulator (I'm looking to graph some performance characteristics). The processor has 8 cores, but it seems like core 0 is executing more than its fair share of instructions. The benchmark is multithreaded, with the work distributed exactly evenly between the threads. Why could it be that core 0 runs between about a quarter and half of the instructions, even though the program is multithreaded on an 8-core processor?
What are some possible reasons this could be happening?
Most application workloads involve some number of system calls, which could block (e.g. for I/O). It's likely that your threads spend some amount of time blocked, and the scheduler simply runs them on the first available core. In an extreme case, if you have N threads but each is able to do work only 1/N of the time, a single core is sufficient to service the entire workload.
You could use pthread_setaffinity_np to assign each thread to a specific core, then see what happens.
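A hedged sketch of that experiment, assuming a glibc-style environment (pthread_setaffinity_np is a non-portable GNU extension, so the simulator's libc may differ); each worker pins itself to its own core before doing its share of the work. If the instruction counts even out after pinning, the imbalance was purely the scheduler's placement; if not, the work split itself is worth a second look.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void pin_self_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);

        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np: %d\n", err);
    }

    static void *worker(void *arg)
    {
        pin_self_to_core(*(int *)arg);   /* each worker binds itself to its own core */
        /* ... the benchmark's per-thread work goes here ... */
        return NULL;
    }

    int main(void)
    {
        enum { NTHREADS = 8 };           /* matches the 8-core processor in the question */
        pthread_t tid[NTHREADS];
        int core[NTHREADS];

        for (int i = 0; i < NTHREADS; i++) {
            core[i] = i;
            pthread_create(&tid[i], NULL, worker, &core[i]);
        }
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }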
You did not mention which OS you are using.
However, most of the code in most OSs is still written for a single core CPU.
Therefore, the OS will not try to evenly distribute the processes over the array of cores.
When there are multiple cores available, most OSs start a process on the first core that is available (and a blocked process leaves the related core available).
As an example, on my system (a 4-core AMD64) running Ubuntu Linux 14.04, the CPUs are usually less than 1 percent busy, so everything could run on a single core.
You would need lots of applications running, such as videos and long-running background applications with several windows open, to see much real activity on cores other than the first.

Programming a relatively large, threaded application for old systems

Today my boss and I were having a discussion about some code I had written. My code downloads 3 files from a given HTTP/HTTPS link. I had multi-threaded the download so that all 3 files are downloading simultaneously in 3 separate threads. During this discussion, my boss tells me that the code is going to be shipped to people who will most likely be running old hardware and software (I'm talking Windows 2000).
Until this time, I had never considered how a threaded application would scale on older hardware. I realize that if the CPU has only 1 core, threads are useless and may even worsen performance. I have been wondering if this download task is an I/O operation. Meaning, if an API is blocked waiting for information from the HTTP/HTTPS server, will another thread that wants to do some calculation be scheduled meanwhile? Do older OSes do such scheduling?
Another thing he said: Since the code is going to be run on old machines, my application should not eat the CPU. He said use Sleep() calls after CPU intensive tasks to allow other programs some breathing space. Now I was always under the impression that using Sleep() is terrible in any program. Am I wrong? When is using Sleep() justified?
Thanks for looking!
I have been wondering if this download task is an I/O operation. Meaning, if an API is blocked waiting for information from the HTTP/HTTPS server, will another thread that wants to do some calculation be scheduled meanwhile? Do older OSes do such scheduling?
Yes, they do. That's the whole point of blocking I/O: the thread is suspended, and other calculations (threads) take place until an event wakes up the blocked thread. That's why it makes complete sense to split the work into threads even on single-core machines, instead of doing some poor man's scheduling between the downloads yourself in a single thread.
Of course your downloads affect each other's bandwidth, so threading won't help speed up the download itself :-)
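A minimal sketch of that structure with POSIX threads (the same shape works with Win32 CreateThread on the Windows 2000 targets mentioned in the question); download_one() and the URLs are hypothetical stand-ins for the real HTTP/HTTPS download routine.

    #include <pthread.h>
    #include <stdio.h>

    static void *download_one(void *arg)
    {
        const char *url = arg;
        /* Blocking socket reads happen here; while this thread waits,
           the scheduler runs the other download threads. */
        printf("downloading %s\n", url);
        return NULL;
    }

    int main(void)
    {
        const char *urls[3] = { "https://example.com/a",
                                "https://example.com/b",
                                "https://example.com/c" };
        pthread_t tid[3];

        for (int i = 0; i < 3; i++)
            pthread_create(&tid[i], NULL, download_one, (void *)urls[i]);
        for (int i = 0; i < 3; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }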
Another thing he said: Since the code is going to be run on old machines, my application should not eat the CPU. He said use Sleep() calls after CPU intensive tasks to allow other programs some breathing space.
Actually, using Sleep() AFTER the task has finished won't help here. Calling Sleep() after a certain amount of calculation (a sort of manual time slicing) before going on with the calculation could help, but only on cooperative systems (e.g. Windows 3.11). It plays no role on preemptive systems, where the scheduler uses time slicing to allocate CPU time to threads. There it is more important to think about lowering the priority of CPU-intensive tasks in order to give other tasks precedence...
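As a hedged Win32 sketch of that priority idea (the priority level chosen is just an example): the worker lowers its own thread priority before the heavy work, and the preemptive scheduler then favours interactive tasks without any Sleep() calls at all.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Lower this thread's priority before the CPU-heavy work starts. */
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL))
            fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

        /* ... CPU-intensive work runs here at reduced priority ... */
        return 0;
    }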
Now I was always under the impression that using Sleep() is terrible in any program. Am I wrong? When is using Sleep() justified?
This really depends on what you are doing. If you implement a sort of busy wait for a flag that may only be set after a few seconds, it is better to go to sleep for a while between checks, giving up your scheduled time slice, instead of just burning CPU power rechecking a flag that has not been set yet.
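A minimal Win32 sketch of that polling case (the flag, the worker, and the timings are purely illustrative); in real code a proper event object with WaitForSingleObject would avoid the polling entirely.

    #include <windows.h>

    volatile LONG g_done = 0;        /* set to 1 by another thread when the result is ready */

    DWORD WINAPI worker(LPVOID arg)  /* stand-in for the thread that sets the flag */
    {
        Sleep(3000);                 /* pretend to work for ~3 seconds */
        InterlockedExchange(&g_done, 1);
        return 0;
    }

    int main(void)
    {
        HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);

        while (!g_done)              /* the polling loop described above */
            Sleep(10);               /* yield the time slice instead of spinning */

        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
        return 0;
    }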
On modern systems there is no sense in introducing Sleep() into a calculation, as it will only slow the calculation down.
Scheduling is the job of the OS's scheduler; it is the one with the "big picture". In my opinion, any attempt to "do it better" is only valid within the scope of a specific application, where you have an overview of relationships that are not obvious to the scheduler.
Addendum:
I did some research and found that Windows has supported preemptive multitasking since Windows 95. The Windows NT line (to which Windows 2000 belongs) has always supported preemptive multitasking.

Why doesn't a for(;;){} crash my computer anymore?

A few years ago, when I was doing my degree, my teacher told me that if I made an infinite loop in C it would crash my computer, making it use all processor resources for nothing, and I would need to reboot my system to make things right again. Today I tested the same situation on my Windows 7 computer and saw that my computer didn't crash and my processor resources were just "idle". What changed in the last 5 years to alter this specific behavior?
An infinite loop will only "crash" the OS if the OS doesn't support preemptive multitasking. In any decent OS the scheduler will make that process take a break once in a while and allow other stuff to run.
At any rate, if the resource usage is low, look at the generated code - the compiler might have done something smart like optimizing the whole thing away.
Your teacher told you something that wasn't true to begin with, so it isn't surprising that it doesn't happen.
At most, an infinite loop will make your CPU go to 100% but on any modern operating system other processes will still get time slices and you can easily kill it. An OS would not be of much use if a simple mistake by a programmer made the whole machine hang so easily.
Multi-core processors are in common use now unlike 8 years ago, which means that a single inflooping process would only tie up a single core nowadays and leave the rest of the cores free to do other work. Even so, you'd have to be running a pretty lousy operating system to allow a single busy looping process to tie up the whole system.
Windows has had a preemptive multitasker since W95. Even on a single-CPU box, one looping thread would still leave the box usable (though slower), certainly usable enough to shut down the offending process in the usual way or start the task manager and kill off the process.
To truly bork your box, raise the thread and process priorities to real-time and create as many threads as there are cores (save your work first).

What is the best way to avoid overloading a parallel file-system when running embarrassingly parallel jobs?

We have a problem which is embarrassingly parallel - we run a large number of instances of a single program with a different data set for each; we do this simply by submitting the application many times to the batch queue with different parameters each time.
However with a large number of jobs, not all of them complete. It does not appear to be a problem in the queue - all of the jobs are started.
The issue appears to be that with a large number of instances of the application running, lots of jobs finish at roughly the same time and thus all try to write out their data to the parallel file-system at pretty much the same time.
The issue then seems to be that either the program is unable to write to the file-system and crashes in some manner, or just sits there waiting to write and the batch queue system kills the job after it's been sat waiting too long. (From what I have gathered on the problem, most of the jobs that fail to complete, if not all, do not leave core files)
What is the best way to schedule disk writes to avoid this problem? I mention that our program is embarrassingly parallel to highlight the fact that each process is not aware of the others - they cannot talk to each other to schedule their writes in some manner.
Although I have the source-code for the program, we'd like to solve the problem without having to modify this if possible as we don't maintain or develop it (plus most of the comments are in Italian).
I have had some thoughts on the matter:
Each job writes to the local (scratch) disk of the node at first. We can then run another job which checks every now and then which jobs have completed and moves the files from the local disks to the parallel file-system.
Use an MPI wrapper around the program in a master/slave system, where the master manages a queue of jobs and farms these off to each slave; the slave wrapper runs the application and catches the exception (could I do this reliably for a file-system timeout in C++, or possibly Java?), and sends a message back to the master to re-run the job.
In the meantime I need to pester my supervisors for more information on the error itself - I've never run into it personally, but I haven't had to use the program for a very large number of datasets (yet).
In case it's useful: we run Solaris on our HPC system with the SGE (Sun GridEngine) batch queue system. The file-system is NFS4, and the storage servers also run Solaris. The HPC nodes and storage servers communicate over fibre channel links.
Most parallel file systems, particularly those at supercomputing centres, are targeted at HPC applications rather than serial-farm type stuff. As a result, they are painstakingly optimized for bandwidth, not for IOPs (I/O operations per sec) - that is, they are aimed at big (1000+ process) jobs writing a handful of mammoth files, rather than zillions of little jobs outputting octillions of tiny little files. It is all too easy for users to run something that runs fine(ish) on their desktop, naively scale it up to hundreds of simultaneous jobs, starve the system of IOPs, and hang their jobs and typically others on the same systems.
The main thing you can do here is aggregate, aggregate, aggregate. It would be best if you could tell us where you're running so we can get more information on the system. But some tried-and-true strategies:
If you are outputting many files per job, change your output strategy so that each job writes out one file which contains all the others. If you have local ramdisk, you can do something as simple as writing them to ramdisk, then tar-gzing them out to the real filesystem.
Write in binary, not in ASCII. Big data never goes in ASCII. Binary formats are ~10x faster to write, somewhat smaller, and you can write big chunks at a time rather than a few numbers in a loop, which leads to:
Big writes are better than little writes. Every IO operation is something the file system has to do. Make a few big writes rather than looping over tiny writes (see the sketch after this list).
Similarly, don't write in formats which require you to seek around to write in different parts of the file at different times. Seeks are slow and useless.
If you're running many jobs on a node, you can use the same ramdisk trick as above (or local disk) to tar up all the jobs' outputs and send them all out to the parallel file system at once.
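A hedged sketch contrasting the two output styles above, many tiny formatted writes versus one big binary write of the same data; the array size and file names are illustrative only.

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double data[N];          /* ... filled in by the real computation ... */

        /* Slow: one formatted I/O operation per value. */
        FILE *fa = fopen("out.txt", "w");
        for (int i = 0; i < N; i++)
            fprintf(fa, "%g\n", data[i]);
        fclose(fa);

        /* Fast: a single large binary write. */
        FILE *fb = fopen("out.bin", "wb");
        fwrite(data, sizeof(double), N, fb);
        fclose(fb);
        return 0;
    }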
The above suggestions will benefit the I/O performance of your code everywhere, not just on parallel file systems. IO is slow everywhere, and the more you can do in memory and the fewer actual IO operations you execute, the faster it will go. Some systems may be more sensitive than others, so you may not notice it so much on your laptop, but it will help.
Similarly, having fewer big files rather than many small files will speed up everything from directory listings to backups on your filesystem; it is good all around.
It is hard to decide without knowing what exactly causes the crash. If you think it is an error related to file-system performance, you can try a distributed filesystem: http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html
If you want to implement Master/Slave system, maybe Hadoop can be the answer.
But first of all I would try to find out what causes the crash...
OSes don't always behave nicely when they run out of resources; sometimes they simply abort the process that asks for the first unit of resource the OS can't provide. Many OSes have file-handle resource limits (Windows, I think, has a several-thousand handle limit, which you can bump up against in circumstances like yours), and failure to find a free handle usually means the OS does bad things to the requesting process.
One simple solution, requiring a program change, is to agree that no more than N of your many jobs can be writing at once. You'll need a shared semaphore that all jobs can see; most OSes will provide you with facilities for one, often as a named resource (!). Initialize the semaphore to N before you launch any job.
Have each writing job acquire a resource unit from the semaphore when the job is about to write, and release that resource unit when it is done. The amount of code to accomplish this should be a handful of lines inserted once into your highly parallel application. Then you tune N until you no longer have the problem. N==1 will surely solve it, and you can presumably do lots better than that.
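A minimal sketch of that idea with a POSIX named semaphore (the name "/fs_writers" and N = 8 are illustrative). One caveat worth hedging: a named semaphore is only shared between processes on the same node, so a cluster-wide limit would need something all jobs can reach, such as a lock file on the shared file system.

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        const int N = 8;                /* at most N jobs writing at once */

        /* O_CREAT with an initial value only takes effect for the first creator. */
        sem_t *sem = sem_open("/fs_writers", O_CREAT, 0644, N);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        sem_wait(sem);                  /* block until one of the N write slots is free */
        /* ... write this job's results to the parallel file system ... */
        sem_post(sem);                  /* release the slot for the next job */

        sem_close(sem);
        return 0;
    }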

Getting CPU time in OS X

I have an Objective-C application for OS X that compares two SQLite DBs and produces a diff in JSON format. The DBs are quite large (10,000 items with many fields). Sometimes this application runs in about 55 sec (using 95% of the CPU). Sometimes it takes around 8 min (using 12% of the CPU). This is with the same DBs. When it is only using a small portion of the CPU, the rest is available. There does not appear to be anything taking priority over the process. Adding "nice -20" to the command seems to assure I get the CPU usage. My questions are:
1. If nothing else is using the CPU, why does my app not take advantage of it?
2. Is there something I can do programmatically to change this?
3. Is there something I can do to OS X to change this?
Question 1:
Since, I assume, you have to read in the databases from disk, you aren't making full use of the CPU because your code is blocking on disk reads. On Mac OS X there is a lot of stuff running in the background that doesn't use a lot of CPU time but does send out a lot of disk reads, like Spotlight.
Question 2:
Probably not, other than make the most efficient use of disk access possible.
Question 3:
Shut down any other processes that are accessing the disk. This includes many system processes that you really shouldn't shut down, so I don't think there's much you can do here other than try running it on Darwin without all the Mac OS X fanciness.
It sounds like you're IO bound in the long cases. Are you doing anything else on the machine? The CPU isn't throttling itself - it's definitely waiting for something.
You can use some of the developer tools to look at your app while it's running - perhaps most useful would be "Instruments", which is a GUI on top of dtrace. You should have this installed if you're running the most recent Xcode. You can also use Shark, which is somewhat easier to use at first glance, but less informative in the long run.
Usually you get all the performance that's available. If the CPU is not at 100% there's something blocking it. In case of databases it's often locking. Use Shark to find out what's going on in your application.
When your program uses little CPU, it is probably because it is waiting for the disk, especially when other processes are accessing the disk at the same time. Another possibility is that your program uses too much memory and the OS begins to use swap space.
