This is a follow-up to Why is my OpenMP implementation slower than a single threaded implementation?
I have followed the advice in the answer and used tasking instead of for pragmas to speed up the code. However, the program now runs just as fast as the sequential (otherwise identical) program. I see no speedup.
The reworked code is here: http://pastebin.com/3SFaNEc4
I simply removed all the for pragmas and replaced them with task pragmas in the recursive procedures.
Am I doing anything wrong? I should be seeing an almost linear speedup. What do you guys think?
Thanks!
First, you still have an "#pragma end critical", which should be removed. It isn't causing a problem, but it is incorrect: in C/C++ a critical section ends with the closing brace of its structured block, so there is no "end critical" pragma. Second, as I said on your other question, you may have to rethink how you parallelize the code to see a speedup; just replacing the other pragmas with task pragmas won't necessarily speed anything up. Third, you haven't put the tasks inside a parallel region, so you are not running in parallel at all. And you can't simply wrap a parallel region around the tasks, or every thread in the team will create the same tasks and the work will be done multiple times.
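For reference, the usual shape of task-based recursion looks something like this. This is a minimal sketch (compile with -fopenmp on gcc), not your pastebin code; fib here is just a stand-in for a recursive procedure:

#include <stdio.h>

long fib(int n)
{
    long a, b;
    if (n < 2) return n;        // serial cutoff for tiny subproblems
    #pragma omp task shared(a)
    a = fib(n - 1);
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait        // wait for both child tasks to finish
    return a + b;
}

int main(void)
{
    long result;
    #pragma omp parallel        // create the thread team once...
    {
        #pragma omp single      // ...but let only one thread spawn the root task
        result = fib(30);
    }
    printf("%ld\n", result);
    return 0;
}

The parallel region supplies the threads, and the single construct ensures the recursion is started exactly once; the tasks it spawns are then executed by the whole team.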
I am constructing the partial derivative of a function in C. The process consists mainly of a large number of small loops, each responsible for filling one column of the matrix. Because the matrix is huge, the code should be written efficiently. I have a number of implementation plans in mind whose details I won't go into.
I know that smart compilers try to take advantage of the cache automatically, but I would like to know more about the details of using the cache and writing efficient code and efficient loops. I would appreciate pointers to resources or websites where I can learn more about writing code that reduces memory access time and takes advantage of the cache.
I know that my request may look sloppy, but I am not a computer guy. I did some research but with no success.
So, any help is appreciated.
Thanks
Well-written code tends to be efficient (though not always optimal). Start by writing good clean code; then, if you actually have a performance problem, it can be isolated and addressed.
It is probably best to write the code in the most readable and understandable way you can, and then profile it to see where the bottlenecks really are. Often your idea of where you need efficiency doesn't match up with reality.
Modern compilers do a decent job with many aspects of optimization and it seems unlikely that the process of looping will itself be a problem. Perhaps you should consider focusing on simplifying the calculation done by each loop.
Otherwise, you'll be looking at things such as accessing your matrix row by row so that you take advantage of the row-major storage order C uses (see this question).
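To make that concrete, here is a minimal sketch (ROWS, COLS, m and f are placeholders for your matrix and the function filling it):

// Good: the inner loop walks along a row, i.e. contiguous memory
for (int i = 0; i < ROWS; i++)
    for (int j = 0; j < COLS; j++)
        m[i][j] = f(i, j);

// Bad: the inner loop strides COLS elements at a time, defeating the cache
for (int j = 0; j < COLS; j++)
    for (int i = 0; i < ROWS; i++)
        m[i][j] = f(i, j);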
You'll want to build your for loops without if statements inside, because if statements create what is called "branching". The computer essentially guesses which option will be taken and pays a sometimes hefty penalty when it guesses wrong.
To extend that theme, you want to do as little inside the for loop as possible. You'll also want to define it with static limits, e.g.:
for(int i=1;i<100;i++) //This is better than
for(int i=1;i<N/i;i++) //this
Static limits mean that very little effort is spent determining whether the for loop should keep going. They also permit you to use OpenMP to divvy up the work in the loops, which can sometimes speed things up considerably. This is simple to do:
#pragma omp parallel for
for(int i=0;i<100;i++)
And, voilà! the code is parallelized.
I'm coding a little program that has to sort a large array (up to 4 million text strings). It seems I'm doing quite well at it, since a combination of radix sort and merge sort has already cut the original q(uick)sort execution time to less than half.
Execution time is the main point here, since it is what I'm using to benchmark my piece of code.
My question is:
Is there a better (i.e. more reliable) way of benchmarking a program than just timing the execution? It kinda works, but the same program (with the same background processes running) usually shows slightly different execution times when run twice.
This kinda defeats the purpose of detecting small improvements. And several small improvements could add up to a big one...
Thanks in advance for any input!
Results:
I managed to get gprof to work under Windows (using gcc and MinGW). gcc behaves poorly (considering execution time) compared to my normal compiler (tcc), but it gave me quite some insight.
Try a profiling tool, that will also show you where the program is spending its time. gprof is the classic C profiling tool, at least on Unix.
Look at the time command. It tracks both the CPU time a process uses and the wall-clock time. You can also use something like gprof for profiling your code to find the parts of your program that are actually taking the most time. You could do a lower-tech version of profiling with timers in your code. Boost has a nice timer class, but it's easy to roll your own.
I don't think it's sufficient to just measure how long a piece of code takes to execute. Your environment is a constantly changing thing, so you have to take a statistical approach to measuring execution time.
Essentially you need to take N measurements, discard outliers, and calculate your average, median and standard deviation running time, with an uncertainty measurement.
Here's a good blog explaining why and how to do this (with code): http://blogs.perl.org/users/steffen_mueller/2010/09/your-benchmarks-suck.html
What have you used for timing so far? There's C89 clock() in time.h for starters. On unixoid systems you might find getitimer() with ITIMER_VIRTUAL to measure process CPU time. See the respective manual pages for details.
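A minimal clock() example (note it measures CPU time, not wall-clock time; the commented line is a placeholder for your sort):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    // ... run the sort over your 4 million strings here ...
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("CPU time: %f s\n", seconds);
    return 0;
}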
You can also use a POSIX shell's times utility to benchmark the processor time used by a process and its children. The resolution is system-dependent, like just about everything related to profiling. Try to wrap your C code in a loop, executing it as many times as necessary to reduce the "jitter" in the times the benchmark reports.
Call your routine from a test harness, whereby it executes N + 1 times. Ignore the timing for the first iteration and then take the average of iterations 1..N. The reason for ignoring the first run is that it is often slightly inflated by various effects, e.g. virtual memory, code being paged in, etc. The reason for averaging over N iterations is that you get rid of artefacts caused by other processes, the scheduler, etc.
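A sketch of such a harness, with run_sort standing in for the routine under test:

#include <stdio.h>
#include <time.h>

#define RUNS 10                          // number of timed iterations (N)

static void run_sort(void)
{
    // placeholder: call the real sorting routine here
}

int main(void)
{
    double total = 0.0;
    for (int i = 0; i <= RUNS; i++) {    // N + 1 runs in total
        clock_t start = clock();
        run_sort();
        double t = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (i > 0)                       // discard the first (warm-up) run
            total += t;
    }
    printf("average over %d runs: %f s\n", RUNS, total / RUNS);
    return 0;
}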
If you're running on Linux or similar, you might also want to use taskset to pin your code to a specific CPU core (assuming it's single-threaded), ideally not core 0, since that core tends to handle all the interrupts.
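For example (the binary name is made up):

taskset -c 1 ./mybenchmark    # pin the process to core 1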
I am learning about OpenMP concurrency and tried my hand at some existing code of mine. In this code, I tried to make all the for loops parallel. However, this makes the program MUCH slower, at least 10x slower than the single-threaded version, and sometimes even more.
Here is the code: http://pastebin.com/zyLzuWU2
I also tried pthreads, which turned out to be faster than the single-threaded version.
Now the question is, what am I doing wrong in my OpenMP implementation that is causing this slowdown?
Thanks!
edit: the single threaded version is just the one without all the #pragmas
One problem I see with your code is that you are using OpenMP across loops that are very small (8 or 64 iterations, for example). This will not be efficient due to overheads. If you want to use OpenMP for the n-queens problem, look at OpenMP 3.0 tasks and thread parallelism for branch-and-bound problems.
I think your code is much too complex to be reviewed here, but one error I saw immediately: it is not even correct. Wherever you use an omp parallel for to compute a sum, you must add reduction(+: yourcountervariable) so that the results of the different threads are correctly combined. Otherwise one thread may overwrite the results of the others.
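For example, a correct parallel sum looks like this generic sketch (not your pastebin code):

#include <stdio.h>

int main(void)
{
    int a[64], sum = 0;
    for (int i = 0; i < 64; i++) a[i] = i;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 64; i++)
        sum += a[i];      // each thread accumulates privately; OpenMP combines the partial sums
    printf("%d\n", sum);  // 2016 regardless of the thread count
    return 0;
}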
At least two reasons:
You're only doing 8 iterations of a very simple loop. Your runtime will be completely dominated by the overhead involved in setting up all the threads.
In some places, the critical section will cause contention; all the threads will be trying to access the critical section continuously, and block each other.
I'm trying to find articles, books, or anything about programming without jumps (on x86). I know that in general it is impossible, but I try to avoid jumps anyway; gcc uses jumps many times even with inlined functions. Coding purely in assembly is one kind of solution, but writing the equivalent of 1000 lines of C that way looks like a nightmare to me.
Unless your jumps are really random, branch prediction should eliminate most of the overhead involved.
I would dedicate more effort to optimizing memory access patterns in order to improve locality and reduce cache misses. These days, memory latency is the major bottleneck to performance.
Another good direction is improving parallelism (using both vectorized SIMD instructions and, if possible, more than one core).
Optimize only performance-critical code, and only once you really know it is performance-critical. Do not try to optimize jumps just because you read that they cause a performance hit. Everything causes a performance hit, and the fastest possible code is the code which does nothing. There are other things much worse than jumps.
If you show a particular example of a jump in the generated code, chances are there will be some way to avoid it, but it is more likely that the code you show will contain more serious issues.
One particular way to avoid branches is to use "conditional move" instructions. They can be used, e.g., to compute max or min. If you allow the compiler to use the SSE architecture, it assumes the CPU also supports the CMOV/FCOMI/FCOMIP/FUCOMI/FUCOMIP instructions and will use them (beware: sometimes it can be tricky to make the compiler do what you want; see e.g. this gamedev.net discussion).
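For instance, a simple max written with the conditional operator is typically compiled to a CMOV at -O2 rather than a branch (whether this actually happens depends on the compiler and flags):

int imax(int a, int b)
{
    return a > b ? a : b;    // gcc/clang usually emit cmov here, no jump
}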
I think you may mean branching. In C there are bit-twiddling tricks you can use to speed up certain operations.
See bit hacks:
http://www-graphics.stanford.edu/~seander/bithacks.html
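One example from that page: a branch-free integer minimum (it relies on two's-complement representation; see the page for the caveats):

int min(int x, int y)
{
    return y ^ ((x ^ y) & -(x < y));    // selects x when x < y, else y, without branching
}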
It is not impossible to code without jumps but it seems pointless to try.
In the end if you need to do something more than once then your choices are:
Loop unrolling (i.e. repeating the code instead of looping).
Somehow get the instruction pointer to visit the same code more than once.
The first approach requires knowing the number of iterations in advance and doesn't scale; the second involves some sort of jump.
Not knowing what your code looks like, it's hard to give any advice. But I will give it a try.
Before you start optimizing, run a profiling tool to locate the problem areas. After optimizing, run the profiling tool again to see if you actually made it faster.
It's hard to actually remove branches, but you can minimize them by doing loop unrolling.
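A sketch of four-way unrolling (a, n and sum are placeholders; assumes n is a multiple of 4):

for (int i = 0; i < n; i += 4) {    // one loop branch per four elements instead of one per element
    sum += a[i];
    sum += a[i + 1];
    sum += a[i + 2];
    sum += a[i + 3];
}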
Someone mentioned conditional move instructions; there are plenty of conditional instructions on the ARM architecture, but if they're not executed they translate to a NOP and take one cycle each. I'm not sure how they work on x86. It might actually be slower than using a simple branch, depending on how long the pipeline is.
There's a lot of other optimizing tricks you could try before removing branches.
I have programmed an embedded software (using C of course) and now I'm considering ways to improve the running time of the system. The most important single module in my system is one very large nested for loop module.
That module consists of two nested for loops that together iterate at most 122500 times. That's not very much yet, but the problem is that inside those loops I call a function defined in another source file. That function itself consists mostly of another pair of nested for loops that always iterate 22500 times, so I end up making that function call 122500 times.
I have already made the called function a lot lighter and shorter (and it still works as it should), and now I have started to wonder: would it be faster to drop the function call and write its body directly inside the first two for loops?
The processor in the system is an ARM7TDMI running at 55MHz. The system itself isn't very time-critical, so it doesn't have to be real-time capable; however, the faster it can process its duties, the better.
Also, would it be faster to use while loops instead of for loops? Any advice on how to improve the running time is appreciated.
-zaplec
TRY IT AND SEE!!
It'll almost certainly make a difference. Function call overhead isn't usually that much of an issue, but at over 100K repetitions it starts to add up.
...But whether or not it makes any real-world difference is something only you can answer, after trying it and timing the results.
As for for vs while... it shouldn't matter unless you actually change the behavior when changing the loop. If in doubt, make your compiler spit out assembler code for both and compare... or just change it and time it.
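With gcc, for instance (the file names are made up):

gcc -S -O2 loop_for.c      # writes the generated assembly to loop_for.s
gcc -S -O2 loop_while.c
diff loop_for.s loop_while.s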
You need to be careful with the optimizations you make, because it isn't always clear which optimizations the compiler is already making for you. Premature optimization is a common mistake. Is it more important that your code is readable and easily maintained, or that it is slightly faster? Like others have suggested, the best approach is to benchmark the different ways and see if there is a noticeable difference.
If you don't believe your compiler does much in the way of optimization I would look at some older concepts in optimizing C (searches on SO or google should provide some good links).
The ARM7TDMI has an instruction pipeline. When the processor encounters a branch (or call) instruction, it must flush the pipeline and refill it, wasting some time. One objective when optimizing for speed is therefore to reduce the number of pipeline refills, which means reducing branch instructions.
As others have stated in SO, compile your code with optimization set for speed, and profile. I prefer to look at the assembly language listing as well (either printed from the compiler or displayed interwoven in the debugger). Use this as a baseline. If you can't profile, you can use assembly instruction counting as a rough estimate.
The next step is to reduce the number of branches, or the number of times a branch is taken. Unrolling loops helps reduce the number of times a branch is taken, and inlining helps reduce the number of branches. Before applying these fine-tuning techniques, review the design and implementation to see whether branches can be reduced at the source. For example, reduce the number of "if" statements by using Boolean arithmetic or Karnaugh maps. My favorite is reducing requirements and eliminating code that doesn't need to be executed.
In the implementation, move code that doesn't change out of the for or while loops. Some loops can be reduced to equations (for example, replacing a loop of additions with a multiplication). Also reduce the number of iterations, by asking "does this loop really need to execute this many times?"
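Two small illustrations of those points (coeff, scale, step, n and the arrays are placeholders):

// hoist loop-invariant work out of the loop
double k = coeff * scale;           // computed once instead of every iteration
for (int i = 0; i < n; i++)
    out[i] = in[i] * k;

// replace a loop of additions with a multiplication
int total = n * step;               // instead of adding step n times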
Another technique is to optimize for Data Oriented Design. Also check this reference.
Just remember to set a limit for optimizing. This is where you decide any more optimization is not generating any ROI or customer satisfaction. Also, apply optimizations in stages; which will allow you to have a deliverable when your manager asks for one.
Run a profiler on your code. If you are just guessing at where you are spending your time, you are probably wrong. A profiler will show what function is taking the most time and you can focus on that. You could be doing something in the function that takes longer than the function call itself. Did you look to see if you can change floating operations to integer, or integer math to shifts? You can spend a lot of time fiddling with things that don't make much difference. Run a profiler on your code and know for sure that the things you are changing will make a difference.
For function vs. inline, unfortunately there is no easy answer. I.e. it depends. See this FAQ. For "for" vs. "while", I wouldn't think there is any significant difference in performance.
In general, a function call should have more overhead than inlining. You really should profile however, as this can be affected quite a bit by your compiler (especially the compile/optimization settings). Some compilers will automatically inline code for example.
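If profiling does show that the call overhead matters, one portable way to suggest inlining yourself is a static inline function in a header (a sketch; C99 or later):

static inline int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);    // small, hot helpers like this are good inlining candidates
}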