Lost/confused optimizing a ray tracer with OpenMP (C++)

I just completed a computer graphics course, where we had to program a ray tracer. Though all the results were correct, I was confused about the use of OpenMP (which BTW was not part of the course). I have this loop (C++):
#pragma omp parallel for private(L, ray)
//  for (x = x_from; x < x_till; x++) {
//      printf("Col: %5d\n", x);
//      for (y = y_from; y < y_till; y++) {
    for (int xy = 0; xy < xy_range; xy++) {
        int x = x_from + (xy % x_width);
        int y = y_from + (xy / x_width);
        ray = cam->get_ray_at(x, y);
        L = trace_ray(ray, 0, cam->inter);
#pragma omp critical
        cam->set_pixel(x, y, L);
    }
//  }
}
I tried many configurations, but what confuses me most is that the version above, with a single combined for loop, was the least efficient of all (150 seconds vs. 120 s for the separate x and y loops). The 'critical' does not noticeably change the timing.
More: though I would expect the single for loop to parallelize each iteration separately, it doesn't. With this method, 25 loops are executed as groups of 8 - 8 - 8 - 1 (8 cores). In fact, the separate y loops (commented out in the listing) seem to distribute the load more efficiently. Removing the 'for' in 'parallel for' improves things slightly (148 vs. 150 s ;)
I also tried local vs. global definitions (with the necessary private clauses), and I tried declaring L and ray inside the loops. All to no avail...
I'd appreciate suggestions or pointers...
Here are some more precise data:
Single loop   Yes                   No                    No                    Yes
'Critical'    No                    No                    Yes                   Yes
              --------------------  --------------------  --------------------  --------------------
              User    CPU    Mean   User    CPU    Mean   User    CPU    Mean   User    CPU    Mean
Scene 5        37.9   158.9  3.66    26.5   185.5  7.00    27.0   187.7  6.95    38.7   161.8  4.18
Scene 6        18.8   110    5.85    17.7   112    6.32    18.1   113.8  5.29    19.4   112.2  5.78
Scene 7       149     658.8  4.42   114     679.9  5.96   114     653.8  5.73   149     659.8  4.43
Plane         112.0   497.3  4.44   105     520.5  4.95   103.8   525    5.06   113.5   504.8  4.45
5-balls       126     760.2  6.03   162.3   697.5  4.36   170.3   725.3  4.23   127.3   766.5  6.02
'Mean' is CPU/User, which is the mean core occupation. Note that in several cases, mean is only 4.xx.
Solution, and results:
Single loop   Yes                   No
              --------------------  --------------------
              User    CPU    Mean   User    CPU    Mean
Scene 5        23.9   190.1  7.95    24.4   190.7  7.82
Scene 6        14.3   114.2  7.98    14.5   114.9  7.92
Scene 7        85.5   675.9  7.91   106.9   698.8  6.54
Plane          72.7   579.1  7.97    72.6   578.4  7.97
5-balls       104.8   823.3  7.86   103.9   825.1  7.94
This excellent result is obtained by adding schedule(dynamic, 1) to the #pragma omp parallel for line, like this:
#pragma omp parallel for schedule(dynamic, 1)
which gives a run-time load distribution across the cores (as opposed to one fixed at compile time).
One more note: the ', 1' parameter limits the size of the chunks. It can be left out, in which case OpenMP uses a default value. Adding the 1 may make the load distribution very fine-grained, but I cannot find any performance difference either way in this case. I guess each ray-tracing iteration takes long enough to hide the scheduling overhead.
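For reference, a minimal sketch of the loop as it ends up (using the Ray and Color type names from the answer below; declaring ray and L inside the loop body makes them private without a private clause):
#pragma omp parallel for schedule(dynamic, 1)
for (int xy = 0; xy < xy_range; xy++) {
    int x = x_from + (xy % x_width);
    int y = y_from + (xy / x_width);
    Ray ray = cam->get_ray_at(x, y);          // loop-local, so private per thread
    Color L = trace_ray(ray, 0, cam->inter);
    cam->set_pixel(x, y, L);                  // each iteration writes a distinct pixel,
                                              // so no critical section is needed
}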

I have written a Whitted-style ray tracer that operates on the full ray tree (reflection and refraction) in OpenCL. I have not done it with OpenMP yet, but that's my next goal. If you want to learn OpenMP, I would start with some simpler tasks first. But let me make a few comments.
How are you doing your timing? You wrote "Removing the 'for' in 'parallel for' does improve slightly". That makes no sense. Removing the for runs the same code on every thread instead of distributing the iterations among the threads (do some hello-world tests to show this). It should be slower, not faster. That makes me wonder how you do the timing. I added some code to show how to do the timing.
You should not have to use critical. If each iteration writes to a different pixel, it should not be necessary. Depending on the complexity of your scene, critical could make it much slower.
Lastly, to get the best performance you're going to want to use SSE/AVX as well and operate on multiple pixels at once. This can be done through what's called packet-based ray tracing. See the following link for a good discussion: http://graphics.stanford.edu/~boulos/papers/cook_gi07.pdf
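To give a rough idea of the packet approach, here is a sketch only: RayPacket4, ColorPacket4, get_ray_packet_at, trace_ray_packet and set_pixel_block are all hypothetical packet versions of the calls in the question, and the loop assumes x_width and xy_range are multiples of 4 so a packet never straddles a row.
#include <xmmintrin.h>   // SSE intrinsics

// Hypothetical packet layout: 4 coherent rays stored structure-of-arrays,
// so each component fits in one SSE register.
struct RayPacket4 {
    __m128 ox, oy, oz;   // 4 ray origins
    __m128 dx, dy, dz;   // 4 ray directions
};
struct ColorPacket4 {
    __m128 r, g, b;      // 4 resulting colours
};

#pragma omp parallel for schedule(dynamic)
for (int xy = 0; xy < xy_range; xy += 4) {        // 4 neighbouring pixels per iteration
    int x = x_from + (xy % x_width);
    int y = y_from + (xy / x_width);
    RayPacket4 rays  = cam->get_ray_packet_at(x, y);           // hypothetical
    ColorPacket4 L4  = trace_ray_packet(rays, 0, cam->inter);  // hypothetical
    cam->set_pixel_block(x, y, L4);                            // hypothetical
}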
Edit: Since each pixel can take a different amount of time, you want to use schedule(dynamic) rather than schedule(static), which is normally (but not necessarily) the default. See the code.
Ingo Wald's PhD thesis:
http://www.sci.utah.edu/~wald/PhD/
double dtime = omp_get_wtime();
#pragma omp parallel
{
    Ray ray;
    Color L;
    #pragma omp for schedule(dynamic)
    for (int xy = 0; xy < xy_range; xy++) {
        int x = x_from + (xy % x_width);
        int y = y_from + (xy / x_width);
        ray = cam->get_ray_at(x, y);
        L = trace_ray(ray, 0, cam->inter);
        cam->set_pixel(x, y, L);
    }
}
dtime = omp_get_wtime() - dtime;
printf("time %f\n", dtime);

Related

CHUNKSIZE in openMP static scheduling

Assume that each loop iteration of my code takes the same time. Note that each iteration accesses disjoint portions of a large contiguous block of memory. I am using the VS2019 compiler.
I thought it should not matter whether I use
#pragma omp for schedule(static, CHUNKSIZE)
or
#pragma omp for schedule(static)
I have used values like 5 for CHUNKSIZE.
I am asking because I see the first variant perform slightly better.
Can someone shed some light on this?
If you do not specify a chunk size
#pragma omp for schedule(static)
OpenMP will:
Divide the loop into equal-sized chunks or as equal as possible in the
case where the number of loop iterations is not evenly divisible by
the number of threads multiplied by the chunk size. By default, chunk
size is loop_count/number_of_threads
Hence, for CHUNKSIZE=5, 2 threads, and a loop (to be parallelized) with 22 iterations: thread ID=0 will be assigned the iterations {0 to 10} and thread ID=1 the iterations {11 to 21}, i.e., 11 iterations each. However, for:
#pragma omp for schedule(static, CHUNKSIZE)
thread ID=0 will be assigned the iterations {0 to 4}, {10 to 14} and {20 to 21}, whereas thread ID=1 will work on the iterations {5 to 9} and {15 to 19}. Therefore, the first and second threads are assigned 12 and 10 iterations, respectively.
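You can check the assignment yourself with a small test (a sketch; swap the schedule clause for plain schedule(static) to compare, and expect the output lines to be interleaved):
#include <omp.h>
#include <stdio.h>

int main() {
    #pragma omp parallel for schedule(static, 5) num_threads(2)
    for (int i = 0; i < 22; i++)
        printf("iteration %2d -> thread %d\n", i, omp_get_thread_num());
    return 0;
}
With schedule(static, 5) the chunks of 5 alternate between the two threads; with plain schedule(static) thread 0 gets {0 to 10} and thread 1 gets {11 to 21}.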
All this to show that having
#pragma omp for schedule(static)
and
#pragma omp for schedule(static, CHUNKSIZE)
is not the same. Different chunk sizes can directly affect load balancing and cache misses, among other things, even if one can:
Assume that each loop iteration of my code takes the same time
Naturally, things get more complicated if each iteration of the loop being parallelized performs a different amount of work. For instance:
for(int i = 0; i < 22; i++)
    for(int j = i+1; j < 22; j++)
        ; // do the same work in each inner iteration
With
#pragma omp for schedule(static)
Thread ID=0 would execute 176 iterations whereas Thread ID=1 55. With a load unbalance of 176 - 55 = 121.
whereas with
#pragma omp for schedule(static, CHUNKSIZE)
Thread ID=0 would execute 141 iterations and Thread ID=1 90. With a load unbalance of 141 - 90 = 51.
As you can see in this case without the chunk, one thread performed 121 parallel tasks more than the other thread, whereas with a chunk=5, the difference was reduced to 51.
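A quick way to reproduce these numbers (a small sketch) is to count the inner iterations each thread actually executes; count is indexed by thread ID, so there is no race, only harmless false sharing in this toy example:
#include <omp.h>
#include <stdio.h>

int main() {
    long long count[2] = {0, 0};
    #pragma omp parallel num_threads(2)
    {
        int tid = omp_get_thread_num();
        #pragma omp for schedule(static, 5)   // compare with plain schedule(static)
        for (int i = 0; i < 22; i++)
            for (int j = i + 1; j < 22; j++)
                count[tid]++;                 // inner-loop work done by this thread
    }
    printf("thread 0: %lld, thread 1: %lld\n", count[0], count[1]);
    return 0;
}
With schedule(static, 5) this prints 141 and 90; with plain schedule(static) it prints 176 and 55.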
To conclude: it depends on your code, the hardware it runs on, how you perform the benchmark, how big the time difference is, and so on. The bottom line is: you need to analyze it, look for potential load-balancing problems, measure cache misses, and so on. Profiling is always the answer.

OpenMP: No Speedup in parallel workloads

So I can't really figure this out with my fairly simple OpenMP-parallelized for loop. Running on the same input size, P=1 finishes in ~50 seconds, but P=2 takes almost 300 seconds and P=4 around 250 seconds.
Here's the parallelized loop
double time = omp_get_wtime();
printf("Input Size: %d\n", n);
#pragma omp parallel for private(i) reduction(+:in)
for(i = 0; i < n; i++) {
    double x = (double)(rand() % 10000)/10000;
    double y = (double)(rand() % 10000)/10000;
    if(inCircle(x, y)) {
        in++;
    }
}
double ratio = (double)in/(double)n;
double est_pi = ratio * 4.0;
time = omp_get_wtime() - time;
Runtimes:
p=1, n=1073741824 - 52.764 seconds
p=2, n=1073741824 - 301.66 seconds
p=4, n=1073741824 - 274.784 seconds
p=8, n=1073741824 - 188.224 seconds
Running in an Ubuntu 20.04 VM with 8 cores of a Xeon 5650 and 16 GB of DDR3 ECC RAM, on top of a FreeNAS installation on a dual Xeon 5650 system with 70 GB of RAM.
Partial Solution:
The rand() function inside the loop causes the time to jump when running on multiple threads.
Since rand() uses state saved from the previous call to generate the next pseudo-random number, it can't run in multiple threads at the same time: multiple threads would need to read and write the PRNG state simultaneously.
POSIX states that rand() need not be thread safe. This means your code might simply not work correctly. Or the C library might put a mutex around rand() so that only one thread can call it at a time. That is what's happening here, and it slows the code down considerably: the threads spend nearly all their time trying to enter the rand() critical section, since nothing else they do takes any significant time.
To solve this, try using rand_r(), which does not use shared state, but instead is passed the seed value it should use for state.
Keep in mind that using the same seed for every thread will defeat the purpose of increasing the number of trials in your Monte Carlo simulation. Each thread would just use the exact same pseudo-random sequence. Try something like this:
unsigned int seed;
#pragma omp parallel private(seed)
{
    seed = omp_get_thread_num();
    #pragma omp for private(i) reduction(+:in)
    for(i = 0; i < n; i++) {
        double x = (double)(rand_r(&seed) % 10000)/10000;
        double y = (double)(rand_r(&seed) % 10000)/10000;
        if(inCircle(x, y)) {
            in++;
        }
    }
}
BTW, you might notice your estimate is off. x and y need to be evenly distributed in the range [0, 1], and they are not.
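For instance, a sketch built on the rand_r version above: dividing by RAND_MAX + 1.0 instead of taking % 10000 spreads the values over [0, 1) rather than 10,000 discrete points.
// rand_r() returns a value in [0, RAND_MAX]; dividing by RAND_MAX + 1.0
// (as a double) yields a value in [0, 1).
double x = rand_r(&seed) / ((double)RAND_MAX + 1.0);
double y = rand_r(&seed) / ((double)RAND_MAX + 1.0);
if (inCircle(x, y)) {
    in++;
}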

Why doesn't this code scale linearly?

I wrote this SOR solver code. Don't worry too much about what the algorithm does; that is not the concern here. But just for the sake of completeness: it may solve a linear system of equations, depending on how well conditioned the system is.
I run it with an ill-conditioned sparse matrix of 2097152 rows (which never converges), with at most 7 non-zero columns per row.
Translating: the outer do-while loop performs 10000 iterations (the value I pass as max_iters), the middle for performs 2097152 iterations, split into chunks of work_line that are divided among the OpenMP threads. The innermost for loop has 7 iterations, except in very few cases (less than 1%) where it can have fewer.
There is a data dependency among the threads in the values of the sol array. Each iteration of the middle for updates one element but reads up to 6 other elements of the array. Since SOR is not an exact algorithm, a read can see either the previous or the current value at that position (if you are familiar with solvers, this is a Gauss-Seidel that tolerates Jacobi behavior in some places for the sake of parallelism).
typedef struct {
    size_t size;
    unsigned int *col_buffer;
    unsigned int *row_jumper;
    real *elements;
} Mat;

int work_line;

// Assumes there are no null elements on main diagonal
unsigned int solve(const Mat* matrix, const real *rhs, real *sol, real sor_omega, unsigned int max_iters, real tolerance)
{
    real *coefs = matrix->elements;
    unsigned int *cols = matrix->col_buffer;
    unsigned int *rows = matrix->row_jumper;
    int size = matrix->size;
    real compl_omega = 1.0 - sor_omega;
    unsigned int count = 0;
    bool done;
    do {
        done = true;
        #pragma omp parallel shared(done)
        {
            bool tdone = true;
            #pragma omp for nowait schedule(dynamic, work_line)
            for(int i = 0; i < size; ++i) {
                real new_val = rhs[i];
                real diagonal;
                real residual;
                unsigned int end = rows[i+1];
                for(int j = rows[i]; j < end; ++j) {
                    unsigned int col = cols[j];
                    if(col != i) {
                        real tmp;
                        #pragma omp atomic read
                        tmp = sol[col];
                        new_val -= coefs[j] * tmp;
                    } else {
                        diagonal = coefs[j];
                    }
                }
                residual = fabs(new_val - diagonal * sol[i]);
                if(residual > tolerance) {
                    tdone = false;
                }
                new_val = sor_omega * new_val / diagonal + compl_omega * sol[i];
                #pragma omp atomic write
                sol[i] = new_val;
            }
            #pragma omp atomic update
            done &= tdone;
        }
    } while(++count < max_iters && !done);
    return count;
}
As you can see, there is no lock inside the parallel region, so, going by what we are always taught, it is the kind of problem that is 100% parallel. That is not what I see in practice.
All my tests were run on an Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50 GHz, 2 processors, 10 cores each, hyper-threading enabled, for a total of 40 logical cores.
On my first set runs, work_line was fixed on 2048, and the number of threads varied from 1 to 40 (40 runs in total). This is the graph with the execution time of each run (seconds x number of threads):
The surprise was the logarithmic curve, so I thought that since the work line was so large, the shared caches were not very well used, so I dug up this virtual file /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size that told me this processor's L1 cache synchronizes updates in groups of 64 bytes (8 doubles in the array sol). So I set the work_line to 8:
Then I thought 8 was too low to avoid NUMA stalls and set work_line to 16:
While running the above, I thought "Who am I to predict what work_line is good? Let's just see...", and scheduled runs of every work_line from 8 to 2048 in steps of 8 (i.e. every multiple of the cache line, from 1 to 256). The results for 20 and 40 threads (seconds x size of the split of the middle for loop, divided among the threads):
I believe the cases with low work_line suffer badly from cache synchronization, while bigger work_line offers no benefit beyond a certain number of threads (I assume because the memory pathway is the bottleneck). It is very sad that a problem that seems 100% parallel shows such poor behavior on a real machine. So, before I become convinced that multi-core systems are a very well-sold lie, I am asking you here first:
How can I make this code scale linearly to the number of cores? What am I missing? Is there something in the problem that makes it not as good as it seems at first?
Update
Following suggestions, I tested both static and dynamic scheduling, with the atomic read/write on the sol array removed. For reference, the blue and orange lines are the same as in the previous graph (just up to work_line = 248). The yellow and green lines are the new ones. As far as I can see: static makes a significant difference for low work_line, but after 96 the benefits of dynamic outweigh its overhead, making it faster. The atomic operations make no difference at all.
The sparse matrix vector multiplication is memory bound (see here) and it could be shown with a simple roofline model. Memory bound problems benefit from higher memory bandwidth of multisocket NUMA systems but only if the data initialisation is done in such a way that the data is distributed among the two NUMA domains. I have some reasons to believe that you are loading the matrix in serial and therefore all its memory is allocated on a single NUMA node. In that case you won't benefit from the double memory bandwidth available on a dual-socket system and it really doesn't matter if you use schedule(dynamic) or schedule(static). What you could do is enable memory interleaving NUMA policy in order to have the memory allocation spread among both NUMA nodes. Thus each thread would end up with 50% local memory access and 50% remote memory access instead of having all threads on the second CPU being hit by 100% remote memory access. The easiest way to enable the policy is by using numactl:
$ OMP_NUM_THREADS=... OMP_PROC_BIND=1 numactl --interleave=all ./program ...
OMP_PROC_BIND=1 enables thread pinning and should improve the performance a bit.
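Alternatively, a sketch of the first-touch approach, assuming sol, rhs and the matrix arrays are allocated but not yet written: initialise them in a parallel loop so that each memory page is placed on the NUMA node of the thread that touches it first. This works best when the compute loop uses a static schedule too; with dynamic scheduling the thread-to-data mapping is not reproducible.
// First-touch initialisation: pages land on the NUMA node of the writing thread.
#pragma omp parallel for schedule(static)
for (int i = 0; i < size; ++i) {
    sol[i] = 0.0;   // or your actual initial guess; do the same for rhs and
                    // the CSR arrays (elements, col_buffer, row_jumper)
}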
I would also like to point out that this:
done = true;
#pragma omp parallel shared(done)
{
    bool tdone = true;
    // ...
    #pragma omp atomic update
    done &= tdone;
}
is probably a not-very-efficient re-implementation of:
done = true;
#pragma omp parallel reduction(&:done)
{
    // ...
    if(residual > tolerance) {
        done = false;
    }
    // ...
}
There won't be a notable performance difference between the two implementations because of the amount of work done in the inner loop, but it is still not a good idea to re-implement existing OpenMP primitives, for the sake of portability and readability.
Try running the IPCM (Intel Performance Counter Monitor). You can watch memory bandwidth, and see if it maxes out with more cores. My gut feeling is that you are memory bandwidth limited.
As a quick back-of-the-envelope calculation: uncached read bandwidth is about 10 GB/s on a Xeon. At a 2.5 GHz clock, that's one 32-bit word per clock cycle. Your inner loop is basically just a multiply-add operation whose cycles you can count on one hand, plus a few cycles of loop overhead. It doesn't surprise me that you see no performance gain after 10 threads.
Your inner loop has an omp atomic read, and your middle loop has an omp atomic write to a location that could be the same one read by one of the reads. OpenMP is obligated to ensure that atomic writes and reads of the same location are serialized, so in fact it probably does need to introduce a lock, even though there isn't any explicit one.
It might even need to lock the whole sol array unless it can somehow figure out which reads might conflict with which writes, and OpenMP implementations aren't necessarily all that smart.
No code scales absolutely linearly, but rest assured that there are many codes that do scale much closer to linearly than yours does.
I suspect you are having caching issues. When one thread updates a value in the sol array, it invalidates the caches on other CPUs that are storing that same cache line. This forces those caches to be updated, which in turn stalls the CPUs.
Even if you don't have an explicit mutex lock in your code, you have one shared resource between your processes: the memory and its bus. You don't see this in your code because it is the hardware that takes care of handling all the different requests from the CPUs, but nevertheless, it is a shared resource.
So, whenever one of your processes writes to memory, that memory location will have to be reloaded from main memory by all other processes that use it, and they all have to use the same memory bus to do so. The memory bus saturates, and you get no further performance gain from additional CPU cores, which only serve to worsen the situation.

Suggestions on optimizing a Z-buffer implementation?

I'm writing a 3D graphics library as part of a project of mine, and I'm at the point where everything works, but not well enough.
In particular, my main headache is that my pixel fill rate is horribly slow -- I can't even manage 30 FPS when drawing a triangle that spans half of an 800x600 window on my target machine (which is admittedly an older computer, but it should be able to manage this...)
I ran gprof on my executable, and I end up with the following interesting lines:
  %   cumulative    self               self     total
 time    seconds   seconds    calls   ms/call  ms/call  name
43.51       9.50      9.50                              vSwap
34.86      17.11      7.61   179944      0.04     0.04  grInterpolateHLine
13.99      20.17      3.06                              grClearDepthBuffer
<snip>
 0.76      21.78      0.17      624      0.27    12.46  grScanlineFill
The function vSwap is my double-buffer swapping function, and it also performs vsyncing, so it makes sense to me that the test program spends much of its time waiting in there. grScanlineFill is my triangle-drawing function, which creates an edge list and then calls grInterpolateHLine to actually fill in the triangle.
My engine currently uses a Z-buffer to perform hidden-surface removal. If we discount the (presumed) vsync overhead, it turns out that the test program spends something like 85% of its execution time either clearing the depth buffer or writing pixels according to the values in the depth buffer. My depth-buffer clearing function is simplicity itself: copy the maximum value of a float into each element. The function grInterpolateHLine is:
void grInterpolateHLine(int x1, int x2, int y, float z, float zstep, int colour) {
    for(; x1 <= x2; x1++, z += zstep) {
        if(z < grDepthBuffer[x1 + y*VIDEO_WIDTH]) {
            vSetPixel(x1, y, colour);
            grDepthBuffer[x1 + y*VIDEO_WIDTH] = z;
        }
    }
}
I really don't see how I can improve that, especially considering that vSetPixel is a macro.
My entire stock of ideas for optimization has been whittled down to precisely one:
Use an integer/fixed-point depth buffer.
The problem that I have with integer/fixed-point depth buffers is that interpolation can be very annoying, and I don't actually have a fixed-point number library yet. Any further thoughts out there? Any advice would be most appreciated.
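For what it's worth, a 16.16 fixed-point depth value does not really need a library, just shifts and integer adds. A rough sketch of the inner loop, assuming the depth range fits the format and using a hypothetical grFixedDepthBuffer (an int32_t buffer cleared to INT32_MAX instead of FLT_MAX):
#include <stdint.h>

typedef int32_t fixed16;                       // 16.16 fixed point
#define FLOAT_TO_FIXED(f) ((fixed16)((f) * 65536.0f))

void grInterpolateHLineFx(int x1, int x2, int y, float z, float zstep, int colour) {
    fixed16 zf     = FLOAT_TO_FIXED(z);        // convert once per span
    fixed16 zstepf = FLOAT_TO_FIXED(zstep);
    int offset = x1 + (y * VIDEO_WIDTH);
    for (; x1 <= x2; x1++, zf += zstepf, offset++) {
        if (zf < grFixedDepthBuffer[offset]) { // hypothetical int32_t depth buffer
            vSetPixel(x1, y, colour);
            grFixedDepthBuffer[offset] = zf;
        }
    }
}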
You should have a look at the source code to something like Quake - considering what it could achieve on a Pentium, 15 years ago. Its z-buffer implementation used spans rather than per-pixel (or fragment) depth. Otherwise, you could look at the rasterization code in Mesa.
Hard to really tell what higher-order optimizations can be done without seeing the rest of the code. I have a couple of minor observations, though.
There's no need to calculate x1 + y * VIDEO_WIDTH more than once in grInterpolateHLine. i.e.:
void grInterpolateHLine(int x1, int x2, int y, float z, float zstep, int colour) {
    int offset = x1 + (y * VIDEO_WIDTH);
    for(; x1 <= x2; x1++, z += zstep, offset++) {
        if(z < grDepthBuffer[offset]) {
            vSetPixel(x1, y, colour);
            grDepthBuffer[offset] = z;
        }
    }
}
Likewise, I'm guessing that your vSetPixel does a similar calculation, so you should be able to use the same offset there as well, and then you only need to increment offset and not x1 in each loop iteration. Chances are this can be extended back to the function that calls grInterpolateHLine, and you would then only need to do the multiplication once per triangle.
There are some other things you could do with the depth buffer. Most of the time if the first pixel of the line either fails or passes the depth test, then the rest of the line will have the same result. So after the first test you can write a more efficient assembly block to test the entire line in one shot, then if it passes you can use a more efficient block memory setter to block-set the pixel and depth values instead of doing them one at a time. You would only need to test/set per pixel if the line is only partially occluded.
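A plain-C sketch of that structure (no assembly here; the test pass is a cheap read-only scan that a compiler can often vectorize, and the fill pass could be replaced by block setters or SIMD stores; grDepthBuffer, vSetPixel and VIDEO_WIDTH are the ones from the question, and the per-pixel fallback is the existing grInterpolateHLine):
void grInterpolateHLineSpan(int x1, int x2, int y, float z, float zstep, int colour) {
    int offset = x1 + (y * VIDEO_WIDTH);
    int n = x2 - x1 + 1;
    if (n <= 0) return;

    // Read-only pass: classify the span, stopping as soon as it is known to be mixed.
    int pass = 0, fail = 0;
    float zt = z;
    for (int i = 0; i < n && (pass == 0 || fail == 0); i++, zt += zstep) {
        if (zt < grDepthBuffer[offset + i]) pass++; else fail++;
    }

    if (fail == 0) {
        // Fully visible: fill pixels and depths in one tight loop.
        for (int i = 0; i < n; i++, z += zstep) {
            vSetPixel(x1 + i, y, colour);
            grDepthBuffer[offset + i] = z;
        }
    } else if (pass == 0) {
        // Fully occluded: nothing to draw.
    } else {
        // Partially occluded: fall back to the per-pixel version.
        grInterpolateHLine(x1, x2, y, z, zstep, colour);
    }
}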
Also, not sure what you mean by older computer, but if your target computer is multi-core then you can break it up among multiple cores. You can do this for the buffer clearing function as well. It can help quite a bit.
I ended up solving this by replacing the Z-buffer with the Painter's Algorithm. I used SSE to write a Z-buffer implementation that created a bitmask with the pixels to paint (plus the range optimization suggested by Gerald), and it still ran far too slowly.
Thank you, everyone, for your input.

Why is my computer not showing a speedup when I use parallel code?

So I realize this question sounds stupid (and yes, I am using a dual core), but I have tried two different libraries (Grand Central Dispatch and OpenMP), and when using clock() to time the code with and without the lines that make it parallel, the speed is the same. (For the record, they both used their own form of parallel for.) They report being run on different threads, but perhaps they are running on the same core? Is there any way to check? (Both libraries are for C; I'm uncomfortable at lower layers.) This is super weird. Any ideas?
EDIT: Added detail for Grand Central Dispatch in response to OP comment.
While the other answers here are useful in general, the specific answer to your question is that you shouldn't be using clock() to compare the timings. clock() measures CPU time, which is added up across the threads. When you split a job between cores, it uses at least as much CPU time (usually a bit more, due to threading overhead). Search for clock() on this page to find "If process is multi-threaded, cpu time consumed by all individual threads of process are added."
It's just that the job is split between threads, so the overall time you have to wait is less. You should be using wall time (the time on a wall clock). OpenMP provides the routine omp_get_wtime() for this. Take the following routine as an example:
#include <omp.h>
#include <time.h>
#include <math.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int i, nthreads;
    clock_t clock_timer;
    double wall_timer;
    for (nthreads = 1; nthreads <= 8; nthreads++) {
        clock_timer = clock();
        wall_timer = omp_get_wtime();
        #pragma omp parallel for private(i) num_threads(nthreads)
        for (i = 0; i < 100000000; i++) cos(i);
        printf("%d threads: time on clock() = %.3f, on wall = %.3f\n", \
               nthreads, \
               (double) (clock() - clock_timer) / CLOCKS_PER_SEC, \
               omp_get_wtime() - wall_timer);
    }
}
The results are:
1 threads: time on clock() = 0.258, on wall = 0.258
2 threads: time on clock() = 0.256, on wall = 0.129
3 threads: time on clock() = 0.255, on wall = 0.086
4 threads: time on clock() = 0.257, on wall = 0.065
5 threads: time on clock() = 0.255, on wall = 0.051
6 threads: time on clock() = 0.257, on wall = 0.044
7 threads: time on clock() = 0.255, on wall = 0.037
8 threads: time on clock() = 0.256, on wall = 0.033
You can see that the clock() time doesn't change much. I get 0.254 without the pragma, so using OpenMP with one thread is a little slower than not using OpenMP at all, but the wall time decreases with each thread.
The improvement won't always be this good, due to, for example, parts of your calculation that aren't parallel (see Amdahl's law) or different threads fighting over the same memory.
EDIT: For Grand Central Dispatch, the GCD reference states that GCD uses gettimeofday for wall time. So, I created a new Cocoa App, and in applicationDidFinishLaunching I put:
struct timeval t1, t2;
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (int iterations = 1; iterations <= 8; iterations++) {
    int stride = 1e8/iterations;
    gettimeofday(&t1, 0);
    dispatch_apply(iterations, queue, ^(size_t i) {
        for (int j = 0; j < stride; j++) cos(j);
    });
    gettimeofday(&t2, 0);
    NSLog(@"%d iterations: on wall = %.3f\n", iterations, \
          t2.tv_sec + t2.tv_usec/1e6 - (t1.tv_sec + t1.tv_usec/1e6));
}
and I get the following results on the console:
2010-03-10 17:33:43.022 GCDClock[39741:a0f] 1 iterations: on wall = 0.254
2010-03-10 17:33:43.151 GCDClock[39741:a0f] 2 iterations: on wall = 0.127
2010-03-10 17:33:43.236 GCDClock[39741:a0f] 3 iterations: on wall = 0.085
2010-03-10 17:33:43.301 GCDClock[39741:a0f] 4 iterations: on wall = 0.064
2010-03-10 17:33:43.352 GCDClock[39741:a0f] 5 iterations: on wall = 0.051
2010-03-10 17:33:43.395 GCDClock[39741:a0f] 6 iterations: on wall = 0.043
2010-03-10 17:33:43.433 GCDClock[39741:a0f] 7 iterations: on wall = 0.038
2010-03-10 17:33:43.468 GCDClock[39741:a0f] 8 iterations: on wall = 0.034
which is about the same as I was getting above.
This is a very contrived example. In fact, you need to be sure to keep the optimization at -O0, or else the compiler will realize that we don't keep any of the calculations and skip the loop altogether. Also, the integer I'm taking the cos of is different in the two examples, but that doesn't affect the results too much. See STRIDE in the manpage for dispatch_apply for how to do it properly and for why iterations is broadly comparable to num_threads in this case.
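If you would rather not rely on -O0, a common trick (a sketch of the OpenMP version above) is to accumulate the results and print them, so the compiler cannot discard the work:
double sum = 0.0;
#pragma omp parallel for private(i) reduction(+:sum) num_threads(nthreads)
for (i = 0; i < 100000000; i++) sum += cos(i);
printf("checksum = %f\n", sum);   // forces the compiler to keep the loop body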
EDIT: I note that Jacob's answer includes
I use the omp_get_thread_num()
function within my parallelized loop
to print out which core it's working
on... This way you can be sure that
it's running on both cores.
which is not correct (it has since been partly fixed by an edit). Using omp_get_thread_num() is indeed a good way to ensure that your code is multithreaded, but it doesn't show "which core it's working on", just which thread. For example, the following code:
#include <omp.h>
#include <stdio.h>

int main() {
    int i;
    #pragma omp parallel for private(i) num_threads(50)
    for (i = 0; i < 50; i++) printf("%d\n", omp_get_thread_num());
}
prints out that it's using threads 0 to 49, but this doesn't show which core it's working on, since I have only eight cores. By looking at the Activity Monitor (the OP mentioned GCD, so they must be on a Mac - go to Window/CPU Usage), you can see jobs switching between cores, so core != thread.
Most likely your execution time isn't bound by those loops you parallelized.
My suggestion is that you profile your code to see what is taking most of the time. Most engineers will tell you that you should do this before doing anything drastic to optimize things.
It's hard to guess without any details. Maybe your application isn't even CPU bound. Did you watch CPU load while your code was running? Did it hit 100% on at least one core?
Your question is missing some crucial details, such as the nature of your application, what portion of it you are trying to improve, profiling results (if any), etc...
Having said that you should remember several critical points when approaching a performance improvement effort:
Efforts should always concentrate on the code areas which have been proven, by profiling, to be inefficient.
Parallelizing CPU-bound code will almost never improve performance on a single-core machine. You will lose precious time on unnecessary context switches and gain nothing. You can very easily worsen performance by doing this.
Even if you are parallelizing CPU-bound code on a multicore machine, you must remember that you never have any guarantee of parallel execution.
Make sure you are not going against these points, because an educated guess (barring any additional details) will say that's exactly what you're doing.
If you are using a lot of memory inside the loop, that might prevent it from getting faster. You could also look into the pthread library to handle threading manually.
I use the omp_get_thread_num() function within my parallelized loop to print out which core it's working on, if you don't specify num_threads. For example,
printf("Computing bla %d on core %d/%d ...\n",i+1,omp_get_thread_num()+1,omp_get_max_threads());
The above will work for this pragma
#pragma omp parallel for default(none) shared(a,b,c)
This way you can be sure that it's running on both cores since only 2 threads will be created.
Btw, is OpenMP enabled when you're compiling? In Visual Studio you have to enable it in the Property Pages, under C++ -> Language, and set OpenMP Support to Yes.
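For command-line builds, the equivalent is a compiler switch; without it the #pragma omp lines are simply ignored. For example (GCC/Clang and the MSVC command line, with a placeholder file name):
g++ -fopenmp my_program.cpp -o my_program
cl /openmp my_program.cpp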
