The MPI_Sendrecv_replace function is excessively time consuming

I call MPI_Sendrecv_replace several times in my code, but the function behaves very strangely. In the first loop (5000 iterations), MPI_Sendrecv_replace takes about 30 seconds in total. However, in the second loop the function takes only 1 second. I am very curious about the reason, because I want to optimize the code. The code runs with 32 processes on a single node with 4 Intel(R) Xeon(R) E5-4620 v2 CPUs.
Part of my code is here:
for (ishot = 1; ishot <= nshots; ishot += SHOTINC) {
    ...
    for (nt = 1; nt <= NT; nt++) {
        ...
        MPI_Sendrecv_replace(&bufferlef_to_rig[1][1], NY*fdo3, MPI_FLOAT,
                             INDEX[1], TAG1, INDEX[2], TAG1,
                             MPI_COMM_WORLD, &status);
        ...
    }
    for (nt = 1; nt <= NT; nt++) {
        ...
        MPI_Sendrecv_replace(&bufferlef_to_rig[1][1], NY*fdo3, MPI_FLOAT,
                             INDEX[1], TAG1, INDEX[2], TAG1,
                             MPI_COMM_WORLD, &status);
        ...
    }
}
I used MPI_Wtime to measure the time spent in MPI_Sendrecv_replace.
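For reference, a minimal sketch of how such a measurement can be done with MPI_Wtime, accumulating the time across iterations (the accumulator name is illustrative, not from the original code):

double t_exchange = 0.0;          /* accumulated time spent in the exchange */
double t0 = MPI_Wtime();
MPI_Sendrecv_replace(&bufferlef_to_rig[1][1], NY*fdo3, MPI_FLOAT,
                     INDEX[1], TAG1, INDEX[2], TAG1,
                     MPI_COMM_WORLD, &status);
t_exchange += MPI_Wtime() - t0;   /* MPI_Wtime returns wall-clock seconds */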

It seems that the problem is a performance bug in MPI_Sendrecv_replace. When I replace MPI_Sendrecv_replace with MPI_Sendrecv, the program behaves normally and the time-consuming problem disappears. I found related bug reports at https://github.com/open-mpi/ompi/issues/93 and http://www.open-mpi.org/community/lists/users/2010/01/11683.php
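For anyone wanting to try the same workaround, a minimal sketch of the MPI_Sendrecv variant with a separate receive buffer (bufferrig_to_lef is an assumed name for the extra buffer, not taken from the original code):

/* a separate receive buffer avoids the internal staging copy that
   MPI_Sendrecv_replace has to do for the in-place exchange */
MPI_Sendrecv(&bufferlef_to_rig[1][1], NY*fdo3, MPI_FLOAT, INDEX[1], TAG1,
             &bufferrig_to_lef[1][1], NY*fdo3, MPI_FLOAT, INDEX[2], TAG1,
             MPI_COMM_WORLD, &status);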

Related

OpenCL: comparing the time required to add two arrays of integers on available platforms/devices

I'm very new to the whole OpenCL world, so I'm following some beginner tutorials. I'm trying to combine this and this to compare the time required to add two arrays together on different devices. However, I'm getting confusing results. Since the code is too long to post, I made this GitHub Gist.
On my Mac I have 1 platform with 3 devices. When I manually set the j in
cl_command_queue command_queue = clCreateCommandQueue(context, device_id[j], 0, &ret);
to 0, it seems to run the calculation on the CPU (about 5.75 seconds). When I put 1 or 2, the calculation time drops drastically (0.01076 seconds), which I assume is because the calculation is being run on my Intel or AMD GPU. But then there are some issues:
I can set j to any higher number and it still seems to run on the GPUs.
When I put all the calculations in a loop, the times measured for all the devices are the same as calculating on the CPU (as I presume).
The times required to do the calculation for all j > 0 are suspiciously close. I wonder if they are really being run on different devices.
I clearly have no clue about OpenCL, so I would appreciate it if you could take a look at my code and let me know what my mistake(s) are and how I can solve it/them. Or maybe point me towards a good example that runs a calculation on different devices and compares the times.
P.S. I have also posted this question here on Reddit.
Before submitting a question for an issue you are having, always remember to check for errors (specifically, in this case, that every API call returns CL_SUCCESS). The results are meaningless otherwise.
In the specific case, the problem in your code is that when getting the device IDs, you're only getting one device ID (line 60, third argument), meaning that everything else in the buffer is bogus, and results for j > 0 are meaningless.
The only surprising thing is that it doesn't crash.
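As a hedged sketch of what correct enumeration might look like (the names platform_id and num_devices are illustrative, not from the gist): first query how many devices the platform has, then request that many IDs, and check every return code:

cl_uint num_devices = 0;
cl_int ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
if (ret != CL_SUCCESS || num_devices == 0) { /* handle error */ }

cl_device_id device_id[8];                    /* assumes at most 8 devices on the platform */
if (num_devices > 8) num_devices = 8;
ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_ALL, num_devices, device_id, NULL);
if (ret != CL_SUCCESS) { /* handle error */ }
/* only device_id[0] .. device_id[num_devices-1] are valid;
   any j >= num_devices indexes uninitialized garbage */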
Also, when checking runtimes, use OpenCL events, not host-side clock times. In your case you are at least measuring after the clFinish, so you are ensuring that the kernel execution has finished, but you're essentially counting the time needed for all the setup rather than just the copy time.
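A minimal sketch of event-based timing (assuming the queue is created with CL_QUEUE_PROFILING_ENABLE; kernel, global_size and the other variables are placeholders, not from the gist):

cl_command_queue queue = clCreateCommandQueue(context, device_id[j],
                                              CL_QUEUE_PROFILING_ENABLE, &ret);

cl_event ev;
ret = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL,
                             0, NULL, &ev);
clWaitForEvents(1, &ev);

cl_ulong t_start = 0, t_end = 0;
clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                        sizeof(t_start), &t_start, NULL);
clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                        sizeof(t_end), &t_end, NULL);
printf("kernel time: %f ms\n", (t_end - t_start) * 1e-6);  /* values are in nanoseconds */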

Regarding CPU utilization

Considering the piece of C code below, I expected the CPU utilization to go up to 100%, since the processor would try to complete the (endless, in this case) job given to it. After running the executable for 5 minutes, I found the CPU to reach a maximum of 48%. I am running Mac OS X 10.5.8; processor: Intel Core 2 Duo; compiler: GCC 4.1.
int i = 10;
while (1) {
    i = i * 5;
}
Could someone please explain why the CPU usage does not go up to 100%? Does the OS limit the CPU from reaching 100%?
Please note that if I add a "printf()" inside the loop, the CPU hits 88%. I understand that in this case the processor also has to write to the standard output stream, hence the sharp rise in usage.
Has this got something to do with the amount of job assigned to the processor per unit time?
Regards,
Ven.
You have a multicore processor and you are in a single-thread scenario, so you will use only one core at full throttle. Why would you expect the overall processor usage to go to 100% in such a context?
Run two copies of your program at the same time. These will use both cores of your "Core 2 Duo" CPU and overall CPU usage will go to 100%.
Edit
if I added a "printf()" inside the loop the CPU hits 88%.
The printf sends some characters to the terminal/screen. Sending the information, and displaying and updating the screen, is handled by code outside your executable, which is likely to be executed on another thread. But displaying a few characters does not need 100% of that thread. That is why you see 100% for core 1 and 76% for core 2, which results in the overall CPU usage of 88% that you observe.
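An alternative to running two copies of the program (as suggested above) is to spawn two busy threads in a single process. A minimal POSIX-threads sketch of my own (not from the original post) that should drive both cores of a dual-core CPU to 100%, compiled with -pthread:

#include <pthread.h>

/* each thread spins forever, keeping one core busy */
static void *spin(void *arg)
{
    volatile long i = 10;     /* volatile so the optimizer cannot remove the loop */
    while (1) {
        i = i * 5;
    }
    return NULL;              /* never reached */
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, spin, NULL);
    pthread_create(&t2, NULL, spin, NULL);
    pthread_join(t1, NULL);   /* never returns; both cores stay at 100% */
    pthread_join(t2, NULL);
    return 0;
}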

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea of how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 second delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY, as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;
    while (N--)
        for (j = 0; j < MILLISEC; j++);
}
This is referred to as busy waiting, a technique that just burns some CPU cycles, thus "waiting" by keeping the CPU "busy" with empty loops. You don't need to reset the function; it will do the same thing if called repeatedly.
If you call it with N=3, it will repeat the while loop 3 times, each time counting j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has presumably timed it and looked at the generated assembly to work out how many instructions execute per millisecond, and has chosen the constant MILLISEC so that one pass of the for loop busy-waits for about one millisecond.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
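As a usage sketch (the FCY value below is an assumed example; MILLISEC is defined as Fcy/10000 per the question's reference, and the exact divisor depends on how many cycles one inner iteration takes):

#define FCY      40000000UL        /* assumed instruction-clock frequency, 40 MHz */
#define MILLISEC (FCY / 10000)     /* per the question: Fcy / 10000 */

int main(void)
{
    while (1) {
        /* ... hypothetical application work ... */
        DelayNmSec(500);           /* wait roughly 500 ms; no "reset" needed between calls */
    }
}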
The code will break if:
it is used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed (see the sketch after this list), or
the compiler version is changed (as it may generate different code),
so, if the person who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant.
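One common way to make such a loop less fragile against the optimizer (an addition of mine, not part of the original answer) is to declare the counter volatile, so the compiler cannot delete the empty loop; the timing still has to be re-calibrated for each clock and compiler:

void DelayNmSec(unsigned int N)
{
    volatile unsigned int j;           /* volatile: the compiler must perform every iteration */
    while (N--)
        for (j = 0; j < MILLISEC; j++)
            ;
}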
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that on different processors with different speeds, the computation needs to be adjusted. Old games used this approach and I remember a simulation using this busy wait approach that targeted an old 8086 type of processor to cause an animation to move smoothly. When the game was used on a Pentium processor PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what the animation was.
This sort of busy wait means that the running thread sits in a computation loop, counting down for the given number of milliseconds. The result is that the thread does nothing else but count down.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the count down completes which may cause problems in other threads and tasks.
If the operating system is preemptive multi-tasking the resulting delays will have a variability as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.

Time difference for same code of multithreading on different processors?

Hypothetical Question.
I wrote a multithreaded program which spawns 8 threads, processes the data on the different threads, and completes the job. I am also using a semaphore in the code. But it gives me different execution times on different machines, which is OBVIOUS!!
Execution time for same code:
On Intel(R) Core(TM) i3 CPU Machine: 36 sec
On AMD FX(tm)-8350 Eight-Core Processor Machine : 32 sec
On Intel(R) Core(TM) i5-2400 CPU Machine : 16.5 sec
So, my question is:
Is there any kind of setting/variable/command/switch I am missing that could be enabled on the faster machine but not on the slower one, making the faster machine's execution time shorter? Or is it only the processor that accounts for the time difference?
Any kind of help/suggestions/comments will be helpful.
Operating System: Linux (Centos5)
Multi-threading benchmarks should be performed with significant statistical sampling (e.g. around 50 runs per machine). Furthermore, the environment in which the program runs matters too (e.g. whether Firefox was running at the same time or not).
Also, depending on resource consumption, runtimes can vary. In other words, without a more complete picture of your experimental conditions, it's impossible to answer your question.
Some observations I have made from personal experience:
Huge memory consumption can alter the results depending on the swap settings of the machine.
Two "identical" machines with the same OS installed under the same conditions can show different results.
When the total runtime is small compared to 5 minutes, results appear pretty random.
etc.
I once had a problem with time measurement: the time in the multithreaded version appeared larger than in the single-threaded one. In the end, the problem was that I was measuring the time inside each thread and summing it, instead of measuring it once around all the threads. For example:
Wrong measurement:
int main(void)
{
    // create_thread();
    // join_thread();
    // sum the per-thread times
}
void thread(void *arg)
{
    // measure time inside each thread
}
Right measurement:
int main(void)
{
    // record start time
    // create_thread();
    // join_thread();
    // record end time
    // calculate the difference
}
void thread(void *arg)
{
    // just do the work (no timing inside the thread)
}
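As a concrete sketch of the "right" measurement (a minimal example of mine, using POSIX threads and CLOCK_MONOTONIC; the worker function is a placeholder for the real per-thread work):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 8

static void *worker(void *arg)
{
    /* ... real per-thread work goes here ... */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);          /* record start time */
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);            /* record end time */

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("wall-clock time: %.3f s\n", elapsed);    /* total time across all threads */
    return 0;
}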

How can I find the execution time of a section of my program in C?

I'm trying to find a way to get the execution time of a section of code in C. I've already tried both time() and clock() from time.h, but it seems that time() returns seconds and clock() only gives me milliseconds (or centiseconds?). I would like something more precise, though. Is there a way I can grab the time with at least microsecond precision?
This only needs to be able to compile on Linux.
You referred to clock() and time() - were you looking for gettimeofday()?
That will fill in a struct timeval, which contains seconds and microseconds.
Of course the actual resolution is up to the hardware.
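A minimal sketch of timing a section with gettimeofday (the section being timed is a placeholder):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... code section to time ... */
    gettimeofday(&end, NULL);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}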
For what it's worth, here's one that's just a few macros:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

clock_t startm, stopm;
#define START if ((startm = clock()) == (clock_t)-1) { printf("Error calling clock"); exit(1); }
#define STOP  if ((stopm = clock()) == (clock_t)-1) { printf("Error calling clock"); exit(1); }
#define PRINTTIME printf("%6.3f seconds used by the processor.", ((double)(stopm - startm)) / CLOCKS_PER_SEC);
Then just use it with:
int main(void) {
    START;
    // Do stuff you want to time
    STOP;
    PRINTTIME;
    return 0;
}
From http://ctips.pbwiki.com/Timer
You want a profiler application.
Search keywords at SO and search engines: linux profiling
Have a look at gettimeofday,
clock_*, or get/setitimer.
Try "bench.h"; it lets you put a START_TIMER; and STOP_TIMER("name"); into your code, allowing you to arbitrarily benchmark any section of code (note: only recommended for short sections, not things taking dozens of milliseconds or more). Its accurate to the clock cycle, though in some rare cases it can change how the code in between is compiled, in which case you're better off with a profiler (though profilers are generally more effort to use for specific sections of code).
It only works on x86.
You might want to google for an instrumentation tool.
You won't find a library call which lets you get past the clock resolution of your platform. Either use a profiler (man gprof) as another poster suggested, or - quick & dirty - put a loop around the offending section of code to execute it many times, and use clock().
gettimeofday() provides you with a resolution of microseconds, whereas clock_gettime() provides you with a resolution of nanoseconds.
int clock_gettime(clockid_t clk_id, struct timespec *tp);
The clk_id identifies the clock to be used. Use CLOCK_REALTIME if you want a system-wide clock visible to all processes. Use CLOCK_PROCESS_CPUTIME_ID for per-process timer and CLOCK_THREAD_CPUTIME_ID for a thread-specific timer.
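A minimal sketch using clock_gettime with the per-process CPU-time clock (on older glibc versions you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... code section to time ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPU time: %.9f s\n", elapsed);
    return 0;
}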
It depends on the conditions. Profilers are nice for general global views, but if you really need an accurate view my recommendation is KISS: simply run the code in a loop such that it takes a minute or so to complete, then compute a simple average based on the total run time and the number of iterations executed (a minimal sketch follows the list below).
This approach allows you to:
Obtain accurate results with low resolution timers.
Not run into issues where instrumentation interferes with high speed caches (l2,l1,branch..etc) close to the processor. However running the same code in a tight loop can also provide optimistic results that may not reflect real world conditions.
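A minimal sketch of this averaging approach (ITERATIONS is an arbitrary placeholder; pick it large enough that the loop runs well beyond the resolution of the low-resolution timer):

#include <stdio.h>
#include <time.h>

#define ITERATIONS 1000000L

int main(void)
{
    clock_t start = clock();
    for (long i = 0; i < ITERATIONS; i++) {
        /* ... code section to time ... */
    }
    clock_t end = clock();

    double total = (double)(end - start) / CLOCKS_PER_SEC;
    printf("average per iteration: %.9f s\n", total / ITERATIONS);
    return 0;
}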
I don't know which environment/OS you are working on, but your timing may be inaccurate if another thread, task, or process preempts your timed code in the middle. I suggest exploring mechanisms such as mutexes or semaphores to prevent other threads from preempting your process.
If you are developing on x86 or x64 why not use the Time Stamp Counter: RDTSC.
It will be more reliable than ANSI C functions like time() or clock(), as RDTSC is an atomic instruction. Using the C functions for this purpose can introduce problems, as you have no guarantee that the thread they are executing in will not be switched out, in which case the value they return will not be an accurate description of the actual execution time you are trying to measure.
With RDTSC you can measure this better. You will need to convert the tick count back into a human-readable H:M:S format, which depends on the processor's clock frequency, but google around and I am sure you will find examples.
However, even with RDTSC you will be including the time your code was switched out of execution. It is a better solution than using time()/clock(), but if you need an exact measurement you will have to turn to a profiler that instruments your code and takes into account when your code is not actually executing due to context switches or whatever.
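A minimal sketch using the GCC/Clang __rdtsc() intrinsic (x86/x64 only; CPU_HZ is an assumed placeholder for your core's clock frequency, which you would have to determine yourself):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

#define CPU_HZ 3.0e9                 /* assumed 3 GHz clock; adjust for your CPU */

int main(void)
{
    uint64_t start = __rdtsc();
    /* ... code section to time ... */
    uint64_t end = __rdtsc();

    printf("%llu cycles (~%.9f s at the assumed clock)\n",
           (unsigned long long)(end - start), (end - start) / CPU_HZ);
    return 0;
}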
