Calculating time of execution with time() function
I tried clock() from time.h, but it always gives me 0.0000 seconds, i.e. 0 seconds, as output. Is there any way to get the execution time in microseconds, milliseconds, or some other smaller unit?
Precede the execution of your program in the shell with time, e.g.:
user#linux:~$ time c_program_name
Running the following, for example:
sampson-chen#linux:~/src/reviewboard$ time ls -R
Gives the following time results:
real 0m0.046s
user 0m0.008s
sys 0m0.012s
See the manual for time to adjust the display formats / precision / verbosity.
clock() should work: you need to record its value at the very beginning, record it again at the end, and print the difference. Check this out: http://www.cplusplus.com/reference/clibrary/ctime/clock/
clock() displays 0.000 all the time because the execution is very fast and the execution time is negligible. Try timing something more complex, like Tower of Hanoi or N-Queens with large inputs; then you'll get an execution time of some milliseconds. I tried it for Tower of Hanoi with 15 discs and it gave me a measurable execution time.
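As a concrete illustration of both answers, here is a minimal sketch (the Tower of Hanoi workload is just an arbitrary example): record clock() at the start, run a non-trivial computation, and print the difference at the end.

#include <stdio.h>
#include <time.h>

static unsigned long moves = 0;

/* Recursive Tower of Hanoi; counts moves instead of printing them. */
static void hanoi(int n, char from, char to, char via)
{
    if (n == 0)
        return;
    hanoi(n - 1, from, via, to);
    moves++;
    hanoi(n - 1, via, to, from);
}

int main(void)
{
    clock_t start = clock();          /* record at the very beginning */
    hanoi(25, 'A', 'C', 'B');         /* heavy enough for a non-zero time */
    clock_t end = clock();            /* record at the end */
    printf("%lu moves, CPU time: %f s\n",
           moves, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}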
Related
I want to use a timer to read the data from a simulink block to the workspace during simulation.
I made a simple mdl model composed of a clock connected to a scope.
Then I wrote this simple code:
t=timer('period', 1, 'taskstoexecute', 10, 'executionmode', 'fixedrate');
t.TimerFcn = {@TimeStep};
start(t)
function time = TimeStep (~,~)
load_system('mymodel');
set_param('mymodel','SimulationCommand','start');
block='mymodel/Clock';
rto=get_param(block,'runtimeObject');
time=rto.OutputPort(1).Data;
disp(time);
The problem is that when I run the code for a simulation time of 10, it shows me "0" in the workspace and repeats it ten times. I assume it should show me the time from 1 to 10. I have also modified the solver to a discrete solver with time step = 1.
The other thing I do not understand is that when I put a ramp function instead of the clock and change it to:
block='mymodel/Ramp';
then I receive an error of "too many inputs".
I would appreciate any help.
You have two things that count time and seem to think that one of them is controlling the time in the other. It isn't.
More specifically, you have
A 'Timer' in MATLAB that you have asked to run a certain piece of code once per second over 10 seconds. (Both times are measured in wall clock time.)
Some MATLAB code that loads a Simulink model (if it isn't already loaded); starts the model (if it isn't already started); and gets the value on the output of a Clock block that is in the model (it does this only once each time the code is executed). As with every Simulink model, it will execute as fast as it can until the simulation end time is reached (or something else stops it).
So, in your case, each time the Timer executes, the simulation is started; the value at the output of the Clock is obtained and printed (and since this happens very quickly after the start of the model, it prints that the simulation time is 0); and then, because you have a very simple simulation that takes almost no time to finish, the simulation terminates.
The above happens 10 times, each time printing the value of the clock at the start of the simulation, i.e. 0.
To see other values you need to make your simulation run longer - taking at least 1 second of wall clock time. For instance, if you change the solver to fixed step and put in a very small step size, something like 0.000001, then the simulation will probably take several seconds (of wall clock time) to execute.
Now you should see the Timer print different times as sometimes the model will still be executing when the code is called (1 second of wall clock time later).
But fundamentally you need to understand that the Timer is not controlling, and is not dependent upon, the simulation time, and vice-versa.
(I'm not sure about the issue with using Ramp, but I suspect it's got to do with Clock being a fundamental block while Ramp is a masked subsystem.)
I implemented a MIMD genetic algorithm using C and OpenMPI where each process takes care of an independent subpopulation (island model). So, for a population of size 200, a 1-process run operates on the whole population while 2 processes each evolve a population of size 100.
When I measure the execution time with MPI_Wtime, I get the expected execution time running on a 2-core machine with Ubuntu. However, it disagrees with both Ubuntu's time command and perception alone: it's noticeable that running with 2 processes takes longer for some reason.
$time mpirun -n 1 genalg
execution time: 0.570039 s (MPI_Wtime)
real 0m0.618s
user 0m0.584s
sys 0m0.024s
$time mpirun -n 2 genalg
execution time: 0.309784 s (MPI_Wtime)
real 0m1.352s
user 0m0.604s
sys 0m0.064s
For a larger population (4000), I get the following:
$time mpirun -n 1 genalg
execution time: 11.645675 s (MPI_Wtime)
real 0m11.751s
user 0m11.292s
sys 0m0.392s
$time mpirun -n 2 genalg
execution time: 5.872798 s (MPI_Wtime)
real 0m8.047s
user 0m11.472s
sys 0m0.380s
I get similar results whether there's communication between the processes or not, and I also tried MPI_Barrier. I got the same results with gettimeofday as well, and turning gcc optimization on or off doesn't make much difference.
What could be going on? It should run faster with 2 processes, as MPI_Wtime suggests, but in reality it's running slower, matching the real time.
Update: I ran it on another PC and didn't have this issue.
The code:
void runGA(int argc, char* argv[])
{
    (initializations)

    if (MYRANK == 0)
        t1 = MPI_Wtime();

    genalg();
    Individual* ind = best_found();

    MPI_Barrier(MPI_COMM_WORLD);
    if (MYRANK != 0)
        return;

    t2 = MPI_Wtime();
    exptime = t2 - t1;
    printf("execution time: %f s\n", exptime);
}
My guess (and theirs) is that time gives the sum of the time used by all cores. It's more like a cost: you have 2 processes on 2 cores, so the cost is time1 + time2, because the second core could have been used for another process, so you "lose" that time on the second core. MPI_Wtime() displays the actual elapsed time as a human perceives it.
That may explain why the real time is lower than the user time in the second case: the real time is closer to the MPI time than to the sum of user and sys. In the first case, the initialization probably takes too much time and skews the result.
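To see that distinction directly, here is a minimal stand-alone sketch (a toy program, not the poster's code) in which each rank reports both its wall-clock time from MPI_Wtime() and its CPU time from clock(). Run under time mpirun -n 2 ..., the user figure should roughly track the sum of the per-rank CPU times, while MPI_Wtime tracks the wall clock:

/* wtime_vs_clock.c - compile with: mpicc wtime_vs_clock.c -o wtime_vs_clock */
#include <mpi.h>
#include <stdio.h>
#include <time.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double w0 = MPI_Wtime();   /* wall-clock start */
    clock_t c0 = clock();      /* CPU-time start */

    /* burn some CPU as a stand-in for the real work */
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; i++)
        x += i * 1e-9;

    double wall = MPI_Wtime() - w0;
    double cpu = (double)(clock() - c0) / CLOCKS_PER_SEC;
    printf("rank %d: wall = %f s, cpu = %f s\n", rank, wall, cpu);

    MPI_Finalize();
    return 0;
}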
The issue was solved after upgrading Ubuntu Mate 15.10 to 16.04, which came with OpenMPI version 1.10.2 (the previous one was 1.6.5).
I'm using the clock() function in C to get the seconds through ticks, but with the little test program below I found that my processor hasn't been counting the ticks correctly: the seconds are badly out of sync with real time, and on top of that I had to multiply the result by 100 to get something closer to seconds, which I don't think makes sense. In this program, 10 s are almost equivalent to 7 s in real life. Could someone help me make the clock() function a bit more precise?
I'm using a BeagleBone Black rev C with kernel 3.8.13-bone70, Debian 4.6.3-14, and gcc version 4.6.3.
Thanks in advance!
Here's my test program:
#include <stdio.h>
#include <time.h>

int main(void)
{
    while (1)
        printf("\n%f", (double)100 * clock() / CLOCKS_PER_SEC);
    return 0;
}
The result returned by the clock() function isn't expected to be synchronized with real time. It returns
"the implementation's best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation."
If you want a high-precision indication of real (wall-clock) time, you'll need to use some system-specific function such as gettimeofday() or clock_gettime() -- or, if your implementation supports it, the standard timespec_get function, added in C11.
Your program calls printf in a loop. I'd expect it to spend most of its time waiting for I/O and yielding to other processes, and for the CPU time indicated by clock() (when converted to seconds) to advance much more slowly than real time. I suspect your multiplication by 100 is throwing off your results; clock()/CLOCKS_PER_SEC should be a correct indication of CPU time.
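For reference, here is a minimal sketch of the wall-clock approach mentioned above, using the POSIX clock_gettime() with CLOCK_MONOTONIC (depending on compiler flags you may need -D_POSIX_C_SOURCE=199309L and, on very old glibc, -lrt); the busy loop is just a placeholder for the code being timed:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);   /* wall-clock start */

    /* placeholder workload */
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; i++)
        x += i * 1e-9;

    clock_gettime(CLOCK_MONOTONIC, &end);     /* wall-clock end */
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed wall-clock time: %.6f s\n", elapsed);
    return 0;
}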
Note Keith Thompson's answer which points you in the right direction.
However, one might expect a tight infinite loop to use most of the CPU in a given period (assuming nothing else is happening), and therefore any serious deviation between the CPU time spent on this tight infinite loop versus real time might be interesting to explore. To that end, I rigged this short program up:
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t t0 = time(NULL);
    while (1) {
        printf("%f\r", ((double) clock()) / CLOCKS_PER_SEC);
        fflush(stdout);
        if (difftime(time(NULL), t0) > 5.0) {
            break;
        }
    }
    return 0;
}
On Windows, I get:
C:\...\Temp> timethis clk.exe
TimeThis : Command Line : clk.exe
TimeThis : Start Time : Mon Apr 27 17:00:34 2015
5.093000
TimeThis : Command Line : clk.exe
TimeThis : Start Time : Mon Apr 27 17:00:34 2015
TimeThis : End Time : Mon Apr 27 17:00:40 2015
TimeThis : Elapsed Time : 00:00:05.172
On *nix, you can use the time CLI utility to measure time.
The timing seems close enough to me.
Note also Zan's point about comms lags.
Keith's answer is most correct but there is another problem with your program even if you did change it to use real time.
You have it producing output in a tight loop with no sleeps at all. If your output device is even a little bit slow, the time displayed will lag by however much output is still sitting in the buffer.
If you ran this over a 9,600 baud serial console for example, it could be as much as 30 seconds behind. At least, that's about how much lag I remember having observed in the far past when we still used 9,600 baud consoles.
I have written some C code which I call from MATLAB after compiling it with MEX. Inside the C code, I measure the time of part of the computation using the following code:
clock_t begin, end;
double time_elapsed;
begin = clock();
/* do stuff... */
end = clock();
time_elapsed = (double) ((double) (end - begin) / (double) CLOCKS_PER_SEC);
Elapsed time should be the execution time in seconds.
I then output the value time_elapsed to MATLAB (it is properly exported; I checked). Then MATLAB-side I call this C function (after I compile it using MEX) and I measure its execution time using tic and toc. What turns out to be a complete absurdity is that the time I compute using tic and toc is 0.0011s (average on 500 runs, st. dev. 1.4e-4) while the time that is returned by the C code is 0.037s (average on 500 runs, st. dev. 0.0016).
Here one may notice two very strange facts:
The execution time for the whole function is lower than the execution time for a part of the code. Hence, either MATLAB's or C's measurements are strongly inaccurate.
The execution times measured in the C code are very scattered and exhibit very high st. deviation (coeff. of variation 44%, compared to just 13% for tic-toc).
What is going on with these timers?
You're comparing apples to oranges.
Look at Matlab's documentation:
tic - http://www.mathworks.com/help/matlab/ref/tic.html
toc - http://www.mathworks.com/help/matlab/ref/toc.html
tic and toc let you measure real elapsed time.
Now look at the clock function http://linux.die.net/man/3/clock.
In particular,
The clock() function returns an approximation of processor time used by the program.
The value returned is the CPU time used so far as a clock_t; to get the number of seconds used, divide by CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t) -1.
So what can account for your difference:
CPU time (measured by clock()) and real elapsed time (measured by tic and toc) are NOT the same. So you would expect the CPU time to be less than the elapsed time? Well, maybe. What if within those 0.0011 s you're driving 10 cores at 100%? Then the clock() measurement would be roughly 10x what tic and toc measure. Possible, but unlikely.
clock() is grossly inaccurate and, consistent with the documentation, only an approximate CPU-time measurement. I suspect that it is pegged to the scheduler quantum size, but I didn't dig through the Linux kernel code to check. I also didn't check on other OSes, but this dude's blog is consistent with that theory.
So what to do... for starters, compare apples to apples! Next, make sure you take into account timer resolution.
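As a sketch of what "apples to apples" could look like on the C side (a hypothetical MEX file, not the poster's code, assuming a POSIX system and built with something like mex timing_demo.c), time the same region with both clock() and a wall-clock source, and compare the latter against tic/toc:

/* timing_demo.c - prints CPU time and wall-clock time for the same region */
#include <time.h>
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void)nlhs; (void)plhs; (void)nrhs; (void)prhs;   /* unused here */

    struct timespec w0, w1;
    clock_t c0, c1;

    clock_gettime(CLOCK_MONOTONIC, &w0);   /* wall-clock start */
    c0 = clock();                          /* CPU-time start */

    /* stand-in for the real computation */
    volatile double x = 0.0;
    for (long i = 0; i < 10000000L; i++)
        x += i * 1e-9;

    c1 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w1);

    double cpu = (double)(c1 - c0) / CLOCKS_PER_SEC;
    double wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9;
    mexPrintf("cpu: %f s, wall: %f s\n", cpu, wall);
}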
I use the time command on Linux to measure how long my program took, and in my code I have put timers to calculate the time:
time took calculated by program: 71.320 sec
real 1m27.268s
user 1m7.607s
sys 0m3.785s
I don't know why my program's real time is more than the time it calculated. How can I find the reason and resolve it?
======================================================
Here is how I calculate the time in my code:
clock_t cl;
cl = clock();
do_some_work();
cl = clock() - cl;
float seconds = 1.0 * cl / CLOCKS_PER_SEC;
printf("time took: %.3f sec\n", seconds);
There is always overhead for starting up the process, starting the runtime, and shutting the program down, and time itself probably adds some overhead as well.
On top of that, in a multi-process operating system your process can be "switched out", meaning that other processes run while yours is put on hold. This can mess with timings too.
Let me explain the output of time:
real means the actual clock time, including all overhead.
user is time spent in the actual program.
sys is time spent in the kernel (the switching out I talked about earlier, for example).
Note that user + sys is very close to your time: 1m7.607s + 0m3.785s == 71.392s.
Finally, how did you calculate the time? Without that information it's hard to tell exactly what the problem (if any) is.
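For what it's worth, the snippet above uses clock(), which measures CPU time rather than wall-clock time, which is why it lands so close to user + sys. Here is a minimal sketch contrasting the two measurements (do_some_work() here is just a stand-in busy loop, not the poster's function):

#include <stdio.h>
#include <time.h>

/* stand-in for the real work; replace with the actual computation */
static void do_some_work(void)
{
    volatile double x = 0.0;
    for (long i = 0; i < 200000000L; i++)
        x += i * 1e-9;
}

int main(void)
{
    clock_t c0 = clock();
    time_t t0 = time(NULL);

    do_some_work();

    double cpu = (double)(clock() - c0) / CLOCKS_PER_SEC;
    double wall = difftime(time(NULL), t0);

    printf("cpu time:  %.3f sec (compare with user + sys)\n", cpu);
    printf("wall time: %.0f sec (compare with real)\n", wall);
    return 0;
}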