I would like to know what lines of C code to add to a program so that it tells me the total time the program takes to run. I guess there should be a counter initialization near the beginning of main() and a readout just before it returns. Is the right header clock.h?
Thanks a lot...
Update: I have a Win XP machine. Is it just a matter of calling clock() at the beginning and again at the end of the program, then taking the difference? Yes, you're right, it's time.h.
Here's my code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <share.h>
#include <time.h>
void f(long double fb[], long double fA, long double fB);
int main() {
    clock_t start, end;
    start = clock();
    const int ARRAY_SIZE = 11;
    long double* z = (long double*) malloc(sizeof(long double) * ARRAY_SIZE);
    int i;
    long double A, B;

    if (z == NULL) {
        printf("Out of memory\n");
        exit(-1);
    }
    A = 0.5;
    B = 2;
    for (i = 0; i < ARRAY_SIZE; i++) {
        z[i] = 0;
    }
    z[1] = 5;
    f(z, A, B);
    for (i = 0; i < ARRAY_SIZE; i++)
        printf("z is %.16Le\n", z[i]);
    free(z);
    z = NULL;
    end = clock();
    printf("Took %ld ticks\n", end - start);
    printf("Took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}

void f(long double fb[], long double fA, long double fB) {
    fb[0] = fb[1] * fA;
    fb[1] = fb[1] - 1;
    return;
}
Some errors with MVS2008:
testim.c(16) : error C2143: syntax error : missing ';' before 'const'
testim.c(18) : error C2143: syntax error : missing ';' before 'type'
testim.c(20) : error C2143: syntax error : missing ';' before 'type'
testim.c(21) : error C2143: syntax error : missing ';' before 'type'
testim.c(23) : error C2065: 'z' : undeclared identifier
testim.c(23) : warning C4047: '==' : 'int' differs in levels of indirection from 'void *'
testim.c(28) : error C2065: 'A' : undeclared identifier
testim.c(28) : warning C4244: '=' : conversion from 'double' to 'int', possible loss of data
and it goes on to 28 errors. Note that I don't get any errors or warnings without the clock code.
LATEST NEWS: Unfortunately I didn't get a good reply here, but after a Google search the code is working. Here it is:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
void f(long double fb[], long double fA);
int main() {
    clock_t start = clock();
    const int ARRAY_SIZE = 11;
    long double* z = (long double*) malloc(sizeof(long double) * ARRAY_SIZE);
    int i;
    long double A;

    if (z == NULL) {
        printf("Out of memory\n");
        exit(-1);
    }
    A = 0.5;
    for (i = 0; i < ARRAY_SIZE; i++) {
        z[i] = 0;
    }
    z[1] = 5;
    f(z, A);
    for (i = 0; i < ARRAY_SIZE; i++)
        printf("z is %.16Le\n", z[i]);
    free(z);
    z = NULL;
    printf("Took %f seconds\n", ((double)clock() - start) / CLOCKS_PER_SEC);
    return 0;
}

void f(long double fb[], long double fA) {
    fb[0] = fb[1] * fA;
    fb[1] = fb[1] - 1;
    return;
}
Cheers
Update on April 10: Here's a better solution thanks to "JustJeff"
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
void f(long double fb[], long double fA);
const int ARRAY_SIZE = 11;
int main(void)
{
    long double* z = (long double*) malloc(sizeof(long double) * ARRAY_SIZE);
    int i;
    long double A;
    LARGE_INTEGER freq;
    LARGE_INTEGER t0, tF, tDiff;
    double elapsedTime;
    double resolution;

    if (z == NULL) {
        printf("Out of memory\n");
        exit(-1);
    }

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    // code to be timed goes HERE
    {
        A = 0.5;
        for (i = 0; i < ARRAY_SIZE; i++) {
            z[i] = 0;
        }
        z[1] = 5;
        f(z, A);
        for (i = 0; i < ARRAY_SIZE; i++)
            printf("z is %.16Le\n", z[i]);
        free(z);
        z = NULL;
    }
    QueryPerformanceCounter(&tF);

    tDiff.QuadPart = tF.QuadPart - t0.QuadPart;
    elapsedTime = tDiff.QuadPart / (double) freq.QuadPart;
    resolution = 1.0 / (double) freq.QuadPart;

    printf("Your performance counter ticks %I64u times per second\n", freq.QuadPart);
    printf("Resolution is %lf nanoseconds\n", resolution * 1e9);
    printf("Code under test took %lf sec\n", elapsedTime);
    return 0;
}

void f(long double fb[], long double fA) {
    fb[0] = fb[1] * fA;
    fb[1] = fb[1] - 1;
    return;
}
It works both with MVS2008 and with Borland C++ builderX from 2003.
On Unix (I think) systems, the time command with the name of your program as a command-line argument will tell you the time the program takes to run. Note that this measures the execution time of the whole program. If you need to test just one part, include time.h and use the clock function, more or less like this:
#include <time.h>

int main() {
    clock_t start;
    clock_t end;
    double function_time;

    start = clock();
    function_you_want_to_time();
    end = clock();

    /* Get time in milliseconds */
    function_time = (double)(end - start) / (CLOCKS_PER_SEC / 1000.0);

    return 0;
}
That will give you the time in milliseconds (notice the / 1000.0 part). If you want seconds, remove / 1000.0. If you want plain clock ticks, which will be more accurate, make function_time a clock_t and replace the function_time = ... line with:
function_time = end - start;
To time the whole program, I suggest making a function called run_program() or similar (avoid names like _main(), since file-scope identifiers beginning with an underscore are reserved in C), moving all your program logic from main() (but not the timing code!) into that function, and calling it from main(). That way it's clearer what is timing code and what is the rest of the program.
If you need a total for your program then in Linux console:
$ time myProgram
You can also use time.h in your code.
#include <stdio.h>
#include <time.h>

int main() {
    time_t start, end;
    start = time(0);
    /* some working code */
    end = time(0);
    printf("%.f seconds\n", difftime(end, start));
    return 0;
}
You could use the clock() function (in <time.h>) if you want to test a block of code, or the time program on *nix, as another answerer suggested. E.g.
> time ./foo my args
For clock, you need to subtract the difference between two checkpoints. E.g.
#include <stdio.h>
#include <time.h>

void f() {
    clock_t start, end;
    start = clock();
    // some long code.
    end = clock();
    printf("Took %ld ticks\n", end - start);
    // or in (fractional) seconds:
    printf("Took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
}
Update
Regarding your new errors: in C89 mode (which Visual C uses for .c files) you can't mix statements and declarations. Once a block's first statement appears, no further declarations may follow in that block. Declare all your variables at the top of the block, or compile the file as C++.
If you're on Windows and you want to measure things down in the microseconds, investigate QueryPerformanceCounter() and QueryPerformanceFrequency(). On many systems these can resolve full processor clock periods (third-of-a-nanosecond territory), and I don't believe I've ever seen one coarser than 3.5795 MHz, still well under a microsecond.
You call QueryPerformanceFrequency() to determine how many counts per second the counter counts. Then call QueryPerformanceCounter() before your code under test, and then again after. Delta the two readings of QPC and divide by the period from QPF and you get the elapsed time between the two QPC calls. Like so ...
LARGE_INTEGER freq;
LARGE_INTEGER t0, tF, tDiff;
double elapsedTime;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&t0);
// code to be timed goes HERE
QueryPerformanceCounter(&tF);
tDiff.QuadPart = tF.QuadPart - t0.QuadPart;
elapsedTime = tDiff.QuadPart / (double) freq.QuadPart;
// elapsedTime now has your measurement, w/resolution given by freq
Evidently these access a hardware counting device that is tied to some system oscillator on the main board, in which case they shouldn't suffer jitter from software load. The resolution you get depends on your system.
FOLLOW UP
Here's a very simple complete program that demonstrates the interface:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq;
    LARGE_INTEGER t0, tF, tDiff;
    double elapsedTime;
    double resolution;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    // code to be timed goes HERE
    {
        Sleep(10);
    }
    QueryPerformanceCounter(&tF);

    tDiff.QuadPart = tF.QuadPart - t0.QuadPart;
    elapsedTime = tDiff.QuadPart / (double) freq.QuadPart;
    resolution = 1.0 / (double) freq.QuadPart;

    printf("Your performance counter ticks %I64u times per second\n", freq.QuadPart);
    printf("Resolution is %lf nanoseconds\n", resolution * 1e9);
    printf("Code under test took %lf sec\n", elapsedTime);
    return 0;
}
For something as simple as this, it's quicker to skip the IDE, just save it in a foo.c file and (assuming MS VS 2008) use the command line
cl foo.c
to build it. Here's the output on my system:
Your performance counter ticks 3579545 times per second
Resolution is 279.365115 nanoseconds
Code under test took 0.012519 sec
You probably want time.h, and the clock() function.
You can also try GetTickCount(). clock() will work fine too, but I believe clock() values can change if another process or someone manually alters the system time, whereas GetTickCount() values are not affected by that.
Related
I'm working on the first step of a project and I have to calculate the execution time of a summation and a multiplication. I wrote the following code for the summation:
#include <stdio.h>
#include <time.h>
int main(int argc, char const *argv[]) {
    long muestras = 100000000;
    long resultado = 0;
    float inicial = clock();
    printf("Tiempo inicial: %f\n", inicial);
    for (int i = 1; i < muestras; i += 1) {
        resultado = resultado + i;
    }
    float final = clock();
    printf("Tiempo final: %f\n", final);
    float total = (final - inicial) / ((double)CLOCKS_PER_SEC);
    printf("tiempo = %f", total);
    //printf("tiempo = %f",((double)clock() - start));
    printf("\n");
    printf("resultado = %d", resultado);
    return 0;
}
and it works perfectly. But I wrote the following code for the multiplication, and the initial and final times are 0... I don't know why, I can't understand it...
#include <stdio.h>
#include <time.h>
int main(int argc, char const *argv[]) {
    long muestras = 10;
    long long resultado = 1;
    float inicial = clock();
    printf("Tiempo inicial: %f\n", inicial);
    for (int i = 1; i < muestras; i += 1) {
        if (resultado > 20) {
            resultado = (resultado * i) / 20;
        } else {
            resultado = resultado * i;
        }
    }
    float final = clock();
    printf("Tiempo final: %f\n", final);
    float total = (final - inicial);
    ///((double)CLOCKS_PER_SEC);
    printf("tiempo = %f", total);
    //printf("tiempo = %f",((double)clock() - start));
    printf("\n");
    printf("resultado = %lli", resultado);
    return 0;
}
I know it has overflow, but no matter what sample size I take, the result is the same... please help. Sorry for my bad English; greetings from Colombia! :)
The return value from clock is of type clock_t, not float. Also, the return value is not seconds or anything such, but "clocks", which you can convert to seconds by dividing by clocks per sec.
You should do something like this instead:
clock_t initial = clock();
...
clock_t final = clock();
double total = (final - initial) / (double)CLOCKS_PER_SEC;
printf("time delta = %f", total);
Note that there is no printf format specifier for a value of type clock_t, so to print one portably you must cast it first (e.g. to double).
The return value from clock() is of type clock_t, not float, and it represents the number of ticks since the beginning of the program. You should subtract them and then convert to double to divide by CLOCKS_PER_SEC, as in Antti's answer.
Also, your multiplication program only executes 10 muestras, which means it may entirely finish in the first clock tick. Increase that to a large number and you may see different elapsed time.
This question already has answers here:
What’s the correct way to use printf to print a clock_t?
(5 answers)
Closed 6 years ago.
Currently I'm trying to time a process to compare with a sample program I found online that uses OpenCL. Yet when I try to time this process I get very strange values, as shown below.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <CL/cl.h>
#include <time.h>
int main(void) {
    int n = 100000;
    size_t bytes = n * sizeof(double);
    double *h_a;
    double *h_b;
    double *h_c;

    h_a = (double*)malloc(bytes);
    h_b = (double*)malloc(bytes);
    h_c = (double*)malloc(bytes);

    int i;
    for (i = 0; i < n; i++)
    {
        h_a[i] = sinf(i)*sinf(i);
        h_b[i] = cosf(i)*cosf(i);
    }

    clock_t start = clock();
    for (i = 0; i < n; i++)
        h_c[i] = h_a[i] + h_b[i];
    clock_t stop = clock();

    double time = (stop - start) / CLOCKS_PER_SEC;
    printf("Clocks per Second: %E\n", CLOCKS_PER_SEC);
    printf("Clocks Taken: %E\n", stop - start);
    printf("Time Taken: %E\n", time);

    free(h_a);
    free(h_b);
    free(h_c);
    system("PAUSE");
    return 0;
}
Results:
C:\MinGW\Work>systesttime
Clocks per Second: 1.788208E-307
Clocks Taken: 1.788208E-307
Time Taken: 0.000000E+000
Press any key to continue . . .
It's giving very strange values for everything. I understand it should be around 1,000,000 and I don't know why it's doing this. It used to give values around 6E+256 for everything, which was equally concerning.
It looks like your clock_t is not double, so %E is the wrong format specifier.
It's probably long. Try this:
printf("Clocks per Second: %E\n", (double)CLOCKS_PER_SEC);
I'm trying to get the execution time of my program using the time header and can't find any resources that simply use <time.h> and not <sys/time.h>.
I tried
time_t startTime;
time_t endTime;
long double execTime;
/* Start timer */
time(&startTime);
..STUFF THAT TAKES TIME..
time(&endTime);
execTime = difftime(endTime, startTime);
printf("Computing took %Lf\n", execTime * 1000);
But this prints out 0 every single time... I'm guessing it's because time() returns whole seconds and my process takes less than a second.
How can I show execution in milliseconds?
Thank you
Although clock_gettime should be the preferred way, it is POSIX, not standard C. Standard C has only clock(). It has a lot of disadvantages but is good enough for a quick and dirty measurement.
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
int main()
{
    /* static: a 4 MB array is too large for the stack on many systems */
    static int a[1000000] = { 0 };
    int i, j;
    clock_t start, stop;

    srand(0xdeadbeef);
    start = clock();
    // play around with these values, especially with j
    for (j = 0; j < 100; j++) {
        for (i = 0; i < 1000000; i++) {
            a[i] = rand() % 123;
            a[i] += 123;
        }
    }
    stop = clock();
    printf("Time %.10f seconds\n", (double)(stop - start) / CLOCKS_PER_SEC);
    exit(EXIT_SUCCESS);
}
The correct way to measure time is to use clock_gettime(CLOCK_MONOTONIC, &ts_current);.
Also, gettimeofday() should be avoided.
A complete example of using clock_gettime() to measure time difference (both seconds and nanoseconds, which you could convert to milliseconds):
#include <stdio.h>
#include <time.h>
struct timespec diff(struct timespec start, struct timespec end)
{
    struct timespec temp;
    if ((end.tv_nsec - start.tv_nsec) < 0) {
        temp.tv_sec = end.tv_sec - start.tv_sec - 1;
        temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
    } else {
        temp.tv_sec = end.tv_sec - start.tv_sec;
        temp.tv_nsec = end.tv_nsec - start.tv_nsec;
    }
    return temp;
}

int main()
{
    struct timespec time1, time2;
    int temp = 0;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
    for (int i = 0; i < 242000000; i++)
        temp += temp;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
    printf("Time difference: %ld [s] %ld [ns]",
           (long) diff(time1, time2).tv_sec, (long) diff(time1, time2).tv_nsec);
    return 0;
}
The time function only has 1 second resolution. Instead, use gettimeofday which has microsecond resolution.
struct timeval tstart, tend;
gettimeofday(&tstart, NULL);
// do something that takes time
gettimeofday(&tend,NULL);
I've searched the web but have only found a way to do it that returns seconds instead of milliseconds.
My code is:
#include <stdio.h>
#include <assert.h>
#include <time.h>
int main(void)
{
    int solucion;
    time_t start, stop;
    clock_t ticks;
    long count;

    time(&start);
    solucion = divisores_it(92000000, 2);
    time(&stop);
    printf("Finished in %f seconds.\n", difftime(stop, start));
    return 0;
}
A cross-platform way is to use ftime.
Windows specific link here: http://msdn.microsoft.com/en-us/library/aa297926(v=vs.60).aspx
Example below.
#include <stdio.h>
#include <sys/timeb.h>

int main()
{
    struct timeb start, end;
    int diff;
    int i = 0;

    ftime(&start);
    while (i++ < 999) {
        /* do something which takes some time */
        printf(".");
    }
    ftime(&end);
    diff = (int) (1000.0 * (end.time - start.time)
                  + (end.millitm - start.millitm));
    printf("\nOperation took %d milliseconds\n", diff);
    return 0;
}
I ran the code above and traced through it using VS2008 and saw it actually calls the windows GetSystemTimeAsFileTime function.
Anyway, ftime will give you millisecond precision.
The solution below seems OK to me. What do you think?
#include <stdio.h>
#include <time.h>
long timediff(clock_t t1, clock_t t2) {
    long elapsed;
    elapsed = ((double)t2 - t1) / CLOCKS_PER_SEC * 1000;
    return elapsed;
}

int main(void) {
    clock_t t1, t2;
    int i;
    float x = 2.7182;
    long elapsed;

    t1 = clock();
    for (i = 0; i < 1000000; i++) {
        x = x * 3.1415;
    }
    t2 = clock();

    elapsed = timediff(t1, t2);
    printf("elapsed: %ld ms\n", elapsed);
    return 0;
}
Reference: http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.15.html#clock
For Windows, GetSystemTime() is what you want. For POSIX, gettimeofday().
GetSystemTime() uses the structure SYSTEMTIME, which provides milli-second resolution.
This code piece works. This is based on the answer from Angus Comber:
#include <stdint.h>
#include <sys/timeb.h>

uint64_t system_current_time_millis()
{
#if defined(_WIN32) || defined(_WIN64)
    struct _timeb timebuffer;
    _ftime(&timebuffer);
    return (uint64_t)(((timebuffer.time * 1000) + timebuffer.millitm));
#else
    struct timeb timebuffer;
    ftime(&timebuffer);
    return (uint64_t)(((timebuffer.time * 1000) + timebuffer.millitm));
#endif
}
DWORD start = GetTickCount();
executeSmth();
printf("Elapsed: %lu ms", GetTickCount() - start);
P.S. This method has some limitations. See GetTickCount.
Is there a simple library to benchmark the time it takes to execute a portion of C code? What I want is something like:
int main(){
benchmarkBegin(0);
//Do work
double elapsedMS = benchmarkEnd(0);
benchmarkBegin(1)
//Do some more work
double elapsedMS2 = benchmarkEnd(1);
double speedup = benchmarkSpeedup(elapsedMS, elapsedMS2); //Calculates relative speedup
}
It would also be great if the library let you do many runs, averaging them and calculating the variance in timing!
Use the function clock() defined in time.h:
double startTime = (double)clock() / CLOCKS_PER_SEC;
/* Do work */
double endTime = (double)clock() / CLOCKS_PER_SEC;
double timeElapsed = endTime - startTime;
Basically, all you want is a high resolution timer. The elapsed time is of course just a difference in times and the speedup is calculated by dividing the times for each task. I have included the code for a high resolution timer that should work on at least windows and unix.
#ifdef WIN32
#include <windows.h>

double get_time()
{
    LARGE_INTEGER t, f;
    QueryPerformanceCounter(&t);
    QueryPerformanceFrequency(&f);
    return (double)t.QuadPart / (double)f.QuadPart;
}

#else
#include <sys/time.h>
#include <sys/resource.h>

double get_time()
{
    struct timeval t;
    struct timezone tzp;
    gettimeofday(&t, &tzp);
    return t.tv_sec + t.tv_usec * 1e-6;
}

#endif
Benchmark C code easily
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start_time = clock();
    // code or function to benchmark
    double elapsed_time = (double)(clock() - start_time) / CLOCKS_PER_SEC;
    printf("Done in %f seconds\n", elapsed_time);
    return 0;
}
Easy benchmark of multi-threaded C code
If you want to benchmark multithreaded program you first need to take a closer look at clock:
Description
The clock() function returns an approximation of processor time
used by the program.
Return value
The value returned is the CPU time used so far as a clock_t; to
get the number of seconds used, divide by CLOCKS_PER_SEC. If the
processor time used is not available or its value cannot be
represented, the function returns the value (clock_t)(-1)
Hence it is very important to divide your elapsed_time by the number of threads in order to get the execution time of your function:
#include <time.h>
#include <omp.h>

#define THREADS_NB omp_get_max_threads()

clock_t start_time = clock();
// code or function to benchmark, e.g. a loop parallelized with
// #pragma omp parallel for private(i) num_threads(THREADS_NB)
double elapsed_time = (double)(clock() - start_time) / CLOCKS_PER_SEC;
printf("Done in %f seconds\n", elapsed_time / THREADS_NB); // divide by THREADS_NB!
Example
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <time.h>
#include <omp.h>
#define N 20000
#define THREADS_NB omp_get_max_threads()
void init_arrays(double *a, double *b) {
    /* note: sizeof(a) here would give the size of the pointer,
       so pass the element count explicitly */
    memset(a, 0, N * sizeof(*a));
    memset(b, 0, N * sizeof(*b));
    for (int i = 0; i < N; i++) {
        a[i] += 1.0;
        b[i] += 1.0;
    }
}

double func2(double i, double j) {
    double res = 0.0;
    while (i / j > 0.0) {
        res += i / j;
        i -= 0.1;
        j -= 0.000003;
    }
    return res;
}

double single_thread(double *a, double *b) {
    double res = 0;
    int i, j;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            if (i == j) continue;
            res += func2(a[i], b[j]);
        }
    }
    return res;
}

double multi_threads(double *a, double *b) {
    double res = 0;
    int i, j;
    #pragma omp parallel for private(j) num_threads(THREADS_NB) reduction(+:res)
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            if (i == j) continue;
            res += func2(a[i], b[j]);
        }
    }
    return res;
}

int main(void) {
    double *a, *b;
    a = (double *)calloc(N, sizeof(double));
    b = (double *)calloc(N, sizeof(double));
    init_arrays(a, b);

    clock_t start_time = clock();
    double res = single_thread(a, b);
    double elapsed_time = (double)(clock() - start_time) / CLOCKS_PER_SEC;
    printf("Default: Done with %f in %f sd\n", res, elapsed_time);

    start_time = clock();
    res = multi_threads(a, b);
    elapsed_time = (double)(clock() - start_time) / CLOCKS_PER_SEC;
    printf("With OMP: Done with %f in %f sd\n", res, elapsed_time / THREADS_NB);
}
Compile with:
gcc -O3 multithread_benchmark.c -fopenmp && time ./a.out
Output:
Default: Done with 2199909813.614555 in 4.909633 sd
With OMP: Done with 2199909799.377532 in 1.708831 sd
real 0m6.703s (from the time command)
In POSIX, try getrusage. The relevant argument is RUSAGE_SELF and the relevant fields are ru_utime.tv_sec and ru_utime.tv_usec.
There may be existing utilities that help with this, but I suspect most will use some kind of sampling or possibly injection. But to get specific sections of code timed, you will probably have to add in calls to a timer like you show in your example. If you are using Windows, then the high performance timer works. I answered a similar question and showed example code that will do that. There are similar methods for Linux.