I wrote a small test to track down a problem I am having with a program:
#include <stdio.h>
#include <time.h>

int main()
{
    double time;
    clock_t begin, end;

    begin = clock();
    for (int i = 0; i < 100; ++i)
    {
    }
    end = clock();

    time = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("%lf", time);
}
The output from the online C compiler:
0.000002
The output from VS Code:
0.000000
Can someone explain why the VS Code build rounds to 0?
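For what it's worth, here is a minimal sketch (my own illustration, not taken from any answer) of the likely cause: clock() advances in coarse ticks (around a millisecond on the Microsoft CRT), so an empty 100-iteration loop finishes well inside a single tick and measures as 0. Giving the timer enough work to exceed its granularity produces a non-zero reading:

#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile double x = 1.0;   /* volatile so the compiler can't delete the loop */
    clock_t begin, end;
    long i;

    begin = clock();
    for (i = 0; i < 10000000L; i++) {   /* enough work to outlast one clock() tick */
        x *= 1.0000001;
    }
    end = clock();

    printf("%f seconds\n", (double)(end - begin) / CLOCKS_PER_SEC);
    return 0;
}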
So I was timing a function and stumbled upon some weird timing behavior:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>

void f(void);   /* the function being timed, defined elsewhere */

int main() {
    long int time = 0, itrs = 100;
    struct timespec start, stop;

    for (int i = 0; i < itrs; i++) {
        clock_gettime(CLOCK_REALTIME, &start);
        f();
        clock_gettime(CLOCK_REALTIME, &stop);
        time += stop.tv_nsec - start.tv_nsec;
    }

    printf("%ld", time / itrs);
    return 0;
}
When I run the loop a higher number of times, the function appears to run faster.
When I run it, for instance, 100 times, the function takes about 100 ns per call, but when I run it 1,000,000 times it's about 40 ns.
Can anyone explain why this is so?
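For what it's worth, here is a sketch of an alternative measurement (my own illustration, not from the post): timestamp once before and once after the whole loop, so the cost of the clock_gettime() calls isn't folded into every iteration, and include tv_sec so the difference can't go negative across a second boundary. f() is only an empty stub standing in for the real function here:

#include <stdio.h>
#include <time.h>

static void f(void) { /* empty stub standing in for the function being timed */ }

int main(void)
{
    struct timespec start, stop;
    long itrs = 1000000, i;
    long long total_ns;

    clock_gettime(CLOCK_MONOTONIC, &start);   /* one timestamp before the loop ... */
    for (i = 0; i < itrs; i++) {
        f();
    }
    clock_gettime(CLOCK_MONOTONIC, &stop);    /* ... and one after it */

    total_ns = (long long)(stop.tv_sec - start.tv_sec) * 1000000000LL
             + (stop.tv_nsec - start.tv_nsec);
    printf("%lld ns per call on average\n", total_ns / itrs);
    return 0;
}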
I've searched the Web, but I've only found a way to do it that returns the time in seconds instead of milliseconds.
My code is:
#include <stdio.h>
#include <assert.h>
#include <time.h>

int divisores_it(int n, int from);   /* assumed prototype; the function is defined elsewhere */

int main(void)
{
    int solucion;
    time_t start, stop;
    clock_t ticks;
    long count;

    time(&start);
    solucion = divisores_it(92000000, 2);
    time(&stop);

    printf("Finished in %f seconds.\n", difftime(stop, start));
    return 0;
}
A cross-platform way is to use ftime().
Windows specific link here: http://msdn.microsoft.com/en-us/library/aa297926(v=vs.60).aspx
Example below.
#include <stdio.h>
#include <sys/timeb.h>

int main()
{
    struct timeb start, end;
    int diff;
    int i = 0;

    ftime(&start);
    while (i++ < 999) {
        /* do something which takes some time */
        printf(".");
    }
    ftime(&end);

    diff = (int) (1000.0 * (end.time - start.time)
                  + (end.millitm - start.millitm));
    printf("\nOperation took %d milliseconds\n", diff);
    return 0;
}
I ran the code above and traced through it using VS2008, and saw that it actually calls the Windows GetSystemTimeAsFileTime function.
Anyway, ftime will give you millisecond precision.
The solution below seems OK to me. What do you think?
#include <stdio.h>
#include <time.h>
long timediff(clock_t t1, clock_t t2) {
    long elapsed;
    elapsed = ((double)t2 - t1) / CLOCKS_PER_SEC * 1000;
    return elapsed;
}

int main(void) {
    clock_t t1, t2;
    int i;
    float x = 2.7182;
    long elapsed;

    t1 = clock();
    for (i = 0; i < 1000000; i++) {
        x = x * 3.1415;
    }
    t2 = clock();

    elapsed = timediff(t1, t2);
    printf("elapsed: %ld ms\n", elapsed);
    return 0;
}
Reference: http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.15.html#clock
For Windows, GetSystemTime() is what you want. For POSIX, gettimeofday().
GetSystemTime() uses the SYSTEMTIME structure, which provides millisecond resolution.
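A minimal sketch of both calls reduced to a single millisecond count (my own glue code; now_millis is just an illustrative helper name, and the Windows branch converts the SYSTEMTIME through a FILETIME so it becomes one integer):

#include <stdio.h>

#if defined(_WIN32)
#include <windows.h>
static unsigned long long now_millis(void)
{
    SYSTEMTIME st;
    FILETIME ft;
    ULARGE_INTEGER uli;

    GetSystemTime(&st);                 /* current UTC wall-clock time */
    SystemTimeToFileTime(&st, &ft);     /* convert to 100-ns intervals since 1601 */
    uli.LowPart = ft.dwLowDateTime;
    uli.HighPart = ft.dwHighDateTime;
    return uli.QuadPart / 10000ULL;     /* 100-ns units -> milliseconds */
}
#else
#include <sys/time.h>
static unsigned long long now_millis(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);            /* seconds + microseconds since 1970 */
    return (unsigned long long)tv.tv_sec * 1000ULL + tv.tv_usec / 1000ULL;
}
#endif

int main(void)
{
    /* The two branches use different epochs (1601 vs 1970), which is fine
       as long as you only look at differences between two readings. */
    unsigned long long t0 = now_millis();
    /* ... code to be timed ... */
    unsigned long long t1 = now_millis();

    printf("Elapsed: %llu ms\n", t1 - t0);
    return 0;
}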
This piece of code works. It is based on the answer from Angus Comber:
#include <sys/timeb.h>
#include <stdint.h>

uint64_t system_current_time_millis()
{
#if defined(_WIN32) || defined(_WIN64)
    struct _timeb timebuffer;
    _ftime(&timebuffer);
    return (uint64_t)timebuffer.time * 1000 + timebuffer.millitm;
#else
    struct timeb timebuffer;
    ftime(&timebuffer);
    return (uint64_t)timebuffer.time * 1000 + timebuffer.millitm;
#endif
}
DWORD start = GetTickCount();
executeSmth();
printf("Elapsed: %lu ms", (unsigned long)(GetTickCount() - start));
P.S. This method has some limitations. See GetTickCount.
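One of those limitations is that the 32-bit tick count wraps around after roughly 49.7 days of uptime, and its granularity is typically 10-16 ms. A small sketch using GetTickCount64() (available from Windows Vista on) avoids the wrap-around; Sleep() stands in for executeSmth():

#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONGLONG start = GetTickCount64();   /* 64-bit millisecond count since boot */

    Sleep(50);                            /* stand-in for executeSmth() */

    printf("Elapsed: %llu ms\n", (unsigned long long)(GetTickCount64() - start));
    return 0;
}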
I have a program in C which has to execute a series of other programs. I need to get the execution time of each of those programs, in order to create a log of these times.
I thought of using system() to run each program, but I don't know how to get the execution time. Is there any way to do this?
The programs are "quick", so I need more precision than whole seconds.
You have at least 4 ways to do it.
(1)
A starting point:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main ( void )
{
    clock_t start = clock();
    system("Test.exe");
    printf("%f seconds\n", ((double)clock() - start) / CLOCKS_PER_SEC);
    return 0;
}
(2)
If you are on Windows and have access to the Windows API, you can use GetTickCount() too:
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main ( void )
{
    DWORD t1 = GetTickCount();
    system("Test.exe");
    DWORD t2 = GetTickCount();
    printf("%lu milliseconds\n", (unsigned long)(t2 - t1));
    return 0;
}
(3)
And the most precise option is QueryPerformanceCounter():
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER frequency;
    LARGE_INTEGER start;
    LARGE_INTEGER end;
    double interval;

    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&start);
    system("calc.exe");
    QueryPerformanceCounter(&end);

    interval = (double) (end.QuadPart - start.QuadPart) / frequency.QuadPart;
    printf("%f\n", interval);
    return 0;
}
(4)
The question is tagged C, but for the sake of completeness I want to add the C++11 way:
#include <chrono>
#include <cstdlib>
#include <iostream>

int main()
{
    auto t1 = std::chrono::high_resolution_clock::now();
    std::system("calc.exe");
    auto t2 = std::chrono::high_resolution_clock::now();
    auto x = std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count();
    std::cout << x << std::endl;
}
start = clock(); // get number of clock ticks before your code
/*
    Your Program
*/
stop = clock();  // get number of clock ticks after your code

duration = (double)(stop - start) / CLOCKS_PER_SEC;
I would like to know what lines of C code to add to a program so that it tells me the total time the program takes to run. I guess there should be a counter initialization near the beginning of main and one after the main function ends. Is the right header clock.h?
Thanks a lot...
Update: I have a Windows XP machine. Is it just adding clock() at the beginning and another clock() at the end of the program? Then I can estimate the time difference. Yes, you're right, it's time.h.
Here's my code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <share.h>
#include <time.h>

void f(long double fb[], long double fA, long double fB);

int main() {
    clock_t start, end;
    start = clock();
    const int ARRAY_SIZE = 11;
    long double* z = (long double*) malloc(sizeof (long double) * ARRAY_SIZE);
    int i;
    long double A, B;

    if (z == NULL) {
        printf("Out of memory\n");
        exit(-1);
    }

    A = 0.5;
    B = 2;

    for (i = 0; i < ARRAY_SIZE; i++) {
        z[i] = 0;
    }
    z[1] = 5;

    f(z, A, B);

    for (i = 0; i < ARRAY_SIZE; i++)
        printf("z is %.16Le\n", z[i]);

    free(z);
    z = NULL;

    end = clock();
    printf("Took %ld ticks\n", end - start);
    printf("Took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}

void f(long double fb[], long double fA, long double fB) {
    fb[0] = fb[1] * fA;
    fb[1] = fb[1] - 1;
    return;
}
Some errors with MVS2008:
testim.c(16) : error C2143: syntax error : missing ';' before 'const'
testim.c(18) : error C2143: syntax error : missing ';' before 'type'
testim.c(20) : error C2143: syntax error : missing ';' before 'type'
testim.c(21) : error C2143: syntax error : missing ';' before 'type'
testim.c(23) : error C2065: 'z' : undeclared identifier
testim.c(23) : warning C4047: '==' : 'int' differs in levels of indirection from 'void *'
testim.c(28) : error C2065: 'A' : undeclared identifier
testim.c(28) : warning C4244: '=' : conversion from 'double' to 'int', possible loss of data
and it goes on to 28 errors. Note that I don't have any errors/warnings without your clock code.
LATEST NEWS: Unfortunately I didn't get a good reply here, but after a Google search the code is working. Here it is:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

void f(long double fb[], long double fA);

int main() {
    clock_t start = clock();
    const int ARRAY_SIZE = 11;
    long double* z = (long double*) malloc(sizeof (long double) * ARRAY_SIZE);
    int i;
    long double A;

    if (z == NULL) {
        printf("Out of memory\n");
        exit(-1);
    }

    A = 0.5;

    for (i = 0; i < ARRAY_SIZE; i++) {
        z[i] = 0;
    }
    z[1] = 5;

    f(z, A);

    for (i = 0; i < ARRAY_SIZE; i++)
        printf("z is %.16Le\n", z[i]);

    free(z);
    z = NULL;

    printf("Took %f seconds\n", ((double)clock() - start) / CLOCKS_PER_SEC);
    return 0;
}

void f(long double fb[], long double fA) {
    fb[0] = fb[1] * fA;
    fb[1] = fb[1] - 1;
    return;
}
Cheers
Update on April 10: Here's a better solution thanks to "JustJeff"
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

void f(long double fb[], long double fA);

const int ARRAY_SIZE = 11;

int main(void)
{
    long double* z = (long double*) malloc(sizeof (long double) * ARRAY_SIZE);
    int i;
    long double A;
    LARGE_INTEGER freq;
    LARGE_INTEGER t0, tF, tDiff;
    double elapsedTime;
    double resolution;

    if (z == NULL) {
        printf("Out of memory\n");
        exit(-1);
    }

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    // code to be timed goes HERE
    {
        A = 0.5;
        for (i = 0; i < ARRAY_SIZE; i++) {
            z[i] = 0;
        }
        z[1] = 5;

        f(z, A);

        for (i = 0; i < ARRAY_SIZE; i++)
            printf("z is %.16Le\n", z[i]);

        free(z);
        z = NULL;
    }
    QueryPerformanceCounter(&tF);

    tDiff.QuadPart = tF.QuadPart - t0.QuadPart;
    elapsedTime = tDiff.QuadPart / (double) freq.QuadPart;
    resolution = 1.0 / (double) freq.QuadPart;

    printf("Your performance counter ticks %I64u times per second\n", freq.QuadPart);
    printf("Resolution is %lf nanoseconds\n", resolution * 1e9);
    printf("Code under test took %lf sec\n", elapsedTime);
    return 0;
}

void f(long double fb[], long double fA) {
    fb[0] = fb[1] * fA;
    fb[1] = fb[1] - 1;
    return;
}
It works with both MVS2008 and Borland C++BuilderX from 2003.
On Unix (I think) systems, the time command with the name of your program as a command-line argument will tell you how long the program takes to run. Note that this measures the execution time of the whole program. If you need to time just one part, include time.h and use the clock function, more or less like this:
#include <time.h>

int main() {
    clock_t start;
    clock_t end;
    int function_time;

    start = clock();
    function_you_want_to_time();
    end = clock();

    /* Get time in milliseconds */
    function_time = (double)(end - start) / (CLOCKS_PER_SEC / 1000.0);

    return 0;
}
That will give you the time in milliseconds (notice the / 1000.0 part). If you want seconds, remove the / 1000.0. If you want plain clock ticks, which will be more accurate, make function_time a clock_t and replace the function_time = ... line with:
function_time = end - start;
To time the whole program, I suggest making a function called _main() or something, moving all the program-related code from main() (not the timing code!) into that function, and calling it from main(). That way it's clearer which part is the timing code and which is the rest of the program.
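A minimal sketch of that layout, with the body of the program moved into a helper (run_program is just a placeholder name I picked; file-scope identifiers starting with an underscore, such as _main, are reserved in C, so a plain name is safer):

#include <stdio.h>
#include <time.h>

static void run_program(void)
{
    /* everything the program actually does goes here */
}

int main(void)
{
    clock_t start, end;

    start = clock();     /* timing code stays in main() ... */
    run_program();       /* ... the real work lives in run_program() */
    end = clock();

    printf("Took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}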
If you need a total for your program, then in a Linux console:
$ time myProgram
You can also use time.h in your code.
#include <stdio.h>
#include <time.h>

int main() {
    time_t start, end;

    start = time(0);
    /* some working code */
    end = time(0);

    printf("%ld seconds", (long)(end - start));
}
You could use the clock() function (in <time.h>) if you want to time a block of code, or the time program on *nix, as another answerer suggested. E.g.
> time ./foo my args
For clock, you need to subtract the difference between two checkpoints. E.g.
#include <stdio.h>
#include <time.h>

void f() {
    clock_t start, end;

    start = clock();
    // some long code.
    end = clock();

    printf("Took %ld ticks\n", (long)(end - start));

    // or in (fractional) seconds.
    printf("Took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
}
Update
Regarding your new errors: you can't mix statements and declarations in VC's C compiler (it follows C89). You mustn't call any functions and then continue to declare variables. Declare all your variables at the top of the block, or compile the file as C++. A minimal illustration follows.
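A tiny made-up snippet (not the original program) showing the shape VC's C compiler expects, with every declaration in the block placed before the first statement:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* C89: all declarations come first ... */
    clock_t start, end;
    const int n = 1000000;
    int i;
    double x = 1.0;

    /* ... and only then the statements */
    start = clock();
    for (i = 0; i < n; i++) {
        x *= 1.0000001;
    }
    end = clock();

    printf("x = %f, took %f seconds\n", x, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}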
If you're on Windows and you want to measure stuff down in the microseconds, investigate QueryPerformanceCounter() and QueryPerformanceFrequency(). On many systems these can resolve full processor clock periods, a third of a nanosecond or so, and I don't believe I've ever seen the counter any coarser than 3.5795 MHz, which is still well under a microsecond per tick.
You call QueryPerformanceFrequency() to determine how many counts per second the counter ticks. Then call QueryPerformanceCounter() before your code under test, and again after. Take the difference between the two QPC readings and divide by the frequency from QPF, and you get the elapsed time between the two QPC calls. Like so ...
LARGE_INTEGER freq;
LARGE_INTEGER t0, tF, tDiff;
double elapsedTime;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&t0);
// code to be timed goes HERE
QueryPerformanceCounter(&tF);
tDiff.QuadPart = tF.QuadPart - t0.QuadPart;
elapsedTime = tDiff.QuadPart / (double) freq.QuadPart;
// elapsedTime now has your measurement, w/resolution given by freq
Evidently these access a hardware counting device that is tied to some system oscillator on the main board, in which case they shouldn't suffer jitter from software load. The resolution you get depends on your system.
FOLLOW UP
Here's a very simple complete program that demonstrates the interface:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq;
    LARGE_INTEGER t0, tF, tDiff;
    double elapsedTime;
    double resolution;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    // code to be timed goes HERE
    {
        Sleep(10);
    }
    QueryPerformanceCounter(&tF);

    tDiff.QuadPart = tF.QuadPart - t0.QuadPart;
    elapsedTime = tDiff.QuadPart / (double) freq.QuadPart;
    resolution = 1.0 / (double) freq.QuadPart;

    printf("Your performance counter ticks %I64u times per second\n", freq.QuadPart);
    printf("Resolution is %lf nanoseconds\n", resolution * 1e9);
    printf("Code under test took %lf sec\n", elapsedTime);
    return 0;
}
For something as simple as this, it's quicker to skip the IDE, just save it in a foo.c file and (assuming MS VS 2008) use the command line
cl foo.c
to build it. Here's the output on my system:
Your performance counter ticks 3579545 times per second
Resolution is 279.365115 nanoseconds
Code under test took 0.012519 sec
You probably want time.h, and the clock() function.
You can try GetTickCount() as well; clock() will also work fine. But I guess clock() values will change if some other process or someone manually changes the system time, whereas GetTickCount() values are not affected by that.