I am trying to create a simple queue-based scheduler for an embedded system in C.
The idea is that within a round-robin loop, functions are called based on the time constraints declared in the Tasks[] array.
#include <time.h>
#include <stdio.h>
#include <windows.h>
#include <stdint.h>
//Constants
#define SYS_TICK_INTERVAL 1000UL
#define INTERVAL_0MS 0
#define INTERVAL_10MS (100000UL / SYS_TICK_INTERVAL)
#define INTERVAL_50MS (500000UL / SYS_TICK_INTERVAL)
//Function calls
void task_1(clock_t tick);
void task_2(clock_t tick);
uint8_t get_NumberOfTasks(void);
//Define the schedule structure
typedef struct
{
    double Interval;
    double LastTick;
    void (*Function)(clock_t tick);
} TaskType;
//Creating the schedule itself
TaskType Tasks[] =
{
    {INTERVAL_10MS, 0, task_1},
    {INTERVAL_50MS, 0, task_2},
};
int main(void)
{
    //Get the number of tasks to be executed
    uint8_t task_number = get_NumberOfTasks();
    //Initializing the clocks
    for(int i = 0; i < task_number; i++)
    {
        clock_t myClock1 = clock();
        Tasks[i].LastTick = myClock1;
        printf("Task %d clock has been set to %f\n", i, myClock1);
    }
    //Round Robin
    while(1)
    {
        //Go through all tasks in the schedule
        for(int i = 0; i < task_number; i++)
        {
            //Check if it is time to execute it
            if((Tasks[i].LastTick - clock()) > Tasks[i].Interval)
            {
                //Execute it
                clock_t myClock2 = clock();
                (*Tasks[i].Function)(myClock2);
                //Update the last tick
                Tasks[i].LastTick = myClock2;
            }
        }
        Sleep(SYS_TICK_INTERVAL);
    }
}
void task_1(clock_t tick)
{
printf("%f - Hello from task 1\n", tick);
}
void task_2(clock_t tick)
{
printf("%f - Hello from task 2\n", tick);
}
uint8_t get_NumberOfTasks(void)
{
return sizeof(Tasks) / sizeof(*Tasks);
}
The code compiles without a single warning, but I guess I don't understand how the clock() function works.
Here you can see what I get when I run the program:
F:\AVR Microcontroller>timer
Task 0 clock has been set to 0.000000
Task 1 clock has been set to 0.000000
I tried changing Interval and LastTick from float to double just to make sure this was not a precision error, but still it does not work.
%f is not the right format specifier for printing myClock1, because clock_t is likely not double; you shouldn't assume that it is. If you want to print myClock1 as a floating-point number, you have to convert it to double manually:
printf("Task %d clock has been set to %f\n", i, (double)myClock1);
Alternatively, use the macro CLOCKS_PER_SEC to turn myClock1 into a number of seconds:
printf("Task %d clock has been set to %f seconds\n", i,
(double)myClock1 / CLOCKS_PER_SEC);
Additionally, your subtraction in the scheduler loop is wrong. Think about it: clock() grows larger with time, so Tasks[i].LastTick - clock() always yields a negative (or zero) value. I think you want clock() - Tasks[i].LastTick instead.
The behavior of the clock function depends on the operating system. On Windows it essentially runs off the wall clock, while on e.g. Linux it measures process CPU time.
Also, the result of clock by itself is useless; its only use is in the comparison of two clock values (e.g. clock_end - clock_start).
Finally, the clock_t type (which clock returns) is an integer type; you only get floating-point values if you cast a difference (like the one above) to e.g. double and divide by CLOCKS_PER_SEC. Attempting to print a clock_t using the "%f" format leads to undefined behavior.
Reading a clock reference might help.
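Putting both fixes together, the round-robin loop could look roughly like this (a minimal sketch reusing the Tasks[], task_number, and SYS_TICK_INTERVAL names from your code; only the subtraction order is changed):
//Round Robin with the corrected comparison
while(1)
{
    for(int i = 0; i < task_number; i++)
    {
        clock_t now = clock();
        //Elapsed ticks since this task last ran
        if((now - Tasks[i].LastTick) > Tasks[i].Interval)
        {
            (*Tasks[i].Function)(now);
            Tasks[i].LastTick = now;
        }
    }
    Sleep(SYS_TICK_INTERVAL);
}
To print a tick inside a task, cast it first, e.g. printf("%f - Hello from task 1\n", (double)tick / CLOCKS_PER_SEC);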
I have a simple function which takes random words and puts them in lexicographical order using the insertion sort algorithm. I have no problem with the function (it works, tested), but when I try to measure its execution time using two clock() values, I get the same value before and after the function runs, so the elapsed time shows as 0.
clock_t t1 = clock();
InsertionSort(data, n);
clock_t t2 = clock();
/*
* Display the results.
*/
for (size = i, i = 0; i < size; ++i)
{
printf("data[%d] = \"%s\"\n", (int)i, data[i]);
}
/*
* Display the execution time
*/
printf("The time taken is.. %g ", (t2 -t1));
The time difference is too small to be measured by this method, without adding more code to execute. – Weather Vane
Usually, you contrive a way to measure a large number of loops of what you want to time: 10, 100, 1000, whatever produces a significant result. Bear in mind too that on a multi-tasking OS each iteration will take a slightly different time, so you'll also establish a typical average. The result might also be affected by processor caching and/or file caching. – Weather Vane
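Following that advice, a rough sketch of the looping approach, using the data, n, and InsertionSort names from the question (the repeat count is arbitrary, and after the first pass the array is already sorted, so ideally you would re-shuffle it each iteration):
#define REPEATS 1000
clock_t t1 = clock();
for (int k = 0; k < REPEATS; ++k)
{
    InsertionSort(data, n);   /* already sorted after the first pass; re-shuffle for a fair test */
}
clock_t t2 = clock();
printf("Average time per call: %f seconds\n",
       ((double)(t2 - t1) / CLOCKS_PER_SEC) / REPEATS);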
Try it like this:
#include <sys/types.h>
#include <sys/time.h>
#include <stdlib.h>
#include <stdio.h>
double gettime(void)
{
    struct timeval tv;
    double d = 0;
    gettimeofday(&tv, NULL);
    d = tv.tv_sec;
    d += (double)tv.tv_usec / 1000000.0;
    return d;
}
double t1=gettime();
InsertionSort(data, n);
printf("%.6f", gettime() - t1);
or maybe you need to change your code like this:
clock_t t1 = clock();
InsertionSort(data, n);
clock_t t2 = clock();
double d = (double)(t2 - t1) / CLOCKS_PER_SEC;
You can also refer: Easily measure elapsed time
You are incorrectly using the floating-point format specifier %g. Try this
printf("The time taken is.. %u clock ticks", (unsigned)(t2 -t1));
Always assuming the execution time is longer than the granularity of clock().
I am running a C program using GCC and a proprietary DSP cross-compiler to simulate some functionality. I am using the following code to measure the execution time of a particular part of my program:
clock_t start,end;
printf("DECODING DATA:\n");
start=clock();
conv3_dec(encoded, decoded,3*length,0);
end=clock();
duration = (double)(end - start) / CLOCKS_PER_SEC;
printf("DECODING TIME = %f\n",duration);
where conv3_dec() is a function defined in my program and I want to find the run-time of this function.
Now the thing is that when my program runs, the conv3_dec() function runs for almost 2 hours, but the output from printf("DECODING TIME = %f\n",duration) says that the execution of the function finished in just half a second (DECODING TIME = 0.455443). This is very confusing to me.
I have used the clock_t technique to measure program runtimes before, but the difference has never been so huge. Is this being caused by the cross-compiler? Just as a side note, the simulator simulates a DSP processor running at just 500MHz, so is the difference in clock speed between the DSP processor and my CPU causing the error in measuring CLOCKS_PER_SEC?
clock measures CPU time, not wall-clock time. Since you are not running the majority of your code on the CPU, this is not the right tool.
For durations like two hours, I wouldn't be too concerned about clock(); it's far more useful for measuring sub-second durations.
Just use time() if you want the actual elapsed time, something like (dummy stuff supplied for what was missing):
#include <stdio.h>
#include <time.h>
// Dummy stuff starts here
#include <unistd.h>
#define encoded 0
#define decoded 0
#define length 0
static void conv3_dec (int a, int b, int c, int d) {
sleep (20);
}
// Dummy stuff ends here
int main (void) {
    time_t start, end, duration;
    puts ("DECODING DATA:");
    start = time (0);
    conv3_dec (encoded, decoded, 3 * length, 0);
    end = time (0);
    duration = end - start;
    printf ("DECODING TIME = %d\n", (int) duration);
    return 0;
}
which generates:
DECODING DATA:
DECODING TIME = 20
The gettimeofday() function can also be considered.
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
Calculating elapsed time in a C program in milliseconds
http://www.ccplusplus.com/2011/11/gettimeofday-example.html
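A minimal gettimeofday() sketch (POSIX only; the work being timed is a placeholder):
#include <stdio.h>
#include <sys/time.h>
int main(void)
{
    struct timeval start, end;
    gettimeofday(&start, NULL);
    /* ... code to measure ... */
    gettimeofday(&end, NULL);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1000000.0;
    printf("Elapsed: %f seconds\n", elapsed);
    return 0;
}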
I have a C program that aims to be run in parallel on several processors. I need to be able to record the execution time (which could be anywhere from 1 second to several minutes). I have searched for answers, but they all seem to suggest using the clock() function, which then involves calculating the number of clocks the program took divided by the Clocks_per_second value.
I'm not sure how the Clocks_per_second value is calculated?
In Java, I just take the current time in milliseconds before and after execution.
Is there a similar thing in C? I've had a look, but I can't seem to find a way of getting anything better than a second resolution.
I'm also aware a profiler would be an option, but am looking to implement a timer myself.
Thanks
CLOCKS_PER_SEC is a constant which is declared in <time.h>. To get the CPU time used by a task within a C application, use:
clock_t begin = clock();
/* here, do your time-consuming job */
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
Note that this returns the time as a floating point type. This can be more precise than a second (e.g. you measure 4.52 seconds). Precision depends on the architecture; on modern systems you easily get 10ms or lower, but on older Windows machines (from the Win98 era) it was closer to 60ms.
clock() is standard C; it works "everywhere". There are system-specific functions, such as getrusage() on Unix-like systems.
Java's System.currentTimeMillis() does not measure the same thing. It is a "wall clock": it can help you measure how much time it took for the program to execute, but it does not tell you how much CPU time was used. On a multitasking system (i.e. all of them), these can be widely different.
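To see the difference, here is a small sketch that measures the same region with both clock() and time(); the sleep() call (POSIX, <unistd.h>) is only there to create a gap. On systems where clock() reports CPU time (e.g. Linux), the first figure stays near zero while the wall-clock figure shows about two seconds:
#include <stdio.h>
#include <time.h>
#include <unistd.h>
int main(void)
{
    clock_t c0 = clock();
    time_t w0 = time(NULL);
    sleep(2);               /* blocks for ~2 s but burns almost no CPU */
    clock_t c1 = clock();
    time_t w1 = time(NULL);
    printf("CPU time : %f s\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("Wall time: %.0f s\n", difftime(w1, w0));
    return 0;
}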
If you are using the Unix shell for running, you can use the time command.
doing
$ time ./a.out
assuming a.out is the executable, this will give you the time taken to run it
In plain vanilla C:
#include <time.h>
#include <stdio.h>
int main()
{
    clock_t tic = clock();
    my_expensive_function_which_can_spawn_threads();
    clock_t toc = clock();
    printf("Elapsed: %f seconds\n", (double)(toc - tic) / CLOCKS_PER_SEC);
    return 0;
}
You functionally want this:
#include <sys/time.h>
struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);
/* stuff to do! */
gettimeofday(&tv2, NULL);
printf ("Total time = %f seconds\n",
(double) (tv2.tv_usec - tv1.tv_usec) / 1000000 +
(double) (tv2.tv_sec - tv1.tv_sec));
Note that this measures in microseconds, not just seconds.
Most simple programs have computation times measured in milliseconds, so I suppose you will find this useful.
#include <time.h>
#include <stdio.h>
int main(){
    clock_t start = clock();
    // Executable code
    clock_t stop = clock();
    double elapsed = (double)(stop - start) * 1000.0 / CLOCKS_PER_SEC;
    printf("Time elapsed in ms: %f", elapsed);
}
If you want to compute the runtime of the entire program and you are on a Unix system, run your program using the time command, like this: time ./a.out
(All the answers here fall short if your sysadmin changes the system time, or if your timezone has differing winter and summer times. Therefore...)
On linux use: clock_gettime(CLOCK_MONOTONIC_RAW, &time_variable);
It's not affected if the system-admin changes the time, or you live in a country with winter-time different from summer-time, etc.
#include <stdio.h>
#include <time.h>
#include <unistd.h> /* for sleep() */
int main() {
    struct timespec begin, end;
    clock_gettime(CLOCK_MONOTONIC_RAW, &begin);
    sleep(1); // waste some time
    clock_gettime(CLOCK_MONOTONIC_RAW, &end);
    printf ("Total time = %f seconds\n",
            (end.tv_nsec - begin.tv_nsec) / 1000000000.0 +
            (end.tv_sec - begin.tv_sec));
}
man clock_gettime states:
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some unspecified starting point. This clock is not affected by discontinuous jumps in the system time
(e.g., if the system administrator manually changes the clock), but is affected by the incremental adjustments performed by adjtime(3) and NTP.
Thomas Pornin's answer as macros:
#define TICK(X) clock_t X = clock()
#define TOCK(X) printf("time %s: %g sec.\n", (#X), (double)(clock() - (X)) / CLOCKS_PER_SEC)
Use it like this:
TICK(TIME_A);
functionA();
TOCK(TIME_A);
TICK(TIME_B);
functionB();
TOCK(TIME_B);
Output:
time TIME_A: 0.001652 sec.
time TIME_B: 0.004028 sec.
A lot of answers have been suggesting clock() and then CLOCKS_PER_SEC from time.h. This is probably a bad idea, because this is what my /bits/time.h file says:
/* ISO/IEC 9899:1990 7.12.1: <time.h>
The macro `CLOCKS_PER_SEC' is the number per second of the value
returned by the `clock' function. */
/* CAE XSH, Issue 4, Version 2: <time.h>
The value of CLOCKS_PER_SEC is required to be 1 million on all
XSI-conformant systems. */
# define CLOCKS_PER_SEC 1000000l
# if !defined __STRICT_ANSI__ && !defined __USE_XOPEN2K
/* Even though CLOCKS_PER_SEC has such a strange value CLK_TCK
presents the real value for clock ticks per second for the system. */
# include <bits/types.h>
extern long int __sysconf (int);
# define CLK_TCK ((__clock_t) __sysconf (2)) /* 2 is _SC_CLK_TCK */
# endif
So CLOCKS_PER_SEC might be defined as 1000000, depending on what options you use to compile, and thus it does not seem like a good solution.
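If the concern is that CLOCKS_PER_SEC hides the real tick granularity, one alternative on POSIX systems is to ask for process CPU time directly with clock_gettime(CLOCK_PROCESS_CPUTIME_ID), which does not involve CLOCKS_PER_SEC at all (a minimal sketch; older glibc versions may require linking with -lrt):
#include <stdio.h>
#include <time.h>
int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);
    /* ... work to measure ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);
    double cpu = (t1.tv_sec - t0.tv_sec)
               + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("Process CPU time: %f s\n", cpu);
    return 0;
}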
#include <time.h>
#include <stdio.h>
int main(){
    clock_t begin = clock();
    int i;
    for(i = 0; i < 100000; i++){
        printf("%d", i);
    }
    clock_t end = clock();
    printf("Time taken:%lf", (double)(end - begin) / CLOCKS_PER_SEC);
}
This program will work like a charm.
You have to take into account that measuring the time a program took to execute depends a lot on the load the machine has at that specific moment.
Knowing that, the current time in C can be obtained in different ways; an easy one is:
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

struct rusage ruse;

#define CPU_TIME (getrusage(RUSAGE_SELF,&ruse), ruse.ru_utime.tv_sec + \
    ruse.ru_stime.tv_sec + 1e-6 * \
    (ruse.ru_utime.tv_usec + ruse.ru_stime.tv_usec))

int main(void) {
    time_t start, end;
    double first, second;
    // Save user and CPU start time
    time(&start);
    first = CPU_TIME;
    // Perform operations
    ...
    // Save end time
    time(&end);
    second = CPU_TIME;
    printf("cpu : %.2f secs\n", second - first);
    printf("user : %d secs\n", (int)(end - start));
}
Hope it helps.
Regards!
ANSI C only specifies second-precision time functions. However, if you are running in a POSIX environment you can use the gettimeofday() function, which provides microsecond resolution of the time passed since the UNIX Epoch.
As a side note, I wouldn't recommend using clock(), since it is badly implemented on many (if not all?) systems and not accurate, besides the fact that it only refers to how long your program has spent on the CPU and not the total lifetime of the program, which, according to your question, is what I assume you would like to measure.
I've found that the usual clock() that everyone recommends here deviates wildly from run to run, even for static code without any side effects, like drawing to the screen or reading files. This could be because the CPU changes power consumption modes, the OS gives different priorities, and so on.
So the only way to reliably get the same result with clock() is to run the measured code in a loop multiple times (for several minutes), taking precautions to prevent the compiler from optimizing it out (modern compilers can precompute code without side effects that runs in a loop and hoist it out of the loop), for example by using random input for each iteration.
After enough samples are collected into an array, you sort that array and take the middle element, the median. The median is better than the average because it throws away extreme deviations, say an antivirus taking up all the CPU or the OS doing some update.
Here is a simple utility to measure the execution performance of C/C++ code, averaging the values near the median: https://github.com/saniv/gauge
I myself am still looking for a more robust and faster way to measure code. One could probably try running the code under controlled conditions on bare metal without any OS, but that would give an unrealistic result, because in reality the OS does get involved.
x86 has hardware performance counters, which include the actual number of instructions executed, but they are tricky to access without OS help, hard to interpret, and have their own issues ( http://archive.gamedev.net/archive/reference/articles/article213.html ). Still, they could be helpful for investigating the nature of the bottleneck (data access or the actual computations on that data).
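As an illustration of the median approach described above, a rough sketch (the sample count is arbitrary and work() is a placeholder for the code under test):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define SAMPLES 101
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}
static void work(void) { /* code under test */ }
int main(void)
{
    double samples[SAMPLES];
    for (int i = 0; i < SAMPLES; ++i)
    {
        clock_t t0 = clock();
        work();
        samples[i] = (double)(clock() - t0) / CLOCKS_PER_SEC;
    }
    qsort(samples, SAMPLES, sizeof samples[0], cmp_double);
    printf("median: %f s\n", samples[SAMPLES / 2]);
    return 0;
}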
None of these solutions work on my system.
I can get the elapsed time using:
#include <time.h>
double difftime(time_t time1, time_t time0);
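For example, a usage sketch (second resolution only, since difftime() works on time_t values):
#include <stdio.h>
#include <time.h>
int main(void)
{
    time_t start = time(NULL);
    /* ... work to measure ... */
    time_t end = time(NULL);
    printf("Elapsed: %.0f seconds\n", difftime(end, start));
    return 0;
}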
Some might find a different kind of input useful: I was given this method of measuring time as part of a university course on GPGPU-programming with NVidia CUDA (course description). It combines methods seen in earlier posts, and I simply post it because the requirements give it credibility:
unsigned long int elapsed;
struct timeval t_start, t_end, t_diff;
gettimeofday(&t_start, NULL);
// perform computations ...
gettimeofday(&t_end, NULL);
timeval_subtract(&t_diff, &t_end, &t_start);
elapsed = (t_diff.tv_sec*1e6 + t_diff.tv_usec);
printf("GPU version runs in: %lu microsecs\n", elapsed);
I suppose you could multiply with e.g. 1.0 / 1000.0 to get the unit of measurement that suits your needs.
If your program uses the GPU or if it uses sleep(), then the clock() difference gives you a duration smaller than the actual one. That is because clock() returns the number of CPU clock ticks. It can only be used to calculate CPU usage time (CPU load), not the execution duration. We should not use clock() to calculate duration; we should still use gettimeofday() or clock_gettime() for durations in C.
The perf tool is more accurate for collecting statistics about and profiling the running program. Use perf stat to show all the information related to the program being executed.
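For example, assuming the program is built as ./a.out:
$ perf stat ./a.out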
As simple as possible, using a function-like macro:
#include <stdio.h>
#include <time.h>
#define printExecTime(t) printf("Elapsed: %f seconds\n", (double)(clock()-(t)) / CLOCKS_PER_SEC)
int factorialRecursion(int n) {
    return n == 1 ? 1 : n * factorialRecursion(n-1);
}
int main()
{
    clock_t t = clock();
    int j = 1;
    for(int i = 1; i < 10; i++, j *= i);
    printExecTime(t);
    // compare with recursion factorial
    t = clock();
    j = factorialRecursion(10);
    printExecTime(t);
    return 0;
}
Comparison of execution time of bubble sort and selection sort
I have a program which compares the execution time of bubble sort and selection sort.
To find the execution time of a block of code, record the time before and after the block:
clock_t start=clock();
…
clock_t end=clock();
CLOCKS_PER_SEC is a constant defined in <time.h>.
Example code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
    int a[10000], i, j, min, temp;
    for(i = 0; i < 10000; i++)
    {
        a[i] = rand() % 10000;
    }
    //The bubble Sort
    clock_t start, end;
    start = clock();
    for(i = 0; i < 10000; i++)
    {
        for(j = i + 1; j < 10000; j++)
        {
            if(a[i] > a[j])
            {
                int temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
    }
    end = clock();
    double extime = (double) (end - start) / CLOCKS_PER_SEC;
    printf("\n\tExecution time for the bubble sort is %f seconds\n ", extime);
    for(i = 0; i < 10000; i++)
    {
        a[i] = rand() % 10000;
    }
    clock_t start1, end1;
    start1 = clock();
    // The Selection Sort
    for(i = 0; i < 10000; i++)
    {
        min = i;
        for(j = i + 1; j < 10000; j++)
        {
            if(a[min] > a[j])
            {
                min = j;
            }
        }
        temp = a[min];
        a[min] = a[i];
        a[i] = temp;
    }
    end1 = clock();
    double extime1 = (double) (end1 - start1) / CLOCKS_PER_SEC;
    printf("\n");
    printf("\tExecution time for the selection sort is %f seconds\n\n", extime1);
    if(extime1 < extime)
        printf("\tSelection sort is faster than Bubble sort by %f seconds\n\n", extime - extime1);
    else if(extime1 > extime)
        printf("\tBubble sort is faster than Selection sort by %f seconds\n\n", extime1 - extime);
    else
        printf("\tBoth algorithms have the same execution time\n\n");
}
I want to find out for how long (approximately) some block of code executes. Something like this:
startStopwatch();
// do some calculations
stopStopwatch();
printf("%lf", timeMesuredInSeconds);
How?
You can use the clock function from time.h.
Example:
clock_t start = clock();
/*Do something*/
clock_t end = clock();
float seconds = (float)(end - start) / CLOCKS_PER_SEC;
You can use the time.h library, specifically the time and difftime functions:
/* difftime example */
#include <stdio.h>
#include <time.h>
int main ()
{
time_t start,end;
double dif;
time (&start);
// Do some calculation.
time (&end);
dif = difftime (end,start);
printf ("Your calculations took %.2lf seconds to run.\n", dif );
return 0;
}
(Example adapted from the difftime webpage linked above.)
Please note that this method can only give seconds worth of accuracy - time_t records the seconds since the UNIX epoch (Jan 1st, 1970).
Sometimes you need to measure wall-clock time rather than CPU time (this is especially applicable on Linux):
#include <stdio.h>
#include <time.h>
double what_time_is_it()
{
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);
    return now.tv_sec + now.tv_nsec * 1e-9;
}
int main() {
    double time = what_time_is_it();
    printf("time taken %.6lf\n", what_time_is_it() - time);
    return 0;
}
The standard C library provides the time function, and it is useful if you only need to compare seconds. If you need millisecond precision, though, the most portable way is to call timespec_get. It can tell time up to nanosecond precision, if the system supports it. Calling it, however, takes a bit more effort because it involves a struct. Here's a function that just converts the struct to a simple 64-bit integer.
#include <stdio.h>
#include <inttypes.h>
#include <time.h>
int64_t millis()
{
    struct timespec now;
    timespec_get(&now, TIME_UTC);
    return ((int64_t) now.tv_sec) * 1000 + ((int64_t) now.tv_nsec) / 1000000;
}
int main(void)
{
    printf("Unix timestamp with millisecond precision: %" PRId64 "\n", millis());
}
Unlike clock, this function returns a Unix timestamp so it will correctly account for the time spent in blocking functions, such as sleep. This is a useful property for benchmarking and implementing delays that take running time into account.
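For instance, a usage sketch with the millis() helper above; sleep() (POSIX, <unistd.h>) stands in for any blocking or long-running work:
int64_t start = millis();
sleep(1);                  /* blocking work still shows up in the result */
printf("Elapsed: %" PRId64 " ms\n", millis() - start);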
GetTickCount().
#include <stdio.h>
#include <windows.h>
void MeasureIt()
{
    DWORD dwStartTime = GetTickCount();
    DWORD dwElapsed;
    DoSomethingThatYouWantToTime();
    dwElapsed = GetTickCount() - dwStartTime;
    printf("It took %lu.%03lu seconds to complete\n", dwElapsed / 1000, dwElapsed % 1000);
}
I would use the QueryPerformanceCounter and QueryPerformanceFrequency functions of the Windows API. Call the former before and after the block and subtract (current − old) to get the number of "ticks" between the instances. Divide this by the value obtained by the latter function to get the duration in seconds.
For the sake of completeness, there is a more precise clock counter than GetTickCount() or clock(), which give you only a 32-bit result that can overflow relatively quickly: QueryPerformanceCounter(). QueryPerformanceFrequency() gets the clock frequency, which is the divisor for the difference of two counter values. It's something like CLOCKS_PER_SEC in <time.h>.
#include <stdio.h>
#include <windows.h>
int main()
{
    LARGE_INTEGER tu_freq, tu_start, tu_end;
    __int64 t_ns;
    QueryPerformanceFrequency(&tu_freq);
    QueryPerformanceCounter(&tu_start);
    /* do your stuff */
    QueryPerformanceCounter(&tu_end);
    t_ns = 1000000000LL * (tu_end.QuadPart - tu_start.QuadPart) / tu_freq.QuadPart;
    printf("dt = %g[s]; (%lld)[ns]\n", t_ns / (double)1e+9, t_ns);
    return 0;
}
If you don't need fantastic resolution, you could use GetTickCount(): http://msdn.microsoft.com/en-us/library/ms724408(VS.85).aspx
(If it's for something other than your own simple diagnostics, then note that this number can wrap around, so you'll need to handle that with a little arithmetic).
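The usual trick is to keep the subtraction in unsigned DWORD arithmetic, which stays correct even if the counter wraps once between the two reads (a small sketch):
DWORD dwStart = GetTickCount();
// ... work to be timed ...
DWORD dwElapsedMs = GetTickCount() - dwStart;   // modulo-2^32 subtraction handles a single wrap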
QueryPerformanceCounter is another reasonable option. (It's also described on MSDN)