I have written a small piece of code to measure the time taken by my multiplication algorithm:
#include <time.h>

clock_t begin, end;
float time_spent;

begin = clock();
a = b * c;   /* the multiplication being measured */
end = clock();
time_spent = (float)(end - begin) / CLOCKS_PER_SEC;
I am working with MinGW under Windows.
I am guessing that end = clock() gives me the clock ticks at that particular moment. Subtracting begin from it gives me the clock ticks consumed by the multiplication, and dividing by CLOCKS_PER_SEC gives me the total amount of time.
My first question is: Is there a difference between clock ticks and clock cycle?
My algorithm here is so small that the difference end-begin is 0. Does this mean that my code execution time was less than 1 tick and that's why I am getting zero?
My first question is: Is there a difference between clock ticks and clock cycle?
Yes. A clock tick could be 1 millisecond or 1 microsecond, while a clock cycle could be 0.3 nanoseconds. On POSIX systems CLOCKS_PER_SEC must be defined as 1000000 (1 million). Note that if the CPU measurement cannot be obtained with microsecond resolution, then the smallest jump in the return value from clock() will be larger than one.
My algorithm here is so small that the difference end-begin is 0. Does this mean that my code execution time was less than 1 tick and that's why I am getting zero?
Yes. To get a better reading I suggest that you loop enough iterations so that you measure over several seconds.
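For example, a minimal sketch of that approach (the operand values and the iteration count below are arbitrary placeholders, not taken from your code):

#include <stdio.h>
#include <time.h>

int main(void) {
    /* volatile so the compiler cannot hoist or drop the multiplication */
    volatile long long a, b = 123456789, c = 987654321;
    const long long iterations = 500000000LL;   /* tune until the total runs for a few seconds */

    clock_t begin = clock();
    for (long long i = 0; i < iterations; i++) {
        a = b * c;
    }
    clock_t end = clock();

    double total = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("total: %f s, per multiplication: %e s\n", total, total / iterations);
    return 0;
}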
Answering the difference between clock tick and clock cycle from a systems perspective
Every processor is accompanied by a physical clock (usually a quartz crystal oscillator), which oscillates at a certain frequency (vibrations/sec). The processor keeps track of time with the help of interrupts generated from the physical clock, which interrupt the processor at every time period T. This interrupt is called a 'clock tick'. The CPU counts the number of interrupts it has seen since the system started, and returns that value when you call clock(). By taking the difference between two clock tick values (obtained from clock()), you get how many interrupts were seen between those two points in time.
Most modern operating systems program the T value to be 1 microsecond, i.e. the physical clock interrupts the processor every 1 microsecond; this is the lowest clock granularity widely supported by physical clocks. With 1 microsecond as T, that works out to 1,000,000 clock ticks per second. So, with this information, you can calculate the time elapsed from the difference of two clock tick values, i.e. difference between the two ticks * tick period. For example, a difference of 2500 ticks at a 1 microsecond tick period is 2500 * 1 µs = 2.5 ms of elapsed time.
NOTE: the tick rate defined by the OS has to be <= the oscillation rate (vibrations/sec) of the physical clock, otherwise there will be a loss of precision.
For your first question: clock ticks refer to the main system clock. A tick is the smallest unit of time recognized by the device. A clock cycle is the time taken for one full processor pulse to complete. You can recognize this from your CPU speed given in Hz: a 2 GHz processor performs 2,000,000,000 clock cycles per second.
For your second question: probably yes.
A clock cycle is a clock tick.
The clock cycle determines the speed of a computer processor, or CPU, and is set by the amount of time between two pulses of an oscillator. Generally speaking, the higher the number of pulses per second, the faster the processor can process information.
Related
What's the relationship between the real CPU frequency and clock_t (whose unit is the clock tick) in C?
Let's say I have the piece of C code below, which measures the time the CPU spent running a for loop.
But since CLOCKS_PER_SEC is a constant value (basically 1,000,000) in the C standard library, I wonder how the clock function measures the real CPU cycles consumed by the program while it runs on different computers with different CPU frequencies (for my laptop, it is 2.6 GHz).
And if they are not relevant, how does the CPU timer work in the mentioned scenario?
#include <time.h>
#include <stdio.h>

int main(void) {
    clock_t start_time = clock();
    for (int i = 0; i < 10000; i++) {}
    clock_t end_time = clock();
    printf("%fs\n", (double)(end_time - start_time) / CLOCKS_PER_SEC);
    return 0;
}
Effectively, clock_t values are unrelated to the CPU frequency.
See longer explanation here.
While clock_t-type values could, in theory, have represented actual physical CPU clock ticks, in practice they do not: POSIX mandates that CLOCKS_PER_SEC be equal to 1,000,000 (one million). Thus the clock() function returns a value in microseconds.
There is no such thing as "the real CPU frequency". Not in everyone's laptop, at any rate.
On many systems, the OS can lower and raise the CPU clock speed as it sees fit. On some systems there is more than one kind of central processor or core, each with a different speed. Some CPUs are clockless (asynchronous).
Because of all this and for other reasons, most computers measure time with a separate clock device, independent from the CPU clock (if any).
For providing the information used in the shown code, measuring, knowing, or using CPU cycles is not relevant.
For providing the elapsed time, it is only necessary to measure the time.
Reading a hardware timer would be one way to do so.
Most computers (even non-embedded ones) contain timers whose specific job is to count the ticks of a clock with a known, constant frequency. (They are specifically not "CPU timers".)
Such a timer can be read and yields a value which increases once per tick (of constant period). Here "known period" means a period known to some appropriate driver for that timer; simplified: "known to the clock() function, not necessarily known to you".
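As an illustration, here is a minimal sketch assuming a POSIX system, where clock_gettime() with CLOCK_MONOTONIC exposes exactly this kind of constant-rate tick counter to the program:

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (volatile int i = 0; i < 10000; i++) {}   /* the work being timed */

    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}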
Note that even if the number of used CPU cycles were known, calculating the elapsed time from that info is near impossible nowadays, in the presence of:
pipelines
parallelism
interrupts
branch prediction
More things influencing or preventing the calculation, contributed in the comments:
frequency-scaling, temperature throttling and power settings
(David C. Rankin)
I'm very new to AVR and got confused about a question from one of our tutorials. It says:
"To toggle an output compare pin 8 times per second (4Hz period), what clock prescale and output output compare values do we need?"
My confusion is:
Why does it say "4Hz period"? Isn't hertz a measure of frequency? Why is it describing a time period?
You are correct, Hz is the unit of frequency. Frequency is the number of changes per second, so if you toggle a pin 8 times per second the pin completes 4 full high-low cycles per second, and the resulting signal's frequency is 4 Hz. The author probably just did not bother to express it as a period, which would be 1/4 s.
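As a concrete worked example, here is a minimal sketch assuming an ATmega328-class AVR clocked at 1 MHz with Timer1 in CTC mode; the clock frequency, the timer, and the prescaler choice are assumptions, not values given in the tutorial:

#include <avr/io.h>

int main(void) {
    DDRB  |= (1 << DDB1);       /* OC1A (PB1) as output */
    TCCR1A = (1 << COM1A0);     /* toggle OC1A on each compare match */
    TCCR1B = (1 << WGM12)       /* CTC mode, TOP = OCR1A */
           | (1 << CS11);       /* prescaler = 8 */
    OCR1A  = 15624;             /* 1,000,000 / 8 / (15624 + 1) = 8 toggles/s = 4 Hz output */
    for (;;) {}                 /* the hardware toggles the pin; nothing left to do */
    return 0;
}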
Hertz represents cycles per second in the SI system.
As mentioned above, it tells you how many times a process, event, or anything else takes place. Almost everything is measured in Hz because it is convenient and has a direct relation to time, so one can easily jump back and forth between the units.
The AVR has a system clock which is then prescaled to lower rates for use in different system peripherals. The event loop speed is also measured in hertz. Currently on mine the loop runs at 2.7 MHz, so it is a very fast loop.
I tried to measure the time of some operations on the TelosB platform. For that I wanted to count the clock ticks of the processor with the clock() function from time.h, but it does not compile on Contiki.
Are there mechanisms to measure elapsed time, preferably in actual clock ticks, on Contiki?
The latest timer documentation is here: https://github.com/contiki-ng/contiki-ng/wiki/Documentation:-Timers
You can use the clock_time() function. However, its resolution is quite low (1/128 of a second). If you want to measure shorter time intervals, use rtimers: RTIMER_NOW() returns the time as a 16-bit integer, with platform-specific resolution. On most platforms the rtimer clock has 32768 ticks per second, but on CC26xx/CC13xx platforms it has 65536 ticks per second.
See also: Contiki difference between RTIMER_NOW() and clock_time()
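A minimal sketch of the rtimer approach, assuming Contiki/Contiki-NG; measured_operation() is a placeholder standing in for whatever code you want to time:

#include "contiki.h"
#include "sys/rtimer.h"
#include <stdio.h>

static void measured_operation(void) {
    volatile int i;                     /* placeholder workload */
    for (i = 0; i < 1000; i++) {}
}

static void time_it(void) {
    rtimer_clock_t start = RTIMER_NOW();
    measured_operation();
    rtimer_clock_t end = RTIMER_NOW();
    /* RTIMER_SECOND is the platform's rtimer tick rate, e.g. 32768 on TelosB-class boards */
    printf("elapsed: %u rtimer ticks (RTIMER_SECOND = %u)\n",
           (unsigned)(end - start), (unsigned)RTIMER_SECOND);
}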
I am trying to implement my own version of clock() using asm and rdtsc. However, I am quite unsure about its return value. Is it cycles? Or is it microseconds?
I am also confused about CLOCKS_PER_SEC. How can this be constant?
Is there any kind of formula which sets these values into relation?
You can find a rdtsc reference implementation here:
https://github.com/LITMUS-RT/liblitmus/blob/master/arch/x86/include/asm/cycles.h
The TSC counts the number of cycles since reset. If you need a time value in seconds, you also need to read the CPU clock frequency and divide the TSC value by that frequency. However, this may not be accurate if CPU frequency scaling is enabled. Recent Intel processors include a constant-rate TSC (identified by the "constant_tsc" flag in Linux's /proc/cpuinfo). With these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power-saving states.
https://en.wikipedia.org/wiki/Time_Stamp_Counter
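As a rough illustration of the relation, here is a minimal sketch assuming a constant-rate TSC; TSC_HZ is an assumed nominal frequency that you would have to read from the OS or calibrate yourself:

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>               /* __rdtsc() intrinsic on GCC/Clang, x86 only */

#define TSC_HZ 2600000000ULL         /* assumed 2.6 GHz nominal TSC rate */

int main(void) {
    uint64_t start = __rdtsc();

    volatile double x = 0;           /* some work to measure */
    for (int i = 0; i < 1000000; i++) x += i;

    uint64_t end = __rdtsc();
    uint64_t cycles = end - start;

    double seconds = (double)cycles / (double)TSC_HZ;   /* cycles / frequency */
    double clocks  = seconds * 1000000.0;               /* POSIX CLOCKS_PER_SEC */
    printf("%llu cycles ~ %f s ~ %.0f clock_t ticks\n",
           (unsigned long long)cycles, seconds, clocks);
    return 0;
}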
I have some sectors on my drive that read poorly. I could measure the reading time required by each sector and then compare the times of the good sectors and the bad sectors.
I could use a timer of the processor to make the measurements.
How do I write a program in C/Assembly that measures the exact time it takes for each sector to be read?
So the procedure would be something like this:
Start the timer
Read the disk sector
Stop the timer
Read the time measured by the timer
The most useful functionality is the rdtsc instruction (ReaD Time Stamp Counter), which reads a counter that is incremented every time the processor's internal clock increments. For a 3 GHz processor it increments 3 billion times per second. It returns a 64-bit unsigned integer containing the number of clock cycles since the processor was powered on.
Obviously the difference between two read-outs is the number of elapsed clock cycles consumed by the code sequence in between. For a 3 GHz machine you could use any of the following formulas to convert to fractions of a second:
(time_difference+150)/300 gives a rounded off elapsed time in 0.1 us (tenths of microseconds)
(time_difference+1500)/3000 gives a rounded off elapsed time in us (microseconds)
(time_difference+1500000)/3000000 gives a rounded off elapsed time in ms (milliseconds)
The 0.1 us algorithm is the most precise value you can use without having to adjust for read-out overhead.
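Expressed as code, a minimal sketch of those rounded conversions, still assuming a fixed 3 GHz clock rate:

#include <stdint.h>

/* time_difference is a count of TSC cycles at an assumed 3 GHz rate */
uint64_t tenths_of_us(uint64_t time_difference) { return (time_difference + 150) / 300; }
uint64_t microseconds(uint64_t time_difference) { return (time_difference + 1500) / 3000; }
uint64_t milliseconds(uint64_t time_difference) { return (time_difference + 1500000) / 3000000; }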
In C, the function that would be most useful is clock() in time.h.
To time something, put calls to clock() around it, like so:
#include <stdio.h>
#include <time.h>

clock_t start, end;
float elapsed_time;

start = clock();
read_disk_sector();   /* the operation being timed */
end = clock();

elapsed_time = (float)(end - start) / (float)CLOCKS_PER_SEC;
printf("Elapsed time: %f seconds\n", elapsed_time);
This code prints out the number of seconds the read_disk_sector() function call took.
You can read more about the clock function here:
http://www.cplusplus.com/reference/clibrary/ctime/clock/