I'm sure the answer is simple, but I don't quite get it. I'm trying to calculate the delta between two struct timespec using this code:
struct timespec start, finish, diff;
int ndiff;
/* Structs are filled somewhere else */
diff.tv_sec = finish.tv_sec - start.tv_sec;
ndiff = finish.tv_nsec - start.tv_nsec;
if (ndiff < 0) {
diff.tv_sec--;
ndiff = 1L - ndiff;
}
diff.tv_nsec = ndiff;
printf("Elapsed time: %ld.%ld seconds.\n", diff.tv_sec, diff.tv_nsec);
However, the output is always something like Elapsed time: 0.300876000 seconds. which seems to indicate that I'm losing the last three digits of the nanoseconds (since those shouldn't always be zero). Can someone point out what's causing that?
The clock your code uses has a reported precision of 1000 ns (as noted by @John Bollinger and @rici); you can check this with clock_getres(), sketched after the code below.
and/or
diff.tv_sec is not necessarily a long. Use a matching specifier.
// printf("Elapsed time: %ld.%ld seconds.\n", diff.tv_sec, diff.tv_nsec);
// Also ensure the fraction is printed with 9 digits
printf("Elapsed time: %lld.%09ld seconds.\n", (long long) diff.tv_sec, diff.tv_nsec);
Also, the "borrow" math when updating ndiff is incorrect.
ndiff = finish.tv_nsec - start.tv_nsec;
if (ndiff < 0) {
diff.tv_sec--;
// ndiff = 1L - ndiff;
ndiff += 1000000000;
}
Even better, drop the int ndiff variable.
diff.tv_sec = finish.tv_sec - start.tv_sec;
diff.tv_nsec = finish.tv_nsec - start.tv_nsec;
if (diff.tv_nsec < 0) {
diff.tv_sec--;
diff.tv_nsec += 1000000000;
}
Should finish occur before start, additional code may be needed to keep the two members of diff with the same sign; see the sketch below.
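One possible way to handle that, as a rough sketch: normalize the result so that tv_sec and tv_nsec never disagree in sign.
#include <time.h>

/* Sketch: finish - start, with tv_sec and tv_nsec kept the same sign
   (both non-negative or both non-positive). */
struct timespec timespec_diff(struct timespec start, struct timespec finish) {
    struct timespec d;
    d.tv_sec = finish.tv_sec - start.tv_sec;
    d.tv_nsec = finish.tv_nsec - start.tv_nsec;
    if (d.tv_sec > 0 && d.tv_nsec < 0) {        /* borrow */
        d.tv_sec--;
        d.tv_nsec += 1000000000L;
    } else if (d.tv_sec < 0 && d.tv_nsec > 0) { /* carry the other way */
        d.tv_sec++;
        d.tv_nsec -= 1000000000L;
    }
    return d;
}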
I am working with an averaging function following the formula
new average = old average * (n-1) / n + (new value / n)
When passing in doubles this works great. My example code for a proof of concept is as follows.
double avg = 0;
uint16_t i;
for(i=1; i<10; i++) {
int32_t new_value = i;
avg = avg*(i-1);
avg /= i;
avg += new_value/i;
printf("I %d New value %d Avg %f\n",i, new_value, avg);
}
In my program I am keeping track of messages received. Each time I see a message, its hit count is increased by 1 and it is then timestamped using a timespec. My goal is to keep a moving average (like above) of the average time between messages of a certain type being received.
My initial attempt was to average the tv_nsec and tv_sec separately as follows
static int32_t calc_avg(const int32_t current_avg, const int32_t new_value, const uint64_t n) {
int32_t new__average = current_avg;
new__average = new__average*(n-1);
new__average /= n;
new__average += new_value/n;
return new__average;
}
void average_timespec(struct timespec* average, const struct timespec new_sample, const uint64_t n) {
if(n > 0) {
average->tv_nsec = calc_avg(average->tv_nsec, new_sample.tv_nsec, n);
average->tv_sec = calc_avg(average->tv_sec, new_sample.tv_sec, n);
}
}
My issue is that because I am using integers, the values are always rounded down and my averages are way off. Is there a smarter/easier way to average the time between timespec readings?
Below is some code that I've used a lot [in production S/W] for years.
The main idea is that just because clock_gettime uses struct timespec does not mean this has to be "carried around" everywhere:
It's easier to convert to a long long or double and propagate those values as soon as they're gotten from clock_gettime.
All further math is simple add/subtract, etc.
The overhead of the clock_gettime call dwarfs the multiply/divide time in the conversion.
Whether I use the fixed nanosecond value or the fractional seconds value depends upon the exact application.
In your case, I'd probably use the double since you already have calculations that work for that.
Anyway, this is what I use:
#include <time.h>
typedef long long tsc_t; // timestamp in nanoseconds
#define TSCSEC 1000000000LL
#define TSCSECF 1e9
tsc_t tsczero; // initial start time
double tsczero_f; // initial start time
// tscget -- get number of nanoseconds
tsc_t
tscget(void)
{
struct timespec ts;
tsc_t tsc;
clock_gettime(CLOCK_MONOTONIC,&ts);
tsc = ts.tv_sec;
tsc *= TSCSEC;
tsc += ts.tv_nsec;
tsc -= tsczero;
return tsc;
}
// tscgetf -- get fractional number of seconds
double
tscgetf(void)
{
struct timespec ts;
double sec;
clock_gettime(CLOCK_MONOTONIC,&ts);
sec = ts.tv_nsec;
sec /= TSCSECF;
sec += ts.tv_sec;
sec -= tsczero_f;
return sec;
}
// tscsec -- convert tsc value to [fractional] seconds
double
tscsec(tsc_t tsc)
{
double sec;
sec = tsc;
sec /= TSCSECF;
return sec;
}
// tscinit -- initialize base time
void
tscinit(void)
{
tsczero = tscget();
tsczero_f = tscsec(tsczero);
}
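A possible way to tie this back to the question, as a rough usage sketch built on the helpers above (the names last_tsc, avg_gap_ns, msg_count and on_message are just placeholders): keep the previous message's timestamp and feed the nanosecond gap into the running average.
// Hypothetical usage sketch: running average of the gap between messages,
// using the tsc_t helpers above; last_tsc, avg_gap_ns and msg_count are
// assumed per-message-type state.
static tsc_t last_tsc;
static double avg_gap_ns;
static unsigned long long msg_count;

void on_message(void) {
    tsc_t now = tscget();
    if (msg_count > 0) {
        double gap = (double) (now - last_tsc);
        // same moving-average formula as in the question
        avg_gap_ns = avg_gap_ns * (msg_count - 1) / msg_count + gap / msg_count;
    }
    last_tsc = now;
    msg_count++;
}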
Use better integer math.
Use signed math if new_value < 0 is possible; otherwise the int64_t cast below is not needed.
Form the sum first and then divide.
Round.
Sample code:
// new__average = new__average*(n-1);
// new__average /= n;
// new__average += new_value/n;
// v-------------------------------------v Add first
new__average = (new__average*((int64_t)n-1) + new_value + n/2)/n;
// Add n/2 to effect rounding ^-^
On review, the whole idea of doing averages in 2 parts is flawed. Instead use a 64-bit count of nanoseconds. Good until the year 2263.
Suggested code:
void average_timespec(int64_t* average, struct timespec new_sample, int64_t n) {
if (n > 0) {
int64_t t = new_sample.tv_sec*(int64_t)1000000000 + new_sample.tv_nsec;
*average = (*average*(n-1) + t + n/2)/n;
}
}
If you must form a struct timespec from the average, easy to do when average >= 0.
int64_t average;
average_timespec(&average, new_sample, n);
struct timespec avg_ts = (struct timespec){.tv_sec = average/1000000000,
.tv_nsec = average%1000000000};
I need a 60 Hz timer (triggered once every 16.6 ms).
It works well on Windows (MinGW GCC) but not on Linux (GCC).
Can anyone help me with this? Thanks.
#include <stdio.h>
#include <time.h>
#define PRE_MS CLOCKS_PER_SEC / 1000
int main()
{
clock_t pre = clock();
int cnt = 0;
printf("CLOCKS_PER_SEC = %d\n", CLOCKS_PER_SEC);
while (1)
{
clock_t diff = clock() - pre;
if (diff > 16 * PRE_MS)
{
cnt++;
if (cnt > 60)
{
printf("%d\n", (int)pre);
cnt = 0;
}
pre += diff;
}
}
}
Output on Windows (prints about once per second):
CLOCKS_PER_SEC = 1000
1020
2058
3095
4132
5169
6206
7243
8280
9317
Output on Linux (prints about once every 2 seconds):
CLOCKS_PER_SEC = 1000000
1875000
3781250
5687500
7593750
9500000
11406250
13312500
15218750
First, a misconception: 60 Hz does not mean 17 operations per second (that is roughly the period in milliseconds); it means 60 operations per second, one every 16.67 ms.
Second, the period check reads clock() twice, discarding any time interval during which printf() is called. AFAIK CLOCKS_PER_SEC is larger on Linux than on Windows systems, so there is more chance that you are 'throwing away' clock ticks. Read clock() once, for example:
#include <stdio.h>
#include <time.h>
int main(void)
{
unsigned long long tickcount = 0;
clock_t baseticks = clock();
while (tickcount < 180) { // for 3 seconds
tickcount++;
clock_t nexttick = (clock_t) (baseticks + tickcount * CLOCKS_PER_SEC / 60);
while(clock() < nexttick) {} // wait
printf("Tick %llu\n", tickcount);
}
return 0;
}
The code works from the total elapsed time, so any intervals that are not an exact number of clock ticks are averaged out (instead of a cumulative rounding-off error).
At some point the value from clock() will overflow/wrap, so a real implementation that runs for any length of time will have to take care of this.
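One way around both the busy-waiting and the wrap-around, as a sketch assuming a POSIX system is acceptable (not the approach shown above), is to pace against absolute CLOCK_MONOTONIC deadlines:
#include <stdio.h>
#include <time.h>

// Sketch: a 60 Hz pacer based on CLOCK_MONOTONIC rather than clock(),
// so clock_t wrap-around is not a concern and no CPU is burned waiting.
int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (unsigned long long tick = 1; tick <= 180; tick++) {   // ~3 seconds
        next.tv_nsec += 1000000000L / 60;                      // ~16.67 ms
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        // sleep until the absolute deadline, so errors do not accumulate
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        printf("Tick %llu\n", tick);
    }
    return 0;
}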
I'm working on a programming assignment and I'm getting strange results.
The idea is to calculate the number of processor ticks and time taken to run the algorithm.
Usually the code runs so quickly that the time taken is 0 sec, but I noticed that the number of processor ticks was 0 at the start and at the finish, resulting in 0 processor ticks taken.
I added a delay using usleep so that the time taken was non-zero, but the processor ticks is still zero and the calculation between the time stamps is still zero.
I've been banging my head on this for several days now and can't get past this problem, any suggestions are extremely welcome.
My code is below:
/* This program takes an input "n". If n is even it divides n by 2
* If n is odd, it multiples n by 3 and adds 1. Each time through the loop
* it iterates a counter.
* It continues until n is 1
*
* This program will compute the time taken to perform the above algorithm
*/
#include <stdio.h>
#include <time.h>
#include <unistd.h> /* for usleep() */
void delay(int);
int main(void) {
int n, i = 0;
time_t start, finish, duration;
clock_t startTicks, finishTicks, diffTicks;
printf("Clocks per sec = %d\n", CLOCKS_PER_SEC);
printf("Enter an integer: ");
scanf("%d", &n); // read value from keyboard
time(&start); // record start time in ticks
startTicks = clock();
printf("Start Clock = %s\n", ctime(&start));
printf("Start Processor Ticks = %d\n", startTicks);
while (n != 1) { // continues until n=1
i++; // increment counter
printf("iterations =%d\t", i); // display counter iterations
if (n % 2) { // if n is odd, n=3n+1
printf("Input n is odd!\t\t");
n = (n * 3) + 1;
printf("Output n = %d\n", n);
delay(1000000);
} else { //if n is even, n=n/2
printf("Input n is even!\t");
n = n / 2;
printf("Output n = %d\n", n);
delay(1000000);
}
}
printf("n=%d\n", n);
time(&finish); // record finish time in ticks
finishTicks = clock();
printf("Stop time = %s\n", ctime(&finish));
printf("Stop Processor Ticks = %d\n", finishTicks);
duration = difftime(finish, start); // compute difference in time
diffTicks = finishTicks - startTicks;
printf("Time elapsed = %2.4f seconds\n", duration);
printf("Processor ticks elapsed = %d\n", diffTicks);
return (n);
}
void delay(int us) {
usleep(us);
}
EDIT: So after researching further, I discovered that usleep() won't affect the program running time, so I wrote a delay function in asm. Now I am getting a value for processor ticks, but I am still getting zero sec taken to run the algorithm.
void delay(int us) {
for (int i = 0; i < us; i++) {
__asm__("nop");
}
}
You can calculate the elapsed time using the formula below.
double timeDiff = (double)(EndTime - StartTime) / CLOCKS_PER_SEC;
Here is the dummy code.
void CalculateTime(clock_t startTime, clock_t endTime)
{
clock_t diffTime = endTime - startTime;
printf("Processor time elapsed = %lf\n", (double)diffTime /CLOCKS_PER_SEC);
}
Hope this helps.
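A possible call site, just to show the intent (the names here are hypothetical):
clock_t startTime = clock();
/* ... run the algorithm being measured ... */
CalculateTime(startTime, clock());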
You are trying to time an implementation of the Collatz conjecture. I don't see how you can hope to get a meaningful execution time when it contains delays. Another problem is the granularity of clock() results, as shown by the value of CLOCKS_PER_SEC.
It is even more difficult trying to use time() which has a resolution of 1 second.
The way to do it is to compute a large number of values. This prints only 10 of them, to ensure the calculations are not optimised out, but not to distort the calculation time too much.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define SAMPLES 100000
int main(void) {
int i, j, n;
double duration;
clock_t startTicks = clock();
for(j=2; j<SAMPLES; j++) {
n = j; // starting number
i = 0; // iterations
while(n != 1) {
if (n % 2){ // if n is odd, n=3n+1
n = n * 3 + 1;
}
else { // if n is even, n=n/2
n = n / 2;
}
i++;
}
if(j % (SAMPLES/10) == 0) // print 10 results only
printf ("%d had %d iterations\n", j, i);
}
duration = ((double)clock() - startTicks) / CLOCKS_PER_SEC;
printf("\nDuration: %f seconds\n", duration);
return 0;
}
Program output:
10000 had 29 iterations
20000 had 30 iterations
30000 had 178 iterations
40000 had 31 iterations
50000 had 127 iterations
60000 had 179 iterations
70000 had 81 iterations
80000 had 32 iterations
90000 had 164 iterations
Duration: 0.090000 seconds
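If finer granularity than clock() is wanted, one option (a sketch assuming a POSIX system, not what the code above uses) is to take wall-clock timestamps from clock_gettime():
#include <stdio.h>
#include <time.h>

// Sketch: time a code region with nanosecond-resolution timestamps
// from CLOCK_MONOTONIC instead of clock().
int main(void) {
    struct timespec t0, t1;
    volatile unsigned long sink = 0;                // stand-in workload

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long k = 0; k < 10000000UL; k++)
        sink += k;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("Duration: %f seconds (sink=%lu)\n", secs, sink);
    return 0;
}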
Can anyone explain why I always get a time of 0 from the code below? I just want a millisecond timer to calculate the delay between sending and receiving data from a socket but no matter what I try, I always get a result of 0...I even tried microseconds just in case my system was executing it in less than 1ms.
printf("#: ");
bzero(buffer,256);
fgets(buffer,255,stdin);
struct timeval start, end;
unsigned long mtime, seconds, useconds;
gettimeofday(&start, NULL);
n = write(clientSocket,buffer,strlen(buffer));
if (n < 0)
{
error("Error: Unable to write to socket!\n");
}
bzero(buffer,256);
n = read(clientSocket,buffer,255);
gettimeofday(&end, NULL);
seconds = end.tv_sec - start.tv_sec;
useconds = end.tv_usec - start.tv_usec;
mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;
if (n < 0)
{
error("Error: Unable to read from socket!\n");
}
printf("%s\n",buffer);
printf("Delay: %lu microseconds\n", useconds);
useconds = end.tv_usec - start.tv_usec; is questionable with
unsigned long useconds; since the result may well be negative.
Suggestion:
unsigned long end_us,start_us,elapsed_us;
.
.
.
end_us = end.tv_sec * 1000000 + end.tv_usec;
start_us = start.tv_sec * 1000000 + start.tv_usec;
elapsed_us = end_us - start_us;
printf("elapsed microseconds: %lu\n", elapsed_us);
And:
mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;
is to be looked at too: This attempts to convert to milliseconds. It is not clear why you add 0.5.
Suggestion:
elapsed_ms = elapsed_us / 1000;
But defining elapsed_ms as an integer would unnecessarily cut the result to full milliseconds. A float may be considered here.
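For example, a small sketch reusing the elapsed_us value computed above:
/* Sketch: keep sub-millisecond precision by converting to double. */
double elapsed_ms = elapsed_us / 1000.0;
printf("elapsed milliseconds: %.3f\n", elapsed_ms);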
Assuming your result is in mtime: mtime is an integer and you calculate the elapsed time with floating-point numbers, so if
((seconds) * 1000 + useconds/1000.0) + 0.5
evaluates to less than 1.0, the conversion to an integer will truncate it to 0.
Simply change the type of mtime to float, or if you can keep microseconds, use
((seconds) * 1000000 + useconds) + 500
I wrote a sample program to understand time measurement in C. Below is a small, self-contained example. I have a function do_primes() that calculates prime numbers. In main(), between the timing code, I call do_primes() and also sleep for 20 milliseconds. I measure elapsed time using struct timeval (which I understand returns wall-clock time) and also CPU time using clock() and CLOCKS_PER_SEC. As I understand it, the latter denotes the time for which the CPU was working.
The output of the program is as follows.
Calculated 9592 primes.
elapsed time 2.866976 sec.
cpu time used 2.840000 secs.
As you can see, the difference between the elapsed time and the CPU time is
0.026976 seconds, or 26.976 milliseconds.
1) Are my assumptions correct?
2) Is the 6.976 milliseconds (what is left after the 20 ms sleep) accounted for by the scheduler switch delay?
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h> /* for usleep() */
#define MAX_PRIME 100000
void do_primes()
{
unsigned long i, num, primes = 0;
for (num = 1; num <= MAX_PRIME; ++num)
{
for (i = 2; (i <= num) && (num % i != 0); ++i);
if (i == num)
++primes;
}
printf("Calculated %ld primes.\n", primes);
}
int main()
{
struct timeval t1, t2;
double elapsedTime;
clock_t start, end;
double cpu_time_used;
int primes = 0;
int i = 0;
int num = 0;
start = clock();
/* start timer*/
gettimeofday(&t1, NULL);
/*do something */
usleep(20000);
do_primes();
/* stop timer*/
gettimeofday(&t2, NULL);
end = clock();
/*compute and print the elapsed time in millisec*/
elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0; /* sec to ms*/
elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; /* us to ms */
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
printf("elapsed time %f sec. \ncpu time used %f secs.\n",(elapsedTime/1000),cpu_time_used);
return 0;
}
Your understanding is correct.
The additional 6.976ms might not mean anything at all, because it's possible that the clock() function only has a resolution of 10ms.
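If you want to check that on your system, here is a rough sketch that estimates the effective step size of clock() (the 10 ms figure is only a guess, so measuring is worthwhile):
#include <stdio.h>
#include <time.h>

// Sketch: spin until clock() changes value several times and report the
// smallest observed step, which approximates its effective resolution.
int main(void) {
    clock_t prev = clock();
    clock_t min_step = 0;
    for (int changes = 0; changes < 10; ) {
        clock_t now = clock();
        if (now != prev) {
            clock_t step = now - prev;
            if (min_step == 0 || step < min_step)
                min_step = step;
            prev = now;
            changes++;
        }
    }
    printf("smallest observed clock() step: %ld ticks (%.3f ms)\n",
           (long) min_step, 1000.0 * (double) min_step / CLOCKS_PER_SEC);
    return 0;
}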