I can get the current time using the C11 timespec_get function. Suppose I want to compute the timespec value that lies a given number of milliseconds past the current time; how should I write the get_due_time function?
struct timespec get_due_time(long ms) {
    struct timespec now, due;
    timespec_get(&now, TIME_UTC);
    ...
    return due;
}
struct timespec get_due_time(long ms)
{
    assert(ms >= 0);
    struct timespec now, due;
    timespec_get(&now, TIME_UTC);
    due.tv_sec = now.tv_sec + ms / 1000;                /* whole seconds */
    due.tv_nsec = now.tv_nsec + (ms % 1000) * 1000000;  /* sub-second part */
    if (due.tv_nsec >= 1000000000)                      /* carry into tv_sec */
    {
        due.tv_nsec -= 1000000000;
        due.tv_sec++;
    }
    return due;
}
Deal with big values of ms by adding whole seconds to the seconds part of the current time (there are 1000 milliseconds in a second). Deal with the sub-second part of ms by multiplying by one million (the number of nanoseconds in a millisecond). Deal with overflow of tv_nsec by subtracting one billion (the number of nanoseconds in a second) from tv_nsec and incrementing tv_sec. The arithmetic is safe assuming timespec_get() returns a normalized struct timespec value (so tv_nsec is in the range 0..999,999,999).
You could simply modify now and return it instead of creating and modifying due. That's not a significant problem, though.
You could create and use names like:
enum
{
    MILLISECONDS_PER_SECOND = 1000,
    NANOSECONDS_PER_SECOND = 1000000000,
    NANOSECONDS_PER_MILLISECOND = NANOSECONDS_PER_SECOND / MILLISECONDS_PER_SECOND
};
I'm not sure whether that's worth it.
If you need to handle negative offsets (negative values of ms), you have more work to do: tv_nsec can end up negative after adding a negative quantity, in which case you must normalize by decrementing tv_sec, as sketched below.
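For what it's worth, here is a hedged sketch of that extra work (the name get_due_time_signed is mine, not from the code above; it relies on the same normalization assumption):

/* Sketch only: like get_due_time(), but tolerating negative ms.
   C integer division truncates toward zero, so ms % 1000 is negative
   when ms is negative, and tv_nsec may need a borrow as well as a carry. */
struct timespec get_due_time_signed(long ms)
{
    struct timespec now, due;
    timespec_get(&now, TIME_UTC);
    due.tv_sec = now.tv_sec + ms / 1000;
    due.tv_nsec = now.tv_nsec + (ms % 1000) * 1000000L;
    if (due.tv_nsec < 0)                   /* borrow from tv_sec */
    {
        due.tv_nsec += 1000000000L;
        due.tv_sec--;
    }
    else if (due.tv_nsec >= 1000000000L)   /* carry into tv_sec */
    {
        due.tv_nsec -= 1000000000L;
        due.tv_sec++;
    }
    return due;
}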
double timespec_delta2milliseconds(struct timespec *last, struct timespec *previous)
{
    return (last->tv_sec - previous->tv_sec) + (last->tv_nsec - previous->tv_nsec)*pow(10,-3);
}
This function is supposed to compute the difference (last - previous) and return the result expressed in milliseconds as a double. I tried a lot of different ways, but if I don't do it like this I get a segmentation fault in the output.
I think this solution runs, but it's wrong; can someone help me?
The timespec structure can handle fractions of a second, and the tv_nsec is the fractions, represented as nanoseconds.
That means getting the difference between two timespec structures isn't as straight-forward as you make it seem in your code.
Here's an example of how to get the difference, returned as a new timespec structure:
struct timespec diff_timespec(const struct timespec *time1, const struct timespec *time0)
{
    struct timespec diff = {
        .tv_sec = time1->tv_sec - time0->tv_sec,
        .tv_nsec = time1->tv_nsec - time0->tv_nsec
    };
    if (diff.tv_nsec < 0)
    {
        diff.tv_nsec += 1000000000;  /* borrow one second */
        diff.tv_sec--;
    }
    return diff;
}
You need two functions: sub_timespec(), which calculates the difference between two timespec values, and timespec_as_milliseconds(), which returns the number of milliseconds in a timespec value as an integer.
enum { NS_PER_SECOND = 1000000000 };

void sub_timespec(struct timespec t1, struct timespec t2, struct timespec *td)
{
    td->tv_nsec = t2.tv_nsec - t1.tv_nsec;
    td->tv_sec = t2.tv_sec - t1.tv_sec;
    if (td->tv_sec > 0 && td->tv_nsec < 0)
    {
        td->tv_nsec += NS_PER_SECOND;
        td->tv_sec--;
    }
    else if (td->tv_sec < 0 && td->tv_nsec > 0)
    {
        td->tv_nsec -= NS_PER_SECOND;
        td->tv_sec++;
    }
}
int64_t timespec_as_milliseconds(struct timespec ts)
{
    int64_t rv = ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    return rv;
}
If you want to round the milliseconds, it gets trickier because you have to worry about carries and negative numbers and so on. You should not encounter a timespec value where the tv_sec and tv_nsec values have opposite signs (zeros aren't a problem).
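For what it's worth, here is a sketch of rounding that sidesteps those cases by assuming a normalized, non-negative timespec (the function name is mine, a variant of the one above):

/* Round to the nearest millisecond; assumes ts is normalized and
   non-negative, so no negative-carry cases arise. */
int64_t timespec_as_milliseconds_rounded(struct timespec ts)
{
    /* Adding half a millisecond before the integer division rounds;
       a tv_nsec of 999999999 carries correctly into the total. */
    return (int64_t)ts.tv_sec * 1000 + (ts.tv_nsec + 500000) / 1000000;
}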
In your code, multiplying by pow(10, -3) mixes floating-point arithmetic with integer arithmetic, which is usually not a good idea.
If you want a double value with up to 3 decimal places of fractional seconds, then you need:
double timespec_to_double_milliseconds(struct timespec ts)
{
    double rv = ts.tv_sec + (ts.tv_nsec / 1000000) / 1000.0;
    return rv;
}
The first division is (deliberately) integer division; the second gives a floating-point value. Again, rounding has problems with carrying and so on.
Your function then becomes:
double timespec_delta2milliseconds(struct timespec *last, struct timespec *previous)
{
    struct timespec delta;
    sub_timespec(*previous, *last, &delta);  /* delta = *last - *previous */
    return timespec_to_double_milliseconds(delta);
}
You can use an extra value in the function so it is easier to print the value returned in a debugger: double rv = timespec_to_double_milliseconds(delta); return rv;.
The key idea, though, is to do separate tasks in separate functions. Taking the difference between two struct timespec values is one task; converting a struct timespec value to an appropriate double is a separate task. When you can split things into separate tasks, you should.
I often pass struct timespec values by value rather than pointer. The structure size is typically small enough that it is not a stress on the stack or registers. I return them by value too, which simplifies memory management — YMMV.
And, just in case it isn't clear, the tv_sec member of a struct timespec contains an integer number of seconds, and the tv_nsec contains the fractional part of a second expressed as a number of nanoseconds (0 to 999,999,999). It requires care in printing the tv_nsec value; you need a format such as %.9ld to print 9 digits with leading zeros, and the type is long. To print microseconds, divide the tv_nsec value by 1,000 and change 9 to 6; to print milliseconds, divide by 1,000,000 and change 9 to 3. Beware negative values!
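To illustrate (a sketch; the name print_timespec is mine, and it assumes a normalized, non-negative value):

/* Print the same timespec at nanosecond, microsecond, and
   millisecond precision, with leading zeros on the fraction. */
void print_timespec(struct timespec ts)
{
    printf("%lld.%.9ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    printf("%lld.%.6ld\n", (long long)ts.tv_sec, ts.tv_nsec / 1000);
    printf("%lld.%.3ld\n", (long long)ts.tv_sec, ts.tv_nsec / 1000000);
}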
I've written code to ensure that each iteration of a while(1) loop takes a specific amount of time (in this example 10000 µs, which equals 0.01 seconds). The problem is that this code works pretty well at the start but somehow stops after less than a minute. It's like there is a limit on accessing Linux time. For now, I am using a boolean variable to make this time calculation run once instead of indefinitely. Since performance varies over time, it would be good to calculate the computation time for each loop. Is there any other way to accomplish this?
void some_function(){
    struct timeval tstart, tend;
    while (1){
        gettimeofday (&tstart, NULL);
        ...
        Some computation
        ...
        gettimeofday (&tend, NULL);
        diff = (tend.tv_sec - tstart.tv_sec)*1000000L + (tend.tv_usec - tstart.tv_usec);
        usleep(10000 - diff);
    }
}
From the man page of usleep:
#include <unistd.h>
int usleep(useconds_t usec);
usec is an unsigned int; now guess what happens when diff is greater than 10000 in the line below:
usleep(10000-diff);
Well, the computation you make to get the difference is wrong:
diff = (tend.tv_sec - tstart.tv_sec)*1000000L+(tend.tv_usec - tstart.tv_usec);
You are mixing different integer types, and missing that tv_usec can be an unsigned quantity which you are subtracting from another unsigned quantity; the subtraction can overflow. When that happens, you get as a result a full second plus a quantity that is around 4.0E9 µs, which is some 4000 s, or more than an hour, approximately. It is better to check whether there's a borrow and, in that case, to decrement the seconds difference and add 1,000,000 to the microseconds difference to get a proper positive value.
I don't know the implementation you are using for struct timeval, but the most probable is that tv_sec is a time_t (which can even be 64-bit) while tv_usec is normally just an unsigned 32-bit value, as it is never going to exceed 1,000,000.
Let me illustrate... suppose you have spent 100 ms doing calculations, and this happens to occur in the middle of a second... you have:
tstart.tv_sec = 123456789; tstart.tv_usec = 123456;
tend.tv_sec = 123456789; tend.tv_usec = 223456;
When you subtract, this leads to:
tv_sec = 0; tv_usec = 100000;
But let's suppose you have done your computation while the second changes:
tstart.tv_sec = 123456789; tstart.tv_usec = 923456;
tend.tv_sec = 123456790; tend.tv_usec = 23456;
The time difference is again 100 ms, but now, when you calculate your expression, you get 1000000 (one full second) for the first part, and after subtracting the second part you get 23456 - 923456 => 4294067296 with the overflow.
So you get usleep(4295067296), which is about 4295 s, or roughly 1 h 11 m.
I think you have not had enough patience to wait for it to finish... but this is something that can be happening to your program, depending on how struct timeval is defined.
A proper way to make the carry work is to reorder the summation to do all the additions first and then the subtraction. This forces the conversions to signed integers when dealing with signed and unsigned together, and prevents a negative overflow in unsigned values:
diff = (tend.tv_sec - tstart.tv_sec) * 1000000 + tend.tv_usec - tstart.tv_usec;
which is parsed as
diff = (((tend.tv_sec - tstart.tv_sec) * 1000000) + tend.tv_usec) - tstart.tv_usec;
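Putting it together, a hedged sketch of the loop with signed arithmetic and a guard, so usleep() is never handed a huge unsigned value (the 10000 µs period is the question's; the rest is illustrative):

#include <sys/time.h>
#include <unistd.h>

void some_function(void)
{
    struct timeval tstart, tend;
    for (;;) {
        gettimeofday(&tstart, NULL);
        /* ... some computation ... */
        gettimeofday(&tend, NULL);
        long diff = (tend.tv_sec - tstart.tv_sec) * 1000000L
                  + (long)tend.tv_usec - (long)tstart.tv_usec;
        if (diff < 10000)        /* only sleep if the work finished early */
            usleep(10000 - diff);
    }
}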
I want to ask whether anyone here is familiar with the function below from Interbench. I want to port it to the Windows platform but keep failing. I can only get microsecond accuracy by using timeval instead of timespec, and in the end there are errors: divide-by-zero and access-violation exceptions.
unsigned long get_usecs(struct timeval *myts)
{
    if (clock_gettime(myts))
        terminal_error("clock_gettime");
    return (myts->tv_sec * 1000000 + myts->tv_usec);
}
void burn_loops(unsigned long loops)
{
    unsigned long i;

    /*
     * We need some magic here to prevent the compiler from optimising
     * this loop away. Otherwise trying to emulate a fixed cpu load
     * with this loop will not work.
     */
    for (i = 0; i < loops; i++)
        _ReadWriteBarrier();
}
void calibrate_loop()
{
    unsigned long long start_time, loops_per_msec, run_time = 0;
    unsigned long loops;
    struct timeval myts;

    loops_per_msec = 100000;
redo:
    /* Calibrate to within 1% accuracy */
    while (run_time > 1010000 || run_time < 990000) {
        loops = loops_per_msec;
        start_time = get_usecs(&myts);
        burn_loops(loops);
        run_time = get_usecs(&myts) - start_time;
        loops_per_msec = (1000000 * loops_per_msec / run_time ? run_time : loops_per_msec );
    }

    /* Rechecking after a pause increases reproducibility */
    Sleep(1 * 1000);
    loops = loops_per_msec;
    start_time = get_usecs(&myts);
    burn_loops(loops);
    run_time = get_usecs(&myts) - start_time;

    /* Tolerate 5% difference on checking */
    if (run_time > 1050000 || run_time < 950000)
        goto redo;

    loops_per_ms = loops_per_msec;
}
The only clock_gettime() function I know is the one specified by POSIX, and that function has a different signature than the one you are using. It does provide nanosecond resolution (though it is unlikely to provide single-nanosecond precision). To the best of my knowledge, however, it is not available on Windows. Microsoft's answer to obtaining nanosecond-scale time differences is to use its proprietary "Query Performance Counter" (QPC) API. Do put that aside for the moment, however, because I suspect clock resolution isn't your real problem.
Supposing that your get_usecs() function successfully retrieves a clock time with microsecond resolution and at least (about) millisecond precision, as seems to be the expectation, your code looks a bit peculiar. In particular, this assignment ...
loops_per_msec = (1000000 * loops_per_msec / run_time
? run_time
: loops_per_msec );
... looks quite wrong, as is more apparent when the formatting emphasizes operator precedence, as above (* and / have higher precedence than ?:). It will give you your divide-by-zero if you don't get a measurable positive run time, or otherwise it will always give you either the same loops_per_msec value you started with or else run_time, the latter of which doesn't even have the right units.
I suspect the intent was something more like this ...
loops_per_msec = ((1000000 * loops_per_msec)
/ (run_time ? run_time : loops_per_msec));
..., but that still has a problem: if 1000000 loops is not sufficient to consume at least one microsecond (as measured) then you will fall into an infinite loop, with loops_per_msec repeatedly set to 1000000.
This would be less susceptible to that particular problem ...
loops_per_msec = ((1000000 * loops_per_msec) / (run_time ? run_time : 1));
... and it makes more sense to me, too, because if the measured run time is 0 microseconds, then 1 microsecond is a better non-zero approximation to that than any other possible value. Do note that this will scale up your loops_per_msec quite rapidly (one million-fold) when the measured run time is zero microseconds. You can't do that many times without overflowing, even if unsigned long long turns out to have 128 bits, and if you get an overflow then you will go into an infinite loop. On the other hand, if that overflow happens then it indicates an absurdly large correct value for the loops_per_msec you are trying to estimate.
And that leads me to my conclusion: I suspect your real problem is that your timing calculations are wrong or invalid, either because get_usecs() isn't working correctly or because the body of burn_loops() is being optimized away (despite your effort to avoid that). You don't need sub-microsecond precision for your time measurements. In fact, you don't even really need better than millisecond precision, as long as your burn_loop() actually does work proportional to the value of its argument.
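Coming back to the QPC API mentioned above, here is a hedged sketch of a microsecond clock built on it (the name get_usecs() mirrors the question's; the body and its caveats are my own):

#include <windows.h>

/* Microseconds from an arbitrary epoch (boot). Fine for differencing
   short intervals; the multiplication can overflow after long uptimes. */
unsigned long long get_usecs(void)
{
    static LARGE_INTEGER freq;   /* counts per second; constant after boot */
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return (unsigned long long)now.QuadPart * 1000000ULL
           / (unsigned long long)freq.QuadPart;
}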
I'm using a time_t variable in C (OpenMP environment) to keep track of CPU execution time... I define a float value sum_tot_time to sum the time over all CPUs... I mean, sum_tot_time is the sum of the CPUs' time_t values. The problem is that when sum_tot_time is printed it appears as an integer or long, that is, without its decimal part!
I tried these ways:
printing sum_tot_time as a double, with it declared as a double
printing sum_tot_time as a float, with it declared as a float
printing sum_tot_time as a double, with it declared as a time_t
printing sum_tot_time as a float, with it declared as a time_t
The resolution of time_t is no finer than one second on most platforms. That is, on most platforms, time_t is an integer (32- or 64-bit) value counting the number of seconds elapsed since midnight of Jan 1st 1970 (UTC), and can only achieve one-second resolution.
Therefore, a sum of time_t values will also only exhibit one-second resolution (no decimal part, even after converting to double.)
The above having been said, what native or OpenMP call are you using to obtain the time_t values that you are attempting to accumulate?
If you are using either the native *nix getrusage() call to fill out an rusage structure with user/kernel times (provided your platform supports it), or gettimeofday() to get wall time, then use both the tv_sec and tv_usec fields of struct timeval to generate a double value (typically of millisecond-or-better resolution), and use that instead of time_t in your calculations, as sketched after the structure definition below:
struct timeval {
    time_t      tv_sec;   /* seconds */
    suseconds_t tv_usec;  /* microseconds */
};
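A minimal sketch of that conversion (the function name is mine):

/* Fold both fields into a double count of seconds before accumulating,
   instead of summing bare time_t values (which drops the fraction). */
double timeval_to_double(struct timeval tv)
{
    return (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
}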
Correspondingly, on Windows platforms you can use GetThreadTimes()/GetProcessTimes() for user/kernel times, or _ftime() for wall time, then combine the FILETIME dwHighDateTime/dwLowDateTime fields.
I'm not sure if you have access to standard *nix system calls (or if this is specifically relevant to what you're doing), but if you do, you can use the timeval struct and gettimeofday. For example, to print out a timestamp with six decimal places of precision, producing a tcpdump-style time stamp (courtesy of Stevens' UNP):
#include "unp.h"
#include <time.h>
char *
gf_time(void)
{
struct timeval tv;
time_t t;
static char str[30];
char *ptr;
if (gettimeofday(&tv, NULL) < 0)
err_sys("gettimeofday error");
t = tv.tv_sec; /* POSIX says tv.tv_sec is time_t; some BSDs don't agree. */
ptr = ctime(&t);
strcpy(str, &ptr[11]);
/* Fri Sep 13 00:00:00 1986\n\0 */
/* 0123456789012345678901234 5 */
snprintf(str+8, sizeof(str)-8, ".%06ld", tv.tv_usec);
return(str);
}
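A hypothetical caller (err_sys() and the rest of the support code come from the book's unp.h):

printf("%s: connection accepted\n", gf_time());
/* prints something like 14:23:07.123456: connection accepted */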
So I get the time at the beginning of the code, run the code, and then get the time again.
struct timeval begin, end;
gettimeofday(&begin, NULL);
//code to time
gettimeofday(&end, NULL);
//get the total number of ms that the code took:
unsigned int t = end.tv_usec - begin.tv_usec;
Now I want to print it out in the form "code took 0.007 seconds to run" or something similar.
So two problems:
1) t seems to contain a value of the order 6000, and I KNOW the code didn't take 6 seconds to run.
2) How can I convert t to a double, given that it's an unsigned int? Or is there an easier way to print the output the way I wanted to?
timeval contains two fields, the seconds part and the microseconds part.
tv_usec (the u stands for the Greek letter mu, which means micro) holds the microseconds. Thus when you get 6000, that's 6000 microseconds elapsed.
tv_sec contains the seconds part.
To get the value you want as a double use this code:
double elapsed = (end.tv_sec - begin.tv_sec) +
((end.tv_usec - begin.tv_usec)/1000000.0);
Make sure you include the tv_sec part in your calculations. gettimeofday returns the current time, so when the seconds field increments, the microseconds field goes back to 0; if this happens while your code is running and you don't use tv_sec in your calculation, you will get a negative number.
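Putting it together (a sketch using the question's variable names):

double elapsed = (end.tv_sec - begin.tv_sec) +
                 ((end.tv_usec - begin.tv_usec) / 1000000.0);
printf("code took %.3f seconds to run\n", elapsed);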
1) That's because usec is not 6000 milliseconds; it's 6000 microseconds (6 milliseconds, or 6 one-thousandths of a second).
2) Try this: (double)t / 1000000. That will convert t to a double, and then divide it by one million to give the number of seconds instead of the number of microseconds.