Windows Driver Timestamp function - c

I am modifying an existing Windows kernel device driver and need to capture a timestamp in it. I intended to include time.h and call clock(), but under Visual Studio the driver fails to link, so I took that to mean I need to work within the driver's own libraries.
I found KeInitializeTimer and KeSetTimerEx, but those are for setting up a timer and waking up on it. What I really need is something that will give me a timestamp.
Any ideas?

I am updating my question with an answer for others to benefit from my findings.
To get a timestamp, you can use KeQueryTickCount(). This routine gives you the count of interval-timer interrupts that have occurred since the system was booted. However, to determine whether some amount of time X has passed since the last timestamp you captured, you also need to ask the system how long each interval clock interrupt represents.
KeQueryTimeIncrement() returns that increment, in 100-nanosecond units.
Example:
LARGE_INTEGER timeStamp;
KeQueryTickCount(&timeStamp);
Please note that LARGE_INTEGER is defined as follows:
#if defined(MIDL_PASS)
typedef struct _LARGE_INTEGER {
#else // MIDL_PASS
typedef union _LARGE_INTEGER {
    struct {
        ULONG LowPart;
        LONG HighPart;
    } DUMMYSTRUCTNAME;
    struct {
        ULONG LowPart;
        LONG HighPart;
    } u;
#endif //MIDL_PASS
    LONGLONG QuadPart;
} LARGE_INTEGER;
So let's say you want to check whether 30 seconds have passed since you last took a timestamp; you can do the following:
ULONG tickIncrement, ticks;
LARGE_INTEGER waitTillTimeStamp, currTimeStamp;

tickIncrement = KeQueryTimeIncrement();

// 1 sec is 1,000,000,000 ns; since KeQueryTimeIncrement() reports the increment
// in 100-ns units, divide by 100 and your constant is 10,000,000.
ticks = (30 * 10000000) / tickIncrement;

KeQueryTickCount(&waitTillTimeStamp);
waitTillTimeStamp.QuadPart += ticks;

<.....Some code and time passage....>

KeQueryTickCount(&currTimeStamp);
if (waitTillTimeStamp.QuadPart < currTimeStamp.QuadPart) {
    <...Do whatever...>
}
Another example to help you understand this: what if you want to translate the tick count you got into a time value such as milliseconds?
LARGE_INTEGER mSec, currTimeStamp;
ULONG timeIncrement;
timeIncrement = KeQueryTimeIncrement();
KeQueryTickCount(&currTimeStamp);
// currTimeStamp * timeIncrement yields time in 100-ns units; 1 millisecond is
// 10,000 of those units, hence the division by 10,000.
mSec.QuadPart = (currTimeStamp.QuadPart * timeIncrement) / 10000;
Remember, this example is for demonstration purposes: mSec is not the current time of day in milliseconds. Based on the APIs used above, it is merely the number of milliseconds that have elapsed since the system was started.
You can also use GetTickCount(), but it returns a DWORD and thus can only report the number of milliseconds since the system was started for up to 49.7 days.

I know this is a 10-year-old question but... better late than never. I disagree with the OP's answer.
Proper solution:
// The KeQuerySystemTime routine obtains the current system time.
LARGE_INTEGER SystemTime;
KeQuerySystemTime(&SystemTime);
// The ExSystemTimeToLocalTime routine converts a GMT system time value to the local system time for the current time zone.
LARGE_INTEGER LocalTime;
ExSystemTimeToLocalTime(&SystemTime, &LocalTime);
// The RtlTimeToTimeFields routine converts system time into a TIME_FIELDS structure.
TIME_FIELDS TimeFields;
RtlTimeToTimeFields(&LocalTime, &TimeFields);
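If all you need is an elapsed-time measurement rather than a calendar time, a minimal sketch along the same lines (assuming millisecond granularity is enough) could look like this; KeQuerySystemTime reports 100-nanosecond units since January 1, 1601, so the difference of two readings divided by 10,000 is milliseconds:
LARGE_INTEGER before, after;
LONGLONG elapsedMs;

KeQuerySystemTime(&before);
// ... work being timed ...
KeQuerySystemTime(&after);

// 1 ms = 10,000 units of 100 ns
elapsedMs = (after.QuadPart - before.QuadPart) / 10000;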

Related

How to get milliseconds passed since unix epoch in C without multiplying seconds by 1000?

The only thing I know is time(NULL), but it returns seconds since 1970.
It's fine for me to use WinAPI functions if C doesn't have the needed function.
I even found the GetLocalTime WinAPI function, but it returns the current date-time as a struct...
The reason I don't want to multiply seconds by 1000 is that I inject my code into some program, and my code captures certain events from two sources, and I need to know their exact times: if both events happened during the same second since 1970 but in different milliseconds, they would look to me as if they happened at the same time, and it would be hard to determine which happened first (I sort the events later...).
In Unix you have gettimeofday(2) (probably some of these APIs also work on Windows), which is the BSD way of getting the time. It is based on struct timeval, a struct with two fields: tv_sec (time in seconds since the epoch, as given by time(2)) and tv_usec (microseconds, as an integer between 0 and 999999).
This will suffice for your requirements, but today it is more common to use the POSIX call clock_gettime(2), which lets you select the type of time you want (wall-clock time, CPU time, etc.). clock_gettime(2) uses the similar struct timespec (this time with tv_sec and tv_nsec, i.e. nanosecond, fields).
No clock is guaranteed to deliver nanosecond resolution (though some do), but you at least get down to the µsec level, which is more than you need.
To report the time as milliseconds, multiply tv_sec by 1000 and add tv_usec divided by 1000 (if using gettimeofday()); or, if you prefer clock_gettime(), multiply tv_sec by 1000 and add tv_nsec divided by 1000000.
If you just need to compare which timestamp is earlier, compare the tv_sec fields, and if they happen to be equal, compare the tv_usec fields. Every Unix I know of (except SCO UNIX) implements gettimeofday() with µsec resolution.
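A small sketch of that millisecond conversion, using gettimeofday() (the helper name below is just illustrative):
#include <stdint.h>
#include <sys/time.h>

static int64_t millis_since_epoch(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (int64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}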
This is the WinAPI way to determine the milliseconds that have passed since 1970.
I made it since no one provided an answer using native C (maybe there is no native way at all)...
SYSTEMTIME unix_epoch;
unix_epoch.wYear = 1970;
unix_epoch.wMonth = 1;
unix_epoch.wDay = 1;
unix_epoch.wDayOfWeek = 4;
unix_epoch.wHour = 0;
unix_epoch.wMilliseconds = 0;
unix_epoch.wMinute = 0;
unix_epoch.wSecond = 0;
FILETIME curr_time_as_filetime;
GetSystemTimeAsFileTime(&curr_time_as_filetime);
FILETIME unix_epoch_as_filetime;
SystemTimeToFileTime(&unix_epoch, &unix_epoch_as_filetime);
ULARGE_INTEGER curr_time_as_uint64;
ULARGE_INTEGER unix_epoch_as_uint64;
curr_time_as_uint64.HighPart = curr_time_as_filetime.dwHighDateTime;
curr_time_as_uint64.LowPart = curr_time_as_filetime.dwLowDateTime;
unix_epoch_as_uint64.HighPart = unix_epoch_as_filetime.dwHighDateTime;
unix_epoch_as_uint64.LowPart = unix_epoch_as_filetime.dwLowDateTime;
ULARGE_INTEGER milliseconds_since_1970;
milliseconds_since_1970.QuadPart = (curr_time_as_uint64.QuadPart - unix_epoch_as_uint64.QuadPart) / 10000;
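A shorter sketch of the same idea: 116444736000000000 is the number of 100-nanosecond intervals between the FILETIME epoch (1601-01-01) and the Unix epoch (1970-01-01), so you can subtract that constant directly instead of building the epoch via SYSTEMTIME (the helper name below is just illustrative):
#include <windows.h>

unsigned long long unix_millis_now(void)
{
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimeAsFileTime(&ft);
    t.HighPart = ft.dwHighDateTime;
    t.LowPart = ft.dwLowDateTime;

    // Shift from the 1601 epoch to the 1970 epoch, then 100-ns units -> ms.
    return (t.QuadPart - 116444736000000000ULL) / 10000ULL;
}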
An alternative would be to use clock() in combination with time(NULL) captured at the start of the program:
start_time = time(NULL);
....
clock() + start_time * CLOCKS_PER_SEC
That expression is in clock ticks; scale it by 1000 / CLOCKS_PER_SEC if you want milliseconds. It might give you a better estimate, and maybe one exact enough for your needs?

an efficient way to detect when the system's hour changed from xx:59 to xy:00

I have an application on Linux that needs to change some parameters each hour, e.g. at 11:00, 12:00, etc., and the system's date can be changed by the user at any time.
Is there any signal or POSIX function that would tell me when the hour changes from xx:59 to xx+1:00?
Normally, I use localtime(3) to fetch the current time every second and then check whether the minute part is equal to 0. However, that does not look like a good way to do it: to detect a single change I have to call the same function every second for an hour. I run the code on an embedded board, so it would be good to use fewer resources.
Here is example code showing how I do it:
static char *fetch_time() { // I use this fcn for some other purpose to fetch the time info
    char *p;
    time_t rawtime;
    struct tm *timeinfo;
    char buffer[13];

    time(&rawtime);
    timeinfo = localtime(&rawtime);
    strftime(buffer, 13, "%04Y%02m%02d%02k%02M", timeinfo);
    p = (char *)malloc(sizeof(buffer));
    strcpy(p, buffer);
    return p;
}

static int hour_change_check(){
    char *p;
    p = fetch_time();
    char current_minute[3] = {'\0'};
    current_minute[0] = p[10];
    current_minute[1] = p[11];
    int current_minute_as_int = atoi(current_minute);

    if (current_minute_as_int == 0){
        printf("current_min: %d\n", current_minute_as_int);
        free(p);
        return 1;
    }
    free(p);
    return 0;
}

int main(void){
    while(1){
        int x = hour_change_check();
        printf("x:%d\n", x);
        sleep(1);
    }
    return 0;
}
There is no such signal, but traditionally the method of waiting until some target time is to compute how long it is between "now" and "then", and then call sleep():
now = time(NULL);
when = (some calculation);
if (when > now)
sleep(when - now);
If you need to be very precise about the transition from, e.g., 3:59:59 to 4:00:00, you may want to sleep for a slightly shorter time in case of time adjustments due to leap seconds. (If you are running in a portable device in which time zones can change, you also need to worry about picking up the new location, and if it runs on a half-hour offset, redo all computations. There's even Solar Time in Saudi Arabia....)
Edit: per the suggestion from R.., if clock_nanosleep() is available, calculate a timespec value for the absolute wakeup time and call it with the TIMER_ABSTIME flag. See http://pubs.opengroup.org/onlinepubs/009695399/functions/clock_nanosleep.html for the definition for clock_nanosleep(). However, if time is allowed to step backwards (e.g., localtime with zone shifts), you may still have to do some maintenance checking.
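A rough sketch of that suggestion (assuming POSIX clock_nanosleep() is available): compute the next top of the hour with mktime(), then sleep until that absolute CLOCK_REALTIME value, re-checking if the sleep is interrupted:
#define _POSIX_C_SOURCE 200112L
#include <errno.h>
#include <time.h>

static void sleep_until_next_hour(void)
{
    time_t now = time(NULL);
    struct tm tm_next;

    localtime_r(&now, &tm_next);
    tm_next.tm_min = 0;
    tm_next.tm_sec = 0;
    tm_next.tm_hour += 1;   // mktime() normalizes the overflow (e.g. 23 -> next day)

    struct timespec target = { .tv_sec = mktime(&tm_next), .tv_nsec = 0 };

    int rc;
    do {
        rc = clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &target, NULL);
    } while (rc == EINTR);  // retry if interrupted by a signal
}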
Have you actually measured the overhead of your solution of polling the time once per second (or even twice, given some of your other comments)?
The number of instructions invoked is minimal, and there is no looping, so at worst the CPU spends maybe 100 microseconds (0.1 ms, or 0.0001 s) on it. This estimate depends heavily on the processor in your embedded system and its clock speed, but the point is that the polling logic uses perhaps 1/1000 of the total time available.
Also, you could optimize your hour_change_check code to do all of the time calculations inline rather than calling another function that issues a malloc which has to be immediately freed (a streamlined version is sketched below). And if this is an embedded *nix system, you can run this polling logic in its own thread so that when it issues sleep() it does not interfere with or delay other units of work.
Hence, measure the problem and see whether it is significant. The polling's cost must be balanced against the requirement that when a user changes the time, the hour change MUST still be detected. I think polling every second will catch the hour rollover even if the user changes the time, but is the overhead worth it? Well, how much overhead is there, exactly?
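One possible shape of that optimization (a sketch, not the poster's code): skip the malloc/strftime round-trip and read the minute field directly.
#include <time.h>

static int hour_change_check(void)
{
    time_t rawtime = time(NULL);
    struct tm timeinfo;

    localtime_r(&rawtime, &timeinfo);   // thread-safe variant of localtime()
    return timeinfo.tm_min == 0;        // true during the first minute of each hour
}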

Get a timestamp in C in microseconds?

How do I get a microseconds timestamp in C?
I'm trying to do:
struct timeval tv;
gettimeofday(&tv,NULL);
return tv.tv_usec;
But this returns some nonsense value that if I get two timestamps, the second one can be smaller or bigger than the first (second one should always be bigger). Would it be possible to convert the magic integer returned by gettimeofday to a normal number which can actually be worked with?
You need to add in the seconds, too:
unsigned long time_in_micros = 1000000 * tv.tv_sec + tv.tv_usec;
Note that this will only last for about 2^32 / 10^6 ≈ 4295 seconds, or roughly 71 minutes though (on a typical 32-bit system).
You have two choices for getting a microsecond timestamp. The first (and best) choice is to use the timeval type directly:
struct timeval GetTimeStamp() {
struct timeval tv;
gettimeofday(&tv,NULL);
return tv;
}
The second, and for me less desirable, choice is to build a uint64_t out of a timeval:
uint64_t GetTimeStamp() {
struct timeval tv;
gettimeofday(&tv,NULL);
return tv.tv_sec*(uint64_t)1000000+tv.tv_usec;
}
Get a timestamp in C in microseconds?
Here is a generic answer pertaining to the title of this question:
How to get a simple timestamp in C
in milliseconds (ms) with function millis(),
microseconds (us) with micros(), and
nanoseconds (ns) with nanos()
Quick summary: if you're in a hurry and using a Linux or POSIX system, jump straight down to the section titled "millis(), micros(), and nanos()", below, and just use those functions. If you're using C11 not on a Linux or POSIX system, you'll need to replace clock_gettime() in those functions with timespec_get().
2 main timestamp functions in C:
C11: timespec_get() is part of the C11 or later standard, but doesn't allow choosing the type of clock to use. It also works in C++17. See documentation for std::timespec_get() here. However, for C++11 and later, I prefer to use a different approach where I can specify the resolution and type of the clock instead, as I demonstrate in my answer here: Getting an accurate execution time in C++ (micro seconds).
The C11 timespec_get() solution is a bit more limited than the C++ solution in that you cannot specify the clock resolution nor the monotonicity (a "monotonic" clock is defined as a clock that only counts forwards and can never go or jump backwards--ex: for time corrections). When measuring time differences, monotonic clocks are desired to ensure you never count a clock correction jump as part of your "measured" time.
Since we cannot specify the clock to use, the resolution of the timestamp values returned by timespec_get() may therefore depend on your hardware architecture, operating system, and compiler. An approximation of the resolution of this function can be obtained by rapidly taking 1000 or so measurements in quick succession, then finding the smallest difference between any two subsequent measurements. Your clock's actual resolution is guaranteed to be equal to or smaller than that smallest difference.
I demonstrate this in the get_estimated_resolution() function of my timinglib.c timing library intended for Linux.
Linux and POSIX: Even better than timespec_get() in C is the Linux and POSIX function clock_gettime(), which also works fine in C++ on Linux or POSIX systems. clock_gettime() does allow you to choose the desired clock. You can read the specified clock resolution with clock_getres(), although that doesn't give you your hardware's true clock resolution either. Rather, it gives you the units of the tv_nsec member of the struct timespec. Use my get_estimated_resolution() function described just above and in my timinglib.c/.h files to obtain an estimate of the resolution.
So, if you are using C on a Linux or POSIX system, I highly recommend you use clock_gettime() over timespec_get().
C11's timespec_get() (ok) and Linux/POSIX's clock_gettime() (better):
Here is how to use both functions:
C11's timespec_get()
https://en.cppreference.com/w/c/chrono/timespec_get
Works in C, but doesn't allow you to choose the clock to use.
Full example, with error checking:
#include <stdint.h> // `UINT64_MAX`
#include <stdio.h> // `printf()`
#include <time.h> // `timespec_get()`
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)
uint64_t nanoseconds;
struct timespec ts;
int return_code = timespec_get(&ts, TIME_UTC);
if (return_code == 0)
{
printf("Failed to obtain timestamp.\n");
nanoseconds = UINT64_MAX; // use this to indicate error
}
else
{
// `ts` now contains your timestamp in seconds and nanoseconds! To
// convert the whole struct to nanoseconds, do this:
nanoseconds = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
}
Linux/POSIX's clock_gettime() -- USE THIS ONE WHENEVER POSSIBLE!
https://man7.org/linux/man-pages/man3/clock_gettime.3.html (best reference for this function) and:
https://linux.die.net/man/3/clock_gettime
Works in C on Linux or POSIX systems, and allows you to choose the clock to use!
I choose the CLOCK_MONOTONIC_RAW clock, which is best for obtaining timestamps used to time things on your system.
See definitions for all of the clock types here, too, such as CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, etc: https://man7.org/linux/man-pages/man3/clock_gettime.3.html
Another popular clock to use is CLOCK_REALTIME. Do NOT be confused, however! "Realtime" does NOT mean that it is a good clock to use for "realtime" operating systems, or precise timing. Rather, it means it is a clock which will be adjusted to the "real time", or actual "world time", periodically, if the clock drifts. Again, do NOT use this clock for precise timing usages, as it can be adjusted forwards or backwards at any time by the system, outside of your control.
Full example, with error checking:
// This line **must** come **before** including <time.h> in order to
// bring in the POSIX functions such as `clock_gettime() from <time.h>`!
#define _POSIX_C_SOURCE 199309L
#include <errno.h> // `errno`
#include <stdint.h> // `UINT64_MAX`
#include <stdio.h> // `printf()`
#include <string.h> // `strerror(errno)`
#include <time.h> // `clock_gettime()` and `timespec_get()`
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)
uint64_t nanoseconds;
struct timespec ts;
int return_code = clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
if (return_code == -1)
{
printf("Failed to obtain timestamp. errno = %i: %s\n", errno,
strerror(errno));
nanoseconds = UINT64_MAX; // use this to indicate error
}
else
{
// `ts` now contains your timestamp in seconds and nanoseconds! To
// convert the whole struct to nanoseconds, do this:
nanoseconds = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
}
millis(), micros(), and nanos():
Anyway, here are my millis(), micros(), and nanos() functions I use in C for simple timestamps and code speed profiling.
I am using the Linux/POSIX clock_gettime() function below. If you are using C11 or later on a system which does not have clock_gettime() available, simply replace all usages of clock_gettime(CLOCK_MONOTONIC_RAW, &ts) below with timespec_get(&ts, TIME_UTC) instead.
Get the latest version of my code from my eRCaGuy_hello_world repo here:
timinglib.h
timinglib.c
// This line **must** come **before** including <time.h> in order to
// bring in the POSIX functions such as `clock_gettime() from <time.h>`!
#define _POSIX_C_SOURCE 199309L
#include <time.h>
/// Convert seconds to milliseconds
#define SEC_TO_MS(sec) ((sec)*1000)
/// Convert seconds to microseconds
#define SEC_TO_US(sec) ((sec)*1000000)
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/1000000000)
/// Convert nanoseconds to milliseconds
#define NS_TO_MS(ns) ((ns)/1000000)
/// Convert nanoseconds to microseconds
#define NS_TO_US(ns) ((ns)/1000)
/// Get a time stamp in milliseconds.
uint64_t millis()
{
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
uint64_t ms = SEC_TO_MS((uint64_t)ts.tv_sec) + NS_TO_MS((uint64_t)ts.tv_nsec);
return ms;
}
/// Get a time stamp in microseconds.
uint64_t micros()
{
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
uint64_t us = SEC_TO_US((uint64_t)ts.tv_sec) + NS_TO_US((uint64_t)ts.tv_nsec);
return us;
}
/// Get a time stamp in nanoseconds.
uint64_t nanos()
{
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
uint64_t ns = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
return ns;
}
// NB: for all 3 timestamp functions above: gcc defines the type of the internal
// `tv_sec` seconds value inside the `struct timespec`, which is used
// internally in these functions, as a signed `long int`. For architectures
// where `long int` is 64 bits, signed overflow won't occur for 2^63 sec
// (about 2.9 x 10^11 years). For architectures where this type is 32 bits, it
// occurs after 2^31 sec (about 68 years); with the usual implementation-defined
// epoch of 1970 that is the well-known year-2038 rollover. Hopefully your
// program won't need to run that long. :) To see, by inspection, what your
// system's epoch is, simply print out a timestamp and calculate how far back
// a timestamp of 0 would have occurred. Ex: convert the timestamp to years
// and subtract that number of years from the present year.
Timestamp Resolution:
On my x86-64 Linux Ubuntu 18.04 system with the gcc compiler, clock_getres() returns a resolution of 1 ns.
For both clock_gettime() and timespec_get(), I have also done empirical testing where I take 1000 timestamps rapidly, as fast as possible (see the get_estimated_resolution() function of my timinglib.c timing library), and look to see what the minimum gap is between timestamp samples. This reveals a range of ~14~26 ns on my system when using timespec_get(&ts, TIME_UTC) and clock_gettime(CLOCK_MONOTONIC, &ts), and ~75~130 ns for clock_gettime(CLOCK_MONOTONIC_RAW, &ts). This can be considered the rough "practical resolution" of these functions. See that test code in timinglib_get_resolution.c, and see the definition for my get_estimated_resolution() and get_specified_resolution() functions (which are used by that test code) in timinglib.c.
These results are hardware-specific, and your results on your hardware may vary.
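Here is a sketch of that estimation approach (not the exact timinglib code): take many back-to-back readings and report the smallest nonzero gap as the practical resolution.
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

static uint64_t estimate_resolution_ns(void)
{
    struct timespec ts;
    uint64_t prev, now, min_delta = UINT64_MAX;

    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    prev = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;

    for (int i = 0; i < 1000; i++)
    {
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
        now = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
        if (now > prev && now - prev < min_delta)
            min_delta = now - prev;   // smallest nonzero gap seen so far
        prev = now;
    }
    return min_delta;
}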
References:
The cppreference.com documentation sources I link to above.
This answer here by #Ciro Santilli新疆棉花
My answer about usleep() and nanosleep() - it reminded me I needed to do #define _POSIX_C_SOURCE 199309L in order to bring in the clock_gettime() POSIX function from <time.h>!
https://linux.die.net/man/3/clock_gettime
https://man7.org/linux/man-pages/man3/clock_gettime.3.html
Mentions the requirement for:
_POSIX_C_SOURCE >= 199309L
See definitions for all of the clock types here, too, such as CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, etc.
See also:
My shorter and less-thorough answer here, which applies only to ANSI/ISO C11 or later: How to measure time in milliseconds using ANSI C?
My 3 sets of timestamp functions (cross-linked to each other):
For C timestamps, see my answer here: Get a timestamp in C in microseconds?
For C++ high-resolution timestamps, see my answer here: Here is how to get simple C-like millisecond, microsecond, and nanosecond timestamps in C++
For Python high-resolution timestamps, see my answer here: How can I get millisecond and microsecond-resolution timestamps in Python?
https://en.cppreference.com/w/c/chrono/clock
POSIX clock_gettime(): https://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
clock_gettime() on Linux: https://linux.die.net/man/3/clock_gettime
Note: for C11 and later, you can use timespec_get(), as I have done above, instead of POSIX clock_gettime(). https://en.cppreference.com/w/c/chrono/clock says:
use timespec_get in C11
But, using clock_gettime() instead allows you to choose a desired clock ID for the type of clock you want! See also here: ***** https://people.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html
Todo:
✓ DONE AS OF 3 Apr. 2022: Since timespec_getres() isn't supported until C23, update my examples to include one which uses the POSIX clock_gettime() and clock_getres() functions on Linux. I'd like to know precisely how good the clock resolution is that I can expect on a given system. Is it ms-resolution, us-resolution, ns-resolution, something else? For reference, see:
https://linux.die.net/man/3/clock_gettime
https://people.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html
https://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
Answer: clock_getres() returns 1 ns, but the actual resolution is about 14~27 ns, according to my get_estimated_resolution() function here: https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/c/timinglib.c. See the results here:
https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/c/timinglib_get_resolution.c#L46-L77
Activate the Linux SCHED_RR soft real-time round-robin scheduler for the best and most-consistent timing possible. See my answer here regarding clock_nanosleep(): How to configure the Linux SCHED_RR soft real-time round-robin scheduler so that clock_nanosleep() can have improved resolution of ~4 us down from ~ 55 us.
struct timeval contains two components, the second and the microsecond. A timestamp with microsecond precision is represented as seconds since the epoch stored in the tv_sec field and the fractional microseconds in tv_usec. Thus you cannot just ignore tv_sec and expect sensible results.
If you use Linux or *BSD, you can use timersub() to subtract two struct timeval values, which might be what you want.
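For instance (timersub() is a BSD/glibc macro from <sys/time.h>, so this is non-standard but widely available):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end, diff;

    gettimeofday(&start, NULL);
    /* ... work to be timed ... */
    gettimeofday(&end, NULL);

    timersub(&end, &start, &diff);   // diff = end - start
    printf("%ld.%06ld s elapsed\n", (long)diff.tv_sec, (long)diff.tv_usec);
    return 0;
}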
timespec_get from C11
Returns with precision of up to nanoseconds, rounded to the resolution of the implementation.
#include <time.h>
struct timespec ts;
timespec_get(&ts, TIME_UTC);
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
See more details in my other answer here: How to measure time in milliseconds using ANSI C?
But this returns some nonsense value that if I get two timestamps, the second one can be smaller or bigger than the first (second one should always be bigger).
What makes you think that? The value is probably OK. It’s the same situation as with seconds and minutes – when you measure time in minutes and seconds, the number of seconds rolls over to zero when it gets to sixty.
To convert the returned value into a “linear” number you could multiply the number of seconds and add the microseconds. But if I count correctly, one year is about 1e6*60*60*24*360 μsec and that means you’ll need more than 32 bits to store the result:
$ perl -E '$_=1e6*60*60*24*360; say int log($_)/log(2)'
44
That’s probably one of the reasons to split the original returned value into two pieces.
Use an unsigned long long (i.e. a 64-bit integer) to represent the system time:
typedef unsigned long long u64;

u64 u64useconds;
struct timeval tv;

gettimeofday(&tv, NULL);
u64useconds = (1000000ULL * tv.tv_sec) + tv.tv_usec;
Better late than never! This little programme can be used as the quickest way to get a time stamp in microseconds and to time a process in microseconds:
#include <sys/time.h>
#include <stdio.h>
#include <time.h>

struct timeval GetTimeStamp()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv;
}

int main()
{
    struct timeval tv = GetTimeStamp();                       // start time
    long time_in_micros = 1000000 * tv.tv_sec + tv.tv_usec;   // store start time in microseconds
    getchar();             // Replace this line with the process that you need to time
    struct timeval tv_end = GetTimeStamp();                   // end time
    printf("Elapsed time: %ld microseconds\n",
           (1000000 * tv_end.tv_sec + tv_end.tv_usec) - time_in_micros);
}
You can replace getchar() with a function/process. Finally, instead of printing the difference you can store it in a signed long. The programme works fine in Windows 10.
First we need to know the range of microseconds: 000000 to 999999 (1,000,000 microseconds equals 1 second). tv.tv_usec returns a value from 0 to 999999, not zero-padded from 000000 to 999999, so when printing it next to the seconds you might get 2.1 seconds instead of 2.000001 seconds, because where tv_usec is concerned 000001 is just 1.
It's better if you insert
if (tv.tv_usec < 10)
{
    printf("00000");
}
else if (tv.tv_usec < 100 && tv.tv_usec > 9)  // i.e. 2 digits
{
    printf("0000");
}
and so on...
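A simpler route to the same zero-padding (assuming the value is printed with printf) is a zero-filled field width:
printf("%ld.%06ld seconds\n", (long)tv.tv_sec, (long)tv.tv_usec);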

C - gettimeofday for computing time?

Do you know how to use gettimeofday for measuring computation time? I can capture one time with this code:
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime=tv.tv_sec;
strftime(buffer,30,"%m-%d-%Y %T.",localtime(&curtime));
printf("%s%ld\n",buffer,tv.tv_usec);
This one is taken before the computation, and a second one after it. But do you know how to subtract them?
I need the result in milliseconds.
To subtract timevals:
gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);
long elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + t1.tv_usec-t0.tv_usec;
This is assuming you'll be working with intervals shorter than ~2000 seconds, at which point the arithmetic may overflow depending on the types used. If you need to work with longer intervals just change the last line to:
long long elapsed = (t1.tv_sec-t0.tv_sec)*1000000LL + t1.tv_usec-t0.tv_usec;
The answer offered by #Daniel Kamil Kozar is the correct answer - gettimeofday actually should not be used to measure the elapsed time. Use clock_gettime(CLOCK_MONOTONIC) instead.
Man Pages say - The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
The Opengroup says - Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function.
Everyone seems to love gettimeofday until they run into a case where it does not work or is not there (VxWorks) ... clock_gettime is fantastically awesome and portable.
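A minimal sketch of the recommended approach (CLOCK_MONOTONIC, result in milliseconds):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1000L
                    + (t1.tv_nsec - t0.tv_nsec) / 1000000L;
    printf("elapsed: %ld ms\n", elapsed_ms);
    return 0;
}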
If you want to measure code efficiency, or in any other way measure time intervals, the following will be easier:
#include <time.h>
int main()
{
clock_t start = clock();
//... do work here
clock_t end = clock();
double time_elapsed_in_seconds = (end - start)/(double)CLOCKS_PER_SEC;
return 0;
}
hth
No. gettimeofday should NEVER be used to measure time.
This is causing bugs all over the place. Please don't add more bugs.
Your curtime variable holds the number of seconds since the epoch. If you get one before and one after, the later one minus the earlier one is the elapsed time in seconds. You can subtract time_t values just fine.

How to measure time in milliseconds using ANSI C?

Using only ANSI C, is there any way to measure time with milliseconds precision or more? I was browsing time.h but I only found second precision functions.
There is no ANSI C function that provides better than 1 second time resolution but the POSIX function gettimeofday provides microsecond resolution. The clock function only measures the amount of time that a process has spent executing and is not accurate on many systems.
You can use this function like this:
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
// Some code you want to time, for example:
sleep(1);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);
This returns Time elapsed: 1.000870 on my machine.
#include <time.h>
clock_t uptime = clock() / (CLOCKS_PER_SEC / 1000);
I always use the clock_gettime() function, returning time from the CLOCK_MONOTONIC clock. The time returned is the amount of time, in seconds and nanoseconds, since some unspecified point in the past, such as system startup or the epoch.
#include <stdio.h>
#include <stdint.h>
#include <time.h>
int64_t timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
    // Cast to int64_t before multiplying so the math cannot overflow a 32-bit tv_sec.
    return (((int64_t)timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) -
           (((int64_t)timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec);
}

int main(int argc, char **argv)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    // Some code I am interested in measuring
    clock_gettime(CLOCK_MONOTONIC, &end);

    int64_t timeElapsed = timespecDiff(&end, &start);
}
Implementing a portable solution
As already mentioned here, there is no proper ANSI solution with sufficient precision for the time-measurement problem, so I want to describe how to get a portable and, if possible, high-resolution time-measurement solution.
Monotonic clock vs. time stamps
Generally speaking there are two ways of time measurement:
monotonic clock;
current (date)time stamp.
The first one uses a monotonic clock counter (sometimes called a tick counter) which counts ticks with a predefined frequency, so if you have a tick value and the frequency is known, you can easily convert ticks to elapsed time. It is actually not guaranteed that a monotonic clock reflects the current system time in any way; it may also count ticks since system startup. But it does guarantee that the clock always runs in an increasing fashion regardless of the system state. Usually the frequency is bound to a hardware high-resolution source, which is why it provides high accuracy (this depends on the hardware, but most modern hardware has no problem with high-resolution clock sources).
The second way provides a (date)time value based on the current system clock value. It may also have a high resolution, but it has one major drawback: this kind of time value can be affected by different system time adjustments, e.g. a time zone change, a daylight saving time (DST) change, an NTP server update, system hibernation and so on. In some circumstances you can get a negative elapsed time value, which can lead to undefined behavior. Actually this kind of time source is less reliable than the first one.
So the first rule in time interval measuring is to use a monotonic clock if possible. It usually has a high precision, and it is reliable by design.
Fallback strategy
When implementing a portable solution it is worth considering a fallback strategy: use a monotonic clock if available, and fall back to the time-stamp approach if there is no monotonic clock in the system.
Windows
There is a great article called Acquiring high-resolution time stamps on MSDN about time measurement on Windows which describes all the details you may need to know about software and hardware support. To acquire a high precision time stamp on Windows you should:
query a timer frequency (ticks per second) with QueryPerformanceFrequency:
LARGE_INTEGER tcounter;
LONGLONG freq = 0;

if (QueryPerformanceFrequency (&tcounter) != 0)
    freq = tcounter.QuadPart;
The timer frequency is fixed at system boot, so you need to get it only once.
query the current ticks value with QueryPerformanceCounter:
LARGE_INTEGER tcounter;
LONGLONG tick_value = 0;

if (QueryPerformanceCounter (&tcounter) != 0)
    tick_value = tcounter.QuadPart;
scale the ticks to elapsed time, i.e. to microseconds:
LONGLONG usecs = (tick_value - prev_tick_value) / (freq / 1000000);
According to Microsoft you should not have any problems with this approach on Windows XP and later versions in most cases. But you can also use two fallback solutions on Windows:
GetTickCount provides the number of milliseconds that have elapsed since the system was started. It wraps every 49.7 days, so be careful in measuring longer intervals.
GetTickCount64 is a 64-bit version of GetTickCount, but it is available starting from Windows Vista and above.
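Folding the three steps above into one helper might look like this (a sketch; on Windows XP and later QueryPerformanceFrequency/QueryPerformanceCounter do not fail, so error handling is kept minimal):
#include <windows.h>
#include <stdint.h>

static uint64_t win_monotonic_usecs (void)
{
    static LONGLONG freq = 0;   // fixed at boot, so query it once
    LARGE_INTEGER counter;

    if (freq == 0) {
        LARGE_INTEGER f;
        QueryPerformanceFrequency (&f);
        freq = f.QuadPart;
    }

    QueryPerformanceCounter (&counter);
    return (uint64_t) (counter.QuadPart / (freq / 1000000));
}
Subtract two return values of this helper to get an elapsed time in microseconds.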
OS X (macOS)
OS X (macOS) has its own Mach absolute time units which represent a monotonic clock. The best way to start is Apple's article Technical Q&A QA1398: Mach Absolute Time Units, which describes (with code examples) how to use the Mach-specific API to get monotonic ticks. There is also a local question about it, clock_gettime alternative in Mac OS X, which in the end may leave you a bit confused about what to do with possible value overflow, because the counter frequency is given in the form of a numerator and denominator. So, a short example of how to get the elapsed time:
get the clock frequency numerator and denominator:
#include <mach/mach_time.h>
#include <stdint.h>
static uint64_t freq_num = 0;
static uint64_t freq_denom = 0;
void init_clock_frequency ()
{
    mach_timebase_info_data_t tb;

    if (mach_timebase_info (&tb) == KERN_SUCCESS && tb.denom != 0) {
        freq_num   = (uint64_t) tb.numer;
        freq_denom = (uint64_t) tb.denom;
    }
}
You need to do that only once.
query the current tick value with mach_absolute_time:
uint64_t tick_value = mach_absolute_time ();
scale the ticks to elapsed time, i.e. to microseconds, using previously queried numerator and denominator:
uint64_t value_diff = tick_value - prev_tick_value;
/* To prevent overflow */
value_diff /= 1000;
value_diff *= freq_num;
value_diff /= freq_denom;
The main idea to prevent an overflow is to scale down the ticks to the desired accuracy before using the numerator and denominator. As the initial timer resolution is in nanoseconds, we divide by 1000 to get microseconds. You can find the same approach used in Chromium's time_mac.c. If you really need nanosecond accuracy, consider reading How can I use mach_absolute_time without overflowing?.
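The same steps wrapped into a helper (a sketch; the timebase is queried lazily on first use):
#include <mach/mach_time.h>
#include <stdint.h>

static uint64_t mach_monotonic_usecs (void)
{
    static mach_timebase_info_data_t tb = { 0, 0 };
    if (tb.denom == 0)
        mach_timebase_info (&tb);

    uint64_t ticks = mach_absolute_time ();

    /* Scale down to microseconds first so the multiply cannot overflow. */
    return (ticks / 1000) * tb.numer / tb.denom;
}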
Linux and UNIX
The clock_gettime call is your best way on any POSIX-friendly system. It can query time from different clock sources, and the one we need is CLOCK_MONOTONIC. Not all systems which have clock_gettime support CLOCK_MONOTONIC, so the first thing you need to do is to check its availability:
if _POSIX_MONOTONIC_CLOCK is defined to a value greater than 0, it means that CLOCK_MONOTONIC is available;
if _POSIX_MONOTONIC_CLOCK is defined to 0, it means that you should additionally check whether it works at runtime; I suggest using sysconf:
#include <unistd.h>
#ifdef _SC_MONOTONIC_CLOCK
if (sysconf (_SC_MONOTONIC_CLOCK) > 0) {
/* A monotonic clock presents */
}
#endif
otherwise a monotonic clock is not supported and you should use a fallback strategy (see below).
Usage of clock_gettime is pretty straightforward:
get the time value:
#include <time.h>
#include <sys/time.h>
#include <stdint.h>
uint64_t get_posix_clock_time ()
{
    struct timespec ts;

    if (clock_gettime (CLOCK_MONOTONIC, &ts) == 0)
        return (uint64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
    else
        return 0;
}
I've scaled down the time to microseconds here.
calculate the difference with the previous time value received the same way:
uint64_t prev_time_value, time_value;
uint64_t time_diff;
/* Initial time */
prev_time_value = get_posix_clock_time ();
/* Do some work here */
/* Final time */
time_value = get_posix_clock_time ();
/* Time difference */
time_diff = time_value - prev_time_value;
The best fallback strategy is to use the gettimeofday call: it is not monotonic, but it provides quite good resolution. The idea is the same as with clock_gettime, but to get a time value you should:
#include <time.h>
#include <sys/time.h>
#include <stdint.h>
uint64_t get_gtod_clock_time ()
{
    struct timeval tv;

    if (gettimeofday (&tv, NULL) == 0)
        return (uint64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    else
        return 0;
}
Again, the time value is scaled down to microseconds.
SGI IRIX
IRIX has the clock_gettime call, but it lacks CLOCK_MONOTONIC. Instead it has its own monotonic clock source defined as CLOCK_SGI_CYCLE which you should use instead of CLOCK_MONOTONIC with clock_gettime.
Solaris and HP-UX
Solaris has its own high-resolution timer interface gethrtime which returns the current timer value in nanoseconds. Though the newer versions of Solaris may have clock_gettime, you can stick to gethrtime if you need to support old Solaris versions.
Usage is simple:
#include <sys/time.h>
void time_measure_example ()
{
hrtime_t prev_time_value, time_value;
hrtime_t time_diff;
/* Initial time */
prev_time_value = gethrtime ();
/* Do some work here */
/* Final time */
time_value = gethrtime ();
/* Time difference */
time_diff = time_value - prev_time_value;
}
HP-UX lacks clock_gettime, but it supports gethrtime which you should use in the same way as on Solaris.
BeOS
BeOS also has its own high-resolution timer interface, system_time, which returns the number of microseconds that have elapsed since the computer was booted.
Example usage:
#include <kernel/OS.h>
void time_measure_example ()
{
bigtime_t prev_time_value, time_value;
bigtime_t time_diff;
/* Initial time */
prev_time_value = system_time ();
/* Do some work here */
/* Final time */
time_value = system_time ();
/* Time difference */
time_diff = time_value - prev_time_value;
}
OS/2
OS/2 has its own API to retrieve high-precision time stamps:
query the timer frequency (ticks per second) with DosTmrQueryFreq (for the GCC compiler):
#define INCL_DOSPROFILE
#define INCL_DOSERRORS
#include <os2.h>
#include <stdint.h>
ULONG freq;
DosTmrQueryFreq (&freq);
query the current ticks value with DosTmrQueryTime:
QWORD    tcounter;
uint64_t time_low;
uint64_t time_high;
uint64_t timestamp;

if (DosTmrQueryTime (&tcounter) == NO_ERROR) {
    time_low  = (uint64_t) tcounter.ulLo;
    time_high = (uint64_t) tcounter.ulHi;

    timestamp = (time_high << 32) | time_low;
}
scale the ticks to elapsed time, i.e. to microseconds:
uint64_t usecs = (timestamp - prev_timestamp) / (freq / 1000000);
Example implementation
You can take a look at the plibsys library which implements all the described above strategies (see ptimeprofiler*.c for details).
timespec_get from C11
Returns up to nanoseconds, rounded to the resolution of the implementation.
Looks like an ANSI ripoff from POSIX' clock_gettime.
Example: a printf is done every 100ms on Ubuntu 15.10:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
static long get_nanos(void) {
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (long)ts.tv_sec * 1000000000L + ts.tv_nsec;
}

int main(void) {
    long nanos;
    long last_nanos;
    long start;

    nanos = get_nanos();
    last_nanos = nanos;
    start = nanos;

    while (1) {
        nanos = get_nanos();
        if (nanos - last_nanos > 100000000L) {
            printf("current nanos: %ld\n", nanos - start);
            last_nanos = nanos;
        }
    }
    return EXIT_SUCCESS;
}
The C11 N1570 standard draft, 7.27.2.5 "The timespec_get function", says:
If base is TIME_UTC, the tv_sec member is set to the number of seconds since an
implementation defined epoch, truncated to a whole value and the tv_nsec member is
set to the integral number of nanoseconds, rounded to the resolution of the system clock. (321)
321) Although a struct timespec object describes times with nanosecond resolution, the available
resolution is system dependent and may even be greater than 1 second.
C++11 also got std::chrono::high_resolution_clock: C++ Cross-Platform High-Resolution Timer
glibc 2.21 implementation
Can be found under sysdeps/posix/timespec_get.c as:
int
timespec_get (struct timespec *ts, int base)
{
  switch (base)
    {
    case TIME_UTC:
      if (__clock_gettime (CLOCK_REALTIME, ts) < 0)
        return 0;
      break;

    default:
      return 0;
    }

  return base;
}
so clearly:
only TIME_UTC is currently supported
it forwards to __clock_gettime (CLOCK_REALTIME, ts), which is a POSIX API: http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
Linux x86-64 has a clock_gettime system call.
Note that this is not a fail-proof micro-benchmarking method because:
man clock_gettime says that this measure may have discontinuities if you change some system time setting while your program runs. This should be a rare event of course, and you might be able to ignore it.
this measures wall time, so if the scheduler decides to forget about your task, it will appear to run for longer.
For those reasons getrusage() might be a better POSIX benchmarking tool, despite its lower maximum precision of one microsecond.
More information at: Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?
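A sketch of the getrusage() approach mentioned above: it reports the CPU time actually consumed by the process (user plus system), so scheduler pauses do not inflate the measurement.
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

int main(void)
{
    struct rusage ru;

    /* ... work to be measured ... */

    getrusage(RUSAGE_SELF, &ru);
    long user_us = ru.ru_utime.tv_sec * 1000000L + ru.ru_utime.tv_usec;
    long sys_us  = ru.ru_stime.tv_sec * 1000000L + ru.ru_stime.tv_usec;
    printf("user: %ld us, system: %ld us\n", user_us, sys_us);
    return 0;
}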
The best precision you can possibly get is through the use of the x86-only "rdtsc" instruction, which can provide clock-level resolution (one must of course take into account the cost of the rdtsc call itself, which can be measured easily at application startup).
The main catch here is measuring the number of clocks per second, which shouldn't be too hard.
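A sketch of that approach using the __rdtsc() compiler intrinsic from <x86intrin.h> (GCC/Clang, x86 only; MSVC has an equivalent in <intrin.h>). The result is in CPU reference cycles, so you still have to calibrate cycles per second yourself, and frequency scaling or out-of-order execution can skew very short measurements:
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

int main(void)
{
    uint64_t start = __rdtsc();
    /* ... work to be timed ... */
    uint64_t end = __rdtsc();

    printf("elapsed: %llu cycles\n", (unsigned long long)(end - start));
    return 0;
}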
The accepted answer is good enough. But my solution is simpler. I just tested it on Linux, using gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0.
It also uses gettimeofday; tv_sec is the seconds part, and tv_usec is microseconds, not milliseconds.
long currentTimeMillis() {
struct timeval time;
gettimeofday(&time, NULL);
return time.tv_sec * 1000 + time.tv_usec / 1000;
}
int main() {
printf("%ld\n", currentTimeMillis());
// wait 1 second
sleep(1);
printf("%ld\n", currentTimeMillis());
return 0;
}
It prints:
1522139691342
1522139692342, exactly one second later.
As of ANSI/ISO C11 or later, you can use timespec_get() to obtain millisecond, microsecond, or nanosecond timestamps, like this:
#include <time.h>
/// Convert seconds to milliseconds
#define SEC_TO_MS(sec) ((sec)*1000)
/// Convert seconds to microseconds
#define SEC_TO_US(sec) ((sec)*1000000)
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/1000000000)
/// Convert nanoseconds to milliseconds
#define NS_TO_MS(ns) ((ns)/1000000)
/// Convert nanoseconds to microseconds
#define NS_TO_US(ns) ((ns)/1000)
/// Get a time stamp in milliseconds.
uint64_t millis()
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
uint64_t ms = SEC_TO_MS((uint64_t)ts.tv_sec) + NS_TO_MS((uint64_t)ts.tv_nsec);
return ms;
}
/// Get a time stamp in microseconds.
uint64_t micros()
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
uint64_t us = SEC_TO_US((uint64_t)ts.tv_sec) + NS_TO_US((uint64_t)ts.tv_nsec);
return us;
}
/// Get a time stamp in nanoseconds.
uint64_t nanos()
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
uint64_t ns = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
return ns;
}
// NB: for all 3 timestamp functions above: gcc defines the type of the internal
// `tv_sec` seconds value inside the `struct timespec`, which is used
// internally in these functions, as a signed `long int`. For architectures
// where `long int` is 64 bits, signed overflow won't occur for 2^63 sec
// (about 2.9 x 10^11 years). For architectures where this type is 32 bits, it
// occurs after 2^31 sec (about 68 years); with the usual implementation-defined
// epoch of 1970 that is the well-known year-2038 rollover. Hopefully your
// program won't need to run that long. :) To see, by inspection, what your
// system's epoch is, simply print out a timestamp and calculate how far back
// a timestamp of 0 would have occurred. Ex: convert the timestamp to years
// and subtract that number of years from the present year.
For a much-more-thorough answer of mine, including with an entire timing library I wrote, see here: How to get a simple timestamp in C.
#Ciro Santilli Путлер also presents a concise demo of C11's timespec_get() function here, which is how I first learned how to use that function.
In my more-thorough answer, I explain that on my system, the best resolution possible is ~20ns, but the resolution is hardware-dependent and can vary from system to system.
Under Windows:
SYSTEMTIME t;
wchar_t buff[64];   // destination buffer (left undeclared in the original snippet)

GetLocalTime(&t);
swprintf_s(buff, 64, L"[%02d:%02d:%02d:%d]\t", t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
