Most efficient way to get current time/date/day in C

What is the most efficient way of getting the current time/date/day/year in C? Since I have to do this many times, I need a really efficient way.
I am on FreeBSD.
thanks in advance.

/* ctime example */
#include <stdio.h>
#include <time.h>

int main (void)
{
    time_t rawtime;

    time (&rawtime);
    printf ("The current local time is: %s", ctime (&rawtime));
    return 0;
}
You can use ctime() if you need it as a string.

Standard C provides only one way to get the time - time() - which can be converted to a time/date/year with localtime() or gmtime(). So trivially, that must be the most efficient way.
Any other methods are operating-system specific, and you haven't told us what operating system you're using.

It really depends on what you mean by "many" :-)
I think you'll probably find that using the ISO standard time() and localtime() functions will be more than fast enough. For example, on my "Intel(R) Core(TM)2 Duo CPU E6850 @ 3.00GHz", using unoptimised code, I can call time() ten million times in 1.045 seconds, and a time()/localtime() combination half a million times in 0.98 seconds. Whether that's fast enough for your needs, only you can decide, but I'm hard-pressed trying to come up with a use case that needs more grunt than that.
The time() function gives you the number of seconds since the epoch, while localtime() both converts it to local time (from UTC) and splits it into a more usable form, the struct tm structure.
#include <time.h>
time_t t = time (NULL);
struct tm* lt = localtime (&t);
// Use lt->tm_year, lt->tm_mday, and so forth.
Any attempt to cache the date/time and use some other means, such as clock(), to work out a delta to apply to it will almost invariably:
- be slower; and
- suffer from the fact that you won't pick up external time changes.
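For reference, here is a rough sketch of the sort of timing loop behind the figures above; the iteration count and the use of clock() to time the benchmark itself are my own choices for illustration, not taken from the original measurement.
#include <stdio.h>
#include <time.h>

/* Rough benchmark sketch: repeatedly call time()/localtime() and report
   how long it took. The iteration count is arbitrary. */
int main (void)
{
    enum { ITERATIONS = 500000 };
    clock_t begin = clock ();

    for (int i = 0; i < ITERATIONS; i++) {
        time_t t = time (NULL);
        struct tm* lt = localtime (&t);
        (void) lt;                      /* silence "unused variable" warnings */
    }

    double secs = (double) (clock () - begin) / CLOCKS_PER_SEC;
    printf ("%d time()/localtime() calls took %.3f CPU seconds\n", ITERATIONS, secs);
    return 0;
}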

The simplest is
#include <stdio.h>
#include <time.h>
//...
time_t current_time = time (NULL);
struct tm* local_time = localtime (&current_time);
/* asctime() output already ends with a newline */
printf ("the time is %s", asctime (local_time));

You can use the gettimeofday() function to get the time in seconds and microseconds, which is (I think) very fast (there is a similar function in the Linux kernel, do_gettimeofday()), and then convert it to your required format (possibly using the functions mentioned above for the conversion).
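For illustration, a minimal sketch of that approach, assuming a POSIX system where gettimeofday() is declared in <sys/time.h>:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* Sketch: fetch seconds + microseconds with gettimeofday(), then break the
   seconds part down with localtime() for the date/day fields. */
int main (void)
{
    struct timeval tv;

    if (gettimeofday (&tv, NULL) != 0) {
        perror ("gettimeofday");
        return 1;
    }

    struct tm* lt = localtime (&tv.tv_sec);
    printf ("%04d-%02d-%02d %02d:%02d:%02d.%06ld\n",
            lt->tm_year + 1900, lt->tm_mon + 1, lt->tm_mday,
            lt->tm_hour, lt->tm_min, lt->tm_sec, (long) tv.tv_usec);
    return 0;
}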
I hope this helps.

Just about the only way (that's standard, anyway) is to call time followed by localtime or gmtime.

Well, in general, directly accessing the OS's API to get the time is probably the most efficient, but not very portable.
The standard C time functions are OK.
But it really depends on your platform.

Assuming a one-second resolution is enough, the most efficient way on FreeBSD (or any POSIX system) is likely:
- Install a one-second interval timer with setitimer (ITIMER_REAL, ...)
- When it delivers SIGALRM, update a static variable holding the current time
- Use the value in that static variable whenever you need the time
Even if signals get lost due to system overload, this will correct itself the next time the process is scheduled.
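A minimal sketch of that idea follows; the handler and variable names are my own, and strictly speaking only sig_atomic_t is guaranteed safe to write from a signal handler, so treat this as an outline rather than production code.
#include <signal.h>
#include <sys/time.h>
#include <time.h>

/* Cached-time sketch: SIGALRM fires once per second and refreshes the cache.
   time() is on the POSIX async-signal-safe list, so calling it here is allowed. */
static volatile time_t cached_now;

static void alarm_handler (int sig)
{
    (void) sig;
    cached_now = time (NULL);
}

static void start_time_cache (void)
{
    struct sigaction sa = { 0 };
    struct itimerval it = { 0 };

    sa.sa_handler = alarm_handler;
    sigemptyset (&sa.sa_mask);
    sa.sa_flags = SA_RESTART;       /* don't interrupt slow system calls */
    sigaction (SIGALRM, &sa, NULL);

    it.it_interval.tv_sec = 1;      /* repeat every second */
    it.it_value.tv_sec = 1;         /* first expiry after one second */
    setitimer (ITIMER_REAL, &it, NULL);

    cached_now = time (NULL);       /* prime the cache */
}
Callers then read cached_now instead of calling time() on every use.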

Related

Is it possible to get accurate time in C language?

I know there is <chrono> in C++ and std::chrono::high_resolution_clock::now() can get the exact time of type std::chrono::high_resolution_clock::time_point. However, I am using plain C, so it is impossible to call this function. The time() function can only get second-level time.
My final target is to measure the running time of a multi-threaded program. The clock() function sums the clocks of all threads, so it cannot be used to measure the running time.
The following snippet will give the current system time, formatted down to the second:
#include <time.h>

time_t now;
time (&now);
struct tm* now_tm = localtime (&now);
char dte[80];
strftime (dte, sizeof dte, "%Y-%m-%d %I:%M:%S", now_tm);
Unfortunately, the higher-resolution clocks tend to differ a bit from platform to platform. It also depends what type of clock you're interested in (such as real time, CPU time, or monotonic time).
Based on your use case, you probably want monotonic time…
If you're on a POSIX operating system, clock_gettime is usually the way to go, but even then which clocks are available can vary a lot. For example, macOS doesn't support CLOCK_MONOTONIC, but it does have other APIs you can use to get the monotonic time.
If you're on Windows you'll probably want to use QueryPerformanceCounter.
If you want something to abstract away the differences, I put something together a while back.
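For example, a minimal sketch of timing a run with clock_gettime(), assuming a POSIX system where CLOCK_MONOTONIC is available (older glibc may also need -lrt at link time):
#include <stdio.h>
#include <time.h>

int main (void)
{
    struct timespec start, end;

    clock_gettime (CLOCK_MONOTONIC, &start);

    /* ... run the multi-threaded work being measured here ... */

    clock_gettime (CLOCK_MONOTONIC, &end);

    double elapsed = (double) (end.tv_sec - start.tv_sec)
                   + (double) (end.tv_nsec - start.tv_nsec) / 1e9;
    printf ("elapsed wall time: %.6f s\n", elapsed);
    return 0;
}
Because this measures wall-clock (monotonic) time rather than per-thread CPU time, it is suitable for timing a multi-threaded program.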

Under what circumstances can time in time.h fail?

The time function in the header time.h is defined by POSIX to return a time_t which can, evidently, be a signed int or some kind of floating point number.
http://en.cppreference.com/w/c/chrono/time
The function, however, returns (time_t)(-1) on error.
Under what circumstances can time fail?
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate, which rules out one potential cause of failure.
The time() function is actually defined by ISO, to which POSIX mostly defers except it may place further restrictions on behaviour and/or properties (like an eight-bit byte, for example).
And, since the ISO C standard doesn't specify how time() may fail(a), the list of possibilities is not limited in any way:
One way in which it may fail is in the embedded arena. It's quite possible that your C program may be running on a device with no real-time clock or other clock hardware (even a counter), in which case no time would be available.
Or maybe the function detects bad clock hardware that's constantly jumping all over the place and is therefore unreliable.
Or maybe you're running in a real-time environment where accesses to the clock hardware are time-expensive so, if it detects you're doing it too often, it decides to start failing so your code can do what it's meant to be doing :-)
The possibilities are literally infinite and, of course, I mean 'literally' in a figurative sense rather than a literal one :-)
POSIX itself calls out explicitly that it will fail if it detects the value won't fit into a time_t variable:
The time() function may fail if: [EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
And, just on your comment:
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate.
You need to be circumspect about this. Anything not mandated by the standards is totally open to interpretation. For example, I can envisage a bizarre implementation that allocates space for an NTP request packet to go out to time.nist.somewhere.org so as to ensure all times are up to date even without an NTP client :-)
(a) In fact, it doesn't even specify what the definition of time_t is so it's unwise to limit it to an integer or floating point value, it could be the string representation of the number of fortnights since the big bang :-) All it requires is that it's usable by the other time.h functions and that it can be cast to -1 in the event of failure.
POSIX does state that it represents number of seconds (which ISO doesn't) but places no other restrictions on it.
I can imagine several causes:
- the hardware timer isn't available, because the hardware doesn't support it;
- the hardware timer just failed (hardware error, timer registers cannot be accessed for some reason);
- arg is not null but points to some illegal location; instead of crashing, some implementations could detect an illegal pointer (or catch the resulting SEGV) and return an error instead;
- from the provided link: "Implementations in which time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038." So after 2^31 seconds since the epoch (1/1/1970), the return value of time overflows (that is, unless the hardware masks the problem by silently overflowing as well).
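For completeness, a minimal sketch of checking for the error value, assuming time_t is an integer type (as it is on common platforms):
#include <stdio.h>
#include <time.h>

int main (void)
{
    time_t now = time (NULL);

    if (now == (time_t) -1) {
        /* time() signalled failure; POSIX allows errno to be set to EOVERFLOW */
        fprintf (stderr, "time() failed\n");
        return 1;
    }
    printf ("seconds since the Epoch: %lld\n", (long long) now);
    return 0;
}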

Why should ftime not be used?

Quoting man 3 ftime:
This function is obsolete. Don't use it. If the time in seconds suffices, time(2) can be used; gettimeofday(2) gives microseconds;
clock_gettime(2) gives nanoseconds but is not as widely available.
Why should it not be used? What are the perils?
I understand that time(2), gettimeofday(2) and clock_gettime(2) can be used instead of ftime(3), but ftime(3) gives exactly milliseconds, which I find convenient, since milliseconds are exactly the precision I need.
Such advice is intended to help you make your program portable and avoid various pitfalls. While the ftime function likely won't be removed from systems that have it, new systems your software gets ported to might not have it, and you may run into problems, e.g. if the system model of time zone evolves to something not conveniently expressible in the format of ftime's structure.
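If milliseconds are all you need, here is a sketch of one portable way to get them without ftime(), assuming gettimeofday() is available (it is on most POSIX systems):
#include <stdio.h>
#include <sys/time.h>

/* Millisecond timestamps without ftime(): combine the seconds and
   microseconds fields reported by gettimeofday(). */
static long long now_ms (void)
{
    struct timeval tv;

    gettimeofday (&tv, NULL);
    return (long long) tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main (void)
{
    printf ("ms since the Epoch: %lld\n", now_ms ());
    return 0;
}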

struct timespec not working in MSVC 2015 for x64 build [duplicate]

On POSIX it is possible to use timespec to calculate accurate time lengths (such as seconds and milliseconds). Unfortunately I need to migrate to Windows with the Visual Studio compiler. The VS time.h header doesn't declare timespec, so I'm looking for other options. As far as I could find, it is possible to use clock and time_t, although I couldn't check how precisely milliseconds can be counted that way.
What do you do/use to calculate elapsed time for an operation (if possible using the standard C++ library)?
The function GetTickCount is usually used for that.
Also a similar thread: C++ timing, milliseconds since last whole second
It depends on what sort of accuracy you want; my understanding is that clock and time_t are not accurate to the millisecond level. Similarly, GetTickCount() is commonly used (MS docs say it is accurate to 10-15 ms) but is not sufficiently accurate for many purposes.
I use QueryPerformanceFrequency and QueryPerformanceCounter for accurate timing measurements for performance.
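For illustration, a minimal sketch of that approach; QueryPerformanceFrequency() reports the tick rate, QueryPerformanceCounter() the current tick count, and the Sleep() call simply stands in for the operation being timed:
#include <stdio.h>
#include <windows.h>

int main (void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency (&freq);   /* ticks per second */
    QueryPerformanceCounter (&start);

    Sleep (100);                         /* the operation being timed */

    QueryPerformanceCounter (&end);

    double elapsed_ms = (double) (end.QuadPart - start.QuadPart) * 1000.0
                      / (double) freq.QuadPart;
    printf ("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}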

caching localtime_r() is it worth it?

Is it worth keeping a local copy of struct tm and updating it only when required? The function below is not thread-safe... also I've seen that only 6 to 7% of CPU time can be saved...
struct tm* custom_localtime (time_t now_sec)
{
    static time_t cache_sec;
    static struct tm tms;

    if (now_sec != cache_sec) {
        cache_sec = now_sec;
        localtime_r (&cache_sec, &tms);   /* refresh the cached broken-down time */
    }
    return &tms;
}
Additional details:
- my app makes more than 3000/sec calls to localtime_r()
- found at least 33% CPU time savings when I cache time-stamp strings of the format "2011-12-09 10:32:45" against time_t seconds
thank you all nos, asc99c and Mircea.
I would probably have mentioned the 3000/s call rate in your question! Do it. I recently was profiling generation of a screen which was calling localtime approx 1,000,000 * 10,000 times.
The nested loops could have been improved substantially with a bit of thought, but what I saw was about 85% of CPU time was used by localtime. Simply caching the result so it was only called 10,000 times cut 85% of the time off page generation, and that made it easily fast enough.
"Avoiding a library function call that's not really needed" is worth it, of couse. The rest is only your tradeoff between memory and speed.
Since you're calling this 3000/second, you might want to go even further and put this function as static inline in a header and also (if using GCC) use branch prediction hints for the conditional, stating that taking it is "unlikely":
if (__builtin_expect(now_sec != cache_sec, 0))
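Putting the two suggestions together, a sketch of how the cached helper from the question might look as a static inline function with the hint applied (GCC/Clang only; still not thread-safe):
#include <time.h>

/* Each translation unit that includes this header gets its own private cache. */
static inline struct tm* custom_localtime (time_t now_sec)
{
    static time_t cache_sec;
    static struct tm tms;

    /* the cache hit is the common case, so hint that the refresh is unlikely */
    if (__builtin_expect (now_sec != cache_sec, 0)) {
        cache_sec = now_sec;
        localtime_r (&cache_sec, &tms);
    }
    return &tms;
}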
