The time() function returns the number of seconds since 1970. I want to know how the seconds value it returns is rounded.
For example, for 100.4s, will it return 100 or 101?
Is there an explicit definition?
The ISO C standard doesn't say much. It says only that time() returns "the implementation's best approximation to the current calendar time".
The result is of type time_t, which is a real type (integer or floating-point) "capable of representing times".
A lot of systems implement time_t as a signed integer type representing the whole number of seconds since 1970-01-01 00:00:00 GMT.
A quick experiment on my Ubuntu system (comparing the values returned by time() and gettimeofday()) indicates that the value returned by time() is truncated: for example, if the high-precision time is 1514866171.750058, time() returns 1514866171. Neither ISO C nor POSIX guarantees this, but I'd expect it to behave consistently on any UNIX-like system.
7.27.2.4p3
The time function returns the implementation's best approximation to
the current calendar time. The value (time_t)(-1) is returned if the
calendar time is not available. If timer is not a null pointer, the
return value is also assigned to the object it points to.
It's implementation-defined, so unless you specify your compiler and operating system, the answer is "best approximation".
Is there an explicit definition?
No
http://en.cppreference.com/w/c/chrono/time
The encoding of calendar time in time_t is unspecified, but most systems conform to POSIX specification and return a value of integral type holding the number of seconds since the Epoch. Implementations in which time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038.
Related
Because the compiler my code will run on doesn't accept _mkgmtime, only mktime, I am forced to use mktime to convert broken-down time to a Unix timestamp and vice versa.
The old solution was to use _mkgmtime and gmtime to convert from broken-down time to a UNIX timestamp and vice versa. This worked until I tried to compile it and use it on my microcontroller.
Now, I have to somehow use mktime to generate a UNIX timestamp from broken-down time, and to convert a UNIX timestamp back to broken-down time, both in UTC.
Is it possible to force mktime() to return a timestamp in UTC always?
The C language specification says that the return value of mktime() is encoded the same way as that of time(), but it explicitly leaves that encoding unspecified. Thus, the answer depends on the C implementation where your code will run.
On a POSIX system such as Linux, time() returns an integer number of seconds since the epoch, which is defined in terms of UTC, not local time. Therefore, if your target machine is such a system then you don't need to do anything to get mktime to return a UTC timestamp.
However, mktime assumes that its input is expressed in broken-down local time, and it will use the configured time zone (which is not included in the broken-down time) to perform the calculation. How the local time zone is configured is system-dependent.
The time function in the header time.h is defined by POSIX to return a time_t, which can, evidently, be a signed integer or some kind of floating-point number.
http://en.cppreference.com/w/c/chrono/time
The function, however, returns (time_t)(-1) on error.
Under what circumstances can time fail?
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate, so that rules out one potential cause of failure.
The time() function is actually defined by ISO C; POSIX mostly defers to that, though it may place further restrictions on behaviour and/or properties (an eight-bit byte, for example).
And, since the ISO C standard doesn't specify how time() may fail(a), the list of possibilities is not limited in any way:
One way in which it may fail is in the embedded arena. It's quite possible that your C program may be running on a device with no real-time clock or other clock hardware (even a counter), in which case no time would be available.
Or maybe the function detects bad clock hardware that's constantly jumping all over the place and is therefore unreliable.
Or maybe you're running in a real-time environment where accesses to the clock hardware are time-expensive so, if it detects you're doing it too often, it decides to start failing so your code can do what it's meant to be doing :-)
The possibilities are literally infinite and, of course, I mean 'literally' in a figurative sense rather than a literal one :-)
POSIX itself calls out explicitly that it will fail if it detects the value won't fit into a time_t variable:
The time() function may fail if: [EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
And, just on your comment:
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate.
You need to be circumspect about this. Anything not mandated by the standards is totally open to interpretation. For example, I can envisage a bizarre implementation that allocates space for an NTP request packet to go out to time.nist.somewhere.org so as to ensure all times are up to date even without an NTP client :-)
(a) In fact, it doesn't even specify exactly what time_t is (beyond being a real type with implementation-defined range and precision), so it's unwise to assume it's a count of seconds; it could be the number of fortnights since the big bang :-) All it requires is that it's usable by the other time.h functions and that (time_t)(-1) is a representable value used to signal failure.
POSIX does state that it represents a number of seconds (which ISO doesn't), but places no other restrictions on it.
I can imagine several causes:
the hardware timer isn't available, because the hardware doesn't support it.
the hardware timer just failed (hardware error, timer registers cannot be accessed for some reason)
arg is not null, but points to some illegal location. Instead of crashing, some implementations could detect an illegal pointer (or catch the resulting SEGV) and return an error instead.
as noted in the provided link, "Implementations in which time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038." So after 2^31 - 1 seconds since the epoch (1970-01-01), time's return value overflows (well, that is, unless the hardware masks the problem by silently wrapping around as well).
According to the documentation, struct tm *gmtime(const time_t *timer); is supposed to convert the time_t pointed to by timer to a broken-down time.
Now, is there a reason why they decided to make the function take a pointer to the time_t instead of passing the time_t directly?
As far as I can see, time_t is an arithmetic type and should therefore have been possible to pass directly (I also find it reasonable that it would have fit into a long). Nor does there seem to be any specific handling of the NULL pointer (which could have motivated passing a pointer).
Is there something I'm missing? Something still relevant today?
From what I've seen, it's more a historical quirk. When time.h was first introduced, along with functions like time, it used a value that could not be returned directly (early C had no long int, for example). The standard defined an obscure time_t type that still leaves a lot of room for vendors to implement it in weird ways (it has to be a real type, but no ranges or maximum values are defined, for example). From the C11 standard:
7.27.1 Components of time.
[...]
3. The types declared are size_t (described in 7.19);
clock_t
and
time_t
which are real types capable of representing times;
4. The range and precision of times representable in clock_t and time_t are
implementation-defined.
size_t is described, in C11, as "the unsigned integer type of the result of the sizeof operator".
In light of this, your comment ("I find it reasonable that it would have fit into a long") is an understandable one, but it's incorrect, or at the very least inaccurate. POSIX, for example, requires time_t to be an integer or real floating type. A long fits that description, but so does a long double, which doesn't fit in a long. A more accurate assumption would be that the minimum size of time_t is a 32-bit int (until 2038, at least), but that time_t is preferably a 64-bit type.
Anyway, back in those days, if a value couldn't be returned, the only alternative was to pass the memory to the function (which is a sensible thing to do).
This is why we have functions like
time_t time(time_t *t);
It doesn't really make sense to set the same value twice: once by returning it, and once using indirection, but the argument is there because originally, the function was defined as
time(time_t *t)
Note the lack of a return type. If time were added today, it'd either be defined as void time( time_t * ) or, if the committee hadn't been drinking and had realized the absurdity of passing a pointer here, as time_t time( void );
Looking at the C11 standard regarding the time function, it seems that the emphasis of the functions' behaviour is on the return value. The pointer argument is mentioned briefly but it certainly isn't given any significance:
7.27.2.4 The time function
1. Synopsis
#include <time.h>
time_t time(time_t *timer);
2. Description
The time function determines the current calendar time. The encoding of the value is
unspecified.
3. Returns
The time function returns the implementation’s best approximation to the current
calendar time. The value (time_t)(-1) is returned if the calendar time is not
available. If timer is not a null pointer, the return value is also assigned to the object it
points to.
The main thing we can take from this is that, as far as the standard goes, the pointer is just a secondary way of getting at the return value of the function. Given that the return value indicates something went wrong ((time_t)(-1)), I'd argue that we should treat this function as though it was meant to be time_t time( void ).
But because the old implementation is still kicking around, and we've all gotten used to it, it's one of those things that should've been marked for deprecation; because it never really was, it will probably be part of the C language forever...
The only other reason why functions take a const time_t * like this is either historical or to maintain a consistent API across time.h. AFAIK, that is.
TL;DR
Most time.h functions use pointers to the time_t type for historical, compatibility, and consistency reasons.
I knew I'd read that stuff about the early days of the time function before; here's a related SO answer:
The reason is probably that in the old days a parameter could be no larger than an integer, so you couldn't pass a long and had to do that as a pointer. The definition of the function never changed.
Can the time_t time(time_t *t) function ever return failure if the argument passed is always NULL?
If the call is time(NULL), do we still need to check for the return value?
The only documented error code is EFAULT, which relates to the pointer being invalid.
Yes. time has a documented may fail case:
The time() function may fail if:
[EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
Source: http://pubs.opengroup.org/onlinepubs/9699919799/functions/time.html
Expect this to happen in practice in about 22 years (in 2038), no sooner, and not on 64-bit systems or on 32-bit ones that use a 64-bit time_t.
Also, the presence of any shall fail or may fail cases allows for implementation-defined errors, though their existence would be a serious quality-of-implementation flaw.
EFAULT is a non-issue/non-existent because it only happens when your program has undefined behavior.
So despite all this, in the real world, time is not actually going to fail.
Can time(NULL) ever return failure?
No. The C standard says:
C11: 7.27.2.4:
The time function returns the implementation’s best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available.
I've checked on RHEL, SLES and Ubuntu; the man 2 page yields the same (relevant) thing:
time() returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds.
If t is non-NULL, the return value is also stored in the memory pointed to by t.
Anyway, going back to the original questions:
Q0: Can the time_t time(time_t *t) function ever return failure if the argument passed is always NULL?
A/R0: Yes, if some very special event occurs (memory exhausted, and so on...).
Q1: If the call is time(NULL), do we still need to check for the return value?
A/R1: The actual answer is "no", you don't have to; the fact that the function could return something relevant is a different story. After all, why call a function if there's no need to?
Q2: The only documented error code is EFAULT, which relates to the pointer being invalid.
That error code doesn't concern you; as you said, you're passing NULL, so there's no problem.
In the C standard, time() can return (time_t)(-1) if "the calendar time is not available". In the 1999 standard, for example, that is in Section 7.23.2.4, para 3.
Although that wording is less than specific, I would suggest it represents an error condition. Presumably an implementation can return (time_t)(-1) if it can't access the system clock, can't sensibly interpret the data it gets, etc.
R's answer describes the situation for the POSIX spec.
(Not considering POSIX degraded-mode functionality.)
If the underlying real-time clock subsystem had a hardware fault, like loss of clock integrity (battery) while the unit was off on an independent system, the return value of time() could certainly be (time_t) -1. In that case, it makes no difference what the passed-in time_t * was.
I'm trying to use datetime type as a key in b-tree BerkeleyDB database. My goals:
minimum overhead for datetime storage
key comparison by date (to retrieve range)
reasonable speed
How can I represent a datetime in the most compact form and still use bsddb's default key-comparison algorithm?
Is it hard to do this in C and create a small Python extension for such tasks? I'm not experienced in C and am only able to understand small C snippets (and copy-paste them).
What range of datetime values are you interested in? And what resolution on the time?
As fge indicated in a comment, if you want 1-second resolution over a period limited to 1902-2037, then you can use a 32-bit signed integer holding the number of seconds since the Unix Epoch, which is 1970-01-01 00:00:00 +00:00 (midnight on 1 January 1970, UTC). If you want a wider range, then you should probably use a 64-bit signed integer relative to the Unix Epoch. If you want sub-second accuracy, also store a 32-bit signed integer holding the number of nanoseconds. Note that for a negative time (before 1970), the fractional seconds should be negative too.
One reason for suggesting these representations is that the value can easily be obtained via standard Unix (POSIX) interfaces, such as time() for 1-second resolution, clock_gettime() for nanosecond resolution, or gettimeofday() for microsecond resolution.