Can the time_t time(time_t *t) function ever return failure if the argument passed is always NULL?
If the call is time(NULL), do we still need to check for the return value?
The only documented error code is EFAULT, which relates to the pointer being invalid.
Yes. time() has a documented "may fail" case:
The time() function may fail if:
[EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
Source: http://pubs.opengroup.org/onlinepubs/9699919799/functions/time.html
Expect this to happen in practice in the year 2038 and no sooner, and not on 64-bit systems or on 32-bit ones that use a 64-bit time_t.
Also, the presence of any "shall fail" or "may fail" cases allows for additional implementation-defined errors, though their existence would be a serious quality-of-implementation flaw.
EFAULT is a non-issue/non-existent concern because it can only happen when your program has undefined behavior.
So despite all this, in the real world, time is not actually going to fail.
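For completeness, here is what the defensive check would look like if you wanted it anyway. This is a minimal sketch; the cast to intmax_t assumes an integer time_t, which POSIX guarantees but ISO C does not.
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    if (now == (time_t)-1) {
        /* In practice only reachable via EOVERFLOW on a 32-bit
         * signed time_t, i.e. not before 2038. */
        fputs("time() failed\n", stderr);
        return 1;
    }
    /* %jd with a cast to intmax_t assumes time_t is an integer type. */
    printf("seconds since the Epoch: %jd\n", (intmax_t)now);
    return 0;
}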
Can time(NULL) ever return failure?
No. The C standard says:
C11: 7.27.2.4:
The time function returns the implementation’s best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available.
I've checked on RHEL, SLES and Ubuntu; the time(2) man page yields the same (relevant) thing:
time() returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds.
If t is non-NULL, the return value is also stored in the memory pointed to by t.
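A tiny sketch of that last sentence: both ways of getting the value give the same result, so the pointer argument adds nothing when you already use the return value.
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t via_arg;
    time_t via_ret = time(&via_arg);  /* value returned AND stored */

    /* Per the man page quoted above, the two must be identical. */
    printf("identical: %s\n", via_ret == via_arg ? "yes" : "no");
    return 0;
}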
Anyway, going back to the original questions
Q0: Can the time_t time(time_t *t) function ever return failure if the argument passed is always NULL?
A/R0: Yes, if some very unusual event occurs (memory exhausted, and so on).
Q1: If the call is time(NULL), do we still need to check for the return value?
A/R1: The practical answer is "no", you don't have to; the fact that the function could return something relevant is a different story. After all, why call a function if you have no need of its result?
Q2: The only documented error code is EFAULT, which relates to the pointer being invalid.
A/R2: You don't need to worry about that error code; as you said, you're passing NULL, so there's no problem.
In the C standard, time() can return (time_t)(-1) if "the calendar time is not available". In the 1999 standard, for example, that is in Section 7.23.2.4, para 3.
Although that wording is less than specific, I would suggest it represents an error condition. Presumably an implementation can return (time_t)(-1) if it can't access the system clock, can't sensibly interpret the data it gets, etc.
R's answer describes what the POSIX spec requires. (This does not consider POSIX degraded-mode functionality.)
If the underlying real-time clock subsystem had a hardware fault, such as loss of clock integrity (dead battery) while a standalone unit was powered off, the return value of time() could certainly be (time_t)-1. In that case, it makes no difference what the passed-in time_t* was.
Related
The time function in the header time.h is defined by POSIX to return a time_t which can, evidently, be a signed int or some kind of floating point number.
http://en.cppreference.com/w/c/chrono/time
The function, however, returns (time_t)(-1) on error.
Under what circumstances can time fail?
Based on the signature, time_t time( time_t *arg ), it seems the function shouldn't need to allocate, so that rules out one potential cause of failure.
The time() function is actually defined by ISO, to which POSIX mostly defers except it may place further restrictions on behaviour and/or properties (like an eight-bit byte, for example).
And, since the ISO C standard doesn't specify how time() may fail (a), the list of possibilities is not limited in any way:
One way in which it may fail is in the embedded arena. It's quite possible that your C program may be running on a device with no real-time clock or other clock hardware (even a counter), in which case no time would be available.
Or maybe the function detects bad clock hardware that's constantly jumping all over the place and is therefore unreliable.
Or maybe you're running in a real-time environment where accesses to the clock hardware are time-expensive so, if it detects you're doing it too often, it decides to start failing so your code can do what it's meant to be doing :-)
The possibilities are literally infinite and, of course, I mean 'literally' in a figurative sense rather than a literal one :-)
POSIX itself calls out explicitly that it will fail if it detects the value won't fit into a time_t variable:
The time() function may fail if: [EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
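If you wanted to detect that case explicitly, a sketch of the usual errno dance might look like this (POSIX-only; time() is not required to set errno by ISO C):
#include <errno.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    errno = 0;                     /* time() only sets errno on failure */
    time_t now = time(NULL);
    if (now == (time_t)-1 && errno == EOVERFLOW) {
        /* The POSIX "may fail" case: the current number of seconds
         * since the Epoch no longer fits in a time_t. */
        fputs("time_t overflow\n", stderr);
        return 1;
    }
    (void)now;
    return 0;
}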
And, just on your comment:
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate.
You need to be circumspect about this. Anything not mandated by the standards is totally open to interpretation. For example, I can envisage a bizarre implementation that allocates space for an NTP request packet to go out to time.nist.somewhere.org so as to ensure all times are up to date even without an NTP client :-)
(a) In fact, it doesn't pin down time_t beyond requiring a real (integer or floating-point) type, so it's unwise to assume any particular range or precision; in principle it could count fortnights since the big bang :-) All ISO requires is that it's usable by the other time.h functions and that (time_t)(-1) can indicate failure.
POSIX does state that it represents number of seconds (which ISO doesn't) but places no other restrictions on it.
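Given how little is pinned down, the only fully portable way to do arithmetic on time_t values is difftime(). A small sketch; note that treating (time_t)0 as the Epoch is a POSIX assumption, not an ISO one:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);

    /* difftime() returns a double whatever real type time_t is,
     * so this works even on a hypothetical floating-point time_t. */
    double secs = difftime(now, (time_t)0);

    /* On POSIX systems (time_t)0 is the Epoch; ISO C makes no such promise. */
    printf("about %.0f seconds since (time_t)0\n", secs);
    return 0;
}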
I can imagine several causes:
the hardware timer isn't available, because the hardware doesn't support it.
the hardware timer just failed (hardware error, timer registers cannot be accessed for some reason)
arg is not null, but points to some illegal location. Instead of crashing, some implementations could detect an illegal pointer (or catch the resulting SEGV) and return an error instead.
per the provided link: "Implementations in which time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038." So after 2^31 - 1 seconds since the Epoch (1970-01-01), time()'s return value overflows (well, unless the hardware masks the problem by silently wrapping as well); a quick size check is sketched below.
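A quick, hedged way to check which camp your platform falls into. This assumes an integer time_t, which ISO C doesn't actually guarantee:
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Assumes an integer time_t; a 32-bit signed one overflows
     * on 2038-01-19 at 03:14:08 UTC. */
    if (sizeof(time_t) >= 8)
        puts("time_t has at least 64 bits: no year-2038 overflow");
    else
        puts("time_t is narrower than 64 bits: expect trouble in 2038");
    return 0;
}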
The time() function will return the seconds since 1970. I want to know how it does the rounding for the second returned.
For example, for 100.4s, will it return 100 or 101?
Is there an explicit definition?
The ISO C standard doesn't say much. It says only that time() returns
the implementation’s best approximation to the current calendar time
The result is of type time_t, which is a real type (integer or floating-point) "capable of representing times".
A lot of systems implement time_t as a signed integer type representing the whole number of seconds since 1970-01-01 00:00:00 GMT.
A quick experiment on my Ubuntu system (comparing the values returned by time() and gettimeofday()) indicates that the value returned by time() is truncated, so for example if the high-precision time is 1514866171.750058, the time() function will return 1514866171. Neither ISO C nor POSIX guarantees this, but I'd expect it to behave consistently on any UNIX-like systems.
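For reference, a rough version of that experiment (POSIX-only; the intmax_t cast assumes an integer time_t, and the two calls happen at slightly different instants):
#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);        /* wall-clock time with microseconds */
    time_t t = time(NULL);

    /* On the systems tested above, t equals tv.tv_sec: the sub-second
     * part is simply truncated, never rounded up. */
    printf("time():       %jd\n", (intmax_t)t);
    printf("gettimeofday: %jd.%06ld\n", (intmax_t)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}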
7.27.2.4p3
The time function returns the implementation's best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available. If timer is not a null pointer, the return value is also assigned to the object it points to.
It's implementation defined, so unless you specify your compiler and operating system, the answer is "best approximation".
Is there an explicit definition?
No
http://en.cppreference.com/w/c/chrono/time
The encoding of calendar time in time_t is unspecified, but most systems conform to POSIX specification and return a value of integral type holding the number of seconds since the Epoch. Implementations in which time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038.
Quoting man 3 ftime:
This function is obsolete. Don't use it. If the time in seconds suffices, time(2) can be used; gettimeofday(2) gives microseconds;
clock_gettime(2) gives nanoseconds but is not as widely available.
Why should it not be used? What are the perils?
I understand that time(2), gettimeofday(2) and clock_gettime(2) can be used instead of ftime(3), but ftime(3) gives out exactly milliseconds, and this I find convenient, since milliseconds is the exact precision I need.
Such advice is intended to help you make your program portable and avoid various pitfalls. While the ftime function likely won't be removed from systems that have it, new systems your software gets ported to might not have it, and you may run into problems, e.g. if the system model of time zone evolves to something not conveniently expressible in the format of ftime's structure.
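If milliseconds are exactly what you need, here is a sketch of the clock_gettime() replacement the man page recommends (POSIX; older glibc versions need -lrt at link time):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    /* Same millisecond precision ftime() offered, from a current API. */
    long long ms = (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    printf("%lld ms since the Epoch\n", ms);
    return 0;
}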
In my code, I need SIGALRM to be raised after 2 seconds. However, when I use ualarm(2000000, 0) it doesn't work, while using ualarm with less than 1 second works, and alarm(2) works. Is there a reason why ualarm should sometimes be used over alarm? Is there any way to get ualarm to work for over 1 second?
ualarm() is obsolete, and in fact has been removed from POSIX. Do not use it.
If you insist on using it anyway, the Linux manual page for it notes this:
The type useconds_t is an unsigned integer type capable of holding integers in the range [0,1000000].
This, I guess, is a reference to the one-time POSIX specification. What you should take from it is that POSIX ualarm() was never guaranteed to handle a first argument larger than 1000000. It's unclear whether any implementations ever handled larger values, but the fact that POSIX specified it as it did suggests that at least some did not.
Is there any way to get ualarm to work for over 1 second?
Since there is not (any longer) any standard for ualarm(), the answer is necessarily implementation-dependent. Based on what you presented, I'm inclined to think that with your implementation, the answer is "no".
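For the original goal, whole seconds are exactly what alarm() handles, so one supported way to do it looks like this sketch:
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t fired = 0;

static void on_alarm(int sig)
{
    (void)sig;
    fired = 1;          /* only async-signal-safe work in a handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    alarm(2);           /* whole seconds: no 1000000-microsecond cap */
    pause();            /* sleep until the signal arrives */

    if (fired)
        puts("SIGALRM delivered after 2 seconds");
    return 0;
}
If you need sub-second resolution combined with intervals over a second, setitimer() or timer_create() take a seconds-plus-fraction pair and have no one-second cap.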
According to the documentation, struct tm *gmtime(const time_t *timer); is supposed to convert the time_t pointed to by timer to a broken-down time.
Now is there a reason why they decided to make the function take a pointer to the time_t instead of passing the time_t directly?
As far as I can see, time_t is an arithmetic type and should therefore have been possible to pass directly (I also find it reasonable that it would have fit into a long). Also, there doesn't seem to be any specific handling of a NULL pointer (which could have motivated passing a pointer).
Is there something I'm missing? Something still relevant today?
From what I've seen, it's more a historical quirk. When time.h was first introduced, and with it functions like time, it used a value that could not be returned (i.e., there was no long int yet). The standard defined an obscure time_t type that still leaves a lot of room for vendors to implement in weird ways (it has to be an arithmetic type, but no ranges or max values are defined, for example) - C11 standard:
7.27.1 Components of time
[...]
3. The types declared are size_t (described in 7.19); clock_t and time_t which are real types capable of representing times;
4. The range and precision of times representable in clock_t and time_t are implementation-defined.
size_t is described, in C11, as "is the unsigned integer type of the result of the sizeof operator;"
In light of this, your comment ("I find it reasonable that it would have fit into a long") is understandable, but it's incorrect, or at the very least inaccurate. POSIX, for example, requires time_t to be an integer or real floating type. A long fits that description, but so would a long double, which doesn't fit in a long. A more accurate assumption would be that the minimum size of time_t is a 32-bit int (until 2038 at least), but that time_t is preferably a 64-bit type.
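If you're curious what your own platform does, both properties can be probed with a small sketch that assumes nothing beyond ISO C:
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* (time_t)1 / 2 is 0 for an integer time_t and 0.5 for a
     * floating-point one, so the comparison tells them apart. */
    printf("time_t: %zu bytes, %s type\n",
           sizeof(time_t),
           (time_t)1 / 2 > 0 ? "floating-point" : "integer");
    return 0;
}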
Anyway, back in those days, if a value couldn't be returned, the only alternative was to pass the memory to the function (which is a sensible thing to do).
This is why we have functions like
time_t time(time_t *t);
It doesn't really make sense to set the same value twice: once by returning it, and once using indirection, but the argument is there because originally, the function was defined as
time(time_t *t)
Note the lack of a return type. If time were added today, it'd either be defined as void time( time_t * ), or, if the committee hadn't been drinking and had realized the absurdity of passing a pointer here, as time_t time( void );
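In practice the modern idiom ignores the pointer argument entirely, although functions like gmtime() still force the value into an addressable object, as this small sketch shows:
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Modern style: ignore the pointer argument entirely. */
    time_t now = time(NULL);

    /* gmtime() still takes a pointer, so the value must live in an
     * object anyway; this is the consistency the API grew up with. */
    struct tm *utc = gmtime(&now);
    if (utc != NULL)
        printf("UTC hour: %d\n", utc->tm_hour);
    return 0;
}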
Looking at the C11 standard regarding the time function, it seems that the emphasis of the functions' behaviour is on the return value. The pointer argument is mentioned briefly but it certainly isn't given any significance:
7.27.2.4 The time function
1. Synopsis
#include <time.h>
time_t time(time_t *timer);
2. Description
The time function determines the current calendar time. The encoding of the value is unspecified.
3. Returns
The time function returns the implementation's best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available. If timer is not a null pointer, the return value is also assigned to the object it points to.
The main thing we can take from this is that, as far as the standard goes, the pointer is just a secondary way of getting at the return value of the function. Given that the return value indicates something went wrong ((time_t)(-1)), I'd argue that we should treat this function as though it was meant to be time_t time( void ).
But because the old implementation is still kicking around, and we've all gotten used to it, it's one of those things that should've been marked for deprecation; since it never really was, it will probably be part of the C language forever...
The only other reasons why functions take time_t through a (const) pointer like this are, as far as I know, historical, and to keep a consistent API across time.h.
TL;DR
Most time.h function use pointers to the time_t type for historical, compatibility, and consistency reasons.
I knew I read that stuff about the early days of the time function before, here's a related SO answer
The reason is probably that in the old days a parameter could be no larger than an int, so you couldn't pass a long by value and had to pass a pointer instead. The definition of the function never changed.