The time function in the header time.h is defined by POSIX to return a time_t, which can evidently be a signed integer or some kind of floating-point number.
http://en.cppreference.com/w/c/chrono/time
The function, however, returns (time_t)(-1) on error.
Under what circumstances can time fail?
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate, which rules out one potential cause of failure.
The time() function is actually defined by ISO C, to which POSIX mostly defers, except that it may place further restrictions on behaviour and/or properties (like an eight-bit byte, for example).
And, since the ISO C standard doesn't specify how time() may fail(a), the list of possibilities is not limited in any way:
One way in which it may fail is in the embedded arena. It's quite possible that your C program may be running on a device with no real-time clock or other clock hardware (even a counter), in which case no time would be available.
Or maybe the function detects bad clock hardware that's constantly jumping all over the place and is therefore unreliable.
Or maybe you're running in a real-time environment where accesses to the clock hardware are time-expensive so, if it detects you're doing it too often, it decides to start failing so your code can do what it's meant to be doing :-)
The possibilities are literally infinite and, of course, I mean 'literally' in a figurative sense rather than a literal one :-)
POSIX itself calls out explicitly that it will fail if it detects the value won't fit into a time_t variable:
The time() function may fail if: [EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
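Defensively, then, a caller can at least check for the documented sentinel. A minimal sketch, assuming a hosted environment and an integer time_t (which ISO C does not actually guarantee):

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    if (now == (time_t)-1) {
        /* No clock hardware, EOVERFLOW, or any other implementation-specific woe */
        fprintf(stderr, "time() failed\n");
        return 1;
    }
    /* Printing like this assumes an integer time_t, which ISO C does not guarantee */
    printf("seconds since the Epoch (on POSIX): %lld\n", (long long)now);
    return 0;
}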
And, just on your comment:
Based on the signature, time_t time( time_t *arg ), it seems like the function shouldn't allocate.
You need to be circumspect about this. Anything not mandated by the standards is totally open to interpretation. For example, I can envisage a bizarre implementation that allocates space for an NTP request packet to go out to time.nist.somewhere.org so as to ensure all times are up to date even without an NTP client :-)
(a) In fact, it doesn't even specify what the definition of time_t is, so it's unwise to limit it to an integer or floating-point value; it could be the string representation of the number of fortnights since the big bang :-) All it requires is that it's usable by the other time.h functions and that (time_t)(-1) is a representable value for reporting failure.
POSIX does state that it represents number of seconds (which ISO doesn't) but places no other restrictions on it.
I can imagine several causes:
the hardware timer isn't available, because the hardware doesn't support it.
the hardware timer just failed (hardware error, timer registers cannot be accessed for some reason)
arg is not null, but points to some illegal location. Instead of crashing, some implementations could detect an illegal pointer (or catch the resulting SEGV) and return an error instead.
from the provided link: "Implementations in which time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038." So after 2^31 - 1 seconds since the epoch (1 January 1970), the return value of time overflows (that is, unless the hardware masks the problem by silently wrapping around as well); see the sketch below.
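To see where that boundary falls, here's a small demonstration; it assumes an implementation that uses the POSIX epoch and seconds encoding, in which case the output should read 2038-01-19 03:14:07 UTC:

#include <stdio.h>
#include <time.h>

int main(void) {
    /* 2^31 - 1 seconds: the last instant a signed 32-bit time_t can represent */
    time_t last = (time_t)2147483647;
    struct tm *utc = gmtime(&last);
    char buf[64];
    if (utc != NULL && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc))
        printf("32-bit time_t rolls over just after %s\n", buf);
    return 0;
}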
Related
I'm working on a portable library for baremetal embedded applications.
Assume that I have a timer ISR that increments a counter and that, in the main loop, this counter is read in a most certainly non-atomic load.
I'm trying to ensure load consistency (i.e. that I'm not reading garbage because the load was interrupted and the value changed) without resorting to disabling interrupts. It does not matter if the value changed after reading the counter as long as the read value is proper. Does this do the trick?
uint32_t read(volatile uint32_t *var) {
    uint32_t value;
    do {
        value = *var;
    } while (value != *var);
    return value;
}
It's highly unlikely that there's any sort of portable solution for this, not least because plenty of C-only platforms are really C-only and use one-off compilers, i.e. nothing mainstream and modern-standards-compliant like gcc or clang. So if you're truly targeting entrenched C, then it's all quite platform-specific and not portable - to the point where "C99" support is a lost cause. The best you can expect for portable C code is ANSI C support - referring to the very first non-draft C standard published by ANSI. That, unfortunately, is still the common denominator that major vendors get away with. I mean: Zilog somehow gets away with it, even though they are now but a division of Littelfuse, formerly a division of IXYS Semiconductor that Littelfuse had acquired.
For example, here are some compilers where there's only a platform-specific way of doing it:
Zilog eZ8 using a "recent" Zilog C compiler (anything 20 years old or less is OK): 8-bit value read-modify-write is atomic. 16-bit operations where the compiler generates word-aligned word instructions like LDWX, INCW, DECW are atomic as well. If the read-modify-write otherwise fits into 3 instructions or less, you'd prepend the operation with asm("\tATM");. Otherwise, you'd need to disable the interrupts: asm("\tPUSHF\n\tDI");, and subsequently re-enable them: asm("\tPOPF");.
Zilog ZNEO is a 16-bit platform with 32-bit registers, and read-modify-write accesses on registers are atomic, but a memory read-modify-write usually round-trips via a register and takes 3 instructions - thus prepend the R-M-W operation with asm("\tATM").
Zilog Z80 and eZ80 require wrapping the code in asm("\tDI") and asm("\tEI"), although this is valid only when it's known that the interrupts are always enabled when your code runs. If they may not be enabled, then there's a problem since Z80 does not allow reading the state of IFF1 - the interrupt enable flip-flop. So you'd need to save a "shadow" of its state somewhere, and use that value to conditionally enable interrupts. Unfortunately, eZ80 does not provide an interrupt controller register that would allow access to IEF1 (eZ80 uses the IEFn nomenclature instead of IFFn) - so this architectural oversight is carried over from the venerable Z80 to the "modern" one.
Those aren't necessarily the most popular platforms out there, and many people don't bother with Zilog compilers due to their fairly poor quality (low enough that yours truly had to write an eZ8-targeting compiler*). Yet such odd corners are the mainstay of C-only code bases, and library code has no choice but to accommodate this, if not directly then at least by providing macros that can be redefined with platform-specific magic.
E.g. you could provide empty-by-default macros MYLIB_BEGIN_ATOMIC(vector) and MYLIB_END_ATOMIC(vector) that would be used to wrap code that requires atomic access with respect to a given interrupt vector (or e.g. -1 for all interrupt vectors). Naturally, replace MYLIB_ with a "namespace" prefix specific to your library. A sketch of the defaults follows.
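For instance (every name below is illustrative, not an established API) - the empty defaults, a possible platform override, and how library code would use them:

#include <stdint.h>

/* Illustrative defaults - a port overrides these before including library headers */
#ifndef MYLIB_BEGIN_ATOMIC
#define MYLIB_BEGIN_ATOMIC(vector) /* no-op on platforms that need no masking */
#endif
#ifndef MYLIB_END_ATOMIC
#define MYLIB_END_ATOMIC(vector)   /* no-op by default */
#endif

/* A port for, say, a Z80-style target with interrupts known to be enabled
 * (see the caveats above) could define instead:
 *   #define MYLIB_BEGIN_ATOMIC(vector) asm("\tDI")
 *   #define MYLIB_END_ATOMIC(vector)   asm("\tEI")
 */

static volatile uint32_t counter;  /* incremented by the timer ISR */

uint32_t counter_snapshot(void) {
    uint32_t snapshot;
    MYLIB_BEGIN_ATOMIC(-1);  /* -1: atomic with respect to all vectors */
    snapshot = counter;      /* the access that must not be torn */
    MYLIB_END_ATOMIC(-1);
    return snapshot;
}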
To enable platform-specific optimizations such as ATM vs DI on "modern" Zilog platforms, an additional argument could be provided to the macro to separate the presumed "short" sequences that the compiler is apt to generate three-instruction sequences for from longer ones. Such micro-optimization usually requires an assembly output audit (easily automatable) to verify the assumption about instruction sequence length, but at least the data to drive the decision would be available, and the user would have a choice of using it or ignoring it.
*If some lost soul wants to know anything bordering on the arcane re. eZ8 - ask away. I know entirely too much about that platform, in details so gory that even modern Hollywood CG and SFX would have a hard time reproducing the true depth of the experience on-screen. I'm also possibly the only one out there running the 20MHz eZ8 parts occasionally at 48MHz clock - as sure a sign of demonic possession as the multiverse allows. If you think it's outrageous that such depravity makes it into production hardware - I'm with you. Alas, business case is business case, laws of physics be damned.
Are you running on any systems that have uint32_t larger than a single assembly-instruction word read/write size? If not, the load from memory should be a single instruction and therefore atomic (assuming the bus is also word-sized...). You get in trouble when the compiler breaks it up into multiple smaller reads/writes. Otherwise, I've always had to resort to DI/EI. You could have the user configure your library so that it knows whether atomic instructions or a minimum 32-bit word size are available, to prevent interrupt twiddling. If you have these guarantees, you don't need the verification code.
To answer the question though: on a system that must split the reads/writes, your code is not safe. Imagine a case where you read your value correctly in the "do" part, but the value gets split during the "while" check. Further, in an extreme case, this is an infinite loop. For complete safety, you'd need a retry count and an error condition to prevent that (see the sketch after the example below). The loop case is extreme for sure, but I'd want the guard just in case. That of course makes the run time longer.
Let's show a failure case as an example - we'll use 16-bit numbers on a machine that reads 8-bit values at a time, to make it easier to follow:
Value to read from memory *var is 0x1234
Read 8-bit 0x12
*var becomes 0x5678
Read 8-bit 0x78 - value is now 0x1278 (invalid)
*var becomes 0x1234
Verification step reads 8-bit 0x12
*var becomes 0x5678
Verification reads 8-bit 0x78
Verification confirms the value 0x1278, but this is an error: *var only ever held 0x1234 and 0x5678.
Another failure case would be when *var just happens to change at the same frequency as your code is running, which could lead to an infinite loop as each verification fails. Or even if it did break out eventually, this would be a very hard to track performance bug.
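To make the retry-count idea concrete, here's a minimal sketch (READ_MAX_RETRIES and the bool error convention are made up for illustration); note that the bound only stops the loop from spinning forever - it cannot repair the torn-read blind spot shown above:

#include <stdint.h>
#include <stdbool.h>

#define READ_MAX_RETRIES 8  /* illustrative bound - tune for your platform */

/* Returns true and stores a snapshot in *out, or false if the value kept
 * changing for READ_MAX_RETRIES attempts. */
bool read_u32(volatile uint32_t *var, uint32_t *out) {
    for (int i = 0; i < READ_MAX_RETRIES; ++i) {
        uint32_t value = *var;
        if (value == *var) {
            *out = value;
            return true;
        }
    }
    return false;  /* error condition: caller decides how to recover */
}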
Quoting man 3 ftime:
This function is obsolete. Don't use it. If the time in seconds suffices, time(2) can be used; gettimeofday(2) gives microseconds; clock_gettime(2) gives nanoseconds but is not as widely available.
Why should it not be used? What are the perils?
I understand that time(2), gettimeofday(2) and clock_gettime(2) can be used instead of ftime(3), but ftime(3) gives exactly milliseconds, and I find that convenient, since milliseconds is exactly the precision I need.
Such advice is intended to help you make your program portable and avoid various pitfalls. While the ftime function likely won't be removed from systems that have it, new systems your software gets ported to might not have it, and you may run into problems, e.g. if the system model of time zone evolves to something not conveniently expressible in the format of ftime's structure.
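That said, millisecond precision is easy to recover from the recommended replacements. A minimal sketch using POSIX clock_gettime (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    /* Fold seconds and nanoseconds into one millisecond count */
    long long ms = (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    printf("%lld ms since the Epoch\n", ms);
    return 0;
}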
According to the documentation, struct tm *gmtime(const time_t *timer); is supposed to convert the time_t pointed to by timer to a broken-down time.
Now is there a reason why they decided to make the function take a pointer to the time_t instead of passing the time_t directly?
As far as I can see, time_t is of arithmetic type, and it should therefore have been possible to pass it directly (also, I find it reasonable that it would have fit into a long). Also, there doesn't seem to be any specific handling of a NULL pointer (which could have motivated passing a pointer).
Is there something I'm missing? Something still relevant today?
From what I've seen, it's more of a historical quirk. When time.h was first introduced, and with it functions like time, it used a value that could not be returned directly (i.e. no long int, etc.). The standard defined an obscure time_t type that still leaves a lot of room for vendors to implement it in weird ways (it has to be an arithmetic type, but no ranges or maximum values are defined, for example) - C11 standard:
7.27.1 Components of time
[...]
3. The types declared are size_t (described in 7.19); clock_t and time_t, which are real types capable of representing times;
4. The range and precision of times representable in clock_t and time_t are implementation-defined.
size_t is described, in C11, as "the unsigned integer type of the result of the sizeof operator".
In light of this, your comment ("I find it reasonable that it would have fit into a long") is understandable, but it's incorrect, or at the very least inaccurate. POSIX, for example, requires time_t to be an integer or real floating type. A long fits that description, but so would a long double, which doesn't fit in a long. A more accurate assumption would be that the minimum size of time_t is a 32-bit int (until 2038, at least), but that time_t is preferably a 64-bit type.
Anyway, back in those days, if a value couldn't be returned, the only alternative was to pass the memory to the function (which is a sensible thing to do).
This is why we have functions like
time_t time(time_t *t);
It doesn't really make sense to set the same value twice: once by returning it, and once using indirection. But the argument is there because, originally, the function was defined as

time(time_t *t)

Note the lack of a return type. If time were added today, it'd either be defined as void time( time_t * ), or, if the committee hadn't been drinking and had realized the absurdity of passing a pointer here, they'd define it as time_t time( void );
Looking at the C11 standard regarding the time function, it seems that the emphasis of the functions' behaviour is on the return value. The pointer argument is mentioned briefly but it certainly isn't given any significance:
7.27.2.4 The time function
1. Synopsis
#include <time.h>
time_t time(time_t *timer);
2. Description
The time function determines the current calendar time. The encoding of the value is unspecified.
3. Returns
The time function returns the implementation’s best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available. If timer is not a null pointer, the return value is also assigned to the object it points to.
The main thing we can take from this is that, as far as the standard goes, the pointer is just a secondary way of getting at the return value of the function. Given that the return value indicates something went wrong ((time_t)(-1)), I'd argue that we should treat this function as though it was meant to be time_t time( void ).
But because the old implementation is still kicking around and we've all gotten used to it, it's one of those things that should've been marked for deprecation; because it never really was, it will probably be part of the C language forever...
The only other reason why functions use time_t like this (through a const pointer) is either historical or to maintain a consistent API across time.h. AFAIK, that is.
TL;DR
Most time.h functions use pointers to the time_t type for historical, compatibility, and consistency reasons.
I knew I had read that stuff about the early days of the time function before; here's a related SO answer:
The reason is probably that in the old days a parameter could be no larger than an integer, so you couldn't pass a long and had to do that as a pointer. The definition of the function never changed.
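For completeness, here's what living with the pointer-based API looks like in practice - the & on a local copy is the only visible cost of the historical signature:

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    if (now == (time_t)-1)
        return 1;
    /* gmtime takes a pointer, so the caller passes the address of a local copy */
    struct tm *utc = gmtime(&now);
    if (utc != NULL)
        printf("UTC hour: %d\n", utc->tm_hour);
    return 0;
}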
Can the time_t time(time_t *t) function ever return failure if the argument passed is always NULL?
If the call is time(NULL), do we still need to check for the return value?
The only documented error code is EFAULT, which relates to the pointer being invalid.
Yes. time has a documented may fail case:
The time() function may fail if:
[EOVERFLOW] The number of seconds since the Epoch will not fit in an object of type time_t.
Source: http://pubs.opengroup.org/onlinepubs/9699919799/functions/time.html
Expect this to happen in practice in about 22 years, no sooner, and not on 64-bit systems or 32-bit ones that utilize a 64-bit time_t.
Also, the presence of any shall fail or may fail cases also allows for implementation-defined errors, though their existence would be a serious quality-of-implementation flaw.
EFAULT is a non-issue/non-existent because it only happens when your program has undefined behavior.
So despite all this, in the real world, time is not actually going to fail.
Can time(NULL) ever return failure?
No. The C standard says:
C11: 7.27.2.4:
The time function returns the implementation’s best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available.
I've checked on RHEL, SLES and Ubuntu; the man 2 page yields the same (relevant) thing:
time() returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds.
If t is non-NULL, the return value is also stored in the memory pointed to by t.
Anyway, going back to the original questions:

Q0: Can the time_t time(time_t *t) function ever return failure if the argument passed is always NULL?
A/R0: Yes, if some very special event occurs (memory full, and so on...).

Q1: If the call is time(NULL), do we still need to check for the return value?
A/R1: The practical answer is "no", you don't have to; the fact that the function could return something relevant is a different story. After all, why call a function if there's no need to?

Q2: The only documented error code is EFAULT, which relates to the pointer being invalid.
A/R2: You don't have to worry about that error code; as you said, you're passing NULL, so there's no problem.
In the C standard, time() can return (time_t)(-1) if "the calendar time is not available". In the 1999 standard, for example, that is in Section 7.23.2.4, para 3.
Although that wording is less than specific, I would suggest it represents an error condition. Presumably an implementation can return (time_t)(-1) if it can't access the system clock, can't sensibly interpret the data it gets, etc.
R's answer describes the situation for the POSIX spec.
(Not considering POSIX degraded-mode functionality.)
If the underlying real-time clock subsystem had a hardware fault, like loss of clock integrity (battery) while the unit was off on an independent system, the return value of time() could certainly be (time_t)-1. In that case, it makes no difference what the passed-in time_t* was.
I'm using this function:
__delay_cycles(var);
and I get the following error:
Argument to _delay_cycles must be a constant expression
Fair enough! But how can I get around this? I have to delay my program by a different value every time. I receive my data over RS232 and store it in an int variable. I have to use this function and I can't modify its structure. I'm using an ATmega16.
One suggestion that immediately springs to mind is to call __delay_cycles() with a constant argument, but do it in a loop, and vary the number of loop iterations.
The loop will add some overhead, so if you need precision you'll have to subtract the (constant) cost of one loop iteration from the (constant) argument to __delay_cycles().
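A minimal sketch of that idea, for a compiler providing the __delay_cycles intrinsic (CHUNK and LOOP_OVERHEAD are illustrative guesses; measure the real loop cost in the generated assembly before trusting the timing):

/* Burn roughly 'cycles' CPU cycles using a constant-argument inner delay */
#define CHUNK         100  /* cycles consumed per loop iteration */
#define LOOP_OVERHEAD   8  /* assumed cost of the loop bookkeeping itself */

void delay_cycles_var(unsigned int cycles) {
    while (cycles >= CHUNK) {
        __delay_cycles(CHUNK - LOOP_OVERHEAD);  /* still a constant expression */
        cycles -= CHUNK;
    }
}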
Don't use that function. It is apparently some non-standard Texas junk that doesn't behave according to the rules of the C language. Write your own delay function using on-chip timers instead, or find one on the net. It takes less than an hour of work, which is no doubt less time than you will spend pondering the meaning of various non-standard junk.
The real reason why the embedded industry has so many crappy compilers is that embedded programmers accept being constantly fed non-standard junk, even when there is no reason whatsoever to deviate from the C standard.
if (var == 1)
    __delay_cycles(1);
else if (var == 2)
    __delay_cycles(2);
else if (var == 3)
    __delay_cycles(3);

...and so on.