MinGW localtime_r works in one time zone, fails in another - c

I have the following file test.c:
#define _POSIX_THREAD_SAFE_FUNCTIONS
#include <errno.h>
#include <time.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>
int main(int argc, char **argv) {
    struct tm t1, t2, t3;
    time_t w1, w2, w3;
    memset(&t1, 0, sizeof(struct tm));
    memset(&t2, 0, sizeof(struct tm));
    memset(&t3, 0, sizeof(struct tm));
    w1 = 0;
    errno = 0;
    localtime_r(&w1, &t1);
    printf("localtime_r: errno=%d\n", errno);
    errno = 0;
    w2 = mktime(&t1);
    printf("mktime: errno=%d result=%" PRId64 "\n", errno, (int64_t)w2);
    errno = 0;
    localtime_r(&w2, &t2);
    printf("localtime_r: errno=%d\n", errno);
    errno = 0;
    w3 = mktime(&t2);
    printf("mktime: errno=%d result=%" PRId64 "\n", errno, (int64_t)w3);
    errno = 0;
    localtime_r(&w3, &t3);
    printf("localtime_r: errno=%d\n", errno);
    printf("sizeof(time_t)=%" PRId64 "\n", (int64_t)sizeof(time_t));
    printf("W1=%" PRId64 " W2=%" PRId64 " W3=%" PRId64 "\n", (int64_t)w1, (int64_t)w2, (int64_t)w3);
    printf("Y1=%d Y2=%d Y3=%d\n", t1.tm_year, t2.tm_year, t3.tm_year);
    return 0;
}
I compile it like this:
i686-w64-mingw32-gcc -D__MINGW_USE_VC2005_COMPAT=1 -o test.exe test.c
Note, i686-w64-mingw32-gcc --version reports 8.3-win32 20190406
This is running in a Docker image of Ubuntu 19.04, using the MinGW version
that comes with Ubuntu 19.04 (it says version 6.0.0-3).
I have a Windows 10 VM (Version 1809 OS Build 17763.379).
By default, time zone is set to US Pacific Time (UTC-8).
I copy test.exe to this VM and run it there.
It prints:
localtime_r: errno=0
mktime: errno=0 result=0
localtime_r: errno=0
mktime: errno=0 result=0
localtime_r: errno=0
sizeof(time_t)=8
W1=0 W2=0 W3=0
Y1=69 Y2=69 Y3=69
That's the expected result. (At UTC midnight on 1 Jan 1970, it was still 1969 in UTC-8.)
I change the Windows time zone to UTC+10 (Canberra, Melbourne, Sydney).
Run it again. It prints:
localtime_r: errno=0
mktime: errno=0 result=47244640256
localtime_r: errno=22
mktime: errno=22 result=4294967295
localtime_r: errno=0
sizeof(time_t)=8
W1=0 W2=47244640256 W3=4294967295
Y1=70 Y2=-1 Y3=206
It seems the mktime() call returns an invalid value in the UTC+10 time zone, but the correct value of 0 in the UTC-8 time zone.
Why does this code work in one timezone but break in another?
Note, this is only a problem with -D__MINGW_USE_VC2005_COMPAT=1 to enable
64-bit time_t. If I leave that out, which means 32-bit time_t, then the code
works in both timezones. (But, 32-bit time_t is not a good idea, because it breaks in the year 2038, and that's less than twenty years away now.)

I worked out the cause of the problem. Sander De Dycker's suggestion, that mktime is returning a 32-bit value, is correct.
The problem is basically this: the MSVCRT defines three mktime functions: _mktime32 for 32-bit time_t, _mktime64 for 64-bit time_t, and _mktime which is a legacy alias for _mktime32.
_mingw.h does a #define _USE_32BIT_TIME_T in 32-bit code unless you #define __MINGW_USE_VC2005_COMPAT to disable that.

Once you have #define __MINGW_USE_VC2005_COMPAT, localtime_s is defined as an inline function which calls _localtime64_s, and #define _POSIX_THREAD_SAFE_FUNCTIONS defines localtime_r as an inline function which calls localtime_s. However, mktime is still 32-bit.

To get a 64-bit mktime, you need to also #define __MSVCRT_VERSION__ 0x1400 (or higher). Once you do that, mktime becomes an inline function which calls _mktime64. Before that, mktime is a plain function declaration which links to the legacy 32-bit mktime.
So #define __MINGW_USE_VC2005_COMPAT 1 without #define __MSVCRT_VERSION__ 0x1400 (or -D equivalent) gives you a localtime_r with 64-bit time_t, but a mktime with 32-bit time_t, which obviously won't work. Even worse than that, the actual implementation of the mktime symbol is returning a 32-bit time_t, but the function declaration is for a 64-bit time_t, which is what causes the junk in the upper 32-bits.
As to the different behaviour in different time zones, I don't have a complete explanation, but I think the reason is likely as follows: when a function actually returns a 32-bit value but is incorrectly declared to return a 64-bit value, the upper 32 bits of the return value hold whatever junk was left over from previous calculations. So any difference in those calculations, or a slightly different code path, may produce different junk. With a UTC-8 time zone, for whatever reason, the junk happens to be zero, so the code (despite its incorrectness) appears to work. With a UTC+10 time zone, the junk turns out to be non-zero, which breaks everything downstream.

Related

Compiling old C code Y2038 conform still results in 4 byte variables

According to this overview in order to compile Y2038 conform old code, we just need to add the preprocessor macro __USE_TIME_BITS64 to gcc, but that does not seem to work on an ARMv7 board with Debian 12 (bookworm):
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    struct stat sb;
    printf("sizeof time_t: %zu\n", sizeof(time_t));
    printf("sizeof stat timestamp: %zu\n", sizeof(sb.st_atime));
    return 0;
}
time_t is still 4 bytes:
root@debian:~# gcc -D__USE_TIME_BITS64 time.c -o time
root@debian:~# ./time
sizeof time_t: 4
sizeof stat timestamp: 4
root@debian:~#
glibc is 2.33, what am I doing wrong here?
According to this post (which is getting a little old now, and some parts of which are probably no longer relevant):
... defining _TIME_BITS=64 would cause all time functions to use 64-bit times by default. The _TIME_BITS=64 option is implemented by transparently mapping the standard functions and types to their internal 64-bit variants. Glibc would also set __USE_TIME_BITS64, which user code can test for to determine if the 64-bit variants are available.
Presumably, this includes making time_t 64 bit.
So if your version of glibc supports this at all (glibc only gained _TIME_BITS=64 support in 2.34, so 2.33 is too old), it looks like you're setting the wrong macro. You want:
-D_TIME_BITS=64

How to get the timezone in C99?

I compile C using the C99 version, and I want to try and output the timezone of the given time.
The IDE I use gives GMT+0 as the timezone, but I want to somehow output it with struct tm.
So I followed the instructions from this answer and made this program:
#include <stdio.h>
#include <time.h>
int main()
{
    time_t present = time(NULL);
    struct tm now = *localtime(&present);
    now.tm_mon += 1;
    now.tm_year += 1900;
    struct tm t = {0};
    localtime_r(&present, &t);
    printf("%i/%i/%i %i:%i:%i from %s\n", now.tm_mon, now.tm_mday, now.tm_year, now.tm_hour, now.tm_min, now.tm_sec, t.tm_zone);
}
And it seems like I got 2 errors here:
implicit declaration of function 'localtime_r' is invalid in C99
no member named 'tm_zone' in 'struct tm'
So I checked the IDE Manual, and find that localtime_r actually exists, and is part of the <time.h> library.
So now I'm wondering if the IDE's confused or something. I don't know how to fix it either.
This might get closed as it might "need debugging details", but read more.
Because of this whole situation, how can I get the timezone (maybe even the offset) in C99 and get it to be outputted with printf()?
First, localtime_r is not part of the standard library - it's an extension offered by some implementations, and by default its declaration is not exposed. To make it available, define the macro _POSIX_SOURCE before including time.h. An easy way to do that is on the command line, like so:
gcc -o tz -D_POSIX_SOURCE -std=c11 -pedantic -Wall -Werror tz.c
otherwise, just define it in your source before including time.h:
#define _POSIX_SOURCE
#include <stdio.h>
#include <time.h>
Secondly, if all you're interested in is the local time zone then there's an easier way to do this - get the current time:
time_t t = time( NULL );
then use both localtime and gmtime to get the broken down time for the current time zone and UTC:
struct tm *local = localtime( &t );
struct tm *zulu = gmtime( &t );
Then compute the difference between the tm_hour members of local and zulu, and that's your time zone.
int tz = zulu->tm_hour - local->tm_hour;
You'll want to check local->tm_isdst to account for daylight savings, but that should at least get you started.

CLOCK_REALTIME nanosecond precision support in kernel

I wrote a simple program to determine if I can get nanosecond precision on my system, which is a RHEL 5.5 VM (kernel 2.6.18-194).
// cc -g -Wall ntime.c -o ntime -lrt
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
    struct timespec spec;
    printf("CLOCK_REALTIME - \"Systemwide realtime clock.\":\n");
    clock_getres(CLOCK_REALTIME, &spec);
    printf("\tprecision: %ldns\n", spec.tv_nsec);
    clock_gettime(CLOCK_REALTIME, &spec);
    printf("\tvalue : %010ld.%09ld\n", spec.tv_sec, spec.tv_nsec);
    printf("CLOCK_MONOTONIC - \"Represents monotonic time. Cannot be set.\":\n");
    clock_getres(CLOCK_MONOTONIC, &spec);
    printf("\tprecision: %ldns\n", spec.tv_nsec);
    clock_gettime(CLOCK_MONOTONIC, &spec);
    printf("\tvalue : %010ld.%09ld\n", spec.tv_sec, spec.tv_nsec);
    return 0;
}
A sample output:
CLOCK_REALTIME - "Systemwide realtime clock.":
precision: 999848ns
value : 1504781052.328111000
CLOCK_MONOTONIC - "Represents monotonic time. Cannot be set.":
precision: 999848ns
value : 0026159205.299686941
So REALTIME gives me the local time and MONOTONIC the system's uptime. Both clocks report a resolution of roughly one millisecond (999848 ns ≈ 1 ms), even though MONOTONIC's value carries what look like nanosecond digits, which is confusing.
man clock_gettime states:
CLOCK_REALTIME_HR
High resolution version of CLOCK_REALTIME.
However, grep -R CLOCK_REALTIME_HR /usr/include/ | wc -l returns 0 and trying to compile results in error: ‘CLOCK_REALTIME_HR’ undeclared (first use in this function).
I was trying to determine if I could get the local time with nanosecond precision, but either my code has a bug or maybe this feature isn't entirely supported in 5.5 (or the VM's HPET is off, or something else).
Can I get local time in nanoseconds on this system? What am I doing wrong?
EDIT
Well the answer seems to be No.
While nanosecond precision can be achieved, the system doesn't guarantee nanosecond accuracy in this scenario (here's a clear answer on the difference rather than a rant). Typical COTS hardware doesn't really handle it (another answer in the right direction).
I'm still curious why the clocks report the same clock_getres resolution, yet MONOTONIC yields what seem to be nanosecond values while REALTIME yields microseconds.
RHEL 5 is really ancient at this point; you should consider upgrading. On a newer system (Ubuntu 16.04) your program produces:
CLOCK_REALTIME - "Systemwide realtime clock.":
precision: 1ns
value : 1504783164.686220185
CLOCK_MONOTONIC - "Represents monotonic time. Cannot be set.":
precision: 1ns
value : 0000537257.257923964

Why does the strptime C function change the structure?

Incomprehensible behavior of the function strptime():
#define _XOPEN_SOURCE
#include <stdio.h>
#include <time.h>
double getPeriod(char *dateStart, char *dateStop) {
    struct tm tmStart, tmStop;
    time_t timeStampStart, timeStampStop;
    strptime(dateStart, "%Y-%m-%d %H:%M:%S", &tmStart);
    strptime(dateStop, "%Y-%m-%d %H:%M:%S", &tmStop);
    timeStampStart = mktime(&tmStart);
    timeStampStop = mktime(&tmStop);
    printf("%d\t%d\n", tmStart.tm_hour, tmStop.tm_hour);
    return difftime(timeStampStop, timeStampStart);
}
int main()
{
    getPeriod("2016-12-05 18:14:35", "2016-12-05 18:18:34");
    return 0;
}
Output:
17 18
Why does this happen?
Compiler gcc (GCC) 6.2.1
OS Linux
tmStart and tmStop are not initialized, so some fields will be uninitialized when passed to mktime. Thus, the behavior is technically undefined.
From the strptime man page (note the first two sentences):
In principle, this function does not initialize tm but only stores the values specified. This means that tm should be initialized before the call. Details differ a bit between different UNIX systems. The glibc implementation does not touch those fields which are not explicitly specified, except that it recomputes the tm_wday and tm_yday field if any of the year, month, or day elements changed.

strftime not giving correct output with %C option - Solaris 10

We are using /usr/xpg4/bin as default path in our profile.
We are printing the output of variable "curr_date" here:
lt = time(NULL);
ltime = localtime(&lt);
strftime(curr_date,sizeof(curr_date),"%m/%d/%y%C",ltime);
We get the output as "06/27/13Thu Jun 27 02:39:34 PDT" instead of "06/27/1320".
Do you know what should be the format specifiers that should work here?
Thanks
The use of /usr/xpg4/bin in your $PATH only selects the standard compliant commands, it does not change function calls in your programs to use the standards compliant versions.
As described in the Solaris standards(5) man page there are various #defines and compiler flags you need to use to specify compliance for various standards.
For instance, taking your code snippet and expanding it to this standalone test program:
#include <sys/types.h>
#include <time.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    time_t lt;
    struct tm *ltime;
    char curr_date[80];
    lt = time(NULL);
    ltime = localtime(&lt);
    strftime(curr_date, sizeof(curr_date), "%m/%d/%y%C", ltime);
    printf("%s\n", curr_date);
    return 0;
}
Then compiling with the different flags shows the different behavior:
% cc -o /tmp/strftime /tmp/strftime.c
% /tmp/strftime
06/30/13Sun Jun 30 20:28:00 PDT 2013
% cc -xc99 -D_XOPEN_SOURCE=600 -o /tmp/strftime /tmp/strftime.c
% /tmp/strftime
06/30/1320
The default mode is backwards compatible with the traditional Solaris code, the second form requests compliance with the C99 and XPG6 (Unix03) standards.
Have a good look at the code between the call to strftime() and the printing of curr_date. You're overwriting curr_date somewhere, because the start of what you print is correct. There might also be something fishy with the memory management of curr_date: how is it defined, did you allocate memory for it?
Set a breakpoint right after strftime() and you'll see it holds the expected/correct string.
