CLOCK_REALTIME nanosecond precision support in kernel

I wrote a simple program to determine if I can get nanosecond precision on my system, which is a RHEL 5.5 VM (kernel 2.6.18-194).
// cc -g -Wall ntime.c -o ntime -lrt
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    struct timespec spec;

    printf("CLOCK_REALTIME - \"Systemwide realtime clock.\":\n");
    clock_getres(CLOCK_REALTIME, &spec);
    printf("\tprecision: %ldns\n", spec.tv_nsec);
    clock_gettime(CLOCK_REALTIME, &spec);
    /* %09ld zero-pads the fraction so sub-100ms values print correctly */
    printf("\tvalue : %010ld.%09ld\n", spec.tv_sec, spec.tv_nsec);

    printf("CLOCK_MONOTONIC - \"Represents monotonic time. Cannot be set.\":\n");
    clock_getres(CLOCK_MONOTONIC, &spec);
    printf("\tprecision: %ldns\n", spec.tv_nsec);
    clock_gettime(CLOCK_MONOTONIC, &spec);
    printf("\tvalue : %010ld.%09ld\n", spec.tv_sec, spec.tv_nsec);
    return 0;
}
A sample output:
CLOCK_REALTIME - "Systemwide realtime clock.":
precision: 999848ns
value : 1504781052.328111000
CLOCK_MONOTONIC - "Represents monotonic time. Cannot be set.":
precision: 999848ns
value : 0026159205.299686941
So REALTIME gives me the local time and MONOTONIC the system's uptime. Both clocks seem to have roughly millisecond precision (999848 ns ≈ 1 ms), yet MONOTONIC's value carries full nanosecond digits while REALTIME's fraction ends in trailing zeros, which is confusing.
man clock_gettime states:
CLOCK_REALTIME_HR
High resolution version of CLOCK_REALTIME.
However, grep -R CLOCK_REALTIME_HR /usr/include/ | wc -l returns 0, and trying to compile with it results in error: ‘CLOCK_REALTIME_HR’ undeclared (first use in this function).
I was trying to determine if I could get the local time with nanosecond precision, but either my code has a bug or maybe this feature isn't entirely supported in 5.5 (or the VM's HPET is off, or something else).
Can I get the local time in nanoseconds on this system? What am I doing wrong?
EDIT
Well, the answer seems to be no.
While nanosecond precision can be achieved, the system doesn't guarantee nanosecond accuracy in this scenario (here's a clear answer on the difference rather than a rant). Typical COTS hardware doesn't really handle it (another answer in the right direction).
I'm still curious why the clocks report the same clock_getres resolution, yet MONOTONIC yields what seem to be nanosecond values while REALTIME yields microseconds.
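One way to probe this is to measure the granularity each clock actually delivers, since clock_getres() only reports a nominal resolution. A minimal sketch: call clock_gettime() in a tight loop and record the smallest nonzero step each clock takes.

// granularity.c - measure the smallest observed step of a clock
// build: cc -g -Wall granularity.c -o granularity -lrt
#include <stdio.h>
#include <time.h>

static long smallest_step(clockid_t id) {
    struct timespec a, b;
    long best = -1;
    int i;
    clock_gettime(id, &a);
    for (i = 0; i < 1000000; i++) {
        long d;
        clock_gettime(id, &b);
        /* fits in a 32-bit long as long as successive samples are <2s apart */
        d = (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
        if (d > 0 && (best < 0 || d < best))
            best = d;
        a = b;
    }
    return best;
}

int main(void) {
    printf("CLOCK_REALTIME  smallest observed step: %ldns\n", smallest_step(CLOCK_REALTIME));
    printf("CLOCK_MONOTONIC smallest observed step: %ldns\n", smallest_step(CLOCK_MONOTONIC));
    return 0;
}

If REALTIME's smallest step comes out near a microsecond while MONOTONIC's is much finer, that matches the trailing zeros in the output above.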

RHEL 5 is really ancient at this point; you should consider upgrading. On a newer system (Ubuntu 16.04), your program produces:
CLOCK_REALTIME - "Systemwide realtime clock.":
precision: 1ns
value : 1504783164.686220185
CLOCK_MONOTONIC - "Represents monotonic time. Cannot be set.":
precision: 1ns
value : 0000537257.257923964
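Whether you get real nanosecond granularity depends largely on which clocksource the kernel selected (tsc, hpet, acpi_pm, ...). On Linux you can read the active one back from sysfs; a minimal sketch (the sysfs path is standard on current kernels, though it may be absent on one as old as 2.6.18):

#include <stdio.h>

int main(void) {
    /* the kernel exposes its active clocksource here on modern Linux */
    FILE *f = fopen("/sys/devices/system/clocksource/clocksource0/current_clocksource", "r");
    char buf[64];
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof buf, f) != NULL)
        printf("clocksource: %s", buf);
    fclose(f);
    return 0;
}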

Related

Compiling old C code Y2038 conform still results in 4 byte variables

According to this overview, in order to compile old code Y2038-conformant we just need to add the preprocessor macro __USE_TIME_BITS64 to gcc, but that does not seem to work on an ARMv7 board with Debian 12 (bookworm):
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct stat sb;
    printf("sizeof time_t: %zu\n", sizeof(time_t));
    printf("sizeof stat timestamp: %zu\n", sizeof(sb.st_atime));
    return 0;
}
time_t is still 4 bytes:
root@debian:~# gcc -D__USE_TIME_BITS64 time.c -o time
root@debian:~# ./time
sizeof time_t: 4
sizeof stat timestamp: 4
root@debian:~#
glibc is 2.33, what am I doing wrong here?
According to this post (which is getting a little old now, and some parts of which are probably no longer relevant):
... defining _TIME_BITS=64 would cause all time functions to use 64-bit times by default. The _TIME_BITS=64 option is implemented by transparently mapping the standard functions and types to their internal 64-bit variants. Glibc would also set __USE_TIME_BITS64, which user code can test for to determine if the 64-bit variants are available.
Presumably, this includes making time_t 64 bit.
So if your version of glibc supports this at all, it looks like you're setting the wrong macro. You want:
-D_TIME_BITS=64
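Note that glibc only gained _TIME_BITS=64 support in version 2.34, and on 32-bit targets it refuses that option unless _FILE_OFFSET_BITS=64 is set as well, so (assuming a new enough glibc) the complete command line would be:
gcc -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 time.c -o time
With that, sizeof time_t should come out as 8.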

How to get the timezone in C99?

I compile my C code as C99, and I want to try to output the timezone of a given time.
The IDE I use gives GMT+0 as the timezone, but I want to somehow output it via struct tm.
So I followed the instructions from this answer and made this program:
#include <stdio.h>
#include <time.h>

int main()
{
    time_t present = time(NULL);
    struct tm now = *localtime(&present);
    now.tm_mon += 1;
    now.tm_year += 1900;

    struct tm t = {0};
    localtime_r(&present, &t);

    printf("%i/%i/%i %i:%i:%i from %s\n", now.tm_mon, now.tm_mday, now.tm_year,
           now.tm_hour, now.tm_min, now.tm_sec, t.tm_zone);
}
And it seems like I got 2 errors here:
implicit declaration of function 'localtime_r' is invalid in C99
no member named 'tm_zone' in 'struct tm'
So I checked the IDE manual and found that localtime_r does exist and is part of the <time.h> library.
So now I'm wondering if the IDE is confused or something; I don't know how to fix it either.
This might get closed as it might "need debugging details", but read more.
Because of this whole situation: how can I get the timezone (maybe even the offset) in C99 and output it with printf()?
First, localtime_r is not part of the standard library - it's an extension offered by some implementations, and by default its declaration is not exposed. To make it available, define the macro _POSIX_SOURCE before including time.h. An easy way to do that is on the command line, like so:
gcc -o tz -D_POSIX_SOURCE -std=c11 -pedantic -Wall -Werror tz.c
otherwise, just define it in your source before including time.h:
#define _POSIX_SOURCE
#include <stdio.h>
#include <time.h>
Secondly, if all you're interested in is the local time zone then there's an easier way to do this - get the current time:
time_t t = time( NULL );
then use both localtime and gmtime to get the broken-down time for the current time zone and for UTC. Both functions may return a pointer to the same static buffer, so copy the results before using them together:
struct tm local = *localtime( &t );
struct tm zulu = *gmtime( &t );
Then compute the difference between the tm_hour members of local and zulu, and that's your offset from UTC:
int tz = local.tm_hour - zulu.tm_hour;
You'll want to check local.tm_isdst to account for daylight savings, and handle the wraparound when the two times fall on different calendar days, but that should at least get you started.
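If you need an offset that survives the day boundary, one common trick is to feed the UTC broken-down time back through mktime, which interprets it as local time, and difference the two time_t values. A sketch (the tm_isdst = -1 guess can still be off by an hour right around a DST transition):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);

    /* reinterpret the UTC fields as local wall-clock time; mktime()
       then yields "now" shifted by the UTC offset */
    struct tm zulu = *gmtime(&now);
    zulu.tm_isdst = -1;               /* let mktime() work out DST */
    time_t shifted = mktime(&zulu);

    long offset = (long)difftime(now, shifted);  /* seconds east of UTC */
    printf("UTC offset: %+03ld:%02ld\n",
           offset / 3600, labs(offset % 3600) / 60);
    return 0;
}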

CLOCK_MONOTONIC_RAW resolution differs from CLOCK_MONOTONIC and seems way too high

So I have just tested the available clocks on an embedded system with a 2.6.31 kernel.
Some simple test code:
#include <stdio.h>
#include <time.h>

int main(int argc, const char* argv[]) {
    struct timespec clock_resolution;

    printf("This system uses a timespec with %ldB in tv_sec and %ldB in tv_nsec.\n",
           (long)sizeof(clock_resolution.tv_sec), (long)sizeof(clock_resolution.tv_nsec));
    printf("An int is %ldB on this system, a long int is %ldB.\n",
           (long)sizeof(int), (long)sizeof(long));

    if (clock_getres(CLOCK_MONOTONIC, &clock_resolution))
        perror("Can't get CLOCK_MONOTONIC resolution time");
    printf("CLOCK_MONOTONIC has precision of %lds and %ldns on this system.\n",
           (long)clock_resolution.tv_sec, (long)clock_resolution.tv_nsec);

    if (clock_getres(CLOCK_MONOTONIC_RAW, &clock_resolution))
        perror("Can't get CLOCK_MONOTONIC_RAW resolution time");
    printf("CLOCK_MONOTONIC_RAW has precision of %lds and %ldns on this system.\n",
           (long)clock_resolution.tv_sec, (long)clock_resolution.tv_nsec);
    printf("Casted to unsigned this is %lus and %luns.\n",
           (unsigned long)clock_resolution.tv_sec, (unsigned long)clock_resolution.tv_nsec);
    return 0;
}
On an Ubuntu 20.04 (5.4.0-52) VM on an x86 host it results in:
This system uses a timespec with 8B in tv_sec and 8B in tv_nsec.
An int is 4B on this system, a long int is 8B.
CLOCK_MONOTONIC has precision of 0s and 1ns on this system.
CLOCK_MONOTONIC_RAW has precision of 0s and 1ns on this system.
Casted to unsigned this is 0s and 1ns.
On the ARM-based NXP i.MX257 controller it results in:
This system uses a timespec with 4B in tv_sec and 4B in tv_nsec.
An int is 4B on this system, a long int is 4B.
CLOCK_MONOTONIC has precision of 0s and 1ns on this system.
CLOCK_MONOTONIC_RAW has precision of 0s and -1070597342ns on this system.
Casted to unsigned this is 0s and 3224369954ns.
This seems somewhat off to me!? Read as unsigned, that ns-resolution is 3224369954ns, so over 3s.
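(The two printouts are at least self-consistent: on this 32-bit target the unsigned value is the same bit pattern reinterpreted, since 2^32 - 1070597342 = 4294967296 - 1070597342 = 3224369954.)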
Edit to clarify some things:
Error checks on the clock_getres() calls don't trigger.
Controller is an NXP i.MX257.
For the ARM target: gcc 6.5, Kernel 2.6.31, uClibc-ng 1.0.30 (based on a buildroot environment)
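One way to narrow down whether tv_nsec is ever written at all is a sentinel test, a minimal diagnostic sketch: pre-fill the struct with a byte pattern before the call. If the pattern survives, clock_getres() returned success without filling the field, and the program is printing its own garbage back.

#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void) {
    struct timespec res;

    /* 0xAA is an unlikely "real" value; with a 32-bit long the pattern
       reads back as -1431655766 if the field was never written */
    memset(&res, 0xAA, sizeof res);
    if (clock_getres(CLOCK_MONOTONIC_RAW, &res))
        perror("clock_getres(CLOCK_MONOTONIC_RAW)");
    printf("tv_sec=%ld tv_nsec=%ld\n", (long)res.tv_sec, (long)res.tv_nsec);
    return 0;
}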

MinGW localtime_r works in one time zone, fails in another

I have the following file test.c:
#define _POSIX_THREAD_SAFE_FUNCTIONS
#include <time.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>
int main(int argc,char**argv) {
struct tm t1, t2, t3;
time_t w1, w2, w3;
memset(&t1,0,sizeof(struct tm));
memset(&t2,0,sizeof(struct tm));
memset(&t3,0,sizeof(struct tm));
w1 = 0;
errno = 0;
localtime_r(&w1,&t1);
printf("localtime_r: errno=%d\n",errno);
errno = 0;
w2 = mktime(&t1);
printf("mktime: errno=%d result=%" PRId64 "\n",errno,((int64_t)w2));
errno = 0;
localtime_r(&w2,&t2);
printf("localtime_r: errno=%d\n",errno);
errno = 0;
w3 = mktime(&t2);
printf("mktime: errno=%d result=%" PRId64 "\n",errno,((int64_t)w3));
errno = 0;
localtime_r(&w3,&t3);
printf("localtime_r: errno=%d\n",errno);
printf("sizeof(time_t)=%" PRId64 "\n", ((int64_t)sizeof(time_t)));
printf("W1=%" PRId64 " W2=%" PRId64 " W3=%" PRId64 "\n",((int64_t)w1),((int64_t)w2),((int64_t)w3));
printf("Y1=%d Y2=%d Y3=%d\n",t1.tm_year,t2.tm_year,t3.tm_year);
return 0;
}
I compile it like this:
i686-w64-mingw32-gcc -D__MINGW_USE_VC2005_COMPAT=1 -o test.exe test.c
Note, i686-w64-mingw32-gcc --version reports 8.3-win32 20190406
This is running in a Docker image of Ubuntu 19.04, using the MinGW version that comes with Ubuntu 19.04 (it says version 6.0.0-3).
I have a Windows 10 VM (Version 1809 OS Build 17763.379).
By default, time zone is set to US Pacific Time (UTC-8).
I copy test.exe to this VM and run it there.
It prints:
localtime_r: errno=0
mktime: errno=0 result=0
localtime_r: errno=0
mktime: errno=0 result=0
localtime_r: errno=0
sizeof(time_t)=8
W1=0 W2=0 W3=0
Y1=69 Y2=69 Y3=69
That's the expected result. (At UTC midnight on 1 Jan 1970, it was still 1969 in UTC-8.)
I change the Windows time zone to UTC+10 (Canberra, Melbourne, Sydney).
Run it again. It prints:
localtime_r: errno=0
mktime: errno=0 result=47244640256
localtime_r: errno=22
mktime: errno=22 result=4294967295
localtime_r: errno=0
sizeof(time_t)=8
W1=0 W2=47244640256 W3=4294967295
Y1=70 Y2=-1 Y3=206
It seems the mktime() call returns an invalid value in the UTC+10 time zone, but the correct value of 0 in the UTC-8 time zone.
Why does this code work in one timezone but break in another?
Note, this is only a problem with -D__MINGW_USE_VC2005_COMPAT=1 to enable
64-bit time_t. If I leave that out, which means 32-bit time_t, then the code
works in both timezones. (But, 32-bit time_t is not a good idea, because it breaks in the year 2038, and that's less than twenty years away now.)
I worked out the cause of the problem. Sander De Dycker's suggestion, that mktime is returning a 32-bit value, is correct.
The problem is basically this: the MSVCRT defines three mktime functions: _mktime32 for 32-bit time_t, _mktime64 for 64-bit time_t, and _mktime which is a legacy alias for _mktime32.
_mingw.h does a #define _USE_32BIT_TIME_T in 32-bit code unless you #define __MINGW_USE_VC2005_COMPAT to disable that. Once you have #define __MINGW_USE_VC2005_COMPAT, localtime_s is defined as an inline function which calls _localtime64_s, and #define _POSIX_THREAD_SAFE_FUNCTIONS defines localtime_r as an inline function which calls localtime_s.
However, mktime is still 32-bit. To get a 64-bit mktime, you need to also #define __MSVCRT_VERSION__ 0x1400 (or higher). Once you do that, mktime becomes an inline function which calls _mktime64; before that, mktime is a normal function declaration which is linked to the legacy 32-bit mktime.
So #define __MINGW_USE_VC2005_COMPAT 1 without #define __MSVCRT_VERSION__ 0x1400 (or -D equivalent) gives you a localtime_r with 64-bit time_t, but a mktime with 32-bit time_t, which obviously won't work. Even worse than that, the actual implementation of the mktime symbol is returning a 32-bit time_t, but the function declaration is for a 64-bit time_t, which is what causes the junk in the upper 32-bits.
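Putting that together, the compile line that should give a consistently 64-bit time_t across localtime_r and mktime is (following directly from the macros above):
i686-w64-mingw32-gcc -D__MINGW_USE_VC2005_COMPAT=1 -D__MSVCRT_VERSION__=0x1400 -o test.exe test.c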
As to the different behaviour in different time zones, I don't have a complete explanation, but the likely reason is as follows: when a function actually returns a 32-bit value but is incorrectly declared to return a 64-bit one, the upper 32 bits of the result hold whatever junk was left over from previous calculations. (On 32-bit x86, a 32-bit return value comes back in EAX alone, while the caller of a 64-bit function reads the EDX:EAX pair, so the upper half is whatever EDX happened to contain.) Any difference in the previous calculations, or a slightly different code path, may leave different junk behind. With a UTC-8 timezone the junk happens to be zero, so the code (despite its incorrectness) actually works; with a UTC+10 timezone it turns out to be non-zero, which causes the rest of the code to stop working.

Minix current time

How do I print the current time with printf on Minix 3.2.1?
I tried to use gmtime as below, but it gives an error on time(&nowtime).
#include <sys/time.h>
#include <time.h>
struct tm *now;
time_t nowtime;
time(&nowtime);
now=gmtime(&nowtime);
printf("TIME is NOW %s",now);
Moreover, I want to call this in the kernel (/usr/src/kernel/main.c), because I need the time during Minix's boot to report when the kernel process is finished and it switches to user mode.
I get errors on the above code when I rebuild the kernel.
I'm not that familiar with Minix, but it is similar to Unix and Linux, so things from those platforms may well be present on Minix. A couple of approaches:
Run man ctime.
The man page for Linux's time() function contains this example code (which you may have to modify for Minix, but it shows how to use asctime(), localtime(), and time()):
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t result;

    result = time(NULL);
    printf("%s%ju secs since the Epoch\n",
           asctime(localtime(&result)),
           (uintmax_t)result);
    return 0;
}
