I have a program which contains a time-syncing module.
The module syncs the time given a new timezone and a timestamp.
It first switches to the new timezone by updating /etc/timezone and /etc/localtime, and then sets the system time from the timestamp.
I tried two methods to set the system time:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int set_time(uint64_t ts) {
#if 0
    //first method: set the clock directly
    struct timeval tv;
    tv.tv_sec = (time_t)ts;
    tv.tv_usec = 0;
    if (settimeofday(&tv, NULL) != 0) {
        return -1;
    }
#else
    //second method: shell out to date(1); "@<seconds>" is the GNU
    //date syntax for an epoch timestamp
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "date -s @%llu", (unsigned long long)ts);
    if (system(cmd) != 0) {
        return -1;
    }
#endif
    return 0;
}
Neither method works as I intended.
After a call to this function, the system time is changed according to the timestamp and the new timezone, but the date and time printed by the program still seem to use the old timezone. (I use the APIs time() and localtime_r() to get the current date and time.)
However, after I restart the program, the date and time it prints match the system time.
What I want is for the date and time in the program to match the system time as soon as I call the time-syncing APIs.
If you want your libc to re-read time zone information from /etc while your program is running, the simplest way is:
#include <time.h>
#include <stdlib.h>
...
unsetenv("TZ");
tzset();
Explanation (man tzset):
The tzset() function initializes the tzname variable from the TZ
environment variable. ... If the TZ variable does not appear in the
environment, the system timezone is used. The system timezone is
configured by copying, or linking, a file in the tzfile(5) format to
/etc/localtime.
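For completeness, here is a minimal sketch of how this could look in the syncing path. The on_time_synced() name is hypothetical; only the unsetenv()/tzset() pair is the actual fix, the rest just demonstrates that localtime_r() picks up the new zone:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void on_time_synced(void) {
    unsetenv("TZ");   /* make sure TZ does not override /etc/localtime */
    tzset();          /* re-initialize libc's cached timezone state */

    /* verify: localtime_r() should now use the new zone */
    time_t now = time(NULL);
    struct tm tm;
    char buf[64];
    localtime_r(&now, &tm);
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", &tm);
    printf("local time is now: %s\n", buf);
}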
I tried both tzset() and settimeofday(). But it seems tzset() only changes the user-space timezone, without affecting the kernel, while settimeofday() only updates the kernel's timezone.
Code to test tzset():
char *oldtz = getenv("TZ");   /* save the old value so it can be restored later */
putenv((char*)"TZ=UTC0");
tzset();
Code to test settimeofday():
struct timeval tv;
struct timezone tz;
gettimeofday(&tv, &tz);
tz.tz_minuteswest -= 3*60;   /* shift the kernel timezone by 3 hours */
settimeofday(&tv, &tz);
Any idea how to change both the user-space and kernel-space timezones at once?
I have the code below, which I use to output a sequence of 1s and 0s (unsigned output[38]) from a GPIO pin of an embedded board.
My question: the time between two consecutive output values (1, 0 or 0, 1) should be 416 microseconds, as I specify in the clock_nanosleep() call in the code below; I also used sched_setscheduler() with SCHED_FIFO for better timing resolution. However, an oscilloscope measurement (pic below) shows that the time between two output values is 770 usec. Why do I have that much inaccuracy between the signals?
PS. The board (a BeagleBoard) runs the Linux 3.2.0-23-omap #36-Ubuntu Tue Apr 10 20:24:21 UTC 2012 armv7l GNU/Linux kernel and has a 750 MHz CPU; top shows almost no CPU (~1%) or memory (~0.5%) consumed before I run my code. I use an electronic oscilloscope which has no calibration problem.
#include <stdio.h>
#include <stdlib.h>   //exit()
#include <sched.h>
#include <time.h>

void msg_send(void);
struct sched_param sp;

int main(void){
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
        perror("sched_setscheduler");
    msg_send();
    return 0;
}

void msg_send(void){
    unsigned output[38] = {0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,1};
    int i;
    FILE *fp8;

    if ((fp8 = fopen("/sys/class/gpio/export", "w")) == NULL){   //echo 139 > export
        fprintf(stderr, "Cannot open export file.\n");
        exit(1);
    }
    fprintf(fp8, "%d", 139);   //pin 3
    fclose(fp8);

    if ((fp8 = fopen("/sys/class/gpio/gpio139/direction", "rb+")) == NULL){
        fprintf(stderr, "Cannot open direction file - GPIO139.\n");
        exit(1);
    }
    fprintf(fp8, "out");
    fclose(fp8);

    if ((fp8 = fopen("/sys/class/gpio/gpio139/value", "w")) == NULL){
        fprintf(stderr, "Error opening value file.\n");
        exit(1);
    }

    struct timespec req = { .tv_sec = 0, .tv_nsec = 416000 };   //416 usec
    /* here is the part that my question focuses on */
    while(1){
        for (i = 0; i < 38; i++){
            rewind(fp8);
            fprintf(fp8, "%d", output[i]);
            clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);
        }
    }
}
EDIT: I have been reading for days that clock_nanosleep(), nanosleep(), usleep(), etc. do not guarantee waking up on time: they sleep the calling code for at least the specified time, but when the process actually wakes up again is up to the scheduler. What I found is that an absolute deadline (the TIMER_ABSTIME flag) gives better resolution; this is the same solution that Maxime suggests. However, I still get a glitch in my signal when the for loop wraps around. In my opinion, sleep functions are not a good way to generate a PWM or data output on an embedded platform. It is worth spending some time learning the hardware timers the platform provides, which can generate a PWM or data output with good accuracy.
I can't figure out how a call to clock_getres() could solve your problem; according to the man page, it only reads the resolution of the clock.
As Geoff said, sleeping until an absolute clock value should be a better solution, since it avoids unexpected timing delays from the other code:
struct timespec Time;
clock_gettime(CLOCK_REALTIME, &Time);
while(1){
    /* advance the absolute deadline by 416 usec, carrying into tv_sec on overflow */
    Time.tv_nsec += 416000;
    if(Time.tv_nsec > 999999999){
        Time.tv_sec++;
        Time.tv_nsec -= 1000000000;
    }
    clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &Time, NULL);
    //Do something
}
I am using this in a few programs of mine that generate regular messages on an Ethernet network, and it's working fine.
If you are doing time-sensitive I/O, you probably shouldn't use the facilities in stdio.h but instead the I/O system calls, because of the buffering done by stdio. It looks like you might be getting the worst effect of the buffering too, because your program does these steps:
fill the buffer
sleep
rewind, which I believe will flush the buffer
What you want is for the kernel to service the write while you are sleeping; instead, the buffer is flushed after you sleep, and you then have to wait for the kernel to process it.
I think your best bet is to use open("/sys/class/gpio/gpio139/value", O_WRONLY|O_DIRECT) to minimize delays due to caching.
If you still need to flush buffers to force the write through, you probably want to use clock_gettime() to compute the time spent flushing the data and subtract that from the sleep time. Alternatively, add the desired interval to the result of clock_gettime() and pass that to clock_nanosleep() with the TIMER_ABSTIME flag to wait for that absolute time to occur.
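A rough sketch of what that loop could look like (untested; it combines the raw write() system call with the absolute-deadline approach, reusing the gpio139 path and 416 usec period from the question; send_bits() is a hypothetical helper):

#include <fcntl.h>
#include <time.h>
#include <unistd.h>

void send_bits(const unsigned *output, int n)
{
    int fd = open("/sys/class/gpio/gpio139/value", O_WRONLY);
    if (fd < 0)
        return;

    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);

    for (int i = 0; i < n; i++) {
        /* write(2) reaches the kernel immediately, no stdio buffer */
        write(fd, output[i] ? "1" : "0", 1);

        /* advance the deadline by exactly 416 usec and sleep until it */
        deadline.tv_nsec += 416000;
        if (deadline.tv_nsec >= 1000000000) {
            deadline.tv_sec++;
            deadline.tv_nsec -= 1000000000;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
    }
    close(fd);
}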
I would guess that the problem is that clock_nanosleep() is sleeping for 416 microseconds, and that the other commands in the loop, plus the overhead of the loop and of clock_nanosleep() itself, are taking the remaining 354 microseconds. The OS may also be making demands.
What interval do you get if you set the sleep time to 0?
Are you running this on a computer or a PLC?
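One quick way to run that experiment: time a burst of iterations with the sleep removed and divide by the count. A hypothetical measurement harness (not from the original post; the loop body is a stand-in for the real rewind/fprintf work):

#include <stdio.h>
#include <time.h>

#define ITER 100000

int main(void)
{
    struct timespec t0, t1;
    volatile int sink = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITER; i++)
        sink ^= 1;   /* stand-in for the real loop body */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* average per-iteration overhead in nanoseconds */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("average overhead: %.0f ns per iteration\n", ns / ITER);
    return 0;
}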
Response to Comment
Seems like you have something somewhere in the hardware/software that is doing something unexpected - it could be a bugger to find.
I have two suggestions, depending on how critical the period is:
Low criticality - put a correction figure in your program that causes the loop to take the time you want. However, if this is a transient or time/temperature-dependent effect, you will need to check for drift periodically.
High criticality - build a temperature-stable oscillator in hardware. These can be bought off the shelf.
I run the following C program between two machines with 10GbE; the program reports 12Gib/s whereas nload reports a (more believable) 9.2Gib/s. Can anyone tell me what I'm doing wrong in the program?
...
#define BUFFSZ (4*1024)
char buffer[BUFFSZ];
...
start = clock();
while (1) {
    n = write(sockfd, buffer, BUFFSZ);
    if (n < 0)
        error("ERROR writing to socket");
    /* report once per 1024*1024 blocks: 1M x 4KiB = 4GiB = 32Gib */
    if (++blocks % (1024*1024) == 0)
    {
        blocks = 0;
        printf("32Gib at %6.2lf Gib/s\n", 32.0/(((double) (clock() - start)) / CLOCKS_PER_SEC));
        start = clock();
    }
}
This is CentOs 6.0 on Linux 2.6.32; nload 0.7.3, gcc 4.4.4.
Firstly, clock() returns an estimate of the CPU time used by the program, not the wall-clock time - so your calculation indicates that you are transferring 12Gib per second of CPU time used. Instead, use clock_gettime() with the clock ID CLOCK_MONOTONIC to measure wall-clock time.
Secondly, after write() returns the data hasn't necessarily been sent to the network yet - merely copied into the kernel buffers for sending. This will give you a higher reported transfer rate at the start of the connection.
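A sketch of the corrected measurement (the now_seconds() helper is illustrative, not from the original answer; the reporting arithmetic is the question's own):

#include <time.h>

/* Wall-clock seconds since an arbitrary fixed point; CLOCK_MONOTONIC
   is unaffected by system clock adjustments. */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* usage in the reporting branch:
 *   double elapsed = now_seconds() - start;   // with: double start = now_seconds();
 *   printf("32Gib at %6.2lf Gib/s\n", 32.0 / elapsed);
 */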
Check the return value from read(): n might be shorter than BUFFSZ.
EDIT: oops, that should have been write().
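For reference, a minimal sketch of a write-all loop that accounts for short writes (illustrative; write_all() is not from the original answer):

#include <errno.h>
#include <unistd.h>

/* Write the whole buffer, retrying on short writes and EINTR.
   Returns 0 on success, -1 on error (errno is set by write()). */
static int write_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted before any data was written; retry */
            return -1;
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}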
Here's the sort of time formatting I'm after:
2009-10-08 04:31:33.918700000 -0500
I'm currently using this:
strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S %Z", ts);
Which gives:
2009-10-11 13:42:57 CDT
Which is close, but not exact: I can't seem to find anything for displaying the -0500 offset at the end, and I'm only getting whole seconds, with no fractional part.
How can I resolve these two issues?
I came up with this:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

char fmt[64], buf[64];
struct timeval tv;
struct tm *tm;

gettimeofday(&tv, NULL);
if((tm = localtime(&tv.tv_sec)) != NULL)
{
    /* first pass: everything except the microseconds; a literal "%06u"
       is left in fmt for the second pass to fill in */
    strftime(fmt, sizeof fmt, "%Y-%m-%d %H:%M:%S.%%06u %z", tm);
    snprintf(buf, sizeof buf, fmt, (unsigned)tv.tv_usec);
    printf("'%s'\n", buf);
}
Fixes for the problems you had:
Use gettimeofday() and struct timeval, which has a microseconds member, for the higher precision.
Use a two-step approach: first build a format string containing everything except the microseconds, then fill those in with snprintf().
Use lower-case 'z' for the timezone offset. It is specified by C99 and POSIX.1-2001, but older systems may not support it.
I tried re-creating the timezone offset manually, through the second struct timezone * argument of gettimeofday(), but on my machine it returns an offset of 0, which is not correct. The manual page for gettimeofday() has quite a lot to say about the handling of timezones under Linux (which is the OS I tested on).
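If %z is unavailable on your system, one fallback is the tm_gmtoff field of struct tm, a BSD/GNU extension rather than standard C. A sketch of formatting the offset from it (format_offset() is a hypothetical helper):

#define _DEFAULT_SOURCE   /* expose tm_gmtoff in glibc's struct tm */
#include <stdio.h>
#include <time.h>

/* Format tm_gmtoff (seconds east of UTC) as "+hhmm" / "-hhmm". */
static void format_offset(const struct tm *tm, char *out, size_t len)
{
    long off = tm->tm_gmtoff;
    char sign = (off < 0) ? '-' : '+';
    if (off < 0)
        off = -off;
    snprintf(out, len, "%c%02ld%02ld", sign, off / 3600, (off % 3600) / 60);
}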
"%Y-%m-%d %T %z", but it seems %z is a GNU extension.
%z (lower case z).
However, it only appears in newer editions of the POSIX specification (it was added in POSIX.1-2001), so older references don't list it. Google took me in a circle back here.