I have a program which contains a time syncing module.
The module syncs time with a new timezone and a timestamp.
It first switches to the new timezone by updating /etc/timezone and /etc/localtime, and then sets the system time using the timestamp.
I tried two methods to set the system time:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int set_time(uint64_t ts) {
#if 0
//first method
struct timeval tv;
tv.tv_sec = ts;
tv.tv_usec = 0;
if (settimeofday(&tv, NULL) != 0) {
return -1;
}
#else
//second method
char cmd[256];
// "@<seconds>" tells GNU date to treat the value as a Unix timestamp
snprintf(cmd, sizeof(cmd), "date -s @%llu", (unsigned long long)ts);
if (system(cmd) != 0) {
return -1;
}
#endif
return 0;
}
Neither method works as I intended.
After calling this function, the system time is changed according to the timestamp and the new timezone, but the date and time printed in the program still seem to use the old timezone. (I use the time() and localtime_r() APIs to get the current date and time.)
However, after I restart the program, the date and time it prints match the system time.
What I want is for the date and time in the program to match the system time as soon as I call the time-syncing APIs.
If you want your libc to re-read time zone information from /etc while your program is running, the simplest way is:
#include <time.h>
#include <stdlib.h>
...
unsetenv("TZ");
tzset();
Explanation (man tzset):
The tzset() function initializes the tzname variable from the TZ
environment variable. ... If the TZ variable does not appear in the
environment, the system timezone is used. The system timezone is
configured by copying, or linking, a file in the tzfile(5) format to
/etc/localtime.
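This matters in particular because POSIX requires localtime() to behave as if tzset() had been called, but places no such requirement on localtime_r(), which is likely why your localtime_r()-based printing keeps the old zone. A minimal sketch of the pattern (timestamps and zone names are illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void print_local(const char *label, time_t t)
{
    struct tm tm;
    char buf[64];
    localtime_r(&t, &tm);
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", &tm);
    printf("%s: %s\n", label, buf);
}

int main(void)
{
    time_t now = time(NULL);
    print_local("before", now);
    /* ... the sync module replaces /etc/localtime somewhere in here ... */
    unsetenv("TZ");  /* make sure libc falls back to the system timezone */
    tzset();         /* re-read /etc/localtime */
    print_local("after", now);
    return 0;
}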
Related
I've written the following to print the local time as a string:
char time_buffer[25];
time_t t = time(&t);
strftime(time_buffer, 25, "%c", localtime(&t));
printf("Formatted time: %s\n", time_buffer);
However, the code doesn't make too much sense to me (even though I've written it). A few questions on it:
How does time_t t = time(&t) work? Is this setting up a struct of the localtime? Where is it grabbing the time from?
What is the difference between the time(&t) and localtime(&t)? Why are both needed?
time() is implemented in libc. For example, here is the implementation in glibc 2.33 for Linux:
time (time_t *t)
{
return INLINE_VSYSCALL (time, 1, t);
}
which asks the kernel for the time via a syscall. The kernel, in turn, maintains a variable somewhere with this information. It gets the time on boot from a battery-backed clock, and often keeps it synced afterwards with a reference clock source (on Linux this would be via the Network Time Protocol (NTP)).
time() returns a time_t (on my system that is a typedef of long), which is the number of seconds since the epoch (this measure of time is independent of timezone). localtime() returns a struct tm *, which has the usual components of date and time (sec, min, hour, day, month, year, etc.) in whatever timezone you selected (on Linux via the environment variable TZ).
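A short sketch contrasting the two:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);              /* seconds since the epoch, timezone-independent */
    struct tm *local = localtime(&now);   /* broken-down time in the local timezone */

    printf("time():      %lld\n", (long long)now);
    printf("localtime(): %04d-%02d-%02d %02d:%02d:%02d\n",
           local->tm_year + 1900, local->tm_mon + 1, local->tm_mday,
           local->tm_hour, local->tm_min, local->tm_sec);
    return 0;
}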
I am writing a Linux daemon that writes a log. I'd like the log to be rotated by logrotate. The program is written in C.
Normally, my program would open the log file when it starts, then write entries as needed and then, finally, close the log file on exit.
What do I need to do differently in order to support log rotation using logrotate? As far as I have understood, my program should be able to reopen the log file each time logrotate has finished its work. The sources that I googled didn't, however, specify what reopening the log file exactly means. Do I need to do something about the old file, and can I just create another file with the same name? I'd prefer quite specific instructions, like some simple sample code.
I also understood that there should be a way to tell my program when it is time to do the reopening. My program already has a D-Bus interface and I thought of using that for those notifications.
Note: I don't need instructions on how to configure logrotate. This question is only about how to make my own software compatible with it.
There are several common ways:
you use logrotate and your program should be able to catch a signal (usually SIGHUP) as a request to close and reopen its log file (see the sketch after this list). Then logrotate sends the signal in a postrotate script
you use logrotate and your program is not aware of it, but can be restarted. Then logrotate restarts your program in a postrotate script. Cons: if the start of the program is expensive, this may be suboptimal
you use logrotate and your program is not aware of it, but you pass the copytruncate option to logrotate. Then logrotate copies the file and then truncates it. Cons: in race conditions you can lose messages. From the logrotate manpage:
... Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost...
you use rotatelogs, a utility for Apache httpd. Instead of writing directly to a file, your program pipes its logs to rotatelogs, which then manages the different log files. Cons: your program must be able to log to a pipe, or you will need to create a named FIFO.
But beware: for critical logs, it may be worth closing the file after each message, because that ensures everything has reached the disk in case of an application crash.
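For the first approach, a minimal single-threaded sketch of the signal-flag pattern (the log path is hypothetical):
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define LOG_PATH "/var/log/yourapp/log"  /* hypothetical path */

static volatile sig_atomic_t reopen_log = 0;
static FILE *logf = NULL;

static void on_sighup(int sig)
{
    (void)sig;
    reopen_log = 1;  /* just set a flag; reopen outside the handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sighup;
    sigaction(SIGHUP, &sa, NULL);

    logf = fopen(LOG_PATH, "a");
    if (!logf)
        return 1;

    for (;;) {
        if (reopen_log) {  /* logrotate has renamed the old file away */
            reopen_log = 0;
            fclose(logf);
            logf = fopen(LOG_PATH, "a");  /* a fresh file under the same name */
            if (!logf)
                return 1;
        }
        fprintf(logf, "still alive\n");
        fflush(logf);
        sleep(60);
    }
}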
Although man logrotate examples use the HUP signal, I recommend using USR1 or USR2, as it is common to use HUP for "reload configuration". So, in logrotate configuration file, you'd have for example
/var/log/yourapp/log {
rotate 7
weekly
postrotate
/usr/bin/killall -USR1 yourapp
endscript
}
The tricky bit is to handle the case where the signal arrives in the middle of logging. The fact that none of the locking primitives (other than sem_post(), which does not help here) are async-signal safe makes it an interesting issue.
The easiest way to do it is to use a dedicated thread, waiting in sigwaitinfo(), with the signal blocked in all threads. At exit time, the process sends the signal itself, and joins the dedicated thread. For example,
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ROTATE_SIGNAL SIGUSR1
static pthread_t log_thread;
static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
static char *log_path = NULL;
static FILE *volatile log_file = NULL;
int log(const char *format, ...)
{
va_list args;
int retval;
if (!format)
return -1;
if (!*format)
return 0;
va_start(args, format);
pthread_mutex_lock(&log_lock);
if (!log_file) {
/* Unlock and clean up first; returning with the mutex held would deadlock every later log() call. */
pthread_mutex_unlock(&log_lock);
va_end(args);
return -1;
}
retval = vfprintf(log_file, format, args);
pthread_mutex_unlock(&log_lock);
va_end(args);
return retval;
}
void *log_sighandler(void *unused)
{
siginfo_t info;
sigset_t sigs;
int signum;
sigemptyset(&sigs);
sigaddset(&sigs, ROTATE_SIGNAL);
while (1) {
signum = sigwaitinfo(&sigs, &info);
if (signum != ROTATE_SIGNAL)
continue;
/* Sent by this process itself, for exiting? */
if (info.si_pid == getpid())
break;
pthread_mutex_lock(&log_lock);
if (log_file) {
fflush(log_file);
fclose(log_file);
log_file = NULL;
}
if (log_path) {
log_file = fopen(log_path, "a");
}
pthread_mutex_unlock(&log_lock);
}
/* Close time. */
pthread_mutex_lock(&log_lock);
if (log_file) {
fflush(log_file);
fclose(log_file);
log_file = NULL;
}
pthread_mutex_unlock(&log_lock);
return NULL;
}
/* Initialize logging to the specified path.
Returns 0 if successful, errno otherwise. */
int log_init(const char *path)
{
sigset_t sigs;
pthread_attr_t attrs;
int retval;
/* Block the rotate signal in all threads. */
sigemptyset(&sigs);
sigaddset(&sigs, ROTATE_SIGNAL);
pthread_sigmask(SIG_BLOCK, &sigs, NULL);
/* Open the log file. Since this is in the main thread,
before the rotate signal thread, no need to use log_lock. */
if (log_file) {
/* You're using this wrong. */
fflush(log_file);
fclose(log_file);
}
log_file = fopen(path, "a");
if (!log_file)
return errno;
log_path = strdup(path);
/* Create a thread to handle the rotate signal, with a tiny stack. */
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 65536);
retval = pthread_create(&log_thread, &attrs, log_sighandler, NULL);
pthread_attr_destroy(&attrs);
if (retval)
return errno = retval;
return 0;
}
void log_done(void)
{
pthread_kill(log_thread, ROTATE_SIGNAL);
pthread_join(log_thread, NULL);
free(log_path);
log_path = NULL;
}
The idea is that in main(), before logging or creating any other threads, you call log_init(path-to-log-file), noting that a copy of the log file path is saved. It sets up the signal mask (inherited by any threads you might create), and creates the helper thread. Before exiting, you call log_done(). To log something to the log file, use log() like you would use printf().
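For example, a hypothetical main() tying these together:
int main(void)
{
    if (log_init("/var/log/yourapp/log") != 0)  /* hypothetical path */
        return 1;
    log("daemon started\n");
    /* ... create worker threads, run the daemon's main loop ... */
    log_done();
    return 0;
}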
I'd personally also add a timestamp before the vfprintf() line, automatically:
struct timespec ts;
struct tm tm;
if (clock_gettime(CLOCK_REALTIME, &ts) == 0 &&
localtime_r(&(ts.tv_sec), &tm) == &tm)
fprintf(log_file, "%04d-%02d-%02d %02d:%02d:%02d.%03ld: ",
tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
tm.tm_hour, tm.tm_min, tm.tm_sec,
ts.tv_nsec / 1000000L);
This YYYY-MM-DD HH:MM:SS.sss format has the nice benefit that it is close to a worldwide standard (ISO 8601) and sorts in the correct order.
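If you prefer strftime(), note that it has no sub-second conversion specifier, so the milliseconds still have to be appended by hand; an equivalent sketch:
struct timespec ts;
struct tm tm;
char stamp[32];
if (clock_gettime(CLOCK_REALTIME, &ts) == 0 &&
    localtime_r(&(ts.tv_sec), &tm) == &tm) {
    strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", &tm);
    fprintf(log_file, "%s.%03ld: ", stamp, ts.tv_nsec / 1000000L);
}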
Normally, my program would open the log file when it starts, then
write entries as needed and then, finally, close the log file on exit.
What do I need to do differently in order to support log rotation
using logrotate?
Nothing: your program should work as if it doesn't know anything about logrotate.
Do I need to do something about the old file and can I just create another file with the same name?
No. There should be only one log file that is opened and written. With the copytruncate option, logrotate will check that file and, if it becomes too large, copy away the old part and truncate the current log file. Your program therefore works completely transparently; it doesn't need to know anything about logrotate.
I tried both tzset() and settimeofday(). But it seems tzset() only changes the user space timezone, without affecting the kernel space; while settimeofday() only updates the kernel timezone.
Code to test tzset():
oldtz = getenv("TZ");
putenv((char*)"TZ=UTC0");
tzset();
Code to test settimeofday():
struct timeval tv;
struct timezone tz;
gettimeofday(&tv, &tz);
tz.tz_minuteswest -= 3*60;
settimeofday(&tv, &tz);
Any idea of how to change both the user-space and kernel-space timezone at once?
I have a little code below. I use this code to output some 1s and 0s (unsigned output[38]) from a GPIO of an embedded board.
My question: the time between two output values (1, 0 or 0, 1) should be 416 microseconds, as I define with clock_nanosleep in the code below; I also set SCHED_FIFO priority for better timing resolution. However, an oscilloscope measurement (pic below) shows that the time between two output values is 770 usec. I wonder why I have that much inaccuracy between the signals?
PS. the board (BeagleBoard) has a Linux 3.2.0-23-omap #36-Ubuntu Tue Apr 10 20:24:21 UTC 2012 armv7l armv7l armv7l GNU/Linux kernel and a 750 MHz CPU; top shows that almost no CPU (~1%) or memory (~0.5%) is consumed before I run my code. I use an electronic oscilloscope which has no calibration problem.
#include <stdio.h>
#include <stdlib.h> //exit();
#include <sched.h>
#include <time.h>
void msg_send();
struct sched_param sp;
int main(void){
sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
sched_setscheduler(0, SCHED_FIFO, &sp);
msg_send();
return 0;
}
void msg_send(){
unsigned output[38] = {0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,1};
FILE *fp8;
if ((fp8 = fopen("/sys/class/gpio/export", "w")) == NULL){ //echo 139 > export
fprintf(stderr,"Cannot open export file: line844.\n"); fclose(fp8);exit(1);
}
fprintf(fp8, "%d", 139); //pin 3
fclose(fp8);
if ((fp8 = fopen("/sys/class/gpio/gpio139/direction", "rb+")) == NULL){
fprintf(stderr,"Cannot open direction file - GPIO139 - line851.\n");fclose(fp8); exit(1);
}
fprintf(fp8, "out");
fclose(fp8);
if((fp8 = fopen("/sys/class/gpio/gpio139/value", "w")) == NULL) {
fprintf(stderr,"error in openning value\n"); fclose(fp8); exit(1);
}
struct timespec req = { .tv_sec=0, .tv_nsec = 416000 }; //416 usec
/* here is the part that my question focus*/
while(1){
for(i=0;i<38;i++){
rewind(fp8);
fprintf(fp8, "%d", output[i]);
clock_nanosleep(CLOCK_MONOTONIC ,0, &req, NULL);
}
}
}
EDIT: I have been reading for days that clock_nanosleep(), nanosleep(), usleep(), etc. do not guarantee waking up on time. They guarantee that the code sleeps for at least the defined time, but when the process actually wakes up again depends on the scheduler. What I found is that an absolute deadline gives better resolution (the TIMER_ABSTIME flag). I found the same solution as Maxime suggests. However, I still get a glitch on my signal when the for loop finishes. In my opinion, sleep functions are not a good way to generate a PWM or data output on an embedded platform. It is better to spend some time learning the CPU timers the platform provides, which can generate PWM or data output with good accuracy.
I can't figure out how a call to clock_getres() can solve your problem; the man page says it only reads the resolution of the clock.
As Geoff said, sleeping until an absolute clock time should be a better solution. This avoids unexpected timing delays introduced by the other code.
struct timespec Time;
clock_gettime(CLOCK_REALTIME, &(Time));
while(1){
Time.tv_nsec += 416000;
if(Time.tv_nsec > 999999999){
(Time.tv_sec)++;
Time.tv_nsec -= 1000000000;
}
clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &(Time), NULL);
//Do something
}
I am using this in a few programs I have for generating regular messages on an Ethernet network, and it's working fine.
If you are doing time sensitive I/O, you probably shouldn't use the stuff in stdio.h but instead the I/O system calls because of the buffering done by stdio. It looks like you might be getting the worst effect of the buffering too because your program does these steps:
fill the buffer
sleep
rewind, which I believe will flush the buffer
What you want is for the kernel to service the write while you are sleeping, instead the buffer is flushed after you sleep and you have to wait for the kernel to process it.
I think your best bet is to use open("/sys/class/gpio/gpio139/value", O_WRONLY|O_DIRECT) to minimize delays due to caching.
if you still need to flush buffers to force the write through you probably want to use clock_gettime to compute the time spent flushing the data and subtract that from the sleep time. Alternatively add the desired interval to the result of clock_gettime and pass that to clock_nanosleep and use the TIMER_ABSTIME flag to wait for that absolute time to occur.
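Putting those two ideas together, a minimal sketch that writes the GPIO value with the write() syscall and sleeps to an absolute deadline (it assumes the pin has already been exported and its direction set to out):
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/sys/class/gpio/gpio139/value", O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (unsigned v = 0; ; v ^= 1) {
        /* One unbuffered byte per edge; no stdio buffer sits in the way. */
        if (write(fd, v ? "1" : "0", 1) != 1)
            break;

        /* Advance an absolute deadline so per-iteration overhead
           does not accumulate into the period. */
        next.tv_nsec += 416000;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec++;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }

    close(fd);
    return 0;
}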
I would guess that the problem is that the clock_nanosleep is sleeping for 416 microsec
and that the other commands in the loop as well as the loop and clock_nanosleep architecture itself are taking 354 microsec. The OS may also be making demands.
What interval do you get if you set the sleep = 0?
Are you running this on a computer or a PLC?
Response to Comment
Seems like you have something somewhere in the hardware/software that is doing something unexpected - it could be a bugger to find.
I have 2 suggestions depending on how critical the period is:
Low criticality - add a correction factor to your sleep time so that the loop takes the time you want. However, if this is a transient or time/temperature-dependent effect, you will need to check for drift periodically.
High criticality - build a temperature-stable oscillator in hardware. These can be bought off the shelf.
My scenario: I'm collecting network packets, and if a packet matches a network filter I want to record the time difference between consecutive packets; this last part is the part that doesn't work. My problem is that I can't get accurate sub-second measurements no matter which C timer function I use. I've tried gettimeofday(), clock_gettime(), and clock().
I'm looking for assistance to figure out why my timing code isn't working properly.
I'm running on a cygwin environment.
Compile Options: gcc -Wall capture.c -o capture -lwpcap -lrt
Code snippet :
/*globals*/
int first_time = 0;
struct timespec start, end;
double sec_diff = 0;
main() {
pcap_t *adhandle;
struct pcap_pkthdr header; // not const: pcap_next() fills it in
const u_char *packet;
int sockfd = socket(PF_INET, SOCK_STREAM, 0);
.... (previous I create socket/connect - works fine)
save_attr = tty_set_raw();
while (1) {
packet = pcap_next(adhandle, &header); // Receive a packet? Process it
if (packet != NULL) {
got_packet(&header, packet, adhandle);
}
if (linux_kbhit()) { // User types message to channel
kb_char = linux_getch(); // Get user-supplied character
if (kb_char == 0x03) // Stop loop (exit channel) if user hits Ctrl+C
break;
}
}
tty_restore(save_attr);
close(sockfd);
pcap_close(adhandle);
printf("\nCapture complete.\n");
}
In got_packet:
got_packet(const struct pcap_pkthdr *header, const u_char *packet, pcap_t *p) {
....do some packet filtering to only handle my packets, set match = 1
if (match == 1) {
if (first_time == 0) {
clock_gettime( CLOCK_MONOTONIC, &start );
first_time++;
}
else {
clock_gettime( CLOCK_MONOTONIC, &end );
sec_diff = (end.tv_sec - start.tv_sec) + ((end.tv_nsec - start.tv_nsec)/1000000000.0); // Packet difference in seconds
printf("sec_diff: %ld,\tstart_nsec: %ld,\tend_nsec: %ld\n", (end.tv_sec - start.tv_sec), start.tv_nsec, end.tv_nsec);
printf("sec_diffcalc: %ld,\tstart_sec: %ld,\tend_sec: %ld\n", sec_diff, start.tv_sec, end.tv_sec);
start = end; // Set the current to the start for next match
}
}
}
I record all packets with Wireshark to compare, so I expect the difference in my timer to be the same as Wireshark's; however, that is never the case. My output for tv_sec will be correct, however tv_nsec is not even close. Say there is a 0.5 second difference in Wireshark; my timer will say there is a 1.999989728 second difference.
Basically, you will want to use a timer with a higher resolution.
Also, I did not check in libpcap, but I am pretty sure that libpcap can give you the time at which each packet was received. In that case, it is the closest you can get to what Wireshark displays.
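For reference, the struct pcap_pkthdr that your handler already receives carries the capture timestamp in its ts member (a struct timeval), which is what Wireshark displays. A sketch of differencing consecutive capture timestamps (record_gap() is a made-up helper name):
#include <pcap.h>
#include <stdio.h>

static struct timeval prev;  /* capture timestamp of the previous matching packet */
static int have_prev = 0;

/* Call this for every packet that passes the filter. */
void record_gap(const struct pcap_pkthdr *header)
{
    if (have_prev) {
        double diff = (header->ts.tv_sec - prev.tv_sec)
                    + (header->ts.tv_usec - prev.tv_usec) / 1000000.0;
        printf("gap: %.6f s\n", diff);  /* %f, since diff is a double */
    }
    prev = header->ts;
    have_prev = 1;
}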
I don't think that it is the clocks that are your problem, but the way you are waiting on new data. You should use a polling function such as poll() or select() to see when you have new data from either the socket or the keyboard. This allows your program to sleep when there is no new data for it to process, which is likely to make the operating system nicer to your program and schedule it more quickly when it does have data. It also allows you to quit the program without having to wait for the next packet to come in. Alternatively, you could attempt to run your program at really high or real-time priority.
You should consider getting the current time immediately after you get a packet if the filtering can take very long. You may also want to consider multiple threads for this program if you are trying to capture data on a fast and busy network, especially if you have more than one processor. Also note that you are doing some printfs which may block. I noticed you had a function to set a tty to raw mode, which I assume is the standard-output tty. If you are actually using a serial terminal, that could slow things down a lot, but standard out to an xterm can also be slow. You may want to consider setting stdout to fully buffered rather than line buffered; this should speed up the output (man setvbuf).
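For example, a minimal setvbuf() call (the buffer size is arbitrary):
#include <stdio.h>

int main(void)
{
    static char buf[1 << 16];
    /* Must run before anything is written to stdout (see man setvbuf). */
    setvbuf(stdout, buf, _IOFBF, sizeof buf);
    /* ... rest of the program ... */
    return 0;
}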