busybox ntpd does not resync date/time after changing it - ntp

I'm trying to figure out how ntpd (from busybox) works.
I'm running the following scenario, for the sake of a test:
1. Set the date/time, using date -s, to an arbitrary value (e.g. 2000-01-01 00:00:00).
2. Run ntpd -N -p <server_address> to start the daemon. Just after that, the date/time is successfully synced.
3. Change the date/time again, using date -s, back to the value used in step 1 (i.e. 2000-01-01 00:00:00).
After that, I expected the date/time to be synchronized again, but this does not happen, even if I wait for a couple of hours.
My question is: is my understanding of ntpd's behavior correct? Should the date/time be resynced automatically after step 3? If not, what should I do to resync the date/time?

I would check internally, in the trimmed-down busybox implementation, whether this use case is actually covered. Some options may simply be ignored, and that can cause confusion.
If it is not, and this is a Yocto-based embedded system, you should consider bringing in the real, complete ntpd instead of the busybox one.
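As a quick manual workaround, a hedged sketch (assuming your BusyBox build includes the -q one-shot option and the killall applet; <server_address> is the same placeholder as in the question) is to stop the daemon, do a one-shot sync, and start it again:
killall ntpd                      # stop the running daemon, if any
ntpd -q -n -p <server_address>    # one-shot: query the server, step the clock, then exit
ntpd -N -p <server_address>       # start the daemon again as before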

Related

How to avoid running Snakemake rule after input or intermediary output file was updated

Even if the output files of a Snakemake build already exist, Snakemake wants to rerun my entire pipeline only because I have modified one of the first input or intermediary output files.
I figured this out by doing a Snakemake dry run with -n, which gave the following reason for an updated input file:
Reason: Updated input files: input-data.csv
and this message for updated intermediary files:
reason: Input files updated by another job: intermediary-output.csv
How can I force Snakemake to ignore the file update?
You can use the option --touch to mark them up to date:
--touch, -t
Touch output files (mark them up to date without
really changing them) instead of running their
commands. This is used to pretend that the rules were
executed, in order to fool future invocations of
snakemake. Fails if a file does not yet exist.
Beware that this will touch all your files and thus modify the timestamps to put them back in order.
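A minimal usage sketch (rule and file names are whatever your workflow defines; depending on your Snakemake version you may also need to pass --cores even for --touch):
snakemake -n          # dry run: see which rules would rerun and why
snakemake --touch     # mark existing outputs as up to date (add --cores 1 if your version requires it)
snakemake -n          # dry run again: nothing should be scheduled now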
In addition to Eric's answer, see also the ancient flag to ignore timestamps on input files.
Also note that the Unix command touch can be used to modify the timestamp of an existing file and make it appear older than it actually is:
touch --date='2004-12-31 12:00:00' foo.txt
ls -l foo.txt
-rw-rw-r-- 1 db291g db291g 0 Dec 31 2004 foo.txt
In case --touch didn't work out as expected (the official documentation says it may need to be combined with --force, --forceall or --forcerun to force the "touch" when it doesn't work by itself), ancient is not an option or would require too many changes to the workflow file, or you hit https://github.com/snakemake/snakemake/issues/823 (which is what happened to me when I tried --force and --force*), here is what I did to solve the problem:
I noticed that there were jobs that shouldn't be running since I put files in the expected paths.
I identified the input and output files of the rules that I didn't want to run.
Following the order in which the rules would have been executed, I ran touch on the input files and then on the output files (taking the order of the rules into account!).
That's it. Now that the timestamps are updated according to the rule order and to the input and output files, Snakemake will not detect any "updated" files.
This is the manual method (a small sketch follows below), and I think it is the last resort when the methods mentioned by others don't work or are not an option.
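A minimal sketch of that manual procedure, using the file names from the question plus a hypothetical final-output.csv, touched in dependency order (inputs first, then outputs):
touch input-data.csv             # input of the first rule
touch intermediary-output.csv    # its output, and input of the next rule
touch final-output.csv           # hypothetical output of the last rule to skip
snakemake -n                     # dry run to confirm nothing is scheduled anymore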

How to display top largest files in a non blocking manner on linux?

For years I have been using variations of the du command below to produce a report of the largest files under a specific location, and most of the time it has worked well.
du -L -ch /var/log | sort -rh | head -n 10 &> log-size.txt
This proved to get stuck in several cases, in a way that prevented stopping it even with the timeout -s KILL 5m ... approach.
A few years back this was caused by stalled NFS mounts, but more recently I have run into it on VMs where I didn't use NFS at all. Apparently there is roughly a 1-in-30 chance of hitting this on OpenStack builds.
I read that following symbolic links (-L) can block du in some cases if there are loops, but my tests failed to reproduce the problem, even when I created a loop.
I cannot avoid following the symlinks because that's how the files are organized.
What would be a safer alternative for generating this report, one that does not block, or at least can be constrained to a maximum running duration? It is essential to limit the execution of this command to a number of minutes; if I can also get a partial result on timeouts, or some debugging info, even better.
If you don't care about sparse files and can make do with apparent size (rather than the on-disk size), then ls should work just fine:
ls -L --sort=s | head -n 10 > log-size.txt
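Another option, if you want to keep du's on-disk sizes but put a hard cap on the run time, is to wrap it in timeout and keep whatever partial output was produced. This is only a sketch (paths, temporary file names and the 5-minute limit are examples), and it still cannot kill a process stuck in uninterruptible sleep on a dead NFS mount, but for ordinary slowness it enforces the limit and leaves a partial report behind:
: > log-size.txt                                                    # start a fresh report
if ! timeout -k 10s 5m du -L -ch /var/log > du-raw.txt 2> du-errors.txt; then
    echo "WARNING: du timed out or failed; report may be partial" >> log-size.txt
fi
sort -rh du-raw.txt | head -n 10 >> log-size.txt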

How can I set file creation times in ZFS?

I've just got a NAS running ZFS and I'd like to preserve creation times when transferring files onto it. Both linux/ext4 (where the data is now) and ZFS store a creation time or birth time; in the case of ZFS it is even reported by the stat command. But I haven't been able to figure out how to set the creation time of a file so that it mirrors the creation time on the original file system, unlike an ext4->ext4 transfer, where I can feed debugfs a script to set the file creation times.
Is there a tool similar to debugfs for ZFS?
PS. To explain better:
I have a USB drive attached to a Ubuntu 14.04 laptop. It holds a file system where I care about the creation date (birth date) of the individual files. I consult these creation timestamps often using a script based on debugfs, which reports it as crtime.
I want to move the data to a NAS box running ZFS, but the methods I know (scp -p -r, rsync -a, and tar, among others I've tried) preserve the modification time but not the creation time.
If I were moving to another ext4 file system I would solve the problem using the fantastic tool debugfs. Specifically I can make a list of (filename, crtime) pairs on the source fs (file system), then use debugfs -w on the target fs to read a script with lines of the form
set_inode_field filename crtime <value>
I've tested this and it works just fine.
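For concreteness, applying such a script with debugfs looks roughly like this (the device name and script file name are placeholders; the <value> format is as described above):
# crtime-script.txt contains lines of the form:
#   set_inode_field filename crtime <value>
debugfs -w -f crtime-script.txt /dev/sdb1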
But my target fs is not ext4 but ZFS and although debugfs runs on the target machine, it is entirely useless there. It doesn't even recognize the fs. Another debug tool that lets you alter timestamps by editing an inode directly is fsdb; it too runs on the target machine, but again I can't seem to get it to recognize a ZFS file system.
I'm told by the folks who sold me the NAS box that debugfs and fsdb are not meant for ZFS filesystems, but they haven't been able to come up with an equivalent. So, after much googling and trying out things I finally decided to post a question here today, hoping someone might have the answer.
I'm surprised at how hard this is turning out to be. The question of how to replicate a dataset so all timestamps are identical seems quite natural from an archival point of view.
Indeed, neither fsdb nor debugfs is likely to be suitable for use with ZFS. What you might need to do instead is find an archive format that will preserve the crtime field that is presumably already set for the files on your fileserver. If there is a version of pax or another archiving tool for your system, it may be able to do this (cf. the -pe "preserve everything" flag for pax, which in current versions it seems does not preserve "everything", viz. it does not preserve crtime/birth_time). You will likely have more success finding an archiving application that is "crtime aware" than trying to set the creation times by hacking on the ZFS-based FreeBSD system with what are likely to be rudimentary tools.
You may be able to find more advanced tools on OpenSolaris-based systems like Illumos or SmartOS (e.g. mdb). Whether it would be possible to transfer your data to a ZFS dataset on one of those platforms and then rewrite the crtime fields by combining the tools they have with, say, dtrace is more of a theoretical question. If it worked, you could export the pool and its datasets to FreeBSD; exporting a pool does seem to preserve the crtime time stamps. If you are able to preserve crtime while dumping your ext4 filesystem to a ZFSonLinux dataset on the same host (nb: I have not tested this), you could then use zfs send to transfer the whole filesystem to your NAS.
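For reference, the send/receive step in that last idea would look roughly like this. This is only a sketch: the pool/dataset names and the NAS hostname are placeholders, and it assumes you have already managed to land the data on a ZFS dataset with crtime intact:
zfs snapshot tank/archive@migrate                                   # on the intermediate ZFS host
zfs send tank/archive@migrate | ssh nas zfs receive backup/archive  # stream the dataset to the NAS
# afterwards, spot-check crtime on the NAS (FreeBSD stat syntax):
#   stat -f "Born:%t%SB" /backup/archive/some-file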
This coreutils bug report may shed some light on the state of user- and operating-system-level tools on Linux. Arguably, the filesystem-level crtime field of an inode should be difficult to change. While ZFS on FreeBSD "supports" crtime, the state of low-level filesystem debugging tools on FreeBSD may not have kept pace in earlier releases (cf. the zdb manual page). Are you sure you want to "set" (or reset) inode creation times? Or do you want to preserve them after they have been set on a system that already supports them?
On a FreeBSD system if you stat a file stored on a ZFS dataset you will often notice that the crtime field of the file is set to the same time as the ctime field. This is likely because the application that wrote the file did not have access to library and kernel functions required to set crtime at the time the file was "born" and its inode entries were created. There are examples of applications / libraries that try to preserve crtime at the application level such as libarchive(3) (see also: archive_entry_atime(3)) and gracefully handle inode creation if the archive is restored on a filesystem that does not support the crtime field. But that might not be relevant in your case.
As you might imagine, there are a lot of applications that write files to filesystems ... especially with Unix/POSIX systems where "everything is a file". I'm not sure if older applications would need to be modified or recompiled to support those fields, or whether they would pick them up transparently from the host system's C libraries. Applications being used on older FreeBSD releases or on a Linux system without ext4 could be made to run in compatibility mode on an up to date OS, for example, but whether they would properly handle the time fields is a good question.
For me running this little script as sh birthtime_test confirms that file creation times are "turned on" on my FreeBSD systems (all of which use ZFS post v28 i.e. with feature flags):
#!/bin/sh
#birthtime_test
uname -r
if [ -f new_born ] ; then rm -f new_born ; fi
touch new_born
sleep 3
touch -a new_born
sleep 3
echo "Hello from new_born at:" >> new_born
echo `date` >> new_born
sleep 3
chmod o+w new_born
stat -f "Name:%t%N
Born:%t%SB
Access:%t%Sa
Modify:%t%Sm
Change:%t%Sc" new_born
cat new_born
Output:
9.2-RELEASE-p10
Name: new_born
Born: May 7 12:38:35 2015
Access: May 7 12:38:38 2015
Modify: May 7 12:38:41 2015
Change: May 7 12:38:44 2015
Hello from new_born at:
Thu May 7 12:38:41 EDT 2015
(NB: the chmod operation "changes" but does not "modify" the file contents - this is what the echo command does by adding content to the file. See the touch manual page for explanations of the -m and -a flags).
This is the oldest FreeBSD release I have access to right now. I'd be curious to know how far back in the release cycle FreeBSD is able to handle this (on ZFS or UFS2 file systems). I'm pretty sure this has been a feature for quite a while now. There are also OSX and Linux versions of ZFS that it would be useful to know about regarding this feature.
Just one more thing ...
Here is an especially nice feature for simple "forensics". Say we want to send our new_born file back to when time began, back to the leap second that never happened and when, in a moment of timeless time, Unix was born ... :-) [1]. We can just change the date using touch -d and everyone will think new_born is old and wise, right?
Nope:
~/ % touch -d "1970-01-01T00:00:01" new_born
~/ % stat -f "Name:%t%N
Born:%t%SB
Access:%t%Sa
Modify:%t%Sm
Change:%t%Sc" new_born
Name: new_born
Born: May 7 12:38:35 2015
Access: Jan 1 00:00:01 1970
Modify: Jan 1 00:00:01 1970
Change: May 7 13:29:37 2015
It's always more truthful to actually be as young as you look :-)
Time and Unix - a subject both practical and poetic: after all, what is "change"; and what does it mean to "modify" or "create" something? Thanks for your great post Silvio - I hope it lives on and gathers useful answers.
You can improve and generalize your question if you can be more specific about your requirements for preserving, setting, archiving of file timestamp fields. Don't get me wrong: this is a very good question and it will continue to get up votes for a long time.
You might take a look at Dylan Leigh's presentation Forensic Timestamp Analysis of ZFS, or even contact Dylan for clues on how to access crtime information.
[1] There was a legend that claimed in the beginning, seconds since long (SSL) ago was never less than date -u -j -f "%Y-%m-%d:%T" "1970-01-01:00:00:01" "+%s" because of a leap second ...

How to identify gnome-terminal profile?

I posted a question on askubuntu a while back. As there has been no action there, and I have also dug some more, I'll try here, possibly a more appropriate place (I don't know if it is still possible to move questions; I do not get any of those options).
Anyhow:
Is there a way to get the gnome-terminal profile ID? I need it in a bash script, to do e.g.:
gconftool-2 "do some change to some value for current profile."
In my search for an answer I have made some progress, but no satisfying solution. To be honest, it truly scares me how shielded the application is against modification from the command line, given that it is a terminal emulator! To me it is incomprehensible.
Short of touching the source of gnome-terminal (I do not want a custom version), is there some legitimate way to get this? Given that it is a wrapper around vte and uses various shared libraries, perhaps there is some way I haven't thought of, etc.
Adding some C code into the mix is OK.
So far:
I have checked out the "save-config" option, but as it is (1) not satisfactory, i.e. not 100% reliable, and (2) more importantly, slated for removal, it fails completely. See my own answer below for more detail.
There is no environment variable for this.
dbus: There don't seem to be any messages transmitted, or any functions available, for this kind of information. I have tested both the current (3.6.0) version and the latest development version.
injection: though it is probably possible, and I have played around with injecting custom code into it, it is such an error-prone endeavour that it is not a solution.
If anyone wonders, etc.
Decided to have another look at this - and made a little progress.
Using the built-in option --save-config, there are these properties of interest:
Role=gnome-terminal-window-2587-1856448950-1359348087
ActiveTerminal=Terminal0xa896200
Geometry=110x87+900+1
WorkingDirectory=/home/xxx/tmp
Looking at it more closely, I opened two windows in short succession and did a save-config.
Role
We can split it to the various parts:
gnome-terminal-window
2587
1856448950
1359348087
PID
2587 is the same for both, and after a quick pstree 2587 -p we find it to be the PID. Further, an echo $$ locates our bash (or whichever shell one prefers).
Time of start
Now, the second number is wildly different between the two, a clue that it is probably a random value. The last one, though, differs only in the final digit; most probably a timestamp. I know I'm in the tmp directory for this window, so, using our knowledge of the proc file system:
# btime: boot time, in seconds since the Epoch
$ cat /proc/stat | grep ^btime | cut -d' ' -f2
1359039155
# starttime: The time in jiffies the process started after system boot.
$ cat /proc/$$/stat | cut -d' ' -f22
30893222
# WANT: 1359348087
btime + starttime / Hertz
1359039155 + (30893222 / 100) = 1359348087.22 ~ 1359348087
OK. The last number is the start timestamp in seconds since the Epoch. Unfortunately it is not in jiffies but a rounded value, so if we start several windows from e.g. a script, we can end up with the same value.
(After some checking it also seems the seconds are rounded to nearest, not truncated towards zero.)
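Putting the pieces together, a small sketch that recomputes the timestamp part of the Role for the current shell (it assumes a Linux /proc; getconf CLK_TCK supplies the Hertz value instead of hard-coding 100):
btime=$(grep ^btime /proc/stat | cut -d' ' -f2)    # boot time, seconds since the Epoch
starttime=$(cut -d' ' -f22 /proc/$$/stat)          # process start, in jiffies after boot
hz=$(getconf CLK_TCK)                              # jiffies per second
echo $(( btime + starttime / hz ))                 # integer division truncates; the Role value appears rounded to nearest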
Random value
OK. So what about the value after PID? Most probably a random value, but to be sure. To check this we have to go to the source.
$ git clone git://git.gnome.org/gnome-terminal
$ gnome-terminal --version
GNOME Terminal 3.6.0
$ git log --grep="3\.6\.0"
commit f4d291a90dc4f513fc15f80fdebcdc3c3349b70a
...
Version 3.6.0
$ git checkout f4d291a90dc4f513fc15f80fdebcdc3c3349b70a
After some digging we find:
/* terminal-util.c */
void
terminal_util_set_unique_role (GtkWindow *window, const char *prefix)
{
  char *role;

  role = g_strdup_printf ("%s-%d-%d-%d",
                          prefix,
                          getpid (),
                          g_random_int (),
                          (int) time (NULL));
  gtk_window_set_role (window, role);
  g_free (role);
}
OK. Not only do we confirm that the second number is a random value, but also that the PID and time are correct.
Geometry
xwininfo -id $(xdotool getactivewindow) | \
grep '^\s*-geometry' | \
sed 's/^\s*[^ ]* \(.*\)/\1/'
# yields 110x87+900+1
OK. Now we have three values to check against:
Time
Geometry
Path
The problem is that, even with this, we can easily have two windows sharing the same values. And more importantly, some genius has decided to remove this option from the application.
Terminal Window hex
Looking further at the code, one finds that the hex value in ActiveTerminal etc. is a pointer to the current in-memory address of a struct holding the current window. In other words, not very useful unless one wants to hack memory mappings.

how to write a command on linux

I have to implement a Linux command, called DCMD, which has the following function: it must execute another standard Linux command at a certain date and time, both specified as input.
In short, I should invoke it like this: dcmd "command" "date and time".
Well, the problem is not the date or hour; in fact I can handle that properly: checking that it lies in the future, that the day, month and year are valid, etc.
I also think I've figured out how to handle the command itself: I used the "execlp" system call and it runs properly.
At this point, though, I don't know how to combine the command and the date, that is, how to run the given command at the indicated time.
Could someone explain to me how to do this?
On Linux, use cron or at to schedule jobs to run later.
cron: Specify a cron job for your specific date. A crontab entry has the format minute hour day-of-month month day-of-week command and is added to your crontab file; note that standard cron has no year field, so remove the entry after it has run (or have the job remove itself) if you only want it to run once. Use crontab to manage your crontab file. Man page for crontab
at: Syntax: at [-V] [-q queue] [-f file] [-mldbv] TIME runs the script given on stdin at TIME. Alternatively, run a script stored in a file with the -f flag. For a one-shot job at a specific date and time, at is the more natural fit. Man page for at
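For example (the script path, date and time are placeholders; the [CC]YY-MM-DD date form is accepted by the common Linux at, while -t is the more portable POSIX spelling):
echo "/home/user/myjob.sh" | at 10:30 2025-07-01    # commands read from stdin
at -f /home/user/myjob.sh 10:30 2025-07-01          # or read them from a file
at -t 202507011030 -f /home/user/myjob.sh           # POSIX -t [[CC]YY]MMDDhhmm form
atq                                                 # list pending jobs
atrm 4                                              # remove job number 4 (as reported by atq)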
Additional information: This is an Operating Systems assignment in which I have to re-implement some of the features of "at" or "crontab".
I have found a way of how to solve this problem.
First of all I call fork; then, in the child process, I call execlp, while the parent process goes on.
If I want to delay the command, I call sleep in the child process before the execlp (I asked my professor about this point a few days ago, and he said that it's fine).
But I have this question: is this a valid method? Does it create zombie processes?
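As an illustration only, here is a shell analog of that structure (not the C code the assignment requires; dcmd.sh, its arguments and the fixed delay are hypothetical). The same shape in C would be fork(), sleep() in the child, then execlp(); as for the zombie question, a finished child does remain a zombie until the parent calls wait()/waitpid() (or handles SIGCHLD), or until the parent itself exits and init reaps the child.
#!/bin/sh
# hedged sketch: shell analog of "fork, sleep in the child, then exec the command"
# usage: dcmd.sh '<command>' <delay_seconds>   (name and arguments are hypothetical)
cmd=$1
delay=$2
(
    sleep "$delay"    # the "child" waits until the target time
    eval "$cmd"       # then runs the requested command
) &
echo "scheduled '$cmd' to run in $delay seconds (child PID $!)"
# the parent continues immediately; in the C version, call wait()/waitpid()
# (or handle SIGCHLD) if you do not want the finished child to linger as a zombie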
