I have to implement a Linux command, called DCMD, with the following function: it must execute another standard Linux command at a certain date and time, both specified as input.
In short, it should be invoked like this: dcmd "command" "date and time".
The problem is not the date or time; I can handle those properly (checking that the moment lies in the future, that the day, month, and year are valid, and so on).
I also think I've figured out how to handle the command itself: I used "execlp" and it runs properly.
At this point, though, I don't know how to combine the command and the date, that is, how to run the given command at the indicated time.
Could someone explain how to do this?
On Linux, use cron or at to schedule jobs to run later.
cron: add an entry to your crontab file in the standard five-field format minute hour day-of-month month day-of-week command. Note that standard cron has no year field and entries recur, so a one-shot job must be removed from the crontab after it fires. Use crontab to manage your crontab file; see the man page for crontab.
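For example (the script path is made up), this crontab entry runs a script at 14:30 on December 25, and will fire every year on that date until you remove it:

30 14 25 12 * /home/me/backup.sh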
at command: the syntax is at [-V] [-q queue] [-f file] [-mldbv] TIME, which runs the script given on stdin at TIME. Alternatively, run a script in a file with the -f flag. See the man page for at.
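For example (the script path is made up), either pipe the command to at or point it at a file:

echo "/home/me/backup.sh" | at 14:30 Dec 25
at -f /home/me/backup.sh 14:30 Dec 25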
Additional information: this is an Operating Systems assignment in which I have to re-implement some of the features of "at" or "crontab".
I have found a way to solve this problem.
First I call "fork"; in the child process I call "execlp", while the parent process carries on.
To delay the command, I call "sleep" in the child process before the "execlp" (I asked my professor about this a few days ago, and he said it's fine).
But I still have a question: is this a valid method? Does it create zombie processes?
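A minimal sketch of that approach, assuming the delay in seconds has already been computed from the validated date; run_later is a made-up name, and a real dcmd would also split the command string into arguments:

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

/* run_later is a made-up name: run "cmd" after "delay" seconds.
 * Assumes the delay was already computed from the validated date;
 * a real dcmd would also split the command string into arguments. */
static int run_later(const char *cmd, unsigned int delay)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {                  /* child */
        sleep(delay);                /* wait for the scheduled moment */
        execlp(cmd, cmd, (char *)NULL);
        perror("execlp");            /* reached only if execlp fails */
        _exit(127);
    }
    /* parent: ask the kernel to reap the child automatically so it
     * can never linger as a zombie, then carry on */
    signal(SIGCHLD, SIG_IGN);
    return 0;
}

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s command delay_seconds\n", argv[0]);
        return 1;
    }
    if (run_later(argv[1], (unsigned int)atoi(argv[2])) < 0)
        return 1;
    /* parent goes on with whatever else it needs to do */
    return 0;
}

On the zombie question: a zombie only appears if the parent outlives the child without reaping it. Calling wait()/waitpid(), setting SIGCHLD to SIG_IGN as above, or simply letting the parent exit first (init then adopts and reaps the child) all avoid it.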
I created a scripted step (used with step-scripted) that prints the name of each function as the debugger steps.
Now I want to automate the part where I need to type:
thread step-scripted -C MyTrace.Trace
How can I run the above command from a script?
So I will do something like this:
script
while True:
    thread step-scripted -C MyTrace.Trace
First off, there's no reason that a step plan has to do just one step. If you want to step forever, just have the step plan do it: never mark the plan complete, and return False from should_stop. Even more conveniently, if you are using a recent lldb, you can pass arguments to your scripted step plan with the -k <key> -v <value> arguments, so your plan could also take a "count" input and step that many times.
Otherwise, the easiest way to do this is to use the Python interface to implement a custom command that automates this step. SBThreads are the things you step. If you use the command form that takes an SBExecutionContext, described here:
https://lldb.llvm.org/use/python-reference.html#id6
you can get the thread from SBExecutionContext.thread, then use SBThread.StepUsingScriptedThreadPlan to invoke your thread plan for each step. Once you are in Python, writing a loop that runs forever or until some condition is met should be easy. Your command could also take the number of times to step, etc.
Note, you can also run commands in the script interpreter using SBCommandInterpreter.HandleCommand if that seems easier to you.
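A minimal sketch of such a command, assuming your plan class is MyTrace.Trace; the module name mytrace_cmd, the command name step-forever, and the count argument are all made up, and it assumes the debugger is running commands synchronously:

# Hypothetical module "mytrace_cmd.py"; load it with:
#   (lldb) command script import /path/to/mytrace_cmd.py
import lldb

def step_forever(debugger, command, exe_ctx, result, internal_dict):
    """Repeatedly run the MyTrace.Trace scripted step plan.
    Optional argument: number of steps; with no argument, step until
    the process is no longer stopped."""
    args = command.split()
    count = int(args[0]) if args else -1
    thread = exe_ctx.thread               # the currently selected SBThread
    if not thread.IsValid():
        result.SetError("no valid thread")
        return
    while count != 0:
        err = thread.StepUsingScriptedThreadPlan("MyTrace.Trace")
        if err.Fail():
            result.SetError(str(err))
            return
        # stop once the process exits, detaches, etc.
        if exe_ctx.process.GetState() != lldb.eStateStopped:
            break
        if count > 0:
            count -= 1

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand(
        "command script add -f mytrace_cmd.step_forever step-forever")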
I'm trying to figure out how ntpd (from busybox) works.
For the sake of testing, I'm running the following scenario:
set the date/time, using date -s, to an arbitrary value (e.g. 2000-01-01 00:00:00);
run the command ntpd -N -p <server_address> to start the daemon. Just after that, the date/time is successfully synchronized;
change the date/time again, using date -s, to the same value used in the first step (i.e. 2000-01-01 00:00:00).
After that, I expected the date/time to be synchronized again, but this doesn't happen, even after waiting a couple of hours.
My question: is my understanding of ntpd's behavior correct? Should the date/time be resynchronized automatically after the third step? If not, what should I do to resync it?
I would check inside the trimmed-down busybox implementation whether this use case is actually covered. Some options may simply be ignored, which can cause confusion.
If it is not covered, and this is a Yocto-based embedded system, you should consider bringing in the real, complete ntpd instead of the busybox one.
I want to make a terminal app that stores information about files/directories, and I need a way to keep that information when a file is moved or renamed.
What I thought I could do is have a function execute before any command is run. I found this:
http://www.twistedmatrix.com/users/glyph/preexec.bash.txt
But I was wondering if this would be a good way to go about it. Or should I do something else?
I suppose I would like to call that function from a C program whenever mv is entered.
If what you're trying to do is attach some sort of metadata to files, there's a much better supported way to do that -- extended attributes.
Another solution might be to use the file's inode number, which a rename or move within the same filesystem does not change, as an index into a database you maintain yourself.
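To illustrate the extended-attributes route, a minimal sketch (Linux-specific; the attribute name user.notes.comment is made up, and the filesystem must support user xattrs). The attribute survives mv and rename as long as the file stays on the same filesystem:

#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>   /* Linux extended-attribute API */

/* "user.notes.comment" is a made-up attribute name; the filesystem
 * must support user xattrs (ext4 does by default). */
static int tag_file(const char *path, const char *note)
{
    return setxattr(path, "user.notes.comment", note, strlen(note), 0);
}

static int show_tag(const char *path)
{
    char buf[256];
    ssize_t n = getxattr(path, "user.notes.comment", buf, sizeof buf - 1);
    if (n < 0) { perror("getxattr"); return -1; }
    buf[n] = '\0';
    printf("%s: %s\n", path, buf);
    return 0;
}

int main(int argc, char *argv[])
{
    if (argc == 3)                       /* tag:  prog file "note" */
        return tag_file(argv[1], argv[2]) ? 1 : 0;
    if (argc == 2)                       /* read: prog file */
        return show_tag(argv[1]) ? 1 : 0;
    fprintf(stderr, "usage: %s file [note]\n", argv[0]);
    return 1;
}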
Can you alias the mv command in .profile or .bashrc?
alias mv=/usr/local/bin/mymv
where mymv is a compiled executable that runs your C code function and then calls /usr/bin/mv.
precmd and preexec add some overhead to every command bash runs, even if the command never calls mv. The downside of the alias is that it requires new code in /usr/local, and if scripts or users invoke /usr/bin/mv instead of mv it will not do what you want. Needing something like this often means there is a better way to handle the problem, with some kind of service (daemon) or driver. Plus, what happens if your C code cannot correctly handle interesting input like
mv somefile /dev/null
If you want to run a command each time after any command is executed in the terminal, put the following in ~/.bashrc:
PROMPT_COMMAND="your_command;$PROMPT_COMMAND"
If you want your command to be executed each time before mv runs, put the following in ~/.bashrc:
alias mv="your_script"
Make sure your script executes the real mv when needed.
You can use the inotify API to track filesystem changes. It's a good solution, but once the user removes a file, it's already gone.
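A minimal sketch of watching one directory for move/rename events with inotify (Linux-specific; the directory to watch comes from the command line):

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

/* Minimal sketch: report files moved into or out of one directory. */
int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s directory\n", argv[0]);
        return 1;
    }

    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    if (inotify_add_watch(fd, argv[1], IN_MOVED_FROM | IN_MOVED_TO) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    /* buffer aligned as the inotify man page recommends */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            const struct inotify_event *ev = (const struct inotify_event *)p;
            if (ev->len)
                printf("%s: %s\n",
                       (ev->mask & IN_MOVED_FROM) ? "moved away" : "moved in",
                       ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}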
You might be able to make use of the DEBUG trap in Bash.
From man bash:
If a sigspec is DEBUG, the command arg is executed before every
simple command, for command, case command, select command, every
arithmetic for command, and before the first command executes in
a shell function
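For example, a sketch of such a trap ($BASH_COMMAND holds the command about to run) that announces every mv before it executes:

trap '[[ $BASH_COMMAND == mv\ * ]] && echo "about to run: $BASH_COMMAND"' DEBUG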
I found this article when I was forced to work in tcsh and wanted to ensure a specific environment variable was present when the user ran a program from a certain folder (without setting that variable globally).
tcsh can do this.
tcsh has special aliases, one of which is precmd.
This can be used to run a script just before the shell prompt is printed.
e.g. I used alias precmd 'bash $HOME/.local/bin/on_cd.sh'
This might be one of the very few useful features in csh.
It is a shame, but I don't think the same feature exists in plain sh derivatives (ash, dash, etc.); in bash the closest equivalent is PROMPT_COMMAND, as noted in an answer above. Related answer.
I have written a program that calculates the battery level available on my laptop, and I have defined a threshold value in the program. Whenever the battery level falls below the threshold, I would like to start another process. I have used system("./invoke.o"), where invoke.o is the program that has to run, and a script runs the battery-level checker every 5 seconds. Everything works fine, but when I close the bash shell the automatic invocation of invoke.o stops happening. How can I make invoke.o be invoked regardless of whether bash is closed? I am using Ubuntu Linux.
Try running it as: nohup ./myscript.sh, where the nohup command allows you to close the shell without terminating the process.
You could run your script as a cron job. This lets cron set up standard input and output for you, reschedule the job, and it will send you email if it fails.
The alternative is to run a script in the background with all input and output, including standard error output, redirected.
While you could make a proper daemon out of your program, that kind of effort is probably not necessary.
man nohup
man upstart
man 2 setsid (more complex, leads to longer trail of breadcrumbs on daemon launching).
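If you do want the setsid route from C, a minimal sketch of detaching from the terminal (not a full daemon; see the man pages above for the rest):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

/* Detach from the controlling terminal so the program survives the
 * shell closing; a real daemon does more than this. */
static int detach(void)
{
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid > 0) _exit(0);        /* parent exits; child lives on */

    if (setsid() < 0) return -1;  /* new session, no controlling tty */

    /* re-point stdio at /dev/null so the lost terminal can't hurt us */
    int fd = open("/dev/null", O_RDWR);
    if (fd >= 0) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > STDERR_FILENO) close(fd);
    }
    return 0;
}

int main(void)
{
    if (detach() < 0) return 1;
    /* the battery-checking loop would continue here */
    sleep(60);
    return 0;
}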
I'm using unix system() calls to gunzip and gzip files. With very large files these sometimes get aborted (i.e. on the cluster compute nodes), while other times (i.e. on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?
The calling thread should block until the task you initiated with system() completes. If what you are observing is that the call returns before the file operation has completed, that indicates the spawned operation failed for some reason.
What does the return value indicate?
Almost certainly not a problem with your use of system(), but with the operation you're performing. Always check the return value, but even more importantly, you'll want to see the output of the command you're calling. For non-interactive use, it's often best to write stdout and stderr to log files. One way to do this is to write a wrapper script that checks for the underlying command, logs the command line, redirects stdout and stderr (and closes stdin if you want to be careful), then execs the command line. Run this via system() rather than the OS command directly.
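To make "always check the return value" concrete, a minimal sketch (the file and log paths are placeholders) that decodes the status system() hands back:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

/* File and log paths are placeholders. */
int main(void)
{
    int status = system("gunzip -v /tmp/example.txt.gz 2>>/tmp/gunzip.err");
    if (status == -1) {
        perror("system");                 /* couldn't even run the shell */
    } else if (WIFEXITED(status)) {
        printf("gunzip exited with code %d\n", WEXITSTATUS(status));
    } else if (WIFSIGNALED(status)) {
        printf("gunzip killed by signal %d\n", WTERMSIG(status));
    }
    return 0;
}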
My bet is that the failing machines have limited disk space, or are missing either the target file or the actual gzip/gunzip commands.
"I'm using unix system() calls to gunzip and gzip files."
Probably silly question: why not use zlib directly from your application?
And system() isn't a system call; it is a wrapper around fork()/exec()/wait(). Check the system() man page. If it doesn't unblock, your application may be interfering with wait() somehow - e.g. do you have a SIGCHLD handler?
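To illustrate the zlib suggestion above, a minimal in-process decompressor (file names come from the command line; compile with -lz):

#include <stdio.h>
#include <zlib.h>   /* link with -lz */

/* Minimal sketch: decompress a .gz file in-process instead of
 * shelling out to gunzip via system(). */
int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s input.gz output\n", argv[0]);
        return 1;
    }
    gzFile in = gzopen(argv[1], "rb");
    if (!in) { perror("gzopen"); return 1; }
    FILE *out = fopen(argv[2], "wb");
    if (!out) { perror("fopen"); return 1; }

    char buf[1 << 16];
    int n;
    while ((n = gzread(in, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, out);
    if (n < 0) {
        int err;
        fprintf(stderr, "gzread: %s\n", gzerror(in, &err));
        return 1;
    }
    gzclose(in);
    fclose(out);
    return 0;
}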
If it's a Linux system, I would recommend using strace to see what's going on and which syscall blocks.
You can even attach strace to already running processes:
# strace -p $PID
It sounds like I'm running into the same intermittent issue, which suggests a timeout of some kind. My script runs every day. I'm starting to believe gzip has a timeout.
The command:
gzip -vd filename.txt.gz 2>> tmp/errorcatch.txt 1>> logfile.log
stderr shows: Error for filename.txt.gz
The script then moves on to the next command, 'cp filename* new/directory/', which leaves the still-zipped version of filename in the new directory.
stdout from an earlier gzip run shows a successful unzip of the SAME file:
filename.txt.gz: 95.7% -- replaced with filename.txt
The successfully unzipped output file is not present in the source or the new directory.
After the alerts fire, a manual run of 'gzip -vd filename.txt.gz' never fails.
Details:
Only one call in the script unzips that file
The unzip call is inside a function (for more robust logging and alerting)
Unable to strace in production
Unable to replicate locally
In occurrences over the last month, I found no consistency in file size, only
I'll simply work around it with retry logic and general scripting improvements, but I want the next googler to know they're not crazy: this is happening to other people!