I'm trying to write a makefile to replace one of the scripts used in building a fairly large application.
The current script compiles one file at a time, and the primary reason for using make is to parallelise the build process. Using make -j 16 I currently get a factor of 4 speedup on our office server.
But what I've lost is some readability of the output. The compilation program for a file bundles up a few bits and pieces of work, including running custom pre-compilers and running the gcc command. Each of these steps outputs some information, and I would prefer make to buffer the output from each command and then show the whole lot in one go.
Is it possible to make make do this?
If you upgrade to GNU make 4.0, then you can use the built-in output synchronization feature to get what you want.
If you don't want to upgrade, then you'll have to wrap each of your recipes in a small program that manages the output. Or you can set the SHELL variable to something that does it for you. Searching the internet should give you some examples.
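For example, with GNU make 4.0 or later, output synchronization is just a command-line switch:

make -j 16 --output-sync=target    # short form: -Otarget; buffers each recipe's output and prints it in one block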
A simple way to accomplish this is to send all the log output to a log directory, with each file named, say:
log_file_20131104_12013478_b.txt // log_file_<date>_<time>_<sequence letter>.txt
and then simply cat them all together as your last make job in the dependency chain:
cat log_dir/log_file_20131104_12013478_*.txt > log_file_20131104_12013478.txt
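A minimal makefile sketch of that scheme (the object names, paths, and precompile step are hypothetical stand-ins for your build; recipe lines must be indented with tabs):

OBJS := foo.o bar.o

all: $(OBJS)
	cat log_dir/log_file_*.txt > log_file_combined.txt   # last job stitches the logs together

%.o: %.c
	./precompile $< > log_dir/log_file_$*.txt 2>&1       # pre-compiler output, buffered per file
	gcc -c $< -o $@ >> log_dir/log_file_$*.txt 2>&1      # compiler output appended to the same log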
With makepp this is the default behaviour as soon as you use -j. All the individual outputs (and "entering directory" messages) get collected and are output together as soon as the command terminates.
Even if the output files of a Snakemake build already exist, Snakemake wants to rerun my entire pipeline only because I have modified one of the first input or intermediary output files.
I figured this out by doing a Snakemake dry run with -n, which gave the following report for the updated input file:
Reason: Updated input files: input-data.csv
and this message for updated intermediary files:
reason: Input files updated by another job: intermediary-output.csv
How can I force Snakemake to ignore the file update?
You can use the option --touch to mark them up to date:
--touch, -t
Touch output files (mark them up to date without
really changing them) instead of running their
commands. This is used to pretend that the rules were
executed, in order to fool future invocations of
snakemake. Fails if a file does not yet exist.
Beware that this will touch all your files and thus modify the timestamps to put them back in order.
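A minimal usage sketch:

snakemake --touch    # mark existing outputs as up to date
snakemake            # subsequent runs no longer consider the inputs 'updated'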
In addition to Eric's answer, see also the ancient flag to ignore timestamps on input files.
Also note that the Unix command touch can be used to modify the timestamp of an existing file and make it appear older than it actually is:
touch --date='2004-12-31 12:00:00' foo.txt
ls -l foo.txt
-rw-rw-r-- 1 db291g db291g 0 Dec 31 2004 foo.txt
In case --touch didn't work out as expected (the official documentation says it needs to be combined with --force, --forceall or --forcerun if it doesn't work by itself), ancient is not an option or would require changing too much of the workflow file, or you ran into https://github.com/snakemake/snakemake/issues/823 (which is what happened to me when I tried --force and --force*), here is what I did to solve it:
I noticed that there were jobs that shouldn't be running since I put files in the expected paths.
I identified the input and output files of the rules that I didn't want to run.
Following the order in which the rules would execute (the ones I didn't want to run), I executed touch on the input files and then on the output files, taking the order of the rules into account!
That's it. Since the timestamps are now updated according to the rule order and to the input and output files, Snakemake will not detect any "updated" files.
This is the manual method, and I think it is the last resort if the methods mentioned by everyone else don't work or are not an option.
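A minimal sketch of that manual ordering, using the file names from the question (final-output.csv is a hypothetical stand-in for the last rule's output):

touch input-data.csv           # input of the first rule
touch intermediary-output.csv  # its output, and input of the next rule
touch final-output.csv         # output of the last rule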
Essentially, I wish to call compiled C code from inside a bash script. I want to be able to call ./a.out from any directory and have it be executed.
This all stems from something pretty simple. I made a curses program that is a screensaver for a terminal. I wish to be able to run a bash command, screensaver, and have it call the C code. I want to be able to call it from anywhere in the filesystem. I am running on a 2013 MacBook, but I think this is more of an unfamiliarity-with-C issue than a hardware issue; I can provide more details if needed.
File is here:
/Users/User/screensaver/screensaver.c
cd /Users/User/screensaver
Running gcc screensaver.c creates a.out.
I can then run
./a.out
And the code runs.
I have tried calling ./Users/User/screensaver/a.out among other things.
This doesn't work; it just says that the file doesn't exist. I've tried using exec and source, but nothing has worked. Surely there must be a way to call this from somewhere else, right? I know I could theoretically save my current directory in an environment variable, cd into the dir, run ./a.out, then on quit cd back into the saved dir, but that seems like too much struggle for what it's worth.
Edit: I saw that I could theoretically put it in my bin, compiled with -o. I haven't tried it, but I don't want to do that because this code is still in development, so I don't want to have to compile and move it every time.
This worked:
"Try to invoke /Users/User/screensaver/a.out without putting a dot at the beginning of the path. There is a paticular security reason why you need to specify ./a.out rather than a.out when you are in the directory which holds the executable."
-tshiono
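A small wrapper makes this convenient during development (a sketch; the ~/bin location and the wrapper name are assumptions, and ~/bin must be on your PATH):

#!/bin/bash
# save as ~/bin/screensaver and chmod +x it; runs the development binary by absolute path
exec /Users/User/screensaver/a.out "$@"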
I have a situation where I submitted jobs that have been running for five days, but due to a bug I introduced, all the work could be lost. I made a system() call to compress the data file and then remove the original uncompressed file, which could be as big as 4G. So I have this in the C code:
strcpy(command,"data"); //// I should have added a forward slash here: "data/"
sprintf(command,"%scompress -c -i %s -o %s",command,name,out_name);
system(command);
remove(name); ///// this is the problem
The bug is in the sprintf line: what I wanted to do was to call a program at data/compress, but due to the missing '/', the system command fails. Thus the data produced is not compressed, AND then the original file is immediately DELETED, leaving me with nothing! If it had been compressed, it would have been OK.
There are currently five running jobs in such a state. I need to divert this behavior somehow so that I don't lose five days' work. I am thinking of creating a fake script named 'datacompress' in the current directory to change the behavior of the running program. Can I do this, or are there better options, if at all?
You can make datacompress a symbolic link to data/compress. Oops, this won't work unless the process's $PATH includes the current directory (.).
Another option: remove the user's write permission to the directory containing name. This will cause the remove() function to fail.
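For example (the directory path is a hypothetical stand-in):

chmod u-w /path/to/data_dir    # remove() of files inside now fails with EACCES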
If your system has Access Control Lists, remove the process's delete permission on the uncompressed file.
While you're trying to come up with a solution, you can suspend the process with:
kill -STOP <pid>
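and let it continue once a workaround is in place:

kill -CONT <pid>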
Create hard links (not symbolic links) to the data files:
ln datafile datafile.bkp
When the program removes the original datafile, the file's contents will remain under the .bkp filename.
And then fix the program to check the error status of important things like the compress command.
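For illustration, the shell equivalent of that check chains the removal on success ($name and $out_name stand in for the C variables):

data/compress -c -i "$name" -o "$out_name" && rm -- "$name"    # delete only if compress succeeded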
I want to make a terminal app that stores information about files/directories. I want a way to keep the information if a file is moved or renamed.
What I thought I could do is have a function execute before any command is run. I found this:
http://www.twistedmatrix.com/users/glyph/preexec.bash.txt
But I was wondering if this would be a good way to go about it. Or should I do something else?
I would like to call that function from a C program whenever mv is entered, I suppose.
If what you're trying to do is attach some sort of metadata to files, there's a much better supported way to do that -- extended attributes.
Another solution might be to use the file's inode number as an index into a database you maintain yourself.
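For example (Linux attr tools; on macOS the equivalent is xattr -w; the file name and attribute are hypothetical):

setfattr -n user.note -v "my note" file.txt    # attach metadata to the file
getfattr -n user.note file.txt                 # read it back
ls -i file.txt                                 # inode number, usable as a database key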
Can you alias the mv command in .profile or .bashrc?
alias mv=/usr/local/bin/mymv
where mymv is a compiled executable that runs your C code function and calls /usr/bin/mv.
precmd and preexec add some overhead to every bash script that gets run, even if the script never calls mv. The downside of the alias is that it requires new code in /usr/local, and if scripts or users employ /usr/bin/mv instead of mv it will not do what you want. Generally, doing something like this often means there is a better way to handle the problem with some kind of service (daemon) or driver. Plus, what happens if your C code cannot correctly handle interesting input like
mv somefile /dev/null
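A shell sketch of the mymv idea, with the logging call standing in for your C bookkeeping function:

#!/bin/bash
# hypothetical /usr/local/bin/mymv: record the operation, then hand off to the real mv
logger -t mymv "$*"    # replace with a call to your own program
exec /usr/bin/mv "$@"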
If you want to run a command each time after some command is executed in the terminal, just put the following in ~/.bashrc:
PROMPT_COMMAND="your_command;$PROMPT_COMMAND"
If you want your command to be executed each time before mv runs, put the following in ~/.bashrc:
alias mv="your_script"
Make sure that your script executes the real mv when needed.
You can use the inotify library to track filesystem changes. It's a good solution, but once the user removes a file, it's already gone.
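For example, with the inotify-tools front end (the watched path is hypothetical):

inotifywait -m -e moved_from -e moved_to /some/dir    # print move events as they happen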
You might be able to make use of the DEBUG trap in Bash.
From man bash:
If a sigspec is DEBUG, the command arg is executed before every
simple command, for command, case command, select command, every
arithmetic for command, and before the first command executes in
a shell function
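A minimal sketch for an interactive shell, with echo standing in for your own handler:

trap '[[ $BASH_COMMAND == mv\ * ]] && echo "about to run: $BASH_COMMAND"' DEBUG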
I found this article when I was forced to work in tcsh and wanted to ensure a specific environment variable was present when the user ran a program from a certain folder (without setting that variable globally).
tcsh can do this.
tcsh has special aliases, one of which is precmd.
This can be used to run a script just before the shell prompt is printed.
e.g. I used alias precmd 'bash $HOME/.local/bin/on_cd.sh'
This might be one of the very few useful features in csh.
It is a shame, but I don't think the same or a similar feature is in bash or other sh derivatives (ash, dash, etc.).
Basically I want to write a program almost like a keylogger. The thing is that, as a network admin, I sometimes don't remember what I did to a machine in a certain case, or sometimes I make howtos and tutorials for Linux. I want to record what I have done.
So basically the idea of this program is:
you type the name of the program (I call it rat for the moment):
$ rat
Welcome everything from now on will be recorded
recording $ ls
file1 file2 file3
recording $ quit
Bye bye
Everything you do will go out to an XML file, something like this:
<?xml version='1.0' encoding='UTF-8' ?>
<rat>
  <command>
    <input>ls</input>
    <output>file1 file2 file3</output>
    <err></err>
  </command>
</rat>
I am doing some tests with fp_in = popen(input, "w"); and with system(), but with popen I can't change directories, and with system I can't properly manage the input and output.
I was also checking if there is something I can add to bash, like a plugin, but haven't found any information.
At some points it feels like I should create another shell (which is way beyond my current abilities) or fork bash/sh. But it shouldn't be that complicated, right?
I am open to suggestions on where to start.
I am rusty with C, so I am reading a lot of basic stuff again.
With the XML file, I was later thinking of making a program to store and/or edit this data so I can create tutorials and howtos.
I can think of many ways of expanding this, up to capturing screenshots so all the stored images go to a file you can upload to a server (for the moment I am glad just to store the data). It could be a useful tool.
PS: I do know this can be used for evil things too.
There already exists the script command, which records all terminal input and output, writing it into a transcript. I would recommend just using that unless you have particular needs that it doesn't meet. Actually, the nicest version of script that I've seen is the NetBSD version, so you may want to look into that if the Linux version doesn't meet your needs.
If you would like to write it yourself, instead of using system, I would recommend that you use fork/exec to create a single shell process, which you copy all input and output into. To get an idea of how this works, I'd recommend looking at the source code for an existing version of script.
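Basic usage of script, for reference:

script session.log    # start recording; a new shell is spawned
ls                    # everything typed and printed is captured
exit                  # leave the recorded shell; session.log holds the transcript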
To my surprise, script is not a shell built-in but a standalone command, which makes it a good model for building what you want. It simply records the text in and out from the command line, which on its own is not quite what you're looking for.
If you make your prompt distinctive (so that you can reliably tell the difference between shell commands and everything else), you can post-process the output of script to achieve your goals. Alternatively, you can hack script to get it to emit the XML you're looking for.
You can also try approaching this from a different angle. Instead of using a regular shell, connect to the machine using ssh or telnet and run your commands that way. Many ssh/telnet clients (PuTTY, for instance) have an option to log all console input and output during the session. You should be able to post-process this log to generate whatever type of logfile that you need.
Depending on your setup, you might not even have to use a second machine (you should be able to ssh into yourself).
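For example, assuming the util-linux version of script (which supports -c):

script -c "ssh localhost" session.log    # record a whole ssh session to your own machine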