I have three directories:
/home/Desktop/1
/home/Desktop/2
/home/Desktop/3
Directories 1 and 2 contain executable C programs, which can be run in the terminal like this: ./tst1 or ./tst2.
Directory 3 contains a bash script that executes a C program (compiled from tst3.c) in the same directory.
I want to execute the C programs from directories 1 and 2 from my bash script in directory 3, like this:
#!/bin/bash
sudo ./tst3
sleep 1
sudo ./tst1 # from directory 1
sleep 2
sudo ./tst2 # from directory 2
Any ideas?
You have multiple options, including at least:
Set PATH to include the directories where your commands are found:
#!/bin/bash
export PATH="$PATH:/home/Desktop/1:/home/Desktop/2:/home/Desktop/3"
sudo tst3 # from directory 3
sleep 1
sudo tst1 # from directory 1
sleep 2
sudo tst2 # from directory 2
Use absolute paths to the commands:
#!/bin/bash
sudo /home/Desktop/3/tst3 # from directory 3
sleep 1
sudo /home/Desktop/1/tst1 # from directory 1
sleep 2
sudo /home/Desktop/2/tst2 # from directory 2
Use relative paths to the commands:
#!/bin/bash
sudo ../3/tst3 # from directory 3
sleep 1
sudo ../1/tst1 # from directory 1
sleep 2
sudo ../2/tst2 # from directory 2
These treat the directories symmetrically. Another alternative is to place the commands in a directory already on your PATH (like $HOME/bin, perhaps), and then run them without any path. This is what I'd normally do — ensure the commands to be run are in a directory on my PATH.
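For example, a minimal sketch of that last alternative (assuming $HOME/bin already exists and is on your PATH; note that sudo may ignore a modified PATH if secure_path is set in /etc/sudoers, so the absolute-path variant is the most robust choice when sudo is involved):
cp /home/Desktop/1/tst1 /home/Desktop/2/tst2 /home/Desktop/3/tst3 "$HOME/bin/"
After that the script can simply run sudo tst3, sudo tst1 and sudo tst2 without any path, as in the first option above.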
If you are simply trying to locate the scripts:
#!/bin/bash
base_dir="$( dirname "$( readlink -e "$0" )" )"/..
sudo "$base_dir/3/tst3"
sleep 1
sudo "$base_dir/1/tst1"
sleep 2
sudo "$base_dir/2/tst2"
or
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"/..
sudo 3/tst3
sleep 1
sudo 1/tst1
sleep 2
sudo 2/tst2
If you want the CWD to be changed to the directory of each executable before executing it:
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"
sudo ./tst3
cd ../1
sleep 1
sudo ./tst1
cd ../2
sleep 2
sudo ./tst2
These scripts will work properly even if they are launched from a directory other than the one they are stored in. They will even work if they are launched via a symlink!
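For example, assuming the last script above is saved as /home/Desktop/3/run_all.sh (a hypothetical name) and made executable, launching it through a symlink still works because readlink -e resolves the link back to the real location:
ln -s /home/Desktop/3/run_all.sh ~/bin/run_all
~/bin/run_all   # readlink -e "$0" yields /home/Desktop/3/run_all.sh, so the script still finds /home/Desktop/{1,2,3}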
I have a computing cluster with four nodes A, B, C and D and Slurm Version 17.11.7. I am struggling with Slurm array jobs. I have the following bash script:
#!/bin/bash -l
#SBATCH --job-name testjob
#SBATCH --output output_%A_%a.txt
#SBATCH --error error_%A_%a.txt
#SBATCH --nodes=1
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=50000
FOLDER=/home/user/slurm_array_jobs/
mkdir -p $FOLDER
cd ${FOLDER}
echo $SLURM_ARRAY_TASK_ID > ${SLURM_ARRAY_TASK_ID}
The script generates the following files:
output_*txt,
error_*txt,
files named according to ${SLURM_ARRAY_TASK_ID}
I run the bash script on my computing cluster node A as follows
sbatch --array=1-500 example_job.sh
The 500 jobs are distributed among nodes A-D. Also, the output files are stored on the nodes A-D, where the corresponding array job has run. In this case, for example, approximately 125 "output_" files are separately stored on A, B, C and D.
Is there a way to store all output files on the node where I submit the script, in this case on node A? That is, I would like to store all 500 "output_" files on node A.
Slurm does not handle input/output file transfer and assumes that the current working directory is on a network filesystem, NFS being the simplest choice; GlusterFS, BeeGFS and Lustre are other popular options for Slurm.
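For illustration, a minimal NFS setup that makes /home/user the same directory on all four nodes could look like the sketch below (hostnames, paths and mount options are assumptions; your cluster may already provide a different shared filesystem):
# on node A (the NFS server), /etc/exports:
/home/user  nodeB(rw,sync,no_subtree_check) nodeC(rw,sync,no_subtree_check) nodeD(rw,sync,no_subtree_check)
# on nodes B, C and D, /etc/fstab:
nodeA:/home/user  /home/user  nfs  defaults  0  0
With the working directory shared like this, all 500 output_* files end up in one place no matter which node ran the task.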
Use an epilog script to copy the outputs back to where the script was submitted, then delete them.
Add to slurm.conf:
Epilog=/etc/slurm-llnl/slurm.epilog
The slurm.epilog script does the copying (make this executable by chmod +x):
#!/bin/bash
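# Runs on the compute node after each job finishes: extract the submitting
# user, the StdOut/StdErr paths, and the submission host and directory from
# "scontrol show job", copy the output files back over scp as that user,
# then remove the local copies.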
userId=`scontrol show job ${SLURM_JOB_ID} | grep -i UserId | cut -f2 -d '=' | grep -i -o ^[^\(]*`
stdOut=`scontrol show job ${SLURM_JOB_ID} | grep -i StdOut | cut -f2 -d '='`
stdErr=`scontrol show job ${SLURM_JOB_ID} | grep -i StdErr | cut -f2 -d '='`
host=`scontrol show job ${SLURM_JOB_ID} | grep -i AllocNode | cut -f3 -d '=' | cut -f1 -d ':'`
hostDir=`scontrol show job ${SLURM_JOB_ID} | grep -i Command | cut -f2 -d '=' | xargs dirname`
hostPath=$host:$hostDir/
runuser -l $userId -c "scp $stdOut $stdErr $hostPath"
rm -rf $stdOut
rm -rf $stdErr
(Switching from PBS to Slurm without NFS or similar shared directories is a pain.)
We need monitoring on the folder below, including its sub-directories, to check whether a directory contains more than 100 files. Also, no file should sit there for more than 4 hours.
If a directory contains more than 100 files we need an alert. I am not sure whether this script is working correctly. Could you please confirm?
Path: /export/ftpaccounts/image-processor/working/
The Script:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
if [ -f ${LOCKFILE} ]; then
exit 0
fi
touch ${LOCKFILE}
NUM=`find /mftstaging/vim/inbound/active \
-ignore_readdir_race -depth -type f -m min +60 -print |
xargs wc -l`
if [[ ${NUM:0:1} -ne 0 ]]; then
echo "${NUM:0:1} files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
The format of your original post made it difficult to tell what you were trying to accomplish. If I understand correctly, you just want to find the number of files in the remote directory that are more than 60 minutes old; with a couple of changes your script should work fine. Try:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
ACTIVE=/mftstaging/vim/inbound/active
[ -f ${LOCKFILE} ] && exit 0
touch ${LOCKFILE}
# NUM=`find /mftstaging/vim/inbound/active \
# -ignore_readdir_race -depth -type f -m min +60 -print |
# xargs wc -l`
NUM=$(find $ACTIVE -type f -mmin +60 | wc -l)
## if [ $NUM -gt 100 ]; then # if you are testing for more than 100 files
if [ $NUM -gt 0 ]; then
echo "$NUM files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
Note: you will want to implement some logic that deals with a stale lock file, and perhaps use trap to ensure the lock is removed regardless of how the script terminates, e.g.:
trap 'rm -rf ${LOCKFILE}' SIGTERM SIGINT EXIT
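A sketch combining both ideas (the 60-minute staleness threshold is an arbitrary choice; adjust it to your environment):
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
# discard a leftover lock that is older than 60 minutes
find "$LOCKFILE" -mmin +60 -delete 2>/dev/null
[ -f "$LOCKFILE" ] && exit 0
trap 'rm -f ${LOCKFILE}' EXIT
touch "$LOCKFILE"
# ... rest of the script as above ...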
I executed the following commands on a Unix machine:
blr59-adm1:~ # ls -l / | grep back
d-w--w--w- 2 root root 4096 Jun 9 13:31 backupmnt
blr59-adm1:~ # [ -x /backupmnt ]
blr59-adm1:~ # echo $?
0
blr59-adm1:~ #
I cannot understand why echo $? prints 0 even though my directory does not have execute permission.
My shell script is failing because of this behavior.
Please correct me if I am doing something wrong.
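Note that the prompt ends in #, i.e. the commands were run as root, and root bypasses the normal permission checks: test -x succeeds on a directory for root regardless of its mode bits. Repeating the test as an unprivileged user (nobody is just an example account) should show the failure you expect:
sudo -u nobody test -x /backupmnt; echo $?   # should print 1, since nobody lacks search permission on /backupmnt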
I am trying to run a Python script (qTrimHIV.py) on ALL the files ending in .fastq in the current directory. I'd like to prepend nohup so that it processes all the files from a single command and keeps running even if I close the terminal or log out.
The individual command:
for x in $(find . -name '*fastq'); do echo $x; python ../qTrimHIV.py -verbose -fastq $x -l 23 -m 20 -w 23 -o ${x%.fastq*}tr -mode 2; done
works well. But when I put nohup in front of the batch command:
nohup for x in $(find . -name '*fastq') ; do echo $x; python ../qTrimHIV.py -verbose -fastq $x -l 23 -m 20 -w 23 -o ${x%.fastq*}tr -mode 2; done
-bash: syntax error near unexpected token `do'
I get the error above.
However, it works well if I put nohup before the actual command "python ../qTrimHIV.py", but this is not really what I want: I want to submit the task for all files at once and have it run until it's done, without having to stay logged in.
I've tried
for x in $(find . -name '*fastq') ; do echo $x; python ../qTrimHIV.py -verbose -fastq $x -l 23 -m 20 -w 23 -o ${x%.fastq*}qtrim -mode 2; done | at now
but it doesn't let me see the progress of the job, and I can't tell whether it is still running or not.
Can anyone give me any suggestions? Should I use a different command, other than nohup?
You can use:
nohup bash -c 'your command, everything including the for loop'
Alternatively, why not put the loop in a shell script and run that script with nohup:
nohup <your script>
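For example, a sketch using the loop from the question (the log file name qtrim.out and the trailing & are just one way to do it):
nohup bash -c 'for x in $(find . -name "*fastq"); do echo "$x"; python ../qTrimHIV.py -verbose -fastq "$x" -l 23 -m 20 -w 23 -o "${x%.fastq*}tr" -mode 2; done' > qtrim.out 2>&1 &
tail -f qtrim.out   # follow the progress; the job keeps running after you log out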
MacPorts installed "libiconv @1.14_0+universal" as a dependency on my system. This happens to be a 32-bit flavor, and it started causing issues when I tried to compile a voice recognition software called Simon Listens.
While googling I found out that Mac OS X actually ships with a 64-bit flavor of libiconv by default, and I was able to locate the files on my system:
$ find /usr/lib -name libiconv*
/usr/lib/libiconv.2.4.0.dylib
/usr/lib/libiconv.2.dylib
/usr/lib/libiconv.dylib
In order to use the system library, the quickest way I could think of was to uninstall MacPorts' version of libiconv so that the system library would be selected as a fallback, since it must (my guess) already be present somewhere further down the search path.
But that failed due to dependencies:
$ sudo port uninstall libiconv @1.14_0+universal
Unable to uninstall libiconv @1.14_0+universal, the following ports depend on it:
...
So now my question is: how can I tell MacPorts to adjust its dependency graph to point to and use the library already on my system?
Another approach to avoiding MacPorts libiconv issues would be to build simon against a fresh MacPorts installation, plus the necessary packages such as cyrus-sasl2, zlib, portaudio and kdesdk4, in a custom location, e.g. /opt/macports-simon.
The following code worked for me on my machine running Mac OS X 10.6.8:
# compile simon on Mac OS X 10.6.8 using MacPorts for the installation of zlib, portaudio and kdesdk4
# http://www.simon-listens.org
# http://sourceforge.net/projects/speech2text/
# get a root shell
sudo -H -i
# prevent idle sleep
pmset -a force sleep 0 displaysleep 0 disksleep 0
mv -i /opt/local /opt/local-off
mv -i /usr/local /usr/local-off
cd /tmp
mkdir buildsimon || exit 1
cd buildsimon || exit 1
# create custom /opt/macports-simon to install zlib, portaudio and kdesdk4
# cf. http://guide.macports.org/#installing.macports.source.multiple
MP_PREFIX='/opt/macports-simon'
unset PATH
export PATH='/bin:/sbin:/usr/bin:/usr/sbin'
curl -L -O https://distfiles.macports.org/MacPorts/MacPorts-2.0.4.tar.bz2
tar -xjf MacPorts-2.0.4.tar.bz2
cd MacPorts-2.0.4 || exit 1
./configure --prefix="${MP_PREFIX}" --with-applications-dir="${MP_PREFIX}/Applications"
make
make install
cd /tmp/buildsimon
unset PATH
export PATH="${MP_PREFIX}/bin:/bin:/sbin:/usr/bin:/usr/sbin"
# get the Portfiles and update the system
port -v selfupdate
# install cyrus-sasl2
port -f uninstall cyrus-sasl2
port clean --all cyrus-sasl2
port extract cyrus-sasl2
cd "$(port dir cyrus-sasl2)"/work/cyrus-sasl-2.1.23
printf '%s\n' H '/\(darwin\[15\]\)/s//\1./g' wq | sudo ed -s config/ltconfig
printf '%s\n' H '/\(darwin\[15\]\)/s//\1./g' wq | sudo ed -s saslauthd/config/ltconfig
cd /tmp/buildsimon
port -f -s install cyrus-sasl2
otool -L /opt/macports-simon/lib/libsasl2.dylib
port -f install zlib
port -f install portaudio
port -f install kdesdk4
port installed zlib portaudio kdesdk4 cyrus-sasl2
# enable dbus with launchd
# http://www.freedesktop.org/wiki/Software/dbus
# open -e dbus-1.5.8/README.launchd
launchctl load -w /Library/LaunchDaemons/org.freedesktop.dbus-system.plist
launchctl load -w /Library/LaunchAgents/org.freedesktop.dbus-session.plist
sudo -u _mysql mysql_install_db5
sudo port load mysql5-server
# todo: how to configure simon to use /opt/macports-simon directly?
ln -isv "${MP_PREFIX}" /usr/local
cd /tmp/buildsimon
# http://sourceforge.net/projects/speech2text/
curl -L -O http://netcologne.dl.sourceforge.net/project/speech2text/simon/0.3.0/simon-0.3.0.tar.bz2
tar -xjf simon-0.3.0.tar.bz2
cd simon-0.3.0 || exit 1
# Note that /usr/local got symlinked to "${MP_PREFIX}" above!
unset PATH
export PATH='/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin'
# the following commands are taken from simon-0.3.0/build.sh
mkdir build 2> /dev/null
cd build || exit 1
cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ..
# append ${MP_PREFIX}/lib/libiconv.dylib to gcc command in link.txt file
printf '%s\n' H '/\/usr\/bin\/gcc/s|\(.*\)|\1 '"${MP_PREFIX}"'/lib/libiconv.dylib|' wq |
ed -s julius/julius/CMakeFiles/juliusexe.dir/link.txt
# replace gcc option ' -bundle ' with ' -dynamiclib '
egrep -Ilsr -Z -e ' -bundle ' . |
xargs -0 -n 1 /bin/sh -c 'printf "%s\n" H "g/ -bundle /s// -dynamiclib /g" wq | /bin/ed -s "${1}"' argv0
make
touch ./julius/gramtools/mkdfa/mkfa-1.44-flex/*
make
make install
# ldconfig # not needed on Mac OS X
kbuildsycoca4
echo -e "**** Build completed ****\n\nThe executable file \"simon\" is now ready and has been installed.\n\nIssue \"simon\" to start it."
unset PATH
export PATH="${MP_PREFIX}/bin:/bin:/sbin:/usr/bin:/usr/sbin"
otool -L "${MP_PREFIX}/bin/simon"
simon
mv -i /opt/local-off /opt/local
mv -i /usr/local-off /usr/local