I want to run a script that does some file alterations on .php files.
There are hundreds of EmailController.php files in different sites that should be modified based on the site name, depending on which folder they are located in.
#!/bin/bash
source /root/sitenames.txt
sed -i 's#'"/var/vmail/skeleton.com/"'#'"/var/vmail/$sitename/"'#g' /var/www/$sitename/web/EmailController.php
The easiest way would be to read a sitenames.txt file containing the domain names, one per line, and substitute each domain for $sitename in the bash script.
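Roughly, this is the kind of loop I have in mind (a minimal sketch, assuming /root/sitenames.txt really does hold one domain per line), but I am not sure it is the right approach:
#!/bin/bash
# hypothetical sketch: loop over one domain per line in /root/sitenames.txt
while IFS= read -r sitename; do
    [ -z "$sitename" ] && continue   # skip blank lines
    sed -i "s#/var/vmail/skeleton.com/#/var/vmail/${sitename}/#g" \
        "/var/www/${sitename}/web/EmailController.php"
done < /root/sitenames.txt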
@tom-fenech is right in saying this should be in a config file rather than hardcoded into your .php files. Regardless, you need to change what you have, and you'll need to do something like this to change to a config file anyway.
Short Answer
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
Which is mostly equivalent to:
find "${skeldir}" -type f -print0 \
| xargs -0 sed -i "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
I like the fgrep version better because it runs sed on a smaller set of files than find (assuming your pattern isn't in every file).
Long Answer
Putting this together:
$ cat /tmp/x.sh
#!/bin/sh
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
[ -d "${skeldir}" ] && rm -rf "${skeldir}"
mkdir -p "${skeldir}/subdir"
echo 'ignore this line' \
| tee "${skeldir}/file1.php" "${skeldir}/subdir/file2.php" "${skeldir}/file3.php" \
> "${skeldir}/subdir/file4.php"
echo "foo /var/vmail/${skelsite}/ bar" \
| tee -a "${skeldir}/file1.php" >> "${skeldir}/subdir/file2.php"
echo "BEFORE:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
# make changes (--null/-0 ensures you can have spaces, etc, in filenames)
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
# Alternate:
# find "${skeldir}" -type f -print0 \
# | xargs -0 sed -i "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
echo "AFTER:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
And see what happens:
$ /tmp/x.sh
BEFORE:
Files that have "skeleton.com": 2
Files that have "example.com": 0
AFTER:
Files that have "skeleton.com": 0
Files that have "example.com": 2
You may want to make a backup before doing this! Something like:
$ rsync -avP --delete /var/www/$sitename/ /var/www.backup/$sitename/
Related
I'm looking for a simple bash script which, when given the name of a system header, will return the full path from which it would be read by a #include <header> statement. I already have an analogous thing for looking up the library archive used by the linker:
ld -verbose -lz -L/some/other/dir | grep succeeded | sed -e 's/^\s*attempt to open //' -e 's/ succeeded\s*$//'
For example, this returns the path of the libz archive (/lib/x86_64-linux-gnu/libz.so on my system).
For the requested script I know that I could take a list of include directories used by gcc and search them for the file myself, but I'm looking for a more accurate simulation of what's happening inside the preprocessor (unless it's that simple).
Pipe the input to the preprocessor and then process the output. The GCC preprocessor output inserts # lines with information and flags that you can parse.
$ f=stdlib.h
$ echo "#include <$f>" | gcc -xc -E - | sed '\~# [0-9]* "\([^"]*/'"$f"'\)" 1 .*~!d; s//\1/'
/usr/include/stdlib.h
It can output multiple files, because GCC has #include_next and in some complicated cases several headers with the same name get included, as with f=limits.h. So you could also filter for exactly the second line, knowing that the first line is always going to be stdc-predef.h:
$ f=limits.h; echo "#include <$f>" | gcc -xc -E - | sed '/# [0-9]* "\([^"]*\)" 1 .*/!d;s//\1/' | sed '2!d'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0/include-fixed/limits.h
But really, searching the include paths yourself is not that hard:
$ f=limits.h; echo | gcc -E -Wp,-v - 2>&1 | sed '\~^ /~!d; s/ //' | while IFS= read -r path; do if [[ -e "$path/$f" ]]; then echo "$path/$f"; break; fi; done
/usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0/include-fixed/limits.h
You can use the preprocessor to do the work:
user@host:~$ echo "#include <stdio.h>" > testx.c && gcc -M testx.c | grep 'stdio.h'
testx.o: testx.c /usr/include/stdc-predef.h /usr/include/stdio.h \
You can add a bit of bash-fu to cut out the part you are interested in.
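For instance, a minimal sketch of that bash-fu, reusing the testx.c file from above (on some systems more than one matching header may be printed):
echo '#include <stdio.h>' > testx.c
# gcc -M prints a make rule; grep -o keeps only the token ending in /stdio.h
gcc -M testx.c | grep -o '[^ ]*/stdio\.h'
rm -f testx.c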
I thought my problem was trivial, but I cannot figure out why my script only runs for the first element of the array.
I have a Jenkins job (a bash script). This job gathers hostnames and sends SSH commands, through a script, using the gathered info:
rm /tmp/hosts
docker exec -t tmgnt_consul_1 consul members -status=alive | grep -v Node | awk '{print $1}' | cut -d : -f1 >> /tmp/hosts
sed -i '/someunnecessaryinfo/d' /tmp/hosts
echo >> /tmp/hosts
shopt -s lastpipe
while IFS= read -r line; do
echo "host is >>$line<<";
url="http://111.111.111.111:8500/v1/catalog/nodes"
term_IP=`curl -s $url | jq -r --arg Node "${line}" '.[] | select(.Node == "'${line}'" )|.Address' --raw-output`
echo $term_IP
sudo bash -x /home/rtm/t_mgnt/check_fw $term_IP
done < /tmp/hosts
Second script:
#!/bin/bash
term_IP=$1
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
if [ $? != 0 ]; then
sudo sshpass -p 'some.pass' \
scp -n -o StrictHostKeyChecking=no -r /home/rtm/t_mgnt/nv9 user@$term_IP:
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo mv nv9 /root/"
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo dpkg -i /root/nv9/libudev0_175-0ubuntu9_amd64.deb"
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
else
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
fi
The job works fine and returns correct values, but only for the first element of the array.
PS: I already searched through this and other sites, and the following answer didn't help me: Shell script while read line loop stops after the first line (I already use "ssh -n -o").
Perhaps you can point out what I missed.
Possibly this ssh call eats your input:
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
^^^
Try adding -n.
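For illustration, a minimal sketch of the corrected call, reusing the placeholder credentials from the question; redirecting stdin from /dev/null has the same effect:
# -n stops ssh from swallowing the while loop's stdin (the rest of /tmp/hosts)
sudo sshpass -p 'some.pass' \
    ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
# equivalent alternative: append  < /dev/null  to the ssh command instead of using -n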
I have a computing cluster with four nodes A, B, C and D running Slurm version 17.11.7. I am struggling with Slurm array jobs. I have the following bash script:
#!/bin/bash -l
#SBATCH --job-name testjob
#SBATCH --output output_%A_%a.txt
#SBATCH --error error_%A_%a.txt
#SBATCH --nodes=1
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=50000
FOLDER=/home/user/slurm_array_jobs/
mkdir -p $FOLDER
cd ${FOLDER}
echo $SLURM_ARRAY_TASK_ID > ${SLURM_ARRAY_TASK_ID}
The script generates the following files:
output_*txt,
error_*txt,
files named according to ${SLURM_ARRAY_TASK_ID}
I run the bash script on my computing cluster node A as follows:
sbatch --array=1-500 example_job.sh
The 500 jobs are distributed among nodes A-D, and the output files are stored on whichever of nodes A-D the corresponding array task ran on. In this case, for example, approximately 125 "output_" files end up on each of A, B, C and D.
Is there a way to store all output files on the node where I submit the script, in this case node A? That is, I would like to store all 500 "output_" files on node A.
Slurm does not handle input/output file transfer and assumes that the current working directory is on a network filesystem, NFS being the simplest case; GlusterFS, BeeGFS and Lustre are other popular choices for Slurm.
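As a quick sanity check (this command is only an illustration, reusing the folder from the question), you can ask every node which filesystem the job directory lives on; a shared filesystem should show the same network mount everywhere:
# run df on each of the four nodes and compare the filesystem type/source
srun --nodes=4 df -hT /home/user/slurm_array_jobs/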
Use an epilog script to copy the outputs back to where the script was submitted, then delete them.
Add to slurm.conf:
Epilog=/etc/slurm-llnl/slurm.epilog
The slurm.epilog script does the copying (make it executable with chmod +x):
#!/bin/bash
# Pull the submitting user, the stdout/stderr paths, and the submit host and
# directory for this job out of scontrol, then copy the output back and remove it.
userId=$(scontrol show job "${SLURM_JOB_ID}" | grep -i UserId | cut -f2 -d '=' | grep -i -o '^[^(]*')
stdOut=$(scontrol show job "${SLURM_JOB_ID}" | grep -i StdOut | cut -f2 -d '=')
stdErr=$(scontrol show job "${SLURM_JOB_ID}" | grep -i StdErr | cut -f2 -d '=')
host=$(scontrol show job "${SLURM_JOB_ID}" | grep -i AllocNode | cut -f3 -d '=' | cut -f1 -d ':')
hostDir=$(scontrol show job "${SLURM_JOB_ID}" | grep -i Command | cut -f2 -d '=' | xargs dirname)
hostPath=$host:$hostDir/
runuser -l "$userId" -c "scp $stdOut $stdErr $hostPath"
rm -f "$stdOut" "$stdErr"
(Switching from PBS to Slurm without NFS or similar shared directories is a pain.)
I'm trying to convert a hierarchy of TIFF image files into JPG, and out of boredom, I want to do the find and the ffmpeg in a single file.
So I set find to invoke sh with the -s flag, like this:
#!/bin/sh
export IFS=""
find "$#" -iname 'PROC????.tif' -exec sh -s {} + << \EOF
for t ; do
ffmpeg -y -v quiet -i $t -c:v mjpeg ${t%.*}.jpg
rm $t
done
EOF
However, there are just too many files in the directory hierarchy; find chopped the filename list into several small chunks, and sh -s was only successfully invoked for the first chunk of arguments.
The question is: how can one feed such an in-body command to every sh invocation made by find?
Update
The tag "heredoc" on the question is intended for receiving answers that do not rely on external file or self-referencing through $0. It is also intended that no filename would go through string-array processing such as padding with NUL-terminator or newline, and can be directly passed as arguments.
The heredoc is being used as the input to find. I think your best bet is to not use a heredoc at all, but just use a string:
#!/bin/sh
find "$#" -iname 'PROC????.tif' -exec sh -c '
for t ; do
ffmpeg -y -v quiet -i "$t" -c:v mjpeg "${t%.*}.jpg" &&
rm "$t"
done
' sh {} +
I am rewriting your code below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
    path=${filepath%/*}
    imgfile=${filepath##*/}
    jpgFile=${imgfile%.*}.jpg
    cd "$path" || continue
    ffmpeg -y -v quiet -i "$imgfile" -c:v mjpeg "$jpgFile"
    rm -f "$imgfile"
done < /tmp/imagefile_list.txt
If you don't want to change the current directory, you can do it like below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
# If you want to list every .tif file, use this instead:
# find "$1" -name "*.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
    path=${filepath%/*}
    imgfile=${filepath##*/}
    jpgFile=$path/${imgfile%.*}.jpg
    ffmpeg -y -v quiet -i "$filepath" -c:v mjpeg "$jpgFile"
    rm -f "$filepath"
done < /tmp/imagefile_list.txt
rm -f /tmp/imagefile_list.txt
We need monitoring on the folder below, for the respective directories and subdirectories, to see whether a directory holds more than 100 files. Also, no file should sit there for more than 4 hrs.
If there are more than 100 files in the directory we need an alert. I am not sure whether this script is working correctly. Could you please confirm?
Path – /export/ftpaccounts/image-processor/working/
The Script:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
if [ -f ${LOCKFILE} ]; then
exit 0
fi
touch ${LOCKFILE}
NUM=`find /mftstaging/vim/inbound/active \
-ignore_readdir_race -depth -type f -m min +60 -print |
xargs wc -l`
if [[ ${NUM:0:1} -ne 0 ]]; then
echo "${NUM:0:1} files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
The format of your original post made it difficult to tell what you were trying to accomplish. If I understand correctly, you just want to find the number of files in the remote directory that are more than 60 minutes old; with a couple of changes your script should work fine. Try:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
ACTIVE=/mftstaging/vim/inbound/active
[ -f ${LOCKFILE} ] && exit 0
touch ${LOCKFILE}
# NUM=`find /mftstaging/vim/inbound/active \
# -ignore_readdir_race -depth -type f -m min +60 -print |
# xargs wc -l`
NUM=$(find $ACTIVE -type f -mmin +60 | wc -l)
## if [ $NUM -gt 100 ]; then  # if you are testing for more than 100 files
if [ $NUM -gt 0 ]; then
echo "$NUM files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
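If you also want to cover both conditions from the question (more than 100 files in the directory, or any file sitting longer than 4 hours), a sketch along the same lines might look like this (the TOTAL/OLD variable names are just illustrative):
TOTAL=$(find $ACTIVE -type f | wc -l)            # all files currently present
OLD=$(find $ACTIVE -type f -mmin +240 | wc -l)   # files older than 4 hours
if [ $TOTAL -gt 100 ] || [ $OLD -gt 0 ]; then
    echo "$TOTAL files present, $OLD older than 4 hours" |
    mail -s "batch import is slow" ${MAILTO}
fi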
Note: you will want to implement some logic that deals with a stale lock file, and perhaps use trap to ensure the lock is removed regardless of how the script terminates, e.g.:
trap 'rm -rf ${LOCKFILE}' SIGTERM SIGINT EXIT
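For instance, a minimal sketch of the stale-lock handling; treating a lock older than 60 minutes as stale is an assumption, not something from the original post:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
if [ -f ${LOCKFILE} ]; then
    # assumed policy: a lock older than 60 minutes is stale and can be discarded
    if [ -n "$(find ${LOCKFILE} -mmin +60)" ]; then
        rm -f ${LOCKFILE}
    else
        exit 0    # another run is presumably still active
    fi
fi
touch ${LOCKFILE}
trap 'rm -rf ${LOCKFILE}' SIGTERM SIGINT EXIT   # always clean up the lock
# ... rest of the script ...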