I want to run a script that performs some file alterations on .php files.
There are hundreds of EmailController.php files across different sites; each should be modified based on the site name, which depends on the folder it is located in.
#!/bin/bash
source /root/sitenames.txt
sed -i 's#'"/var/vmail/skeleton.com/"'#'"/var/vmail/$sitename/"'#g' /var/www/$sitename/web/EmailController.php
The easiest way would be to read the sitenames.txt file, which contains one domain name per line, and substitute each domain for $sitename in the bash script.
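A sketch of that read-a-line-per-site loop, using a throwaway directory in place of the real /var/www and /root paths so it can be tried safely (GNU sed is assumed for `sed -i` with no backup suffix):

```shell
#!/bin/sh
# Sketch: one domain per line in sitenames.txt; substitute the skeleton
# domain with each site's own. The /tmp tree stands in for the real paths.
base=$(mktemp -d)
printf '%s\n' example.com example.org > "$base/sitenames.txt"
for d in example.com example.org; do
    mkdir -p "$base/www/$d/web"
    echo 'path = /var/vmail/skeleton.com/mail' > "$base/www/$d/web/EmailController.php"
done
while IFS= read -r sitename; do
    [ -n "$sitename" ] || continue   # skip blank lines
    sed -i "s#/var/vmail/skeleton.com/#/var/vmail/${sitename}/#g" \
        "$base/www/$sitename/web/EmailController.php"
done < "$base/sitenames.txt"
cat "$base/www/example.org/web/EmailController.php"
# -> path = /var/vmail/example.org/mail
```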
@tom-fenech is right in saying this belongs in a config file rather than being hardcoded into your .php files. Regardless, you need to change what you have, and you'll need to do something like this to switch to a config file anyway.
Short Answer
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
Which is mostly equivalent to:
find "${skeldir}" -type f -print0 \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
I like the fgrep version better because it runs sed on a smaller set of files than find does (assuming your pattern isn't in every file). Note that sed -i "" is the BSD/macOS form; with GNU sed, use sed -i with no empty-string argument.
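A quick way to check that behaviour in a scratch directory (GNU sed shown, so no empty string after -i; the file names are invented for the demo):

```shell
base=$(mktemp -d)
echo 'x /var/vmail/skeleton.com/ y' > "$base/match.php"
echo 'no pattern here'              > "$base/other.php"
# Only files that actually contain the pattern are handed to sed
grep -rlF --null "/var/vmail/skeleton.com/" "$base" \
  | xargs -0 sed -i "s#/var/vmail/skeleton.com/#/var/vmail/example.com/#g"
cat "$base/match.php"   # the pattern has been rewritten
cat "$base/other.php"   # untouched
```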
Long Answer
Putting this together:
$ cat /tmp/x.sh
#!/bin/sh
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
[ -d "${skeldir}" ] && rm -rf "${skeldir}"
mkdir -p "${skeldir}/subdir"
echo 'ignore this line' \
| tee "${skeldir}/file1.php" "${skeldir}/subdir/file2.php" "${skeldir}/file3.php" \
> "${skeldir}/subdir/file4.php"
echo "foo /var/vmail/${skelsite}/ bar" \
| tee -a "${skeldir}/file1.php" >> "${skeldir}/subdir/file2.php"
echo "BEFORE:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
# make changes (--null/-0 ensures you can have spaces, etc, in filenames)
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
# Alternate:
# find "${skeldir}" -type f -print0 \
# | xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
echo "AFTER:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
And see what happens:
$ /tmp/x.sh
BEFORE:
Files that have "skeleton.com": 2
Files that have "example.com": 0
AFTER:
Files that have "skeleton.com": 0
Files that have "example.com": 2
You may want to make a backup before doing this! Something like:
$ rsync -avP --delete /var/www/$sitename/ /var/www.backup/$sitename/
I have a cronjob that runs every 24 hours to tell me if files on my server have changed. The script is as follows:
find /home/bsc1933 -type f -ctime -1 -exec ls -ls {} \; | mail -E -s "File Changes, Past 24 Hours" myemail
I would like to modify it to exclude a specific folder, in this case my cache folder: /home/bsc1933/public_html/cache
I found the original script with Google-fu and just edited the email to match mine, so my knowledge of actually editing the script itself is non-existent. Could someone help me?
The simplest answer is to add a grep between the find and the mail commands to filter out the path you wish to exclude:
find /home/bsc1933 -type f -ctime -1 -exec ls -ls {} \; | grep -v '/home/bsc1933/public_html/cache' | ... your code to send email ...
Let me know if you face any other difficulties.
Try using grep or find to ignore entries with a certain pattern.
With grep:
find /home/bsc1933 -type f -ctime -1 -exec ls -ls {} \; | grep -Ev "/home/bsc1933/public_html/cache" | mail -E -s "File Changes, Past 24 Hours" myemail
Or tell find itself to skip that directory:
find /home/bsc1933 ! -path '*/public_html/cache/*' -type f -ctime -1 -exec ls -ls {} \; | mail -E -s "File Changes, Past 24 Hours" myemail
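Another option, if efficiency matters, is -prune, which stops find from descending into the excluded directory at all. A sketch with a throwaway tree (substitute the real /home/bsc1933 paths and pipe to mail as before):

```shell
base=$(mktemp -d)
mkdir -p "$base/public_html/cache"
touch "$base/keep.txt" "$base/public_html/cache/skip.txt"
# -prune prevents find from ever entering the cache directory
find "$base" -path "$base/public_html/cache" -prune -o -type f -print
# -> only $base/keep.txt is printed
```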
I have a computing cluster with four nodes A, B, C and D and Slurm Version 17.11.7. I am struggling with Slurm array jobs. I have the following bash script:
#!/bin/bash -l
#SBATCH --job-name testjob
#SBATCH --output output_%A_%a.txt
#SBATCH --error error_%A_%a.txt
#SBATCH --nodes=1
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=50000
FOLDER=/home/user/slurm_array_jobs/
mkdir -p $FOLDER
cd ${FOLDER}
echo $SLURM_ARRAY_TASK_ID > ${SLURM_ARRAY_TASK_ID}
The script generates the following files:
output_*txt,
error_*txt,
files named according to ${SLURM_ARRAY_TASK_ID}
I run the bash script on my computing cluster node A as follows
sbatch --array=1-500 example_job.sh
The 500 jobs are distributed among nodes A-D. The output files are also stored on whichever of nodes A-D the corresponding array task ran on, so in this case roughly 125 "output_" files end up on each of A, B, C and D.
Is there a way to store all output files on the node where I submit the script, in this case node A? That is, I would like to store all 500 "output_" files on node A.
Slurm does not handle input/output file transfer; it assumes that the current working directory is on a network filesystem, NFS being the simplest case. GlusterFS, BeeGFS, and Lustre are other popular choices for Slurm.
Use an epilog script to copy the output files back to the host where the job was submitted, then delete the local copies.
Add to slurm.conf:
Epilog=/etc/slurm-llnl/slurm.epilog
The slurm.epilog script does the copying (make this executable by chmod +x):
#!/bin/bash
# Query the job once and extract the fields we need
jobInfo=$(scontrol show job "${SLURM_JOB_ID}")
userId=$(echo "$jobInfo" | grep -i UserId | cut -f2 -d '=' | grep -i -o '^[^(]*')
stdOut=$(echo "$jobInfo" | grep -i StdOut | cut -f2 -d '=')
stdErr=$(echo "$jobInfo" | grep -i StdErr | cut -f2 -d '=')
host=$(echo "$jobInfo" | grep -i AllocNode | cut -f3 -d '=' | cut -f1 -d ':')
hostDir=$(echo "$jobInfo" | grep -i Command | cut -f2 -d '=' | xargs dirname)
hostPath=$host:$hostDir/
# Copy stdout/stderr back to the submission host, then remove the local copies
runuser -l "$userId" -c "scp $stdOut $stdErr $hostPath"
rm -f "$stdOut" "$stdErr"
(Switching from PBS to Slurm without NFS or similar shared directories is a pain.)
Is it possible to redirect the contents of a file to ls?
Example: the file contains a line with / in it, and I want to pass that to ls to get the contents of the / directory. When I try ls < file, it does not work.
You can use xargs or command substitution to achieve this.
Use xargs
The key is knowledge of the xargs command.
If you have a list of files in a file called files_to_change, you can print them with the following one-liner:
cat files_to_change | xargs ls
Use command substitution
An alternative method is to use command substitution. This works the same as above.
Two different one-liners, using different syntax:
ls `cat files_to_change`
ls $(cat files_to_change)
It doesn't matter whether the entries are files or directories, and you can run any command on them. If the contents of files_to_change were:
/usr/bin/
/bin/
cat files_to_change | xargs ls and ls $(cat files_to_change) would be equivalent to running:
$ ls /usr/bin/
$ ls /bin/
The output on the console should be what you want.
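Both forms can be tried in a scratch directory (names invented for the demo). Note that neither handles filenames containing spaces; for those, GNU xargs -d '\n' or a while-read loop is safer:

```shell
base=$(mktemp -d)
mkdir -p "$base/a" "$base/b"
touch "$base/a/one" "$base/b/two"
printf '%s\n' "$base/a" "$base/b" > "$base/files_to_change"
# xargs passes each line of the file to ls as an argument
xargs ls < "$base/files_to_change"
# command substitution does the same via word-splitting
ls $(cat "$base/files_to_change")
```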
You need to use xargs. This very useful utility runs a command with arguments that are read from its standard input:
cat myfile | xargs ls
ls `cat your_file`
Beware: those are backticks (command substitution), not single quotes; on some keyboard layouts they are typed with AltGr+7.
How do I find out the files in the current directory which do not contain the word foo (using grep)?
If your grep has the -L (or --files-without-match) option:
$ grep -L "foo" *
You can do it with grep alone (without find).
grep -riL "foo" .
Here is an explanation of the grep options used:
-L, --files-without-match
Only the names of files not containing selected lines are written to standard output.
-R, -r, --recursive
Recursively search subdirectories listed.
-i, --ignore-case
Perform case insensitive matching.
If you use -l (lowercase L) you get the opposite (files with matches):
-l, --files-with-matches
Only the names of files containing selected lines are written to standard output.
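A quick way to see the two options side by side in a throwaway directory (file names invented for the demo):

```shell
base=$(mktemp -d)
echo 'foo bar'  > "$base/has_foo.txt"
echo 'bar only' > "$base/no_foo.txt"
grep -rL "foo" "$base"   # prints the file WITHOUT a match
grep -rl "foo" "$base"   # prints the file WITH a match
```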
Take a look at ack. It does the .svn exclusion for you automatically, gives you Perl regular expressions, and is a simple download of a single Perl program.
The equivalent of what you're looking for should be, in ack:
ack -L foo
The following command gives me all the files that do not contain the pattern foo:
find . -not -ipath '.*svn*' -exec grep -H -E -o -c "foo" {} \; | grep 0
The following command avoids the need for find, using a second grep to filter out the .svn folders instead:
grep -rL "foo" ./* | grep -v "\.svn"
If you are using git, this searches all of the tracked files:
git grep -L "foo"
and you can search in a subset of tracked files if you have ** subdirectory globbing turned on (shopt -s globstar in .bashrc, see this):
git grep -L "foo" -- **/*.cpp
You will actually need:
find . -not -ipath '.*svn*' -exec grep -H -E -o -c "foo" {} \; | grep :0\$
I had good luck with
grep -H -E -o -c "foo" */*/*.ext | grep ext:0
My attempts with grep -v just gave me all the lines without "foo".
Problem
I need to refactor a large project which uses .phtml files to write out HTML using inline PHP code. I want to use Mustache templates instead. I want to find any .phtml files which do not contain the string new Mustache, as these still need to be rewritten.
Solution
find . -iname '*.phtml' -exec grep -H -E -o -c 'new Mustache' {} \; | grep :0$ | sed 's/..$//'
Explanation
Before the pipes:
Find
find . Search for files recursively, starting in the current directory
-iname '*.phtml' The filename must end in .phtml (the i makes the match case-insensitive)
-exec grep -H -E -o -c 'new Mustache' {} \; Run the grep command on each of the matched paths
Grep
-H Always print filename headers with output lines.
-E Interpret pattern as an extended regular expression (i.e. force grep
to behave as egrep).
-o Prints only the matching part of the lines.
-c Only a count of selected lines is written to standard output.
This will give me a list of all file paths ending in .phtml, with a count of the number of times the string new Mustache occurs in each of them.
$> find . -iname '*.phtml' -exec grep -H -E -o -c 'new Mustache' {} \;
./app/MyApp/Customer/View/Account/quickcodemanagestore.phtml:0
./app/MyApp/Customer/View/Account/studio.phtml:0
./app/MyApp/Customer/View/Account/orders.phtml:1
./app/MyApp/Customer/View/Account/banking.phtml:1
./app/MyApp/Customer/View/Account/applycomplete.phtml:1
./app/MyApp/Customer/View/Account/catalogue.phtml:1
./app/MyApp/Customer/View/Account/classadd.phtml:0
./app/MyApp/Customer/View/Account/orders-trade.phtml:0
The first pipe grep :0$ filters this list to only include lines ending in :0:
$> find . -iname '*.phtml' -exec grep -H -E -o -c 'new Mustache' {} \; | grep :0$
./app/MyApp/Customer/View/Account/quickcodemanagestore.phtml:0
./app/MyApp/Customer/View/Account/studio.phtml:0
./app/MyApp/Customer/View/Account/classadd.phtml:0
./app/MyApp/Customer/View/Account/orders-trade.phtml:0
The second pipe sed 's/..$//' strips off the final two characters of each line, leaving just the file paths.
$> find . -iname '*.phtml' -exec grep -H -E -o -c 'new Mustache' {} \; | grep :0$ | sed 's/..$//'
./app/MyApp/Customer/View/Account/quickcodemanagestore.phtml
./app/MyApp/Customer/View/Account/studio.phtml
./app/MyApp/Customer/View/Account/classadd.phtml
./app/MyApp/Customer/View/Account/orders-trade.phtml
When you use find, you have two basic options: filter the results after find has finished searching, or use a built-in option that prevents find from considering files and directories matching a given pattern.
If you use the former approach on a large number of files and directories, you spend a lot of CPU and RAM just to pass the results on to a second process, which in turn filters them out using a lot of resources as well.
If instead you use the -not keyword, which is a find argument, you prevent any path matching the string in the following -name or -regex argument from being considered at all, which is much more efficient.
find . -not -regex ".*/foo/.*" -regex ".*"
Then, any path that is not filtered out by -not will be captured by the subsequent -regex arguments.
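A minimal check of the -not behaviour in a scratch tree (directory names invented for the demo; GNU find's -regex matches against the whole path):

```shell
base=$(mktemp -d)
mkdir -p "$base/foo" "$base/bar"
touch "$base/foo/skip.txt" "$base/bar/keep.txt"
# Paths under .../foo/ are excluded before any other test runs
find "$base" -not -regex ".*/foo/.*" -type f
# -> only $base/bar/keep.txt is printed
```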
For completeness the ripgrep version:
rg --files-without-match "pattern"
You can combine with file type and search path, e.g.
rg --files-without-match -t ruby "frozen_string_literal: true" app/
Another alternative when grep doesn't have the -L option (on IBM AIX, for example), using nothing but grep and the shell:
for file in * ; do grep -q 'my_pattern' "$file" || echo "$file" ; done
My grep does not have a -L option, so I found a workaround. The idea is:
dump the names of all files containing the desired string to txt1.txt;
dump the names of all files in the directory to txt2.txt;
take the difference of the two dump files with the diff command.
grep -l 'foo' *.log | sort > txt1.txt
ls -1 *.log | sort > txt2.txt
diff txt1.txt txt2.txt | grep ">"
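A variant of the same idea in a scratch directory, using comm to take the set difference directly (file names invented for the demo; only grep -l, sort, ls, and comm are assumed):

```shell
base=$(mktemp -d)
cd "$base"
echo 'foo here'  > a.log
echo 'no match'  > b.log
echo 'foo again' > c.log
grep -l 'foo' *.log | sort > txt1.txt   # files containing the string
ls -1 *.log         | sort > txt2.txt   # all files
comm -13 txt1.txt txt2.txt              # files without the string
# -> b.log
```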
find *20161109* -mtime -2 | grep -vwE "(TRIGGER)"
You specify the filename filter with find and the exclusion string with grep -vwE. Use -mtime with find if you also need to filter on modification time.
Open bug report
As commented by @tukan, there is an open bug report for Ag regarding the -L/--files-without-matches flag:
ggreer/the_silver_searcher: #238 - --files-without-matches does not work properly
As there has been little progress on the bug report, the -L option mentioned below should not be relied upon as long as the bug remains unresolved. Use the different approaches presented in this thread instead. Citing a comment from the bug report [emphasis mine]:
Any updates on this? -L completely ignores matches on the first line of the file. Seems like if this isn't going to be fixed soon, the flag should be removed entirely, as it effectively does not work as advertised at all.
The Silver Searcher - Ag (intended function - see bug report)
As a powerful alternative to grep, you could use the The Silver Searcher - Ag:
A code searching tool similar to ack, with a focus on speed.
Looking at man ag, we find the -L or --files-without-matches option:
...
OPTIONS
...
-L --files-without-matches
Only print the names of files that don't contain matches.
I.e., to recursively search for files that do not match foo, from current directory:
ag -L foo
To only search current directory for files that do not match foo, simply specify --depth=0 for the recursion:
ag -L foo --depth 0
This may help others. I have a mix of Go source files and their test files, but I only need the .go files, so I used
ls *.go | grep -v "_test.go"
-v, --invert-match: select non-matching lines (see https://stackoverflow.com/a/3548465)
You can also combine this with VS Code to open all those files from the terminal:
code $(ls *.go | grep -v "_test.go")
grep -irnw "filepath" -ve "pattern"
or
grep -ve "pattern" < file
The commands above print the inverse result: -v selects the lines that do not match the pattern (note that this filters lines, not files).
The following command prints the lines that do not contain the substring "foo" (again, lines rather than files):
grep -v "foo" file