How can I append the following code to the end of numerous php files in a directory and its subdirectories:
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
I have tried with:
echo "my text" >> *.php
But the terminal displays the error:
bash: *.php: ambiguous redirect
I usually use tee because I think it looks a little cleaner and it generally fits on one line.
echo "my text" | tee -a *.php
You don't specify the shell; you could try the foreach command. Under tcsh (and a very similar loop is available for bash) you can type something like this interactively:
foreach i (*.php)
foreach> echo "my text" >> $i
foreach> end
$i will take on the name of each file each time through the loop.
As always, when doing operations on a large number of files, it's probably a good idea to test them in a small directory with sample files to make sure it works as expected.
Oops .. bash is in the error message (I'll tag your question with it). The equivalent bash loop would be:
for i in *.php
do
echo "my text" >> $i
done
If you want to cover the directories one level below the one where you are, you can specify
*/*.php
rather than *.php. Note that this glob reaches only one level of subdirectories.
BashFAQ/056 does a decent job of explaining why what you tried doesn't work. Have a look.
Since you're using bash (according to your error), the for command is your friend.
for filename in *.php; do
echo "text" >> "$filename"
done
If you'd like to pull "text" from a file, you could instead do this:
for filename in *.php; do
cat /path/to/sourcefile >> "$filename"
done
Now ... you might have files in subdirectories. If so, you could use the find command to find and process them:
find . -name "*.php" -type f -exec sh -c "cat /path/to/sourcefile >> {}" \;
The find command identifies which files to act on using conditions like -name and -type, then the -exec option runs basically the same thing I showed you in the previous for loop. The final \; tells find that this is the end of the arguments to -exec.
You can man find for lots more details about this.
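One caveat: embedding {} inside the quoted shell string mis-handles filenames containing spaces or quotes. Passing the name as a positional parameter is safer:
find . -name "*.php" -type f -exec sh -c 'cat /path/to/sourcefile >> "$1"' sh {} \;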
The find command is portable and is generally recommended for this kind of activity especially if you want your solution to be portable to other systems. But since you're currently using bash, you may also be able to handle subdirectories using bash's globstar option:
shopt -s globstar
for filename in **/*.php; do
cat /path/to/sourcefile >> "$filename"
done
You can man bash and search for "globstar" for more details about this. This option requires bash version 4 or higher.
NOTE: You may have other problems with what you're doing. PHP scripts don't need to end with a ?>, so you might be adding HTML that the script will try to interpret as PHP code.
You can use sed combined with find. Assume your project tree is
/MyProject/
/MyProject/Page1/file.php
/MyProject/Page2/file.php
etc.
Save the code you want to append on /MyProject/. Call it append.txt
From /MyProject/ run:
find . -name "*.php" -print | xargs sed -i '$r append.txt'
Explanation:
find does what it says: it finds all .php files, including those in subdirectories
xargs will pass the found files to sed, i.e. run sed on every .php that has just been found
sed will do the appending. '$r append.txt' means go to the end of the file ($) and write (paste) whatever is in append.txt there. Don't forget -i, otherwise it will just print out the appended file and not save it.
Source: http://www.grymoire.com/unix/Sed.html#uh-37
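Note that a plain find | xargs pipeline splits on whitespace, so paths containing spaces will break it. With GNU find and xargs you can make it NUL-delimited instead (also, the -i shown here is GNU sed's form; BSD sed wants -i ''):
find . -name "*.php" -print0 | xargs -0 sed -i '$r append.txt'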
You can do this (it works even if there are spaces in your file paths):
#!/bin/bash
# Create a temporary file named /tmp/end_of_my_php.txt
cat << EOF > /tmp/end_of_my_php.txt
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
EOF
find . -type f -name "*.php" | while IFS= read -r the_file
do
echo "Processing $the_file"
#cp "$the_file" "${the_file}.bak" # Uncomment if you want to save a backup of your file
cat /tmp/end_of_my_php.txt >> "$the_file"
done
echo
echo done
PS: You must run the script from the directory you want to browse
Inspired by @Dantastic's answer:
echo "my text" | tee -a file1.txt | tee -a file2.txt
Related
I have a directory with several hundred .log files in it, and I have a script to pull some info out of them and print it to an existing file. Running it on one file goes like
awk -f HLGcheck.sh 1-1-1.log >> outputs.txt
and this works fine. I've looked around for several hours online and I can't seem to find a decent way to have it run on all .log files in the directory. Any help from people smarter than me would be appreciated.
Some techniques:
If the awk script can only handle one file at a time, use a for loop (as in the next answer) or:
find . -name '*.log' -exec awk -f HLGcheck.sh '{}' \; >> outputs.txt
If the awk script can handle multiple files:
awk -f HLGcheck.sh *.log >> outputs.txt
find . -name '*.log' -exec awk -f HLGcheck.sh '{}' \+ >> outputs.txt
bash has a for loop for this purpose:
$ for f in *.log; do your_processing_here; done
You can refer to the file currently being processed as $f.
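Applied to your case, that would look something like:
for f in *.log; do
    awk -f HLGcheck.sh "$f"
done >> outputs.txt
Redirecting once after done appends all the output in one go instead of reopening outputs.txt on every iteration.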
I have a situation where I need to keep .tgz files & if they've been extracted, remove the extracted directory & contents.
In all examples, the only top-level directory within the tarball has a different name than the tarball itself:
[host1]$ find / -name "*\#*.tgz" #(has an # symbol somewhere in the name)
/1-#-test.tgz
[host1]$ tar -tzvf /1-#-test.tgz | head -n 1 | awk '{ print $6 }'
TJ #(directory name)
What I'd like to accomplish (pulling my hair out; rusty scripting fingers), is to look at each tarball, see if the corresponding directory name (like above) exists. If it does, echo "rm -rf /directoryname" into an output file for review.
I can read all of the tarballs into an array ... but how to check the directories?
Frustrated & appreciate any help.
Maybe you're looking for something like this:
find / -name "*#*.tgz" | while read line; do
dir=$(tar ztf "$line" | awk -F/ '{print $6; exit}')
test -d "$dir" && echo "rm -fr '$dir'"
done
Explanation:
We iterate over the *#*.tgz files found with a while loop, line by line
Get the list of files in the tgz file with tar ztf "$line"
Since paths are separated by /, use that as the separator in the awk and print the first field, which is the top-level directory name. After the print we exit, making this equivalent to, but more efficient than, using head -n1 first
With dir=$(...) we put the entire output of the tar..awk chain, thus the top-level directory of the first entry in the tar, into the variable dir
We check if such a directory exists; if it does, we echo an rm command so you can review it and execute it later if it looks good
My original answer used a find ... -exec but I think that's not so good in this particular case:
find / -name "*#*.tgz" -exec \
sh -c 'dir=$(tar ztf "{}" | awk -F/ "{print \$1; exit}");\
test -d "$dir" && echo "rm -fr \"$dir\""' \;
It's not so good because it runs sh for every file, and since we embed {} in the subshell's code, we lose the usual benefit of a typical find ... -exec, where special characters in {} are handled correctly.
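The usual fix for that caveat is to pass the filename to the subshell as a positional parameter, so {} is never interpolated into shell code; a sketch:
find / -name "*#*.tgz" -exec sh -c '
    dir=$(tar ztf "$1" | awk -F/ "{print \$1; exit}")
    test -d "$dir" && echo "rm -fr \"$dir\""
' sh {} \;
Here sh receives each filename as $1, so spaces and quotes in the name are handled correctly.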
I've been trying to figure this one out for a while. I've read through multiple threads, and feel like I'm close, but the script just isn't coming together.
Scenario:
I have a media server and thousands of movie files. Each movie file has various accessory files such as the cover artwork, database info, fanart, and trailer. While everything in the directory has its cover art and database info, some files may or may not have their respective fanart or trailer. For those files I'm trying to get this script working, which will create an empty "dummy" file in place of the file that should be there. Then, when I actually have the time, I can go back, search out just the dummy files, and fill in the gaps where I can.
Here is what I have so far.
#!/bin/bash
find . -type f -print0 | while read -d $'\0' movie ;
do
echo $movie
moviename=${movie%\.*} #remove the extension from the string
moviename1=`echo $moviename | sed 's/\ /\\ /'` #add escaped spaces to the string
echo $moviename1 #echo the string (for debugging)
if [ ! -f $moviename-fanart* ]; #because the fanart could be .jpg, or .png, etc
then
echo "Creating $moviename-fanart.dummy"
touch "$moviename-fanart.dummy"
fi
if [ ! -f $moviename-trailer* ]; #because trailers could be .mp4, .mov, .mkv, .avi, etc
then
echo "Creating $moviename-trailer.dummy"
touch "$moviename-trailer.dummy"
fi
done
This should be pretty simple, but I think I'm not getting the proper formatting for the input string going into the test operators.
Any help would be greatly appreciated.
Thanks
Line-by-line analysis:
find . -type f -print0 | while read -d $'\0' movie; do
OK, but with bash4 you can just use shopt -s globstar to operate recursively on a directory.
moviename=${movie%\.*} #remove the extension from the string
You don't need the backslash.
moviename1=`echo $moviename | sed 's/\ /\\ /'` #add escaped spaces to the string
This line is suspect because if you quote the name, escaped spaces become doubly-escaped. You're confusing the value of the string with the representation you see of it.
if [ ! -f $moviename-fanart* ]; then #because the fanart could be .jpg, or .png, etc
Quote the string or use bash's [[ test keyword. It's a little dangerous to expand a glob inside the test expression because if it matches multiple results you'll get an error. That said, if you're sure there can be only one you can quote up to the glob. "$moviename-fanart"*.
touch "$moviename-fanart.dummy"
Here, you quote it. So essentially you're dealing with a different name now.
fi
if [ ! -f $moviename-trailer* ]; then #because trailers could be .mp4, .mov, .mkv, .avi, etc
echo "Creating $moviename-trailer.dummy"
touch "$moviename-trailer.dummy"
fi
Same thing.
done
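Putting those fixes together, here is a minimal corrected sketch. It drops the sed escaping entirely and uses nullglob arrays instead of expanding a glob inside the test, which sidesteps the multiple-match error mentioned above:
#!/bin/bash
shopt -s nullglob                      # unmatched globs expand to nothing
find . -type f -print0 | while IFS= read -r -d '' movie; do
    moviename=${movie%.*}              # remove the extension from the string
    fanart=( "$moviename"-fanart* )    # collect matches safely
    if (( ${#fanart[@]} == 0 )); then
        echo "Creating $moviename-fanart.dummy"
        touch "$moviename-fanart.dummy"
    fi
    trailer=( "$moviename"-trailer* )
    if (( ${#trailer[@]} == 0 )); then
        echo "Creating $moviename-trailer.dummy"
        touch "$moviename-trailer.dummy"
    fi
done
Note that, like the original, this visits the accessory files themselves too; in practice you'd probably want to restrict find to your movie extensions.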
I have a folder with multiple sub-folders and each sub-folder contains 10-15 files. I want to perform a certain operation only on the text files in these folders. The folders contain other types of files as well. For now, I am just trying to write a simple for loop to access every file.
for /r in *.txt; do "need to perform this on every file"; done
This gives me the error -bash: `/r': not a valid identifier
Thanks for the help.
P.S I am using cygwin on Win 7.
Your /r is the problem: that's not a valid identifier (as bash said); you need to drop the /. Also, this won't recurse into subdirectories. If your operation is simple, you can directly use find's -exec option; {} is a placeholder for the filename.
find . -name "*.txt" -exec ls -l {} \;
Otherwise, try something like
for r in $( find . -name "*.txt" ) ; do
echo $r
#more actions...
done
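Be aware that word-splitting find's output like that breaks on filenames containing spaces. A more robust variant reads NUL-delimited names:
find . -name "*.txt" -print0 | while IFS= read -r -d '' r; do
    echo "$r"
    #more actions...
done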
With bash:
shopt -s globstar
for file in **/*.txt; do ...
I would use "find" for your application case
Something like
find . -name "*.txt" -exec doSomeThing {} \;
Need to process files in current directory one at a time. I am looking for a way to take the output of ls or find and store the resulting value as elements of an array. This way I can manipulate the array elements as needed.
To answer your exact question, use the following (but note that it word-splits on whitespace, so filenames containing spaces end up as multiple array elements):
arr=( $(find /path/to/toplevel/dir -type f) )
Example
$ find . -type f
./test1.txt
./test2.txt
./test3.txt
$ arr=( $(find . -type f) )
$ echo ${#arr[@]}
3
$ echo ${arr[@]}
./test1.txt ./test2.txt ./test3.txt
$ echo ${arr[0]}
./test1.txt
However, if you just want to process files one at a time, you can either use find's -exec option if the script is somewhat simple, or you can do a loop over what find returns like so:
while IFS= read -r -d $'\0' file; do
# stuff with "$file" here
done < <(find /path/to/toplevel/dir -type f -print0)
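If you do want an array and your filenames may contain spaces, mapfile can read the NUL-delimited find output straight into one (a sketch; the -d '' option requires bash 4.4+):
# -t strips the trailing NUL delimiter from each element
mapfile -d '' -t arr < <(find /path/to/toplevel/dir -type f -print0)
echo "${#arr[@]}"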
for i in `ls`; do echo $i; done;
can't get simpler than that!
edit: hmm - as per Dennis Williamson's comment, it seems you can!
edit 2: although the OP specifically asks how to parse the output of ls, I just wanted to point out that, as the commentators below have said, the correct answer is "you don't". Use for i in * or similar instead.
You actually don't need to use ls/find for files in the current directory.
Just use a for loop:
for files in *; do
if [ -f "$files" ]; then
# do something
fi
done
And if you want to process hidden files too, you can enable the dotglob option:
shopt -s dotglob
This last command works in bash only.
Depending on what you want to do, you could use xargs:
ls directory | xargs -I{} cp -v directory/{} dir2
for example. xargs will act on each item returned, and -I{} substitutes each name into the command (without it, cp would have treated dir2 as a source file rather than the destination).
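As the previous answer notes, parsing ls output is fragile; a more robust equivalent drives xargs from find with NUL-delimited names:
# -maxdepth 1 mimics a non-recursive ls of the directory
find directory -maxdepth 1 -type f -print0 | xargs -0 -I{} cp -v {} dir2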