Replace string with sed in array and store as variable - arrays

If I hard-code the csgo path my code works, but if I use a find search and replace the directory I searched for with sed, my code fails.
#Find directories of CSGO instances to update
updatepaths=`find /home/tcagame/ -type f -name "update_csgo.txt"`
#Splits directories on whitespace to be read into the array
updates=($updatepaths)
#Path to CSGO instances to update
#csgo="/home/tcagame/user/33/csgo/steam.inf"
#Creating automated path
csgo= echo "${updates[0]}" | sed 's,update_csgo.txt,csgo/steam.inf,'
#Check for updates
python $updatecheck $csgo > ~/autoupdate/status/updatestatus.txt
When I echo "$csgo" it prints a new line, and I think that's why it's not working.
/home/tcagame/user/33/csgo/steam.inf
[New Line]
This is what I am trying to achieve in an automated style:
python srcupdatecheck /home/tcagame/iceman/206/csgo/steam.inf

Using mapfile to read the lines of find output into an array is safer than relying on word splitting: the only trouble you'll have is if a filename contains a newline character.
mapfile -t updates < <(find /home/tcagame/ -type f -name "update_csgo.txt")
Here, you only need parameter expansion, not sed:
csgo="${updates[0]%update_csgo.txt}csgo/steam.inf"
Or, let find do more of the heavy lifting for you:
mapfile -t update_dirs < <(
find /home/tcagame/ -type f -name "update_csgo.txt" -exec dirname '{}' \;
)
csgo="${update_dirs[0]}/csgo/steam.inf"

Related

Counting the number of files in a directory that contain the different variables in my array - bash script

I have a bash script, which needs to check certain files for certain variables, and count how many files come back containing those variables.
As there is more than one variable I need to look for, I decided to use an array for the variables.
The code I am using is below:
#!/bin/bash
declare -a MYARRAY=('Variable One' 'Variable Two' 'Variable Three');
COUNT_MYARRAY=$(find $DIRECTORY -mtime -1 -exec grep -ln $MYARRAY {} \; | wc -l)
I have declared the $DIRECTORY in my real script.
However, it does not seem to pick up files that contain the second or third variable.
Can anyone see where I might be going wrong?
You can use grep's regex support and pass multiple expressions using 'var1\|var2'. First construct the grep argument, then execute grep.
You don't need grep's -n (line numbers) option just to count files...
grep can handle multiple files - it will be faster to pass multiple files to one grep with -exec ... +, rather than spawning a grep for each file.
UPPER_CASE_VARIABLES are shouting at me, and by convention upper-case variable names are reserved for exported variables.
myarray=('Variable One' 'Variable Two' 'Variable Three')
arg=$(printf "%s\|" "${myarray[@]}" | sed 's/\\|$//')
directory=.
count_myarray=$(find "$directory" -type f -mtime -1 -exec grep -l "$arg" {} + | wc -l)
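As a quick sanity check (for the three-element array above), the constructed pattern should come out as one \|-separated expression:
$ printf '%s\n' "$arg"
Variable One\|Variable Two\|Variable Three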
Alternatively: you can pass multiple -exec arguments to find. So first construct the find arguments from myarray, in the form -exec grep -l <the var> {} +. Note that multiple variables can occur in the same file, so get unique filenames after grepping.
myarray=('Variable One' 'Variable Two' 'Variable Three');
findargs=()
for i in "${MYARRAY[#]}"; do
findargs+=(-exec grep -l "$i" {} +)
done
directory=.
count_myarray=$(find "$directory" -type f -mtime -1 "${findargs[#]}" | sort -u | wc -l)
or similar:
count_myarray=$(printf '-exec\0grep\0-l\0%s\0{}\0+\0' "${myarray[@]}" | xargs -0 find "$directory" -type f -mtime -1 | sort -u | wc -l)
Remember to quote your variable expansions to protect against whitespace or special characters in filenames and directory names.
Going wrong:
With echo $MYARRAY you get only Variable One, not the pattern you want to hand to grep.
Also note that it is better to use lowercase for your variable names. I will use ${directory} and not $DIRECTORY (and in double quotes, for directories with a space).
You have more options with grep. When you want a file with 8 occurrences counted once, you cannot use the grep option -c. A useful option is -r. You are looking for something like
grep -Erl "Variable One|Variable Two|Variable Three" "${directory}" | wc -l
This is difficult when the variables might have special characters like $ or |.
Another option of grep is using the option
-f FILE, Obtain patterns from FILE, one per line
So you should make a function that writes the variables to a file, and use something like
grep -rlFf "myVariablesFile" "${directory}" | wc -l
When the content of the file is changing rapidly, you might want to avoid the temporary file with
grep -rlFf <(function_that_writes_variables_to_stdout) "${directory}"| wc -l
or directly
grep -rlFf <(printf "%s\n" "${var1}" "${var2}" "${var3}") "${directory}" | wc -l

sort elements read into array

While reading find results into an array I want them sorted at the same time (mp3's, so by track number, which is the first part of the file name), and thought something like this should do the trick:
mp3s=()
while read -r -d $'\0'; do
mp3s+=("$REPLY")
done < <(sort <(find "$mp3Dir" -type f -name '*.mp3' -print0))
but the elements in the array are never sorted correctly (by first part of file name which is mp3 track number: 01_..., 02_..., 03_..., etc.)
Although the following gets the job done, it seems unnecessarily awkward:
mp3s=()
while read -r -d $'\0'; do
mp3s+=("$REPLY")
done < <(find "$mp3Dir" -type f -name '*.mp3' -print0)
mp3s=( $(for f in "${mp3s[@]}" ; do
echo "$f"
done | sort) )
There must be a more streamlined way to get this done, along similar lines to what I was thinking in the first example, no? I have tried piping through sort on both sides of the find command, using its numerous options for sorting (-n, -d, etc.), but without any luck (so far).
So, is there a more efficient way to incorporate a sort command while the array is initially being populated?
By default, sort assumes newline-separated records. The call to find, however, specifies nul-separated output. The solution is to add the -z flag to sort. This tells sort to expect nul-separated input and produce nul-separated output. Thus, try:
mp3s=()
while read -r -d $'\0'; do
mp3s+=("$REPLY")
done < <(sort -z <(find "$mp3Dir" -type f -name '*.mp3' -print0))
Example
Suppose that we have these mp3 files:
$ find "." -type f -name '*.mp3' -print0
./music1/d b2.mp3./music1/a b1.mp3./music1/a b2.mp3./music1/d b1.mp3./music1/a b3.mp3./music1/d b3.mp3
First, try sort:
$ sort <(find "." -type f -name '*.mp3' -print0)
./music1/d b2.mp3./music1/a b1.mp3./music1/a b2.mp3./music1/d b1.mp3./music1/a b3.mp3./music1/d b3.mp3
The files remain unordered.
Now, try sort -z:
$ sort -z <(find "." -type f -name '*.mp3' -print0)
./music1/a b1.mp3./music1/a b2.mp3./music1/a b3.mp3./music1/d b1.mp3./music1/d b2.mp3./music1/d b3.mp3
The files are now in order.
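If your bash is 4.4 or newer, mapfile/readarray also accepts -d '', so the sorted, nul-delimited stream can be read straight into the array without a read loop (a sketch along the same lines):
mapfile -d '' -t mp3s < <(find "$mp3Dir" -type f -name '*.mp3' -print0 | sort -z)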
One way to do the sorting internally to bash is to use an associative array and put your data in the keys, rather than the values.
declare -A mp3s=()
while IFS= read -r -d ''; do
mp3s[$REPLY]=1
done < <(find "$mp3Dir" -type f -name '*.mp3' -print0)
...then, to iterate over the values:
for mp3 in "${!mp3s[@]}"; do
printf '%q\n' "$mp3"
done
As associative arrays are a feature added in bash 4.0, note that this functionality isn't available in 3.2 (which is still in use in some circles, most particularly MacOS).

Move files containing X but not containing Y

To manage my backup sync folder, I am trying to come up with a command that would move files beginning with string1* but NOT ending with *string2 from /folder1 to /folder2
What would a command containing such two opposite conditions (HAS and HAS NOT) look like?
#!/bin/bash
for i in `ls -d /folder1/string1* | grep -v 'string2$'`
do
ls -ld "$i" | grep '^-' > /dev/null # Test that we have a regular file and not a directory etc.
if [ $? -eq 0 ]; then
mv "$i" /folder2
fi
done
Try something like
find /folder1 -mindepth 1 -maxdepth 1 -type f \
-name 'string1*' \! -name '*string2' -exec mv -iv -t /folder2 '{}' +
Note: if you have an older version of find, you can replace + with \;
To me this is another case for (what I shall call) the read-while pattern.
cd /folder1
ls string1* | grep -v 'string2$' | while read -r f; do mv "$f" /folder2; done
The other answers are good alternatives, and in particular, find can do a lot. But I always get a headache using find, and never quite use it enough to do so without the manpage open.
Also, starting with ls or a simple find to get a list of files, and then using any or all of sed, awk, grep or whatever you have to hand, to adjust/trim/extend this list, and then bunging it into a loop, is a crude(ish) but pretty powerful technique.
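For what it's worth, the same read-while idea can be made safe for names with spaces or other odd characters by driving the loop from find -print0 instead of ls (a sketch, assuming GNU find and bash):
find /folder1 -maxdepth 1 -type f -name 'string1*' \! -name '*string2' -print0 |
while IFS= read -r -d '' f; do
    mv -- "$f" /folder2    # -- guards against names starting with a dash
done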

remove blank first line script

I have this script which is printing out the files that have the first line blank:
for f in `find . -regex ".*\.php"`; do
for t in head; do
$t -1 $f |egrep '^[ ]*$' >/dev/null && echo "blank line at the $t of $f";
done;
done
How can I improve this to actually remove the blank line too, or at least copy all the files with a blank first line somewhere else?
I tried copying using this, which is good because it copies preserving the directory structure, but it was copying every php file, and I needed to capture the positive output of the egrep and only copy those files.
rsync -R $f ../DavidSiteBlankFirst/
I would use sed personally
find ./ -type f -regex '.*\.php' -exec sed -i -e '1{/^[[:blank:]]*$/d;}' '{}' \;
This finds all the regular files ending in .php and executes the sed command, which works on the first line only, checks whether it is blank, and deletes it if it is; other blank lines in the file remain unaffected.
Just using find and sed:
find . -type f -name "*.php" -exec sed -i '1{/^\s*$/d;q;}' {} \;
The -type f option finds only files; not that I expect you would name folders with a .php suffix, but it's good practice. The use of -regex '.*\.php' is overkill and messier than just using the glob -name "*.php". Use find's -exec instead of a shell loop; the sed script will operate on each matching file passed by find.
The sed script looks at the first line only (1) and applies the operations inside {} to that line. We check whether the line is blank (/^\s*$/); if the line matches we delete (d) it and quit (q) the script so as not to read all the other lines in the file. The -i option saves the change back to the file, as the default behaviour of sed is to print to stdout. If you want backup files made, use -i~ instead; this will create a backup file~ for each file.
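If you also want the "at least copy them somewhere else" part of the question, the same first-line test can feed rsync -R instead of sed (a sketch reusing the question's ../DavidSiteBlankFirst/ target):
find . -type f -name '*.php' -print0 |
while IFS= read -r -d '' f; do
    # Copy only files whose first line is blank, preserving the directory structure
    head -n 1 "$f" | grep -q '^[[:blank:]]*$' && rsync -R "$f" ../DavidSiteBlankFirst/
done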

How do I capture the output from the ls or find command to store all file names in an array?

Need to process files in current directory one at a time. I am looking for a way to take the output of ls or find and store the resulting value as elements of an array. This way I can manipulate the array elements as needed.
To answer your exact question, use the following:
arr=( $(find /path/to/toplevel/dir -type f) )
Example
$ find . -type f
./test1.txt
./test2.txt
./test3.txt
$ arr=( $(find . -type f) )
$ echo ${#arr[@]}
3
$ echo ${arr[@]}
./test1.txt ./test2.txt ./test3.txt
$ echo ${arr[0]}
./test1.txt
However, if you just want to process files one at a time, you can either use find's -exec option if the script is somewhat simple, or you can do a loop over what find returns like so:
while IFS= read -r -d $'\0' file; do
# stuff with "$file" here
done < <(find /path/to/toplevel/dir -type f -print0)
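For the -exec route mentioned above, a sketch (md5sum is only a stand-in for whatever per-file command you actually need):
find /path/to/toplevel/dir -type f -exec md5sum '{}' +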
for i in `ls`; do echo $i; done;
can't get simpler than that!
edit: hmm - as per Dennis Williamson's comment, it seems you can!
edit 2: although the OP specifically asks how to parse the output of ls, I just wanted to point out that, as the commentators below have said, the correct answer is "you don't". Use for i in * or similar instead.
You actually don't need to use ls/find for files in the current directory.
Just use a for loop:
for files in *; do
if [ -f "$files" ]; then
# do something
fi
done
And if you want to process hidden files too, you can set the relevant shell option:
shopt -s dotglob
This last command works in bash only.
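If the goal really is an array rather than a loop, a glob can fill one directly; nullglob keeps the array empty instead of containing a literal * when nothing matches (a sketch):
shopt -s nullglob
files=( * )                  # every non-hidden entry (files and directories) in the current directory
printf '%s\n' "${files[@]}"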
Depending on what you want to do, you could use xargs:
ls directory | xargs -I{} cp -v {} dir2
For example, xargs here runs cp once for each item returned.
