for loop over an array in bash

I am trying to resolve an issue with a bash script that is intended to search through each user's home directory in /Users/ and find two different directories, stored in the array "SUBDIRS". If these directories exist, I want to remove them with the recursive and force options. If they do not exist, I want the script to continue looking for the next directory, next home folder, etc.
#!/bin/sh
err=0
SUBDIRS=(
"Library/Application Support/Spotify"
"Library/Caches/com.spotify.client"
)
for HOMEDIR in /Users/*; do
for SUBDIR in ${SUBDIRS}; do
DIR="${HOMEDIR}/${SUBDIR}"
if [[ -d "${DIR}" ]]; then
rm -rf "${DIR}"
echo "${HOMEDIR}/${SUBDIR} has been removed."
APP=$(find "${HOMEDIR}" -name [sS]potify.app)
rm -rf "${APP}"
fi
done
done
exit $err

You need to signify that it's an array to be expanded (and quote it).
for SUBDIR in "${SUBDIRS[@]}"; do
You should quote the pattern in the find command so find will expand it instead of the shell.
APP=$(find "${HOMEDIR}" -name '[sS]potify.app')
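Putting both fixes together, the whole script might look like this (a sketch; the bash shebang is my assumption, since arrays are a bash feature rather than plain POSIX sh):
#!/bin/bash
err=0
SUBDIRS=(
"Library/Application Support/Spotify"
"Library/Caches/com.spotify.client"
)
for HOMEDIR in /Users/*; do
  for SUBDIR in "${SUBDIRS[@]}"; do      # expand the whole array, quoted
    DIR="${HOMEDIR}/${SUBDIR}"
    if [[ -d "${DIR}" ]]; then
      rm -rf "${DIR}"
      echo "${DIR} has been removed."
      # quote the pattern so find, not the shell, expands it
      APP=$(find "${HOMEDIR}" -name '[sS]potify.app')
      rm -rf "${APP}"
    fi
  done
done
exit $err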

Related

How to check availability of each directory (directory tree) in a file path. Unix/Bash

Given a text file with paths (one path per line), I need to create directories from the paths. That part is easy, e.g. mkdir -p is/isapp/ip/ipapp followed by chgrp group1 is/isapp/ip/ipapp. The problem is that the group only changes for the final ipapp directory, while I need to change it for all newly created directories without touching directories that already existed before the mkdir -p command. So I need to check which directories already exist and change permissions only for the newly created ones. Below I tried to split the path from the file and gradually extend it until the search no longer finds the directory, then run chgrp -R on the first directory that was not found. These are my code sketches. I would be grateful for any help.
#!/bin/bash
FILE=$1 # (file with paths, one path per line)
while read LINE; do
IFS='/' read -ra my_array <<< "$my_string"
if ! [ -d "${my_array[0]}" ]; then
mkdir -p "${my_array[0]}"
else -d "${my_array[0]}"/"${my_array[@]}"
done
fi
Something like this would work (basically, for each directory level, try to cd into it; if you can't, create the directory with the proper permissions and then cd in):
#!/bin/bash
MODE=u+rw
ROOT=$(pwd)
while read -r line; do
  IFS='/' read -r -a dirs <<< "${line}"
  for dir in "${dirs[@]}"; do
    # create this level with the desired mode only if it doesn't exist yet
    [ -d "${dir}" ] || mkdir -m "${MODE}" "${dir}" || exit 1
    cd "${dir}" || exit 1
  done
  cd "${ROOT}"
done
Note: this reads from stdin (so you would have to pipe your file into the script); alternatively, add < "${FILE}" right after the final done to feed the file in manually. The quotes around "${dir}" and "${dirs[@]}" are required in case there are any whitespace characters in the filenames. The loop variable is deliberately not named PATH: overwriting PATH would break the lookup of external commands like mkdir.
The exit 1 saves you in case the mkdir fails (say there's a file with the name of the directory you want to create).
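Since the actual goal was to change the group only for newly created directories, a minimal variant of the same idea could run chgrp right after each successful mkdir (a sketch; the group name group1 comes from the question, and the script takes the path file as its first argument):
#!/bin/bash
GROUP=group1                     # group name taken from the question
ROOT=$(pwd)
while read -r line; do
  IFS='/' read -r -a dirs <<< "${line}"
  for dir in "${dirs[@]}"; do
    if [ ! -d "${dir}" ]; then
      mkdir "${dir}" || exit 1
      chgrp "${GROUP}" "${dir}" || exit 1   # only newly created levels get the group
    fi
    cd "${dir}" || exit 1
  done
  cd "${ROOT}"
done < "${1:?usage: $0 pathfile}"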

Script to group numbered files into folders

I have around a million files in one folder, in the form xxxx_description.jpg, where xxxx is a number ranging from 100 up to an unknown upper bound.
The list is similar to this:
146467_description1.jpg
146467_description2.jpg
146467_description3.jpg
146467_description4.jpg
14646_description1.jpg
14646_description2.jpg
14646_description3.jpg
146472_description1.jpg
146472_description2.jpg
146472_description3.jpg
146500_description1.jpg
146500_description2.jpg
146500_description3.jpg
146500_description4.jpg
146500_description5.jpg
146500_description6.jpg
To get the file count in that folder down, I'd like to put them all into folders grouped by the number at the start.
ie:
146467/146467_description1.jpg
146467/146467_description2.jpg
146467/146467_description3.jpg
146467/146467_description4.jpg
14646/14646_description1.jpg
14646/14646_description2.jpg
14646/14646_description3.jpg
146472/146472_description1.jpg
146472/146472_description2.jpg
146472/146472_description3.jpg
146500/146500_description1.jpg
146500/146500_description2.jpg
146500/146500_description3.jpg
146500/146500_description4.jpg
146500/146500_description5.jpg
146500/146500_description6.jpg
I was thinking of trying a command line using find, awk, and mv, or maybe writing a script, but I'm not sure how to do this most efficiently.
If you really are dealing with millions of files, I suspect that a glob (*.jpg or [0-9]*_*.jpg) may fail because it makes a command line that's too long for the shell. If that's the case, you can still use find. Something like this might work:
find /path -name "[0-9]*_*.jpg" -exec sh -c 'f="{}"; mkdir -p "/target/${f%_*}"; mv "$f" "/target/${f%_*}/"' \;
Broken out for easier reading, this is what we're doing:
find /path - run find, with /path as a starting point,
-name "[0-9]*_*.jpg" - match files that match this filespec in all directories,
-exec sh -c - execute the following on each file...
'f="{}"; - put the filename into a variable...
mkdir -p "/target/${f%_*}"; - make a target directory based on that variable (read mkdir's man page about the -p option)
mv "$f" "/target/${f%_*}/"' - move the file into the directory.
\; - end the -exec expression
On the up side, it can handle any number of files that find can handle (i.e. limited only by your OS). On the down side, it's launching a separate shell for each file to be handled.
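If the per-file shell becomes a bottleneck, a variant of the same command can batch many files into a single shell with -exec ... + and loop over them inside it (a sketch, untested at the million-file scale; it adds a basename step so the target directory is derived from the file name alone):
find /path -name "[0-9]*_*.jpg" -exec sh -c '
for f; do
  base=${f##*/}                      # strip any leading directories
  mkdir -p "/target/${base%_*}"      # directory named after the leading number
  mv "$f" "/target/${base%_*}/"
done' sh {} +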
Note that the above answer is for Bourne/POSIX/Bash. If you're using CSH or TCSH as your shell, the following might work instead:
#!/bin/tcsh
foreach f (*_*.jpg)
set split = ($f:as/_/ /)
mkdir -p "$split[1]"
mv "$f" "$split[1]/"
end
This assumes that the filespec will fit in tcsh's glob buffer. I've tested with 40000 files (894KB) on one command line and not had a problem using /bin/sh or /bin/csh in FreeBSD.
Like the Bourne/POSIX/Bash parameter expansion solution above, this avoids unnecessary calls to external programs. I haven't tested it at the scale of a million files, though, and would recommend the find solution even though it's slower.
You can use this script:
for i in [0-9]*_*.jpg; do
  p=$(echo "$i" | sed 's/^\([0-9]*\)_.*/\1/')
  mkdir -p "$p"
  mv "$i" "$p"
done
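For what it's worth, the sed call can be avoided entirely with parameter expansion, saving one external process per file (a sketch of the same loop):
for i in [0-9]*_*.jpg; do
  p=${i%%_*}        # everything before the first underscore
  mkdir -p "$p"
  mv "$i" "$p"
done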
Using grep
for file in *.jpg; do
  dirName=$(echo "$file" | grep -oE '^[0-9]+')
  [[ -d $dirName ]] || mkdir "$dirName"
  mv "$file" "$dirName"
done
grep -oE '^[0-9]+' extracts the starting digits in the filename as
146467
146467
146467
146467
14646
...
[[ -d $dirName ]] succeeds (exit status 0) if the directory exists
[[ -d $dirName ]] || mkdir "$dirName" ensures that the mkdir runs only if the test [[ -d $dirName ]] fails, that is, the directory does not exist

Looking to take only main folder name within a tarball & match it to folders to see if it's been extracted

I have a situation where I need to keep .tgz files & if they've been extracted, remove the extracted directory & contents.
In all examples, the only top-level directory within the tarball has a different name than the tarball itself:
[host1]$ find / -name "*@*.tgz" #(has an @ symbol somewhere in the name)
/1-@-test.tgz
[host1]$ tar -tzvf /1-@-test.tgz | head -n 1 | awk '{ print $6 }'
TJ #(directory name)
What I'd like to accomplish (pulling my hair out; rusty scripting fingers) is to look at each tarball and see if the corresponding directory name (like above) exists. If it does, echo "rm -rf /directoryname" into an output file for review.
I can read all of the tarballs into an array ... but how to check the directories?
Frustrated & appreciate any help.
Maybe you're looking for something like this:
find / -name "*@*.tgz" | while read line; do
  dir=$(tar ztf "$line" | awk -F/ '{print $1; exit}')
  test -d "$dir" && echo "rm -fr '$dir'"
done
Explanation:
We iterate over the *@*.tgz files found with a while loop, line by line
Get the list of files in the tgz file with tar ztf "$line"
Since the paths inside the archive are separated by /, use that as the field separator in awk and print the first field. After the print we exit, making this equivalent to, but more efficient than, using head -n1 first
With dir=$(...) we put the entire output of the tar..awk chain, i.e. the first path component of the first entry in the tar, into the variable dir
We check if such directory exists, if yes then echo an rm command so you can review and execute later if looks good
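For the review step, one way to use this (the script and file names here are just placeholders) is:
./find-extracted.sh > cleanup.sh   # the loop above, saved as a script
less cleanup.sh                    # inspect the generated rm commands
sh cleanup.sh                      # run them once satisfied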
My original answer used a find ... -exec but I think that's not so good in this particular case:
find / -name "*@*.tgz" -exec \
  sh -c 'dir=$(tar ztf "{}" | awk -F/ "{print \$1; exit}");\
  test -d "$dir" && echo "rm -fr \"$dir\""' \;
It's not so good because of running sh for every file, and since we are using {} in the subshell, we lose the usual benefits of a typical find ... -exec where special characters in {} are correctly handled.

Shell command/script to delete files whose names are in a text file

I have a list of files in a .txt file (say list.txt). I want to delete the files in that list. I haven't done scripting before. Could someone give me the shell script/command I can use? I am using the bash shell.
while read -r filename; do
rm "$filename"
done <list.txt
is slow.
rm $(<list.txt)
will fail if there are too many arguments.
I think it should work:
xargs -a list.txt -d'\n' rm
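Equivalently, since xargs reads stdin by default, the -a flag can be replaced with a plain redirect (this still relies on GNU xargs for -d):
xargs -d '\n' rm < list.txt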
Try this command:
rm -f $(<file)
If the file names have spaces in them, some of the other answers will fail; they'll treat each word as a separate file name. Assuming the list of files is in list.txt, this handles spaces correctly:
while IFS= read -r name; do
  rm "$name"
done < list.txt
For fast execution on macOS, where the xargs custom delimiter option -d is not available:
<list.txt tr "\n" "\0" | xargs -0 rm
The following should work and leaves you room to do other things as you loop through.
Edit: Don't do this, see here: http://porkmail.org/era/unix/award.html
for file in $(cat list.txt); do rm $file; done
I was just looking for a solution to this today and ended up using a modified solution from some answers and some utility functions I have.
# This is in my .bash_profile
# Find
ffe () { /usr/bin/find . -name '*'"$@" ; } # ffe: Find file whose name ends with a given string
# Delete Gradle Logs
function delete_gradle_logs() {
  (cd ~/.gradle; ffe .out.log | xargs -I@ rm @)
}
On Linux, you can try:
printf "%s\n" $(<list.txt) | xargs -I@ rm @
In my case, my .txt file contained a list of patterns of the kind *.ext, and it worked fine.

Looping through sub folders not working Unix

I have a folder with multiple sub-folders and each sub-folder contains 10-15 files. I want to perform a certain operation only on the text files in these folders. The folders contain other types of files as well. For now, I am just trying to write a simple for loop to access every file.
for /r in *.txt; do "need to perform this on every file"; done
This gives me the error -bash: `/r': not a valid identifier
Thanks for the help.
P.S I am using cygwin on Win 7.
Your /r is the problem: it's not a valid identifier, as bash said (you need to drop the /). Also, this won't recurse into subdirectories. If your operation is simple, you can use find's -exec option directly; {} is a placeholder for the filename.
find . -name "*.txt" -exec ls -l {} \;
Otherwise, try something like
for r in $( find . -name "*.txt" ) ; do
  echo "$r"
  # more actions...
done
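Be aware that for r in $(find ...) word-splits the output, so it breaks on file names containing spaces. A more robust pattern (a sketch) is to have find emit NUL-terminated names and read them in a while loop:
find . -name "*.txt" -print0 | while IFS= read -r -d '' r; do
  echo "$r"
  # more actions...
done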
With bash:
shopt -s globstar
for file in **/*.txt; do ...
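Spelled out, that might look like this (echo is a stand-in for the real operation; nullglob is an extra I'd add so the loop body is skipped when nothing matches):
shopt -s globstar nullglob
for file in **/*.txt; do
  echo "processing $file"   # replace with the real per-file operation
done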
I would use find for your use case. Something like:
find . -name "*.txt" -exec doSomeThing {} \;
