Putting files in directory into array variable - arrays

I'm writing bash code that will search for specific files in the directory it is run in and add them to an array variable. The problem I'm having is formatting the results. I need to find all the compressed files in the current directory and display both the names and sizes of the files, in order of last modification. I want to take the results of that command and put them into an array variable, with each element containing a file's name and corresponding size, but I don't know how to do that. I'm not sure whether I should be using find instead of ls, but here is what I have so far:
find_files="$(ls -1st --block-size=MB)"
arr=( ($find_files) )
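For reference, the usual bash idiom for loading command output line-by-line into an array is mapfile (readarray). This is only a sketch built on the ls call above; it does not yet filter for compressed files or guard against unusual filenames:
# sketch only: one output line per array element
mapfile -t arr < <(ls -1st --block-size=MB)
printf '%s\n' "${arr[@]}"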

I'm not sure exactly what format you want the array to be in, but here is a snippet that creates an associative array keyed by filename with the size as the value:
$ ls -l test.{zip,bz2}
-rw-rw-r-- 1 user group 0 Sep 10 13:27 test.bz2
-rw-rw-r-- 1 user group 0 Sep 10 13:26 test.zip
$ declare -A sizes; while read SIZE FILENAME ; do sizes["$FILENAME"]="$SIZE"; done < <(find * -prune -name '*.zip' -o -name '*.bz2' | xargs stat -c "%Y %s %N" | sort | cut -f 2,3 -d " ")
$ echo "${sizes[#]#A}"
declare -A sizes=(["'test.zip'"]="0" ["'test.bz2'"]="0" )
$
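As a quick usage sketch, assuming the sizes associative array built above, you can iterate over its keys like this:
for f in "${!sizes[@]}"; do
    echo "file: $f  size: ${sizes[$f]}"
done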
And if you just want an array of literally "filename size" entries, that's even easier:
$ while read SIZE FILENAME ; do sizes+=("$FILENAME $SIZE"); done < <(find * -prune -name '*.zip' -o -name '*.bz2' | xargs stat -c "%Y %s %N" | sort | cut -f 2,3 -d " ")
$ echo "${sizes[#]#A}"
declare -a sizes=([0]="'test.zip' 0" [1]="'test.bz2' 0")
$

Both of these solutions work, and were tested via copy paste from this post.
The first is fairly slow. One problem is external program invocations inside the loop: date, for example, is invoked once per file. You could make it quicker by not including the date in the output array (see Notes below). Particularly for method 2, that would leave no external command invocations inside the while loop. But method 1 is really the problem - it is orders of magnitude slower.
Also, somebody probably knows how to convert an epoch date to another format in awk, for example, which could be faster. Maybe you could do the sort in awk too. Perhaps just keep the epoch date?
These solutions are bash / GNU heavy and not portable to other environments (process substitution, find -printf). The OP tagged linux and bash though, so GNU can be assumed.
Solution 1 - capture any compressed file - using file to match (slow)
The criterion for 'compressed' is whether the file output contains the word compress (see the sample output after these notes)
Reliable enough, but perhaps there is a conflict with some other file type description?
file -l | grep compress (file 5.38, Ubuntu 20.04, WSL) indicates for me there are no conflicts at all (all files listed are compression formats)
I couldn't find a way of classifying any compressed file other than this
I ran this on a directory containing 1664 files - time (real) was 40 seconds
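For illustration, this is roughly what file --brief reports for a gzipped archive (a hypothetical file; the exact trailing details vary with the file version and compression format). The word compress in that description is what the script below keys on:
$ file --brief sample.tar.gz
gzip compressed data, from Unix ...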
#!/bin/bash
# Capture all files, recursively, in $TARGET, that are
# compressed files. In an indexed array. Using the file
# utility to classify each file.
# Initialise variables, and check the target is valid
declare -g c= compressed_files= path= TARGET=$1
[[ -r "$TARGET" ]] || exit 1
# Make the array
# The loop reads from process substitution (<(...)) rather than a pipe, so it runs in the current shell and the array stays in the global environment
while IFS= read -r -d '' path; do
[[ "$(file --brief "${path%% *}")" == *compress* ]] &&
compressed_files[c++]="${path% *} $(date -d @${path##* })"
done < \
<(
find "$TARGET" -type f -printf '%p %s %T#\0' |
awk '{$2 = ($2 / 1024); print}' |
sort -n -k 3
)
# Print results - to test
printf '%s\n' "${compressed_files[#]}"
Solution 2 - use file extensions - orders of magnitude faster
If you know exactly what extensions you are looking for, you can
compose them in a find command
This is a lot faster
On the same directory as above, containing 1664 files - time (real) was 200 milliseconds
This example looks for .gz, .zip, and .7z (gzip, zip and 7zip respectively)
I'm not sure if -type f -and -regex '.*[.]\(gz\|zip\|7z\)' -and -printf may be faster again, now I think of it (see the rough sketch after these notes). I started with globs because I assumed that was quicker
That might also allow the extension list to be stored in a variable.
This method avoids a file analysis on every file in your target
It also makes the while loop shorter - you're only iterating matches
Note the repetition of -printf here, this is due to the logic that
find uses: -printf is 'True'. If it were included by itself, it would
act as a 'match' and print all files
It has to be used as a result of a name match being true (using -and)
Perhaps somebody has a better composition?
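As a rough, untested sketch of that -regex idea (GNU find, which -printf already requires; the rest of the pipeline would stay the same):
find "$TARGET" -type f -regextype posix-extended -regex '.*\.(gz|zip|7z)' -printf '%p %s %T@\0'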
#!/bin/bash
# Capture all files, recursively, in $TARGET, that are
# compressed files. In an indexed array. Using file name
# extensions to match.
# Initialise variables, and check the target is valid
declare -g c= compressed_files= path= TARGET=$1
[[ -r "$TARGET" ]] || exit 1
while IFS= read -r -d '' path; do
compressed_files[c++]="${path% *} $(date -d @${path##* })"
done < \
<(
find "$TARGET" \
-type f -and -name '*.gz' -and -printf '%p %s %T@\0' -or \
-type f -and -name '*.zip' -and -printf '%p %s %T@\0' -or \
-type f -and -name '*.7z' -and -printf '%p %s %T@\0' |
awk '{$2 = ($2 / 1024); print}' |
sort -n -k 3
)
# Print results - for testing
printf '%s\n' "${compressed_files[#]}"
Sample output (of either method):
$ comp-find.bash /tmp
/tmp/comptest/websters_english_dictionary.tmp.tar.gz 265.148 Thu Sep 10 07:53:37 AEST 2020
/tmp/comptest/What_is_Systems_Architecture_PART_1.tar.gz 1357.06 Thu Sep 10 08:17:47 AEST 2020
Note:
You can append a literal K to the size (e.g. in the awk print) to label the units (kilobytes)
If you want to print only the path from this array, you can use suffix removal: printf '%s\n' "${compressed_files[@]%% *}"
For no date in the array (it's used to sort, but then its job may be done), simply remove $(date -d @${path##* }) (incl. the space).
Kind of tangential, but to use different date formats, replace $(date -d @${path##* }) with:
$(date -I -d @${path##* }) ISO format - note that the short-option style date -Id @[date] did not work for me
$(date -d @${path##* } +%Y-%m-%d_%H-%M-%S) like ISO, but w/ seconds
$(date -d @${path##* } +%Y-%m-%d_%H-%M-%S.%N) same again, but w/ nanoseconds (find gives you nanoseconds)
Sorry for the long post, hopefully it's informative.

Related

Counting the number of files in a directory that contain the different variables in my array - bash script

I have a bash script, which needs to check certain files for certain variables, and count how many files come back containing those variables.
As there is more than one variable I need to look for I decided to to use an array for the variables.
The code I am using is below:
#!/bin/bash
declare -a MYARRAY=('Variable One' 'Variable Two' 'Variable Three');
COUNT_MYARRAY=$(find $DIRECTORY -mtime -1 -exec grep -ln $MYARRAY {} \; | wc -l)
I have declared the $DIRECTORY in my real script.
However, it does not seem to pick up files if they contain the second or third variable.
Can anyone see where I might be going wrong?
You can use grep's regex support and pass multiple expressions using 'var1\|var2'. First construct the grep argument and then execute grep.
You don't need line numbers (-n) from grep to count the files.
grep can handle multiple files - it will be faster to pass multiple files to one grep with -exec ... +, rather than spawning grep for each file.
UPPER_CASE_VARIABLES are shouting at me, and by convention upper case variables are reserved for exported variables.
myarray=('Variable One' 'Variable Two' 'Variable Three')
arg=$(printf "%s\|" "${myarray[@]}" | sed 's/\\|$//')
directory=.
count_myarray=$(find "$directory" -type f -mtime -1 -exec grep -l "$arg" {} + | wc -l)
Alternatively: you can pass multiple -exec arguments to find. So first from myarray construct arguments to find in the form -exec grep -l <the var>. Note that multiple variables can be in same files, so get unique filenames after grepping.
myarray=('Variable One' 'Variable Two' 'Variable Three');
findargs=()
for i in "${MYARRAY[#]}"; do
findargs+=(-exec grep -l "$i" {} +)
done
directory=.
count_myarray=$(find "$directory" -type f -mtime -1 "${findargs[#]}" | sort -u | wc -l)
or similar:
count_myarray=$(printf '-exec\0grep\0-l\0%s\0{}\0+\0' "${myarray[@]}" | xargs -0 find "$directory" -type f -mtime -1 | sort -u | wc -l)
Remember to quote your variable expansions to protect against whitespaces or special characters in filenames and directory names.
Going wrong:
With echo $MYARRAY you find Variable One, not the string you want for grep.
Also note that it is better to use lowercase for your variable names. I will use ${directory} and not $DIRECTORY (and in double quotes for directories with a space).
You have more options with grep. When you want a file with 8 occurrences to be counted once, you cannot use the grep option -c. A useful option is -r. You are looking for something like
grep -Erl "Variable One|Variable Two|Variable Three" "${directory}" | wc -l
This is difficult when the variables might have special characters like $ or |.
Another option of grep is using the option
-f FILE, Obtain patterns from FILE, one per line
So you should make a function that writes the variables to a file, and use something like
grep -rlFf "myVariablesFile" "${directory}" | wc -l
When the content of the file is changing rapidly, you might want to avoid the temporary file with
grep -rlFf <(function_that_writes_variables_to_stdout) "${directory}"| wc -l
or directly
grep -rlFf <(printf "%s\n" "${var1}" "${var2}" "${var3}") "${directory}" | wc -l
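Tying that back to arrays, a sketch that feeds the myarray variable from the earlier answers straight into grep -f via process substitution:
grep -rlFf <(printf '%s\n' "${myarray[@]}") "${directory}" | wc -l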

(pdksh) Looping through n files and store their dates in array

Using pdksh
The stat command is unavailable on the system.
I need to loop through the number of files found and store their dates in an array. $COMMAND stores the number of files found in $location, as can be seen below.
Can someone help me please?
COMMAND=`find $location -type f | wc -l`
CMD_getDate=$(find $location -type f | xargs ls -lrt | awk '{print $6} {print $7}')
Well, first, you don't need to do the wc. The size of the array will tell you how many there are. Here's a simple way to build an array of dates and names (designed for pdksh; there are better ways to do it in AT&T ksh or bash):
set -A files
set -A dates
find "$location" -type f -ls |&
while read -p inum blocks symode links owner group size rest; do
set -A files "${files[@]}" "${rest##* }"
set -A dates "${dates[@]}" "${rest% *}"
done
Here's one way to examine the results:
print "Found ${#files[#]} files:"
let i=0
while (( i < ${#files[@]} )); do
print "The file '${files[i]}' was modified on ${dates[i]}."
let i+=1
done
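For comparison only (the question is about pdksh), the same idea in bash, which the rest of this thread uses, would be a sketch like:
#!/bin/bash
# sketch: bash equivalent of the pdksh loops above
files=() dates=()
while read -r inum blocks symode links owner group size rest; do
    files+=("${rest##* }")     # last word of the remainder: the filename (no spaces assumed)
    dates+=("${rest% *}")      # everything before that: the date string
done < <(find "$location" -type f -ls)
printf 'Found %d files\n' "${#files[@]}"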
This gives you the full date string, not just the month and day. That might be what you want - the date output of ls -l (or find -ls) is variable depending on how long ago the file was modified. Given these files, how does your original format distinguish between the modification times of a and b?
$ ls -l
total 0
-rw-rw-r--+ 1 mjreed staff 0 Feb 3 2014 a
-rw-rw-r--+ 1 mjreed staff 0 Feb 3 04:05 b
As written, the code above would yield this for the above directory with location=.:
Found 2 files:
The file './a' was modified on Feb 3 2014.
The file './b' was modified on Feb 3 04:05.
It would help if you indicated what the actual end goal is.

Script to group numbered files into folders

I have around a million files in one folder in the form xxxx_description.jpg, where xxxx is a number ranging from 100 up to an unknown upper bound.
The list is similar to this:
146467_description1.jpg
146467_description2.jpg
146467_description3.jpg
146467_description4.jpg
14646_description1.jpg
14646_description2.jpg
14646_description3.jpg
146472_description1.jpg
146472_description2.jpg
146472_description3.jpg
146500_description1.jpg
146500_description2.jpg
146500_description3.jpg
146500_description4.jpg
146500_description5.jpg
146500_description6.jpg
To get the file count in that folder down, I'd like to put them all into folders grouped by the number at the start.
ie:
146467/146467_description1.jpg
146467/146467_description2.jpg
146467/146467_description3.jpg
146467/146467_description4.jpg
14646/14646_description1.jpg
14646/14646_description2.jpg
14646/14646_description3.jpg
146472/146472_description1.jpg
146472/146472_description2.jpg
146472/146472_description3.jpg
146500/146500_description1.jpg
146500/146500_description2.jpg
146500/146500_description3.jpg
146500/146500_description4.jpg
146500/146500_description5.jpg
146500/146500_description6.jpg
I was thinking of trying a command line along the lines of find | awk | mv, or maybe writing a script, but I'm not sure how to do this most efficiently.
If you really are dealing with millions of files, I suspect that a glob (*.jpg or [0-9]*_*.jpg) may fail because it makes a command line that's too long for the shell. If that's the case, you can still use find. Something like this might work:
find /path -name "[0-9]*_*.jpg" -exec sh -c 'f="{}"; mkdir -p "/target/${f%_*}"; mv "$f" "/target/${f%_*}/"' \;
Broken out for easier reading, this is what we're doing:
find /path - run find, with /path as a starting point,
-name "[0-9]*_*.jpg" - match files that match this filespec in all directories,
-exec sh -c execute the following on each file...
'f="{}"; - put the filename into a variable...
mkdir -p "/target/${f%_*}"; - make a target directory based on that variable (read mkdir's man page about the -p option)
mv "$f" "/target/${f%_*}/"' - move the file into the directory.
\; - end the -exec expression
On the up side, it can handle any number of files that find can handle (i.e. limited only by your OS). On the down side, it's launching a separate shell for each file to be handled.
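To avoid launching one shell per file, a common variation (untested here, using the same hypothetical /path and /target as above) hands each shell a batch of names with -exec ... + and loops inside it:
find /path -name "[0-9]*_*.jpg" -exec sh -c '
    for f in "$@"; do
        d="/target/${f##*/}"   # basename of the file
        d="${d%%_*}"           # keep only the leading number
        mkdir -p "$d" && mv "$f" "$d/"
    done
' sh {} +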
Note that the above answer is for Bourne/POSIX/Bash. If you're using CSH or TCSH as your shell, the following might work instead:
#!/bin/tcsh
foreach f (*_*.jpg)
set split = ($f:as/_/ /)
mkdir -p "$split[1]"
mv "$f" "$split[1]/"
end
This assumes that the filespec will fit in tcsh's glob buffer. I've tested with 40000 files (894KB) on one command line and not had a problem using /bin/sh or /bin/csh in FreeBSD.
Like the Bourne/POSIX/Bash parameter expansion solution above, this avoids unnecessary calls to external programs. I haven't tested it at that scale, though, and would recommend the find solution even though it's slower.
You can use this script:
for i in [0-9]*_*.jpg; do
p=`echo "$i" | sed 's/^\([0-9]*\)_.*/\1/'`
mkdir -p "$p"
mv "$i" "$p"
done
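With on the order of a million files, that per-file echo | sed pipeline adds up; the same leading number can be taken with bash parameter expansion alone (a sketch of the same loop):
for i in [0-9]*_*.jpg; do
    p="${i%%_*}"               # everything before the first underscore
    mkdir -p "$p"
    mv "$i" "$p/"
done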
Using grep
for file in *.jpg;
do
dirName=$(echo "$file" | grep -oE '^[0-9]+')
[[ -d $dirName ]] || mkdir "$dirName"
mv "$file" "$dirName"
done
grep -oE '^[0-9]+' extracts the starting digits in the filename as
146467
146467
146467
146467
14646
...
[[ -d $dirName ]] succeeds (exit status 0) if the directory exists
[[ -d $dirName ]] || mkdir $dirName ensures that the mkdir runs only if the test [[ -d $dirName ]] fails, that is, the directory does not exist

Move files containing X but not containing Y

To manage my backup sync folder, I am trying to come up with a command that would move files beginning with string1* but NOT ending with *string2 from /folder1 to /folder2
What would a command containing such two opposite conditions (HAS and HAS NOT) look like?
#!/bin/bash
for i in `ls -d /folder1/string1* | grep -v 'string2$'`
do
ls -ld $i | grep '^-' > /dev/null # Test that we have a regular file and not a directory etc.
if [ $? == 0 ]; then
mv $i /folder2
fi
done
Try something like
find /folder1 -mindepth 1 -maxdepth 1 -type f \
-name 'string1*' \! -name '*string2' -exec cp -iv -t /folder2 {} +
Note: with -exec ... +, the {} must come last, hence GNU cp's -t option to name the target directory first. If you have an older find or a non-GNU cp, use -exec cp -iv {} /folder2 \; instead.
To me this is another case for (what I shall denote) the read while pattern.
cd /folder1
ls string1* | grep -v 'string2$' | while read -r f; do mv "$f" /folder2; done
The other answers are good alternatives, and in particular, find can do a lot. But I always get a headache using find, and never quite use it enough to do so without the manpage open.
Also, starting with ls or a simple find to get a list of files, and then using any or all of sed, awk, grep or whatever you have to hand, to adjust/trim/extend this list, and then bunging it into a loop, is a crude(ish) but pretty powerful technique.
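If the filenames may contain spaces or other awkward characters, a more robust take on the same read-while pattern (a sketch using find and NUL delimiters) is:
find /folder1 -maxdepth 1 -type f -name 'string1*' ! -name '*string2' -print0 |
while IFS= read -r -d '' f; do
    mv "$f" /folder2/
done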

Using a variable to pass grep pattern in bash

I am struggling with passing several grep patterns that are contained within a variable. This is the code I have:
#!/bin/bash
GREP="$(which grep)"
GREP_MY_OPTIONS="-c"
for i in {-2..2}
do
GREP_MY_OPTIONS+=" -e "$(date --date="$i day" +'%Y-%m-%d')
done
echo $GREP_MY_OPTIONS
IFS=$'\n'
MYARRAY=( $(${GREP} ${GREP_MY_OPTIONS} "/home/user/this path has spaces in it/"*"/abc.xyz" | ${GREP} -v :0$ ) )
This is what I wanted it to do:
determine/define where grep is
assign a variable (GREP_MY_OPTIONS) holding parameters I will pass to grep
assign several patterns to GREP_MY_OPTIONS
using grep and the patterns I have stored in $GREP_MY_OPTIONS search several files within a path that contains spaces and hold them in an array
When I use "echo $GREP_MY_OPTIONS" it is generating what I expected but when I run the script it fails with an error of:
/bin/grep: invalid option -- ' '
What am I doing wrong? If the path does not have spaces in it everything seems to work fine so I think it is something to do with the IFS but I'm not sure.
If you want to grep some content in a set of paths, you can do the following:
find <directory> -type f -print0 |
grep "/home/user/this path has spaces in it/\"*\"/abc.xyz" |
xargs -I {} grep <your_options> -f <patterns> {}
So that <patterns> is a file containing the patterns you want to search for in each file from directory.
Considering your answer, this shall do what you want:
find "/path\ with\ spaces/" -type f | xargs -I {} grep -H -c -e 2013-01-17 {}
From man grep:
-H, --with-filename
Print the file name for each match. This is the default when
there is more than one file to search.
Since you want to insert the elements into an array, you can do the following:
IFS=$'\n'; array=( $(find "/path with spaces/" -type f -print0 |
xargs -0 -I {} grep -H -c -e 2013-01-17 "{}") )
And then use the values as:
echo ${array[0]}
echo ${array[1]}
echo ${array[...]}
When using variables to pass the parameters, use eval to evaluate the entire line. Do the following:
parameters="-H -c"
eval "grep ${parameters} file"
If you build the GREP_MY_OPTIONS as an array instead of as a simple string, you can get the original outline script to work sensibly:
#!/bin/bash
path="/home/user/this path has spaces in it"
GREP="$(which grep)"
GREP_MY_OPTIONS=("-c")
j=1
for i in {-2..2}
do
GREP_MY_OPTIONS[$((j++))]="-e"
GREP_MY_OPTIONS[$((j++))]=$(date --date="$i day" +'%Y-%m-%d')
done
IFS=$'\n'
MYARRAY=( $(${GREP} "${GREP_MY_OPTIONS[@]}" "$path/"*"/abc.xyz" | ${GREP} -v :0$ ) )
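A quick sanity check of what that loop builds (a sketch; shown here as if the script ran on 2013-01-17, the date used elsewhere in this thread):
$ printf '<%s>\n' "${GREP_MY_OPTIONS[@]}"
<-c>
<-e>
<2013-01-15>
<-e>
<2013-01-16>
<-e>
<2013-01-17>
<-e>
<2013-01-18>
<-e>
<2013-01-19>
Each option and each date is its own array element, which is why the quoted "${GREP_MY_OPTIONS[@]}" expansion hands them to grep as separate words without re-splitting anything.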
I'm not clear why you use GREP="$(which grep)" since you will execute the same grep as if you wrote grep directly — unless, I suppose, you have some alias for grep (which is then the problem; don't alias grep).
You can do one thing without making things complex:
First do a change directory in your script like following:
cd /home/user/this\ path\ has\ spaces\ in\ it/
$ pwd
/home/user/this path has spaces in it
or
$ cd "/home/user/this path has spaces in it/"
$ pwd
/home/user/this path has spaces in it
Then do what ever your want in your script.
$(${GREP} ${GREP_MY_OPTIONS} */abc.xyz)
EDIT :
[sgeorge@sgeorge-ld stack1]$ ls -l
total 4
drwxr-xr-x 2 sgeorge eng 4096 Jan 19 06:05 test tesd
[sgeorge@sgeorge-ld stack1]$ cat test\ tesd/file
SUKU
[sgeorge@sgeorge-ld stack1]$ grep SUKU */file
SUKU
EDIT :
[sgeorge@sgeorge-ld stack1]$ find */* -print | xargs -I {} grep SUKU {}
SUKU
