I am wondering why this array expression in Bash doesn't give me an array. It just gives me the first element in the string:
IFS='\n' read -r -a POSSIBLE_ENCODINGS <<< $(iconv -l)
I want to try out all available encodings to see how reading different file encodings works in an R script, and I am using this Bash script to create text files in all possible encodings:
#!/bin/bash
IFS='\n' read -r -a POSSIBLE_ENCODINGS <<< $(iconv -l)
echo "${POSSIBLE_ENCODINGS[#]}"
for CURRENT_ENCODING in "${POSSIBLE_ENCODINGS[#]}"
do
TRIMMED=$(echo $CURRENT_ENCODING | sed 's:/*$::')
iconv --verbose --from-code=UTF-8 --to-code="$TRIMMED" --output=encoded-${TRIMMED}.txt first_file.txt
echo "Current encoding: ${TRIMMED}"
echo "Output file:encoded-${TRIMMED}.txt"
done
EDIT: Code edited according to answers below:
#!/bin/bash
readarray -t possibleEncodings <<< "$(iconv -l)"
echo "${possibleEncodings[#]}"
for currentEncoding in "${possibleEncodings[#]}"
do
trimmedEncoding=$(echo $currentEncoding | sed 's:/*$::')
echo "Trimmed encoding: ${trimmedEncoding}"
iconv --verbose --from-code=UTF-8 --to-code="$trimmedEncoding" --output=encoded-${trimmedEncoding}.txt first_file.txt
echo "Current encoding: ${trimmedEncoding}"
echo "Output file:encoded-${trimmedEncoding}.txt"
done
You could just use readarray/mapfile instead, which are tailor-made for reading multi-line output into an array.
mapfile -t possibleEncodings < <(iconv -l)
The here-string is unnecessary when you can simply run the command with process substitution. The <() makes the command's output appear as if it were in a file for mapfile to read from.
As for why your original attempt didn't work: you do the read call only once, but there are still lines left to read afterwards. You either need to read until EOF in a loop, or use mapfile as above, which does the job for you.
As a side note, always use lowercase letters for user-defined variable, array, and function names. This lets you distinguish your variables from the shell's own environment variables, which are uppercase.
Because read reads only one line, the following while loop can be used:
arr=()
while read -r line; do
arr+=( "$line" )
done <<< "$(iconv -l)"
Otherwise, there is also the readarray builtin:
readarray -t arr <<< "$(iconv -l)"
I found here some Bash code that is able to find a missing file in a numbered sequence, and it works great for me because I won't know the length of the sequence in advance, so it can find the missing file without requiring me to input the last number in the sequence.
This is the code:
shopt -s extglob
shopt -s nullglob
arr=( +([0-9]).@(psd) )
for (( i=10#${arr[0]%.*}; i<=10#${arr[-1]%.*}; i++ )); do
printf -v f "%05d" $i;
[[ ! -f "$(echo "$f".*)" ]] && echo "$f is missing"
done
And it works in both Terminal and iTerm.
BUT, when used in my AppleScript it always replies that file 00000 is missing, when it is not:
set AEPname to "AO-M1P8"
set RENDERfolder to quoted form of POSIX path of "Volumes:RAID STORAGE:CACHES:RENDERS:AE"
set ISAEDONE to do shell script "cd /Volumes/RAID\\ STORAGE/CACHES/RENDERS/AE/AO-M1P8/
#!/bin/bash
shopt -s extglob
shopt -s nullglob
arr=( +([0-9]).@(psd) )
for (( i=10#${arr[0]%.*}; i<=10#${arr[-1]%.*}; i++ )); do
printf -v f \"%05d\" $i;
[[ ! -f \"$(echo \"$f\".*)\" ]] && echo \"$f is missing\"
done
"
display dialog ISAEDONE as text
(*
if ISAEDONE contains "is missing" then
display dialog "FILE IS MISSING"
else
display dialog "ALL FINE"
end if
*)
What am I doing wrong, or is there an easier way to accomplish this?
Thanks in advance.
UPDATE
It seems the way I was doing it makes the shell unable to get the directory of the files. If I manually input the directory, it seems like it should work, but now I am getting a new kind of error:
sh: line 6: arr: bad array subscript
sh: line 6: arr: bad array subscript
Strangely, I don't get this error when pasting the code into the terminal manually.
I have updated the code above.
I'm trying to read output from sqlplus in a loop:
while read -r line; do
[commands]
done < <(sqlplus -s ${user}/${pwd}@${database} @query.sql)
All commands in the loop work properly, but the loop itself never terminates!
I've already tried several solutions, such as
done=0
while read -r line; do
[commands]
if [ "$done" -ne 0 ]; then
break
fi
done < <(sqlplus -s ${user}/${pwd}@${database} @query.sql)
or
while read -r line || [[ -n "$line" ]]; do
[commands]
done < <(sqlplus -s ${user}/${pwd}@${database} @query.sql)
But they don't work either.
I've also checked that the last line of the result set ends with a \n\r sequence.
If anybody can help me understand why I'm having the issues listed above, or can suggest some other approach, I'll be very grateful.
Thank you in advance.
A process substitution expands to the name of a special file or FIFO from which the output of the specified command can be read. It does not necessarily follow that end-of-file will be detected when no more program output is available; that's certainly not the behavior that would be expected from a FIFO.
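For illustration, you can see what the construct expands to (the exact path varies by system, but is typically an entry under /dev/fd):
echo <(true)    # prints something like /dev/fd/63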
I think you're using the wrong tool for the job. You should be able to use an ordinary pipe instead of a process substitution for this task:
sqlplus -s ${user}/${pwd}@${database} @query.sql | while read -r line; do
[commands]
done
As long as sqlplus indeed does exit, read will detect EOF after it does so, causing the loop to exit.
I am writing a bash function to get all git repositories, but I ran into a problem when trying to store all the git repository pathnames in the array patharray. Here is the code:
gitrepo() {
local opt
declare -a patharray
locate -b '\.git' | \
while read pathname
do
pathname="$(dirname ${pathname})"
if [[ "${pathname}" != *.* ]]; then
# Note: how to add an element to an existing Bash Array
patharray=("${patharray[#]}" '\n' "${pathname}")
# echo -e ${patharray[#]}
fi
done
echo -e ${patharray[@]}
}
I want to save all the repository paths to the patharray array, but I can't access it outside the pipeline, which is made up of the locate and while commands.
Inside the pipeline, however, the array is available: the commented command # echo -e ${patharray[@]} works fine if uncommented. So how can I solve the problem?
I have also tried the export command, but it seems it can't pass the patharray to the pipeline.
Bash runs all commands of a pipeline in separate subshells. When the subshell containing your while loop ends, all changes you made to the patharray variable are lost.
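A tiny demonstration of the effect (echo hello just stands in for any command producing output):
x=before
echo hello | while read -r line; do x=$line; done
echo "$x"    # still prints "before": the assignment happened in a subshell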
You can simply group the while loop and the echo statement together so they are both contained within the same subshell:
gitrepo() {
local pathname dir
local -a patharray
locate -b '\.git' | { # the grouping begins here
while read pathname; do
pathname=$(dirname "$pathname")
if [[ "$pathname" != *.* ]]; then
patharray+=( "$pathname" ) # add the element to the array
fi
done
printf "%s\n" "${patharray[#]}" # all those quotes are needed
} # the grouping ends here
}
Alternatively, you can structure your code so it does not need a pipe: use process substitution.
(Also see the Bash manual for details: man bash | less +/Process\ Substitution):
gitrepo() {
local pathname dir
local -a patharray
while read pathname; do
pathname=$(dirname "$pathname")
if [[ "$pathname" != *.* ]]; then
patharray+=( "$pathname" ) # add the element to the array
fi
done < <(locate -b '\.git')
printf "%s\n" "${patharray[#]}" # all those quotes are needed
}
First of all, appending to an array variable is better done with array[${#array[*]}]="value" or array+=("value1" "value2" "etc") unless you wish to transform the entire array (which you don't).
Now, since pipeline commands are run in subprocesses, changes made to a variable inside a pipeline command will not propagate to outside it. There are a few options to get around this (most are listed in Greg's BashFAQ/024):
pass the result through stdout instead
the simplest; you'll need to do that anyway to get the value from the function (although there are ways to return a proper variable)
any special characters in paths can be handled reliably by using \0 as a separator (see Capturing output of find . -print0 into a bash array for reading \0-separated lists)
locate -b0 '\.git' | while read -r -d '' pathname; do dirname -z "$pathname"; done
or simply
locate -b0 '\.git' | xargs -0 dirname -z
avoid running the loop in a subprocess
avoid the pipeline altogether
temporary file/FIFO (bad: requires manual cleanup, accessible to others)
temporary variable (mediocre: unnecessary memory overhead)
process substitution (a special, syntax-supported case of FIFO, doesn't require manual cleanup; code adapted from Greg's BashFAQ/020):
i=0 # using `unset i` instead would error on use of i if the nounset option is set
while IFS= read -r -d $'\0' file; do
patharray[i++]="$(dirname "$file")" # or however you want to process each file
done < <(locate -b0 '\.git')
use the lastpipe option (new in Bash 4.2), which runs the last command of a pipeline in the current shell instead of a subprocess (mediocre: has global effect); see the sketch below
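For completeness, a minimal sketch of that last option, reusing the locate invocation from above (this assumes Bash 4.2+ and that job control is off, which it is by default in scripts):
shopt -s lastpipe
patharray=()
locate -b0 '\.git' | while IFS= read -r -d '' pathname; do
patharray+=( "$(dirname "$pathname")" )   # strip the trailing /.git component
done
printf '%s\n' "${patharray[@]}"           # patharray survives: the loop ran in the current shell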
I am trying to save the result from find as arrays.
Here is my code:
#!/bin/bash
echo "input : "
read input
echo "searching file with this pattern '${input}' under present directory"
array=`find . -name ${input}`
len=${#array[*]}
echo "found : ${len}"
i=0
while [ $i -lt $len ]
do
echo ${array[$i]}
let i++
done
I get 2 .txt files under the current directory, so I expect ${len} to be 2. However, it prints 1.
The reason is that it treats the entire result of find as one element.
How can I fix this?
P.S.
I found several solutions on Stack Overflow for similar problems, but they are a little different, so I can't apply them in my case. I need to store the results in a variable before the loop. Thanks again.
Update 2020 for Linux Users:
If you have an up-to-date version of bash (4.4-alpha or better), as you probably do if you are on Linux, then you should be using Benjamin W.'s answer.
If you are on Mac OS, which —last I checked— still used bash 3.2, or are otherwise using an older bash, then continue on to the next section.
Answer for bash 4.3 or earlier
Here is one solution for getting the output of find into a bash array:
array=()
while IFS= read -r -d $'\0'; do
array+=("$REPLY")
done < <(find . -name "${input}" -print0)
This is tricky because, in general, file names can have spaces, new lines, and other script-hostile characters. The only way to use find and have the file names safely separated from each other is to use -print0 which prints the file names separated with a null character. This would not be much of an inconvenience if bash's readarray/mapfile functions supported null-separated strings but they don't. Bash's read does and that leads us to the loop above.
[This answer was originally written in 2014. If you have a recent version of bash, please see the update below.]
How it works
The first line creates an empty array: array=()
Every time that the read statement is executed, a null-separated file name is read from standard input. The -r option tells read to leave backslash characters alone. The -d $'\0' tells read that the input will be null-separated. Since we omit the name to read, the shell puts the input into the default name: REPLY.
The array+=("$REPLY") statement appends the new file name to the array array.
The final line combines redirection and process substitution to provide the output of find to the standard input of the while loop.
Why use process substitution?
If we didn't use process substitution, the loop could be written as:
array=()
find . -name "${input}" -print0 >tmpfile
while IFS= read -r -d $'\0'; do
array+=("$REPLY")
done <tmpfile
rm -f tmpfile
In the above the output of find is stored in a temporary file and that file is used as standard input to the while loop. The idea of process substitution is to make such temporary files unnecessary. So, instead of having the while loop get its stdin from tmpfile, we can have it get its stdin from <(find . -name ${input} -print0).
Process substitution is widely useful. In many places where a command wants to read from a file, you can specify process substitution, <(...), instead of a file name. There is an analogous form, >(...), that can be used in place of a file name where the command wants to write to the file.
Like arrays, process substitution is a feature of bash and other advanced shells. It is not part of the POSIX standard.
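As a quick illustration of both forms (the file names below are made up):
# Compare the sorted contents of two files without creating temporary files:
diff <(sort file_a.txt) <(sort file_b.txt)
# Write a command's output to the screen and, filtered, into a log file:
printf 'ok\nerror: boom\n' | tee >(grep -i error > errors.log)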
Alternative: lastpipe
If desired, lastpipe can be used instead of process substitution (hat tip: Caesar):
set +m
shopt -s lastpipe
array=()
find . -name "${input}" -print0 | while IFS= read -r -d $'\0'; do array+=("$REPLY"); done; declare -p array
shopt -s lastpipe tells bash to run the last command in the pipeline in the current shell (not the background). This way, the array remains in existence after the pipeline completes. Because lastpipe only takes effect if job control is turned off, we run set +m. (In a script, as opposed to the command line, job control is off by default.)
Additional notes
The following command creates a shell variable, not a shell array:
array=`find . -name "${input}"`
If you wanted to create an array, you would need to put parens around the output of find. So, naively, one could:
array=(`find . -name "${input}"`) # don't do this
The problem is that the shell performs word splitting on the results of find so that the elements of the array are not guaranteed to be what you want.
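A small demonstration of the problem, in an otherwise empty directory (the file name is invented):
touch "my file.txt"              # one file whose name contains a space
array=(`find . -name "*.txt"`)   # naive: the shell word-splits find's output
echo "${#array[@]}"              # prints 2 ("./my" and "file.txt") instead of 1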
Update 2019
Starting with version 4.4-alpha, bash now supports a -d option so that the above loop is no longer necessary. Instead, one can use:
mapfile -d $'\0' array < <(find . -name "${input}" -print0)
For more information on this, please see (and upvote) Benjamin W.'s answer.
Bash 4.4 introduced a -d option to readarray/mapfile, so this can now be solved with
readarray -d '' array < <(find . -name "$input" -print0)
for a method that works with arbitrary filenames including blanks, newlines, and globbing characters. This requires that your find supports -print0, as for example GNU find does.
From the manual (omitting other options):
mapfile [-d delim] [array]
-d
The first character of delim is used to terminate each input line, rather than newline. If delim is the empty string, mapfile will terminate a line when it reads a NUL character.
And readarray is just a synonym of mapfile.
The following appears to work for both Bash and Z Shell on macOS.
#!/bin/bash
IFS=$'\n'
paths=($(find . -name "foo"))
unset IFS
printf "%s\n" "${paths[#]}"
If you are using bash 4 or later, you can replace your use of find with
shopt -s globstar nullglob
array=( **/*"$input"* )
The ** pattern enabled by globstar matches 0 or more directories, allowing the pattern to match to an arbitrary depth in the current directory. Without the nullglob option, the pattern (after parameter expansion) is treated literally, so with no matches you would have an array with a single string rather than an empty array.
Add the dotglob option to the first line as well if you want to traverse hidden directories (like .ssh) and match hidden files (like .bashrc) as well.
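For example, the same pattern with dotglob added:
shopt -s globstar nullglob dotglob
array=( **/*"$input"* )   # now also matches hidden files and descends into hidden directories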
You can try something like
array=(`find . -type f | sort -r | head -2`)
and in order to print the array values, you can try something like
echo "${array[*]}"
None of these solutions suited me because I didn't feel like learning readarray and mapfile. Here is what I came up with.
#!/bin/bash
echo "input : "
read input
echo "searching file with this pattern '${input}' under present directory"
# The only change is here. Append to array for each non-empty line.
array=()
while read -r line; do
[[ -n "$line" ]] && array+=("$line")
done <<< "$(find . -name "${input}" -print)"
len=${#array[@]}
echo "found : ${len}"
i=0
while [ $i -lt $len ]
do
echo ${array[$i]}
let i++
done
You could do it like this:
#!/bin/bash
echo "input : "
read input
echo "searching file with this pattern '${input}' under present directory"
array=(`find . -name '*'${input}'*'`)
for i in "${array[#]}"
do :
echo $i
done
In bash, $(<any_shell_cmd>) runs a command and captures its output. Reading that output with IFS set to a newline converts it into an array:
IFS=$'\n' read -r -d '' -a txt_files <<< "$(find /path/to/dir -name "*.txt")"