In a Bash script, I would like to split a line into pieces and store them in an array.
For example, given the line:
Paris, France, Europe
I would like to have the resulting array to look like so:
array[0] = Paris
array[1] = France
array[2] = Europe
A simple implementation is preferable; speed does not matter. How can I do it?
IFS=', ' read -r -a array <<< "$string"
Note that the characters in $IFS are treated individually as separators so that in this case fields may be separated by either a comma or a space rather than the sequence of the two characters. Interestingly though, empty fields aren't created when comma-space appears in the input because the space is treated specially.
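For example, with the OP's input (a quick demo of my own; note that no empty fields appear even though the delimiter is two characters):
string='Paris, France, Europe'
IFS=', ' read -r -a array <<< "$string"
declare -p array
## declare -a array=([0]="Paris" [1]="France" [2]="Europe")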
To access an individual element:
echo "${array[0]}"
To iterate over the elements:
for element in "${array[@]}"
do
echo "$element"
done
To get both the index and the value:
for index in "${!array[@]}"
do
echo "$index ${array[index]}"
done
The last example is useful because Bash arrays are sparse. In other words, you can delete an element or add an element and then the indices are not contiguous.
unset "array[1]"
array[42]=Earth
To get the number of elements in an array:
echo "${#array[#]}"
As mentioned above, arrays can be sparse so you shouldn't use the length to get the last element. Here's how you can do it in Bash 4.2 and later:
echo "${array[-1]}"
And here's how you can do it in any version of Bash (from somewhere after 2.05b):
echo "${array[#]: -1:1}"
Larger negative offsets select farther from the end of the array. Note the space before the minus sign in the older form. It is required.
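For example (a quick illustration of my own, assuming a three-element array):
array=(Paris France Europe)
echo "${array[@]: -2:1}"
## France
echo "${array[@]: -3:1}"
## Paris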
All of the answers to this question are wrong in one way or another.
Wrong answer #1
IFS=', ' read -r -a array <<< "$string"
1: This is a misuse of $IFS. The value of the $IFS variable is not taken as a single variable-length string separator, rather it is taken as a set of single-character string separators, where each field that read splits off from the input line can be terminated by any character in the set (comma or space, in this example).
Actually, for the real sticklers out there, the full meaning of $IFS is slightly more involved. From the bash manual:
The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters <space>, <tab>, and <newline> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.
Basically, for non-default non-null values of $IFS, fields can be separated with either (1) a sequence of one or more characters that are all from the set of "IFS whitespace characters" (that is, whichever of <space>, <tab>, and <newline> ("newline" meaning line feed (LF)) are present anywhere in $IFS), or (2) any non-"IFS whitespace character" that's present in $IFS along with whatever "IFS whitespace characters" surround it in the input line.
For the OP, it's possible that the second separation mode I described in the previous paragraph is exactly what he wants for his input string, but we can be pretty confident that the first separation mode I described is not correct at all. For example, what if his input string was 'Los Angeles, United States, North America'?
IFS=', ' read -ra a <<<'Los Angeles, United States, North America'; declare -p a;
## declare -a a=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")
2: Even if you were to use this solution with a single-character separator (such as a comma by itself, that is, with no following space or other baggage), if the value of the $string variable happens to contain any LFs, then read will stop processing once it encounters the first LF. The read builtin only processes one line per invocation. This is true even if you are piping or redirecting input only to the read statement, as we are doing in this example with the here-string mechanism, and thus unprocessed input is guaranteed to be lost. The code that powers the read builtin has no knowledge of the data flow within its containing command structure.
You could argue that this is unlikely to cause a problem, but still, it's a subtle hazard that should be avoided if possible. It is caused by the fact that the read builtin actually does two levels of input splitting: first into lines, then into fields. Since the OP only wants one level of splitting, this usage of the read builtin is not appropriate, and we should avoid it.
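Here's a quick demo of the hazard (my own illustration): everything after the first line feed is silently discarded:
string=$'Paris,France\nEurope,Asia'
IFS=',' read -ra a <<<"$string"
declare -p a
## declare -a a=([0]="Paris" [1]="France")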
3: A non-obvious potential issue with this solution is that read always drops the trailing field if it is empty, although it preserves empty fields otherwise. Here's a demo:
string=', , a, , b, c, , , '; IFS=', ' read -ra a <<<"$string"; declare -p a;
## declare -a a=([0]="" [1]="" [2]="a" [3]="" [4]="b" [5]="c" [6]="" [7]="")
Maybe the OP wouldn't care about this, but it's still a limitation worth knowing about. It reduces the robustness and generality of the solution.
This problem can be solved by appending a dummy trailing delimiter to the input string just prior to feeding it to read, as I will demonstrate later.
Wrong answer #2
string="1:2:3:4:5"
set -f # avoid globbing (expansion of *).
array=(${string//:/ })
Similar idea:
t="one,two,three"
a=($(echo $t | tr ',' "\n"))
(Note: I added the missing parentheses around the command substitution which the answerer seems to have omitted.)
Similar idea:
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
These solutions leverage word splitting in an array assignment to split the string into fields. Funnily enough, just like read, general word splitting also uses the $IFS special variable, although in this case it is implied that it is set to its default value of <space><tab><newline>, and therefore any sequence of one or more IFS characters (which are all whitespace characters now) is considered to be a field delimiter.
This solves the problem of two levels of splitting committed by read, since word splitting by itself constitutes only one level of splitting. But just as before, the problem here is that the individual fields in the input string can already contain $IFS characters, and thus they would be improperly split during the word splitting operation. This happens to not be the case for any of the sample input strings provided by these answerers (how convenient...), but of course that doesn't change the fact that any code base that used this idiom would then run the risk of blowing up if this assumption were ever violated at some point down the line. Once again, consider my counterexample of 'Los Angeles, United States, North America' (or 'Los Angeles:United States:North America').
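Here's a quick demo (my own illustration) of how that counterexample blows up with this approach:
string='Los Angeles:United States:North America'
set -f; array=(${string//:/ }); set +f
declare -p array
## declare -a array=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")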
Also, word splitting is normally followed by filename expansion (aka pathname expansion aka globbing), which, if done, would potentially corrupt words containing the characters *, ?, or [ followed by ] (and, if extglob is set, parenthesized fragments preceded by ?, *, +, #, or !) by matching them against file system objects and expanding the words ("globs") accordingly. The first of these three answerers has cleverly undercut this problem by running set -f beforehand to disable globbing. Technically this works (although you should probably add set +f afterward to reenable globbing for subsequent code which may depend on it), but it's undesirable to have to mess with global shell settings in order to hack a basic string-to-array parsing operation in local code.
Another issue with this answer is that all empty fields will be lost. This may or may not be a problem, depending on the application.
Note: If you're going to use this solution, it's better to use the ${string//:/ } "pattern substitution" form of parameter expansion, rather than going to the trouble of invoking a command substitution (which forks the shell), starting up a pipeline, and running an external executable (tr or sed), since parameter expansion is purely a shell-internal operation. (Also, for the tr and sed solutions, the input variable should be double-quoted inside the command substitution; otherwise word splitting would take effect in the echo command and potentially mess with the field values. Also, the $(...) form of command substitution is preferable to the old `...` form since it simplifies nesting of command substitutions and allows for better syntax highlighting by text editors.)
Wrong answer #3
str="a, b, c, d" # assuming there is a space after ',' as in Q
arr=(${str//,/}) # delete all occurrences of ','
This answer is almost the same as #2. The difference is that the answerer has made the assumption that the fields are delimited by two characters, one of which being represented in the default $IFS, and the other not. He has solved this rather specific case by removing the non-IFS-represented character using a pattern substitution expansion and then using word splitting to split the fields on the surviving IFS-represented delimiter character.
This is not a very generic solution. Furthermore, it can be argued that the comma is really the "primary" delimiter character here, and that stripping it and then depending on the space character for field splitting is simply wrong. Once again, consider my counterexample: 'Los Angeles, United States, North America'.
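A quick demo of the breakage (my own illustration):
str='Los Angeles, United States, North America'
arr=(${str//,/})
declare -p arr
## declare -a arr=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")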
Also, again, filename expansion could corrupt the expanded words, but this can be prevented by temporarily disabling globbing for the assignment with set -f and then set +f.
Also, again, all empty fields will be lost, which may or may not be a problem depending on the application.
Wrong answer #4
string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"
This is similar to #2 and #3 in that it uses word splitting to get the job done, only now the code explicitly sets $IFS to contain only the single-character field delimiter present in the input string. It should be repeated that this cannot work for multicharacter field delimiters such as the OP's comma-space delimiter. But for a single-character delimiter like the LF used in this example, it actually comes close to being perfect. The fields cannot be unintentionally split in the middle as we saw with previous wrong answers, and there is only one level of splitting, as required.
One problem is that filename expansion will corrupt affected words as described earlier, although once again this can be solved by wrapping the critical statement in set -f and set +f.
Another potential problem is that, since LF qualifies as an "IFS whitespace character" as defined earlier, all empty fields will be lost, just as in #2 and #3. This would of course not be a problem if the delimiter happens to be a non-"IFS whitespace character", and depending on the application it may not matter anyway, but it does vitiate the generality of the solution.
So, to sum up, assuming you have a one-character delimiter, and it is either a non-"IFS whitespace character" or you don't care about empty fields, and you wrap the critical statement in set -f and set +f, then this solution works, but otherwise not.
(Also, for information's sake, assigning a LF to a variable in bash can be done more easily with the $'...' syntax, e.g. IFS=$'\n';.)
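To illustrate the empty-field loss described above (my own demo): since LF is an "IFS whitespace character", runs of consecutive LFs collapse into a single delimiter:
string=$'a\n\n\nb'
IFS=$'\n'
lines=( $string )
IFS=$' \t\n'    ## restore the default
declare -p lines
## declare -a lines=([0]="a" [1]="b")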
Wrong answer #5
countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"
Similar idea:
IFS=', ' eval 'array=($string)'
This solution is effectively a cross between #1 (in that it sets $IFS to comma-space) and #2-4 (in that it uses word splitting to split the string into fields). Because of this, it suffers from most of the problems that afflict all of the above wrong answers, sort of like the worst of all worlds.
Also, regarding the second variant, it may seem like the eval call is completely unnecessary, since its argument is a single-quoted string literal, and therefore is statically known. But there's actually a very non-obvious benefit to using eval in this way. Normally, when you run a simple command which consists of a variable assignment only, meaning without an actual command word following it, the assignment takes effect in the shell environment:
IFS=', '; ## changes $IFS in the shell environment
This is true even if the simple command involves multiple variable assignments; again, as long as there's no command word, all variable assignments affect the shell environment:
IFS=', ' array=($countries); ## changes both $IFS and $array in the shell environment
But, if the variable assignment is attached to a command name (I like to call this a "prefix assignment") then it does not affect the shell environment, and instead only affects the environment of the executed command, regardless of whether it is a builtin or external:
IFS=', ' :; ## : is a builtin command, the $IFS assignment does not outlive it
IFS=', ' env; ## env is an external command, the $IFS assignment does not outlive it
Relevant quote from the bash manual:
If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.
It is possible to exploit this feature of variable assignment to change $IFS only temporarily, which allows us to avoid the whole save-and-restore gambit like that which is being done with the $OIFS variable in the first variant. But the challenge we face here is that the command we need to run is itself a mere variable assignment, and hence it would not involve a command word to make the $IFS assignment temporary. You might think to yourself, well why not just add a no-op command word to the statement like the : builtin to make the $IFS assignment temporary? This does not work because it would then make the $array assignment temporary as well:
IFS=', ' array=($countries) :; ## fails; new $array value never escapes the : command
So, we're effectively at an impasse, a bit of a catch-22. But, when eval runs its code, it runs it in the shell environment, as if it was normal, static source code, and therefore we can run the $array assignment inside the eval argument to have it take effect in the shell environment, while the $IFS prefix assignment that is prefixed to the eval command will not outlive the eval command. This is exactly the trick that is being used in the second variant of this solution:
IFS=', ' eval 'array=($string)'; ## $IFS does not outlive the eval command, but $array does
So, as you can see, it's actually quite a clever trick, and accomplishes exactly what is required (at least with respect to assignment effects) in a rather non-obvious way. I'm actually not against this trick in general, despite the involvement of eval; just be careful to single-quote the argument string to guard against security threats.
But again, because of the "worst of all worlds" agglomeration of problems, this is still a wrong answer to the OP's requirement.
Wrong answer #6
IFS=', '; array=(Paris, France, Europe)
IFS=' ';declare -a array=(Paris France Europe)
Um... what? The OP has a string variable that needs to be parsed into an array. This "answer" starts with the verbatim contents of the input string pasted into an array literal. I guess that's one way to do it.
It looks like the answerer may have assumed that the $IFS variable affects all bash parsing in all contexts, which is not true. From the bash manual:
IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline>.
So the $IFS special variable is actually only used in two contexts: (1) word splitting that is performed after expansion (meaning not when parsing bash source code) and (2) for splitting input lines into words by the read builtin.
Let me try to make this clearer. I think it might be good to draw a distinction between parsing and execution. Bash must first parse the source code, which obviously is a parsing event, and then later it executes the code, which is when expansion comes into the picture. Expansion is really an execution event. Furthermore, I take issue with the description of the $IFS variable that I just quoted above; rather than saying that word splitting is performed after expansion, I would say that word splitting is performed during expansion, or, perhaps even more precisely, word splitting is part of the expansion process. The phrase "word splitting" refers only to this step of expansion; it should never be used to refer to the parsing of bash source code, although unfortunately the docs do seem to throw around the words "split" and "words" a lot. Here's a relevant excerpt from the linux.die.net version of the bash manual:
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
You could argue the GNU version of the manual does slightly better, since it opts for the word "tokens" instead of "words" in the first sentence of the Expansion section:
Expansion is performed on the command line after it has been split into tokens.
The important point is, $IFS does not change the way bash parses source code. Parsing of bash source code is actually a very complex process that involves recognition of the various elements of shell grammar, such as command sequences, command lists, pipelines, parameter expansions, arithmetic substitutions, and command substitutions. For the most part, the bash parsing process cannot be altered by user-level actions like variable assignments (actually, there are some minor exceptions to this rule; for example, see the various compatxx shell settings, which can change certain aspects of parsing behavior on-the-fly). The upstream "words"/"tokens" that result from this complex parsing process are then expanded according to the general process of "expansion" as broken down in the above documentation excerpts, where word splitting of the expanded (expanding?) text into downstream words is simply one step of that process. Word splitting only touches text that has been spit out of a preceding expansion step; it does not affect literal text that was parsed right off the source bytestream.
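A quick demo of this point (my own illustration): the array literal below is parsed right off the source code, so $IFS has no effect on it, and the commas survive:
IFS=','
a=(Paris, France, Europe)
declare -p a
## declare -a a=([0]="Paris," [1]="France," [2]="Europe")
IFS=$' \t\n'    ## restore the default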
Wrong answer #7
string='first line
second line
third line'
while read -r line; do lines+=("$line"); done <<<"$string"
This is one of the best solutions. Notice that we're back to using read. Didn't I say earlier that read is inappropriate because it performs two levels of splitting, when we only need one? The trick here is that you can call read in such a way that it effectively only does one level of splitting, specifically by splitting off only one field per invocation, which necessitates the cost of having to call it repeatedly in a loop. It's a bit of a sleight of hand, but it works.
But there are problems. First: When you provide at least one NAME argument to read, it automatically ignores leading and trailing whitespace in each field that is split off from the input string. This occurs whether $IFS is set to its default value or not, as described earlier in this post. Now, the OP may not care about this for his specific use-case, and in fact, it may be a desirable feature of the parsing behavior. But not everyone who wants to parse a string into fields will want this. There is a solution, however: A somewhat non-obvious usage of read is to pass zero NAME arguments. In this case, read will store the entire input line that it gets from the input stream in a variable named $REPLY, and, as a bonus, it does not strip leading and trailing whitespace from the value. This is a very robust usage of read which I've exploited frequently in my shell programming career. Here's a demonstration of the difference in behavior:
string=$' a b \n c d \n e f '; ## input string
a=(); while read -r line; do a+=("$line"); done <<<"$string"; declare -p a;
## declare -a a=([0]="a b" [1]="c d" [2]="e f") ## read trimmed surrounding whitespace
a=(); while read -r; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]=" a b " [1]=" c d " [2]=" e f ") ## no trimming
The second issue with this solution is that it does not actually address the case of a custom field separator, such as the OP's comma-space. As before, multicharacter separators are not supported, which is an unfortunate limitation of this solution. We could try to at least split on comma by specifying the separator to the -d option, but look what happens:
string='Paris, France, Europe';
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France")
Predictably, the unaccounted surrounding whitespace got pulled into the field values, and hence this would have to be corrected subsequently through trimming operations (this could also be done directly in the while-loop). But there's another obvious error: Europe is missing! What happened to it? The answer is that read returns a failing return code if it hits end-of-file (in this case we can call it end-of-string) without encountering a final field terminator on the final field. This causes the while-loop to break prematurely and we lose the final field.
Technically this same error afflicted the previous examples as well; the difference there is that the field separator was taken to be LF, which is the default when you don't specify the -d option, and the <<< ("here-string") mechanism automatically appends a LF to the string just before it feeds it as input to the command. Hence, in those cases, we sort of accidentally solved the problem of a dropped final field by unwittingly appending an additional dummy terminator to the input. Let's call this solution the "dummy-terminator" solution. We can apply the dummy-terminator solution manually for any custom delimiter by concatenating it against the input string ourselves when instantiating it in the here-string:
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")
There, problem solved. Another solution is to only break the while-loop if both (1) read returned failure and (2) $REPLY is empty, meaning read was not able to read any characters prior to hitting end-of-file. Demo:
a=(); while read -rd,|| [[ -n "$REPLY" ]]; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')
This approach also reveals the secretive LF that automatically gets appended to the here-string by the <<< redirection operator. It could of course be stripped off separately through an explicit trimming operation as described a moment ago, but obviously the manual dummy-terminator approach solves it directly, so we could just go with that. The manual dummy-terminator solution is actually quite convenient in that it solves both of these two problems (the dropped-final-field problem and the appended-LF problem) in one go.
So, overall, this is quite a powerful solution. Its only remaining weakness is a lack of support for multicharacter delimiters, which I will address later.
Wrong answer #8
string='first line
second line
third line'
readarray -t lines <<<"$string"
(This is actually from the same post as #7; the answerer provided two solutions in the same post.)
The readarray builtin, which is a synonym for mapfile, is ideal. It's a builtin command which parses a bytestream into an array variable in one shot; no messing with loops, conditionals, substitutions, or anything else. And it doesn't surreptitiously strip any whitespace from the input string. And (if -O is not given) it conveniently clears the target array before assigning to it. But it's still not perfect, hence my criticism of it as a "wrong answer".
First, just to get this out of the way, note that, just like the behavior of read when doing field-parsing, readarray drops the trailing field if it is empty. Again, this is probably not a concern for the OP, but it could be for some use-cases. I'll come back to this in a moment.
Second, as before, it does not support multicharacter delimiters. I'll give a fix for this in a moment as well.
Third, the solution as written does not parse the OP's input string, and in fact, it cannot be used as-is to parse it. I'll expand on this momentarily as well.
For the above reasons, I still consider this to be a "wrong answer" to the OP's question. Below I'll give what I consider to be the right answer.
Right answer
Here's a naïve attempt to make #8 work by just specifying the -d option:
string='Paris, France, Europe';
readarray -td, a <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')
We see the result is identical to the result we got from the double-conditional approach of the looping read solution discussed in #7. We can almost solve this with the manual dummy-terminator trick:
readarray -td, a <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe" [3]=$'\n')
The problem here is that readarray preserved the trailing field, since the <<< redirection operator appended the LF to the input string, and therefore the trailing field was not empty (otherwise it would've been dropped). We can take care of this by explicitly unsetting the final array element after-the-fact:
readarray -td, a <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")
The only two problems that remain, which are actually related, are (1) the extraneous whitespace that needs to be trimmed, and (2) the lack of support for multicharacter delimiters.
The whitespace could of course be trimmed afterward (for example, see How to trim whitespace from a Bash variable?). But if we can hack a multicharacter delimiter, then that would solve both problems in one shot.
Unfortunately, there's no direct way to get a multicharacter delimiter to work. The best solution I've thought of is to preprocess the input string to replace the multicharacter delimiter with a single-character delimiter that will be guaranteed not to collide with the contents of the input string. The only character that has this guarantee is the NUL byte. This is because, in bash (though not in zsh, incidentally), variables cannot contain the NUL byte. This preprocessing step can be done inline in a process substitution. Here's how to do it using awk:
readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; }' <<<"$string, "); unset 'a[-1]';
declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")
There, finally! This solution will not erroneously split fields in the middle, will not cut out prematurely, will not drop empty fields, will not corrupt itself on filename expansions, will not automatically strip leading and trailing whitespace, will not leave a stowaway LF on the end, does not require loops, and does not settle for a single-character delimiter.
Trimming solution
Lastly, I wanted to demonstrate my own fairly intricate trimming solution using the obscure -C callback option of readarray. Unfortunately, I've run out of room against Stack Overflow's draconian 30,000 character post limit, so I won't be able to explain it. I'll leave that as an exercise for the reader.
function mfcb { local val="$4"; "$1"; eval "$2[$3]=\$val;"; };
function val_ltrim { if [[ "$val" =~ ^[[:space:]]+ ]]; then val="${val:${#BASH_REMATCH[0]}}"; fi; };
function val_rtrim { if [[ "$val" =~ [[:space:]]+$ ]]; then val="${val:0:${#val}-${#BASH_REMATCH[0]}}"; fi; };
function val_trim { val_ltrim; val_rtrim; };
readarray -c1 -C 'mfcb val_trim a' -td, <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")
Here is a way without setting IFS:
string="1:2:3:4:5"
set -f # avoid globbing (expansion of *).
array=(${string//:/ })
for i in "${!array[@]}"
do
echo "$i=>${array[i]}"
done
The idea is using string replacement:
${string//substring/replacement}
to replace all matches of $substring with white space and then using the substituted string to initialize an array:
(element1 element2 ... elementN)
Note: this answer makes use of the split+glob operator. Thus, to prevent expansion of some characters (such as *) it is a good idea to pause globbing for this script.
t="one,two,three"
a=($(echo "$t" | tr ',' '\n'))
echo "${a[2]}"
Prints three
Sometimes it happened to me that the method described in the accepted answer didn't work, especially if the separator is a newline.
In those cases I solved in this way:
string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"
for line in "${lines[@]}"
do
echo "--> $line"
done
The accepted answer works for values in one line. If the variable has several lines:
string='first line
second line
third line'
We need a very different command to get all lines:
while read -r line; do lines+=("$line"); done <<<"$string"
Or the much simpler bash readarray:
readarray -t lines <<<"$string"
Printing all lines is very easy taking advantage of a printf feature:
printf ">[%s]\n" "${lines[#]}"
>[first line]
>[ second line]
>[ third line]
If you use macOS and can't use readarray, you can simply do this:
MY_STRING="string1 string2 string3"
array=($MY_STRING)
To iterate over the elements:
for element in "${array[@]}"
do
echo $element
done
This works for me on OSX:
string="1 2 3 4 5"
declare -a array=($string)
If your string has different delimiter, just 1st replace those with space:
string="1,2,3,4,5"
delimiter=","
declare -a array=($(echo $string | tr "$delimiter" " "))
Simple :-)
This is similar to the approach by Jmoney38, but using sed:
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
echo ${array[0]}
Prints 1
The key to splitting your string into an array is the multi character delimiter of ", ". Any solution using IFS for multi character delimiters is inherently wrong since IFS is a set of those characters, not a string.
If you assign IFS=", " then the string will break on EITHER "," OR " " or any combination of them which is not an accurate representation of the two character delimiter of ", ".
You can use awk or sed to split the string, with process substitution:
#!/bin/bash
str="Paris, France, Europe"
array=()
while read -r -d $'\0' each; do # use a NUL terminated field separator
array+=("$each")
done < <(printf "%s" "$str" | awk '{ gsub(/,[ ]+|$/,"\0"); print }')
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output
It is more efficient to use a regex directly in Bash:
#!/bin/bash
str="Paris, France, Europe"
array=()
while [[ $str =~ ([^,]+)(,[ ]+|$) ]]; do
array+=("${BASH_REMATCH[1]}") # capture the field
i=${#BASH_REMATCH} # length of field + delimiter
str=${str:i} # advance the string by that length
done # the loop deletes $str, so make a copy if needed
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output...
With the second form, there is no sub shell and it will be inherently faster.
Edit by bgoldst: Here are some benchmarks comparing my readarray solution to dawg's regex solution, and I also included the read solution for the heck of it (note: I slightly modified the regex solution for greater harmony with my solution) (also see my comments below the post):
## competitors
function c_readarray { readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); unset 'a[-1]'; };
function c_read { a=(); local REPLY=''; while read -r -d ''; do a+=("$REPLY"); done < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); };
function c_regex { a=(); local s="$1, "; while [[ $s =~ ([^,]+),\ ]]; do a+=("${BASH_REMATCH[1]}"); s=${s:${#BASH_REMATCH}}; done; };
## helper functions
function rep {
local -i i=-1;
for ((i = 0; i<$1; ++i)); do
printf %s "$2";
done;
}; ## end rep()
function testAll {
local funcs=();
local args=();
local func='';
local -i rc=-1;
while [[ "$1" != ':' ]]; do
func="$1";
if [[ ! "$func" =~ ^[_a-zA-Z][_a-zA-Z0-9]*$ ]]; then
echo "bad function name: $func" >&2;
return 2;
fi;
funcs+=("$func");
shift;
done;
shift;
args=("$#");
for func in "${funcs[@]}"; do
echo -n "$func ";
{ time $func "${args[@]}" >/dev/null 2>&1; } 2>&1| tr '\n' '/';
rc=${PIPESTATUS[0]}; if [[ $rc -ne 0 ]]; then echo "[$rc]"; else echo; fi;
done| column -ts/;
}; ## end testAll()
function makeStringToSplit {
local -i n=$1; ## number of fields
if [[ $n -lt 0 ]]; then echo "bad field count: $n" >&2; return 2; fi;
if [[ $n -eq 0 ]]; then
echo;
elif [[ $n -eq 1 ]]; then
echo 'first field';
elif [[ "$n" -eq 2 ]]; then
echo 'first field, last field';
else
echo "first field, $(rep $[$1-2] 'mid field, ')last field";
fi;
}; ## end makeStringToSplit()
function testAll_splitIntoArray {
local -i n=$1; ## number of fields in input string
local s='';
echo "===== $n field$(if [[ $n -ne 1 ]]; then echo 's'; fi;) =====";
s="$(makeStringToSplit "$n")";
testAll c_readarray c_read c_regex : "$s";
}; ## end testAll_splitIntoArray()
## results
testAll_splitIntoArray 1;
## ===== 1 field =====
## c_readarray real 0m0.067s user 0m0.000s sys 0m0.000s
## c_read real 0m0.064s user 0m0.000s sys 0m0.000s
## c_regex real 0m0.000s user 0m0.000s sys 0m0.000s
##
testAll_splitIntoArray 10;
## ===== 10 fields =====
## c_readarray real 0m0.067s user 0m0.000s sys 0m0.000s
## c_read real 0m0.064s user 0m0.000s sys 0m0.000s
## c_regex real 0m0.001s user 0m0.000s sys 0m0.000s
##
testAll_splitIntoArray 100;
## ===== 100 fields =====
## c_readarray real 0m0.069s user 0m0.000s sys 0m0.062s
## c_read real 0m0.065s user 0m0.000s sys 0m0.046s
## c_regex real 0m0.005s user 0m0.000s sys 0m0.000s
##
testAll_splitIntoArray 1000;
## ===== 1000 fields =====
## c_readarray real 0m0.084s user 0m0.031s sys 0m0.077s
## c_read real 0m0.092s user 0m0.031s sys 0m0.046s
## c_regex real 0m0.125s user 0m0.125s sys 0m0.000s
##
testAll_splitIntoArray 10000;
## ===== 10000 fields =====
## c_readarray real 0m0.209s user 0m0.093s sys 0m0.108s
## c_read real 0m0.333s user 0m0.234s sys 0m0.109s
## c_regex real 0m9.095s user 0m9.078s sys 0m0.000s
##
testAll_splitIntoArray 100000;
## ===== 100000 fields =====
## c_readarray real 0m1.460s user 0m0.326s sys 0m1.124s
## c_read real 0m2.780s user 0m1.686s sys 0m1.092s
## c_regex real 17m38.208s user 15m16.359s sys 2m19.375s
##
Pure bash multi-character delimiter solution.
As others have pointed out in this thread, the OP's question gave an example of a comma delimited string to be parsed into an array, but did not indicate if he/she was only interested in comma delimiters, single character delimiters, or multi-character delimiters.
Since Google tends to rank this answer at or near the top of search results, I wanted to provide readers with a strong answer to the question of multiple character delimiters, since that is also mentioned in at least one response.
If you're in search of a solution to a multi-character delimiter problem, I suggest reviewing Mallikarjun M's post, in particular the response from gniourf_gniourf, who provides this elegant pure BASH solution using parameter expansion:
#!/bin/bash
str="LearnABCtoABCSplitABCaABCString"
delimiter=ABC
s=$str$delimiter
array=();
while [[ $s ]]; do
array+=( "${s%%"$delimiter"*}" );
s=${s#*"$delimiter"};
done;
declare -p array
Link to cited comment/referenced post
Link to cited question: Howto split a string on a multi-character delimiter in bash?
Update 3 Aug 2022
xebeche raised a good point in comments below. After reviewing their suggested edits, I've revised the script provided by gniourf_gniourf, and added remarks for ease of understanding what the script is doing. I also changed the double brackets [[]] to single brackets, for greater compatibility since many SHell variants do not support double bracket notation. In this case, for BaSH, the logic works inside single or double brackets.
#!/bin/bash
str="LearnABCtoABCSplitABCABCaABCStringABC"
delimiter="ABC"
array=()
while [ "$str" ]; do
# parse next sub-string, left of next delimiter
substring="${str%%"$delimiter"*}"
# when substring = delimiter, truncate leading delimiter
# (i.e. pattern is "$delimiter$delimiter")
[ -z "$substring" ] && str="${str#"$delimiter"}" && continue
# create next array element with parsed substring
array+=( "$substring" )
# remaining string to the right of delimiter becomes next string to be evaluated
str="${str:${#substring}}"
# prevent infinite loop when last substring = delimiter
[ "$str" == "$delimiter" ] && break
done
declare -p array
Without the comments:
#!/bin/bash
str="LearnABCtoABCSplitABCABCaABCStringABC"
delimiter="ABC"
array=()
while [ "$str" ]; do
substring="${str%%"$delimiter"*}"
[ -z "$substring" ] && str="${str#"$delimiter"}" && continue
array+=( "$substring" )
str="${str:${#substring}}"
[ "$str" == "$delimiter" ] && break
done
declare -p array
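If I've traced the logic correctly, running either version of the script should print the following; note that consecutive delimiters collapse rather than producing empty fields:
## declare -a array=([0]="Learn" [1]="to" [2]="Split" [3]="a" [4]="String")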
I was curious about the relative performance of the "Right answer" in the popular answer by @bgoldst, with its apparent decrying of loops, so I have done a simple benchmark of it against three pure bash implementations.
In summary, I suggest:
for string length < 4k or so, pure bash is faster than gawk
for delimiter length < 10 and string length < 256k, pure bash is comparable to gawk
for delimiter length >> 10 and string length < 64k or so, pure bash is "acceptable";
and gawk is less than 5x faster
for string length < 512k or so, gawk is "acceptable"
I arbitrarily define "acceptable" as "takes < 0.5s to split the string".
I am taking the problem to be to take a bash string and split it into a bash array, using an arbitrary-length delimiter string (not regex).
# in: $1=delim, $2=string
# out: sets array a
My pure bash implementations are:
# naive approach - slow
split_byStr_bash_naive(){
a=()
local prev=""
local cdr="$2"
[[ -z "${cdr}" ]] && a+=("")
while [[ "$cdr" != "$prev" ]]; do
prev="$cdr"
a+=( "${cdr%%"$1"*}" )
cdr="${cdr#*"$1"}"
done
# echo $( declare -p a | md5sum; declare -p a )
}
# use lengths wherever possible - faster
split_byStr_bash_faster(){
a=()
local car=""
local cdr="$2"
while
car="${cdr%%"$1"*}"
a+=("$car")
cdr="${cdr:${#car}}"
(( ${#cdr} ))
do
cdr="${cdr:${#1}}"
done
# echo $( declare -p a | md5sum; declare -p a )
}
# use pattern substitution and readarray - fastest
split_byStr_bash_sub(){
a=()
local delim="$1" string="$2"
delim="${delim//=/=-}"
delim="${delim//$'\n'/=n}"
string="${string//=/=-}"
string="${string//$'\n'/=n}"
readarray -td $'\n' a <<<"${string//"$delim"/$'\n'}"
local len=${#a[@]} i s
for (( i=0; i<len; i++ )); do
s="${a[$i]//=n/$'\n'}"
a[$i]="${s//=-/=}"
done
# echo $( declare -p a | md5sum; declare -p a )
}
The initial -z test in the naive version handles the case of a zero-length string being passed. Without the test, the output array is empty; with it, the array has a single zero-length element.
Replacing readarray with while read gives < 10% slowdown.
This is the gawk implementation I used:
split_byRE_gawk(){
readarray -td '' a < <(awk '{gsub(/'"$1"'/,"\0")}1' <<<"$2$1")
unset 'a[-1]'
# echo $( declare -p a | md5sum; declare -p a )
}
Obviously, in the general case, the delim argument will need to be sanitised,
as gawk expects a regex, and gawk-special characters could cause problems.
Also, as-is, the implementation won't correctly handle newlines in the delimiter.
Since gawk is being used, a generalised version that handles more arbitrary
delimiters could be:
split_byREorStr_gawk(){
local delim=$1
local string=$2
local useRegex=${3:+1} # if set, delimiter is regex
readarray -td '' a < <(
export delim
gawk -v re="$useRegex" '
BEGIN {
RS = FS = "\0"
ORS = ""
d = ENVIRON["delim"]
# cf. https://stackoverflow.com/a/37039138
if (!re) gsub(/[\\.^$(){}\[\]|*+?]/,"\\\\&",d)
}
gsub(d"|\n$","\0")
' <<<"$string"
)
# echo $( declare -p a | md5sum; declare -p a )
}
or the same idea in Perl:
split_byREorStr_perl(){
local delim=$1
local string=$2
local regex=$3 # if set, delimiter is regex
readarray -td '' a < <(
export delim regex
perl -0777pe '
$d = $ENV{delim};
$d = "\Q$d\E" if ! $ENV{regex};
s/$d|\n$/\0/g;
' <<<"$string"
)
# echo $( declare -p a | md5sum; declare -p a )
}
The implementations produce identical output, tested by comparing md5sum separately.
Note that if input had been ambiguous ("logically incorrect" as @bgoldst puts it), behaviour would diverge slightly. For example, with delimiter -- and string a- or a---:
@bgoldst's code returns: declare -a a=([0]="a") or declare -a a=([0]="a" [1]="")
mine return: declare -a a=([0]="a-") or declare -a a=([0]="a" [1]="-")
Arguments were derived with simple Perl scripts from:
delim="-=-="
base="ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
Here are the tables of timing results (in seconds) for 3 different types
of string and delimiter argument.
#s - length of string argument
#d - length of delim argument
= - performance break-even point
! - "acceptable" performance limit (bash) is somewhere around here
!! - "acceptable" performance limit (gawk) is somewhere around here
- - function took too long
<!> - gawk command failed to run
Type 1
d=$(perl -e "print( '$delim' x (7*2**$n) )")
s=$(perl -e "print( '$delim' x (7*2**$n) . '$base' x (7*2**$n) )")
 n    #s       #d      gawk     b_sub    b_faster  b_naive
 0    252      28      0.002    0.000    0.000     0.000
 1    504      56      0.005    0.000    0.000     0.001
 2    1008     112     0.005    0.001    0.000     0.003
 3    2016     224     0.006    0.001    0.000     0.009
 4    4032     448     0.007    0.002    0.001     0.048
=
 5    8064     896     0.014    0.008    0.005     0.377
 6    16128    1792    0.018    0.029    0.017     (2.214)
 7    32256    3584    0.033    0.057    0.039     (15.16)
!
 8    64512    7168    0.063    0.214    0.128     -
 9    129024   14336   0.111    (0.826)  (0.602)   -
 10   258048   28672   0.214    (3.383)  (2.652)   -
!!
 11   516096   57344   0.430    (13.46)  (11.00)   -
 12   1032192  114688  (0.834)  (58.38)  -         -
 13   2064384  229376  <!>      (228.9)  -         -
Type 2
d=$(perl -e "print( '$delim' x ($n) )")
s=$(perl -e "print( ('$delim' x ($n) . '$base' x $n ) x (2**($n-1)) )")
 n    #s       #d   gawk     b_sub    b_faster  b_naive
 0    0        0    0.003    0.000    0.000     0.000
 1    36       4    0.003    0.000    0.000     0.000
 2    144      8    0.005    0.000    0.000     0.000
 3    432      12   0.005    0.000    0.000     0.000
 4    1152     16   0.005    0.001    0.001     0.002
 5    2880     20   0.005    0.001    0.002     0.003
 6    6912     24   0.006    0.003    0.009     0.014
=
 7    16128    28   0.012    0.012    0.037     0.044
 8    36864    32   0.023    0.044    0.167     0.187
!
 9    82944    36   0.049    0.192    (0.753)   (0.840)
 10   184320   40   0.097    (0.925)  (3.682)   (4.016)
 11   405504   44   0.204    (4.709)  (18.00)   (19.58)
!!
 12   884736   48   0.444    (22.17)  -         -
 13   1916928  52   (1.019)  (102.4)  -         -
Type 3
d=$(perl -e "print( '$delim' x (2**($n-1)) )")
s=$(perl -e "print( ('$delim' x (2**($n-1)) . '$base' x (2**($n-1)) ) x ($n) )")
 n    #s       #d     gawk     b_sub    b_faster  b_naive
 0    0        0      0.000    0.000    0.000     0.000
 1    36       4      0.004    0.000    0.000     0.000
 2    144      8      0.003    0.000    0.000     0.000
 3    432      16     0.003    0.000    0.000     0.000
 4    1152     32     0.005    0.001    0.001     0.002
 5    2880     64     0.005    0.002    0.001     0.003
 6    6912     128    0.006    0.003    0.003     0.014
=
 7    16128    256    0.012    0.011    0.010     0.077
 8    36864    512    0.023    0.046    0.046     (0.513)
!
 9    82944    1024   0.049    0.195    0.197     (3.850)
 10   184320   2048   0.103    (0.951)  (1.061)   (31.84)
 11   405504   4096   0.222    (4.796)  -         -
!!
 12   884736   8192   0.473    (22.88)  -         -
 13   1916928  16384  (1.126)  (105.4)  -         -
Summary of delimiters length 1..10
As short delimiters are probably more likely than long,
summarised below are the results of varying delimiter length
between 1 and 10 (results for 2..9 mostly elided as very similar).
s1=$(perl -e "print( '$d' . '$base' x (7*2**$n) )")
s2=$(perl -e "print( ('$d' . '$base' x $n ) x (2**($n-1)) )")
s3=$(perl -e "print( ('$d' . '$base' x (2**($n-1)) ) x ($n) )")
bash_sub < gawk
 string  n   #s      #d   gawk   b_sub  b_faster  b_naive
 s1      10  229377  1    0.131  0.089  1.709     -
 s1      10  229386  10   0.142  0.095  1.907     -
 s2      8   32896   1    0.022  0.007  0.148     0.168
 s2      8   34048   10   0.021  0.021  0.163     0.179
 s3      12  786444  1    0.436  0.468  -         -
 s3      12  786456  2    0.434  0.317  -         -
 s3      12  786552  10   0.438  0.333  -         -
bash_sub < 0.5s
 string  n   #s      #d   gawk   b_sub  b_faster  b_naive
 s1      11  458753  1    0.256  0.332  (7.089)   -
 s1      11  458762  10   0.269  0.387  (8.003)   -
 s2      11  361472  1    0.205  0.283  (14.54)   -
 s2      11  363520  3    0.207  0.462  (16.66)   -
 s3      12  786444  1    0.436  0.468  -         -
 s3      12  786456  2    0.434  0.317  -         -
 s3      12  786552  10   0.438  0.333  -         -
gawk < 0.5s
 string  n   #s      #d   gawk   b_sub    b_faster  b_naive
 s1      11  458753  1    0.256  0.332    (7.089)   -
 s1      11  458762  10   0.269  0.387    (8.003)   -
 s2      12  788480  1    0.440  (1.252)  -         -
 s2      12  806912  10   0.449  (4.968)  -         -
 s3      12  786444  1    0.436  0.468    -         -
 s3      12  786456  2    0.434  0.317    -         -
 s3      12  786552  10   0.438  0.333    -         -
(I'm not entirely sure why bash_sub with s>160k and d=1 was consistently slower than d>1 for s3.)
All tests carried out with bash 5.0.17 on an Intel i7-7500U running xubuntu 20.04.
Try this
IFS=', '; array=(Paris, France, Europe)
for item in ${array[@]}; do echo $item; done
It's simple. If you want, you can also add a declare (and also remove the commas):
IFS=' ';declare -a array=(Paris France Europe)
The IFS=' ' is added to undo the IFS change above, but it works without it in a fresh bash instance.
#!/bin/bash
string="a | b c"
pattern=' | '
# replaces pattern with newlines
splitted="$(sed "s/$pattern/\n/g" <<< "$string")"
# Reads lines and put them in array
readarray -t array2 <<< "$splitted"
# Prints number of elements
echo ${#array2[@]}
# Prints all elements
for a in "${array2[@]}"; do
echo "> '$a'"
done
This solution works for larger delimiters (more than one char).
Doesn't work if you have a newline already in the original string
This works for the given data:
$ aaa='Paris, France, Europe'
$ mapfile -td ',' aaaa < <(echo -n "${aaa//, /,}")
$ declare -p aaaa
Result:
declare -a aaaa=([0]="Paris" [1]="France" [2]="Europe")
And it will also work for extended data with spaces, such as "New York":
$ aaa="New York, Paris, New Jersey, Hampshire"
$ mapfile -td ',' aaaa < <(echo -n "${aaa//, /,}")
$ declare -p aaaa
Result:
declare -a aaaa=([0]="New York" [1]="Paris" [2]="New Jersey" [3]="Hampshire")
Another way to do it without modifying IFS:
read -r -a myarray <<< "${string//, /$IFS}"
Rather than changing IFS to match our desired delimiter, we can replace all occurrences of our desired delimiter ", " with the contents of $IFS via "${string//, /$IFS}". One caveat worth knowing: the default $IFS contains a newline, and read only processes input up to the first newline, so with an unmodified $IFS this one-liner keeps only the first field; it behaves as intended only when $IFS contains no newline.
Maybe this will be slow for very large strings though?
This is based on Dennis Williamson's answer.
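For example, a variant that sidesteps the newline issue by substituting only the first character of $IFS (my own tweak, not part of the original answer):
string='Paris, France, Europe'
read -r -a myarray <<< "${string//, /${IFS:0:1}}"
declare -p myarray
## declare -a myarray=([0]="Paris" [1]="France" [2]="Europe")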
I came across this post when looking to parse an input like:
word1,word2,...
None of the above helped me. I solved it by using awk. If it helps someone:
STRING="value1,value2,value3"
array=`echo $STRING | awk -F ',' '{ s = $1; for (i = 2; i <= NF; i++) s = s "\n"$i; print s; }'`
for word in ${array}
do
echo "This is the word $word"
done
UPDATE: Don't do this, due to problems with eval.
With slightly less ceremony:
IFS=', ' eval 'array=($string)'
e.g.
string="foo, bar,baz"
IFS=', ' eval 'array=($string)'
echo ${array[1]} # -> bar
Do not change IFS!
Here's a simple bash one-liner:
read -a my_array <<< $(echo ${INPUT_STRING} | tr -d ' ' | tr ',' ' ')
Here's my hack!
Splitting strings by strings is a pretty boring thing to do using bash. What happens is that we have limited approaches that only work in a few cases (split by ";", "/", "." and so on) or we have a variety of side effects in the outputs.
The approach below has required a number of maneuvers, but I believe it will work for most of our needs!
#!/bin/bash
# --------------------------------------
# SPLIT FUNCTION
# ----------------
F_SPLIT_R=()
f_split() {
: 'It does a "split" into a given string and returns an array.
Args:
TARGET_P (str): Target string to "split".
DELIMITER_P (Optional[str]): Delimiter used to "split". If not
informed the split will be done by spaces.
Returns:
F_SPLIT_R (array): Array with the provided string separated by the
informed delimiter.
'
F_SPLIT_R=()
TARGET_P=$1
DELIMITER_P=$2
if [ -z "$DELIMITER_P" ] ; then
DELIMITER_P=" "
fi
REMOVE_N=1
if [ "$DELIMITER_P" == "\n" ] ; then
REMOVE_N=0
fi
# NOTE: This was the only parameter that has been a problem so far!
# By Questor
# [Ref.: https://unix.stackexchange.com/a/390732/61742]
if [ "$DELIMITER_P" == "./" ] ; then
DELIMITER_P="[.]/"
fi
if [ ${REMOVE_N} -eq 1 ] ; then
# NOTE: Due to bash limitations we have some problems getting the
# output of a split by awk inside an array and so we need to use
# "line break" (\n) to succeed. Seen this, we remove the line breaks
# momentarily afterwards we reintegrate them. The problem is that if
# there is a line break in the "string" informed, this line break will
# be lost, that is, it is erroneously removed in the output!
# By Questor
TARGET_P=$(awk 'BEGIN {RS="dn"} {gsub("\n", "3F2C417D448C46918289218B7337FCAF"); printf $0}' <<< "${TARGET_P}")
fi
# NOTE: The replace of "\n" by "3F2C417D448C46918289218B7337FCAF" results
# in more occurrences of "3F2C417D448C46918289218B7337FCAF" than the
# amount of "\n" that there was originally in the string (one more
# occurrence at the end of the string)! We can not explain the reason for
# this side effect. The line below corrects this problem! By Questor
TARGET_P=${TARGET_P%????????????????????????????????}
SPLIT_NOW=$(awk -F"$DELIMITER_P" '{for(i=1; i<=NF; i++){printf "%s\n", $i}}' <<< "${TARGET_P}")
while IFS= read -r LINE_NOW ; do
if [ ${REMOVE_N} -eq 1 ] ; then
# NOTE: We use "'" to prevent blank lines with no other characters
# in the sequence being erroneously removed! We do not know the
# reason for this side effect! By Questor
LN_NOW_WITH_N=$(awk 'BEGIN {RS="dn"} {gsub("3F2C417D448C46918289218B7337FCAF", "\n"); printf $0}' <<< "'${LINE_NOW}'")
# NOTE: We use the commands below to revert the intervention made
# immediately above! By Questor
LN_NOW_WITH_N=${LN_NOW_WITH_N%?}
LN_NOW_WITH_N=${LN_NOW_WITH_N#?}
F_SPLIT_R+=("$LN_NOW_WITH_N")
else
F_SPLIT_R+=("$LINE_NOW")
fi
done <<< "$SPLIT_NOW"
}
# --------------------------------------
# HOW TO USE
# ----------------
STRING_TO_SPLIT="
* How do I list all databases and tables using psql?
\"
sudo -u postgres /usr/pgsql-9.4/bin/psql -c \"\l\"
sudo -u postgres /usr/pgsql-9.4/bin/psql <DB_NAME> -c \"\dt\"
\"
\"
\list or \l: list all databases
\dt: list all tables in the current database
\"
[Ref.: https://dba.stackexchange.com/questions/1285/how-do-i-list-all-databases-and-tables-using-psql]
"
f_split "$STRING_TO_SPLIT" "bin/psql -c"
# --------------------------------------
# OUTPUT AND TEST
# ----------------
ARR_LENGTH=${#F_SPLIT_R[*]}
for (( i=0; i<=$(( $ARR_LENGTH -1 )); i++ )) ; do
echo " > -----------------------------------------"
echo "${F_SPLIT_R[$i]}"
echo " < -----------------------------------------"
done
if [ "$STRING_TO_SPLIT" == "${F_SPLIT_R[0]}bin/psql -c${F_SPLIT_R[1]}" ] ; then
echo " > -----------------------------------------"
echo "The strings are the same!"
echo " < -----------------------------------------"
fi
For multilined elements, why not something like
$ array=($(echo -e $'a a\nb b' | tr ' ' '§')) && array=("${array[@]//§/ }") && echo "${array[@]/%/ INTERELEMENT}"
a a INTERELEMENT b b INTERELEMENT
Since there are so many ways to solve this, let's start by defining what we want to see in our solution.
Bash provides a builtin readarray for this purpose. Let's use it.
Avoid ugly and unnecessary tricks such as changing IFS, looping, using eval, or adding an extra element then removing it.
Find a simple, readable approach that can easily be adapted to similar problems.
The readarray command is easiest to use with newlines as the delimiter. With other delimiters it may add an extra element to the array. The cleanest approach is to first adapt our input into a form that works nicely with readarray before passing it in.
The input in this example does not have a multi-character delimiter. If we apply a little common sense, it's best understood as comma separated input for which each element may need to be trimmed. My solution is to split the input by comma into multiple lines, trim each element, and pass it all to readarray.
string=' Paris,France , All of Europe '
readarray -t foo < <(tr ',' '\n' <<< "$string" |sed 's/^ *//' |sed 's/ *$//')
# Result:
declare -p foo
# declare -a foo='([0]="Paris" [1]="France" [2]="All of Europe")'
EDIT: My solution allows inconsistent spacing around comma separators, while also allowing elements to contain spaces. Few other solutions can handle these special cases.
I also avoid approaches which seem like hacks, such as creating an extra array element and then removing it. If you don't agree it's the best answer here, please leave a comment to explain.
If you'd like to try the same approach purely in Bash and with fewer subshells, it's possible (note that the +([[:space:]]) patterns below require extglob, i.e. shopt -s extglob). But the result is harder to read, and this optimization is probably unnecessary.
string=' Paris,France , All of Europe '
foo="${string#"${string%%[![:space:]]*}"}"
foo="${foo%"${foo##*[![:space:]]}"}"
foo="${foo//+([[:space:]]),/,}"
foo="${foo//,+([[:space:]])/,}"
readarray -t foo < <(echo "$foo")
Another way would be:
string="Paris, France, Europe"
IFS=', ' arr=(${string})
Now your elements are stored in "arr" array.
To iterate through the elements:
for i in ${arr[@]}; do echo $i; done
Another approach can be:
str="a, b, c, d" # assuming there is a space after ',' as in Q
arr=(${str//,/}) # delete all occurrences of ','
After this 'arr' is an array with four strings.
This doesn't require dealing with IFS or read or any other special stuff, and is therefore much simpler and more direct.
Related
I have an array in Bash, for example:
array=(a c b f 3 5)
I need to sort the array. Not just displaying the content in a sorted way, but to get a new array with the sorted elements. The new sorted array can be a completely new one or the old one.
You don't really need all that much code:
IFS=$'\n' sorted=($(sort <<<"${array[*]}"))
unset IFS
Supports whitespace in elements (as long as it's not a newline), and works in Bash 3.x.
e.g.:
$ array=("a c" b f "3 5")
$ IFS=$'\n' sorted=($(sort <<<"${array[*]}")); unset IFS
$ printf "[%s]\n" "${sorted[@]}"
[3 5]
[a c]
[b]
[f]
Note: @sorontar has pointed out that care is required if elements contain wildcards such as * or ?:
The sorted=($(...)) part is using the "split and glob" operator. You should turn glob off: set -f or set -o noglob or shopt -op noglob or an element of the array like * will be expanded to a list of files.
What's happening:
The result is a culmination of six things that happen in this order:
IFS=$'\n'
"${array[*]}"
<<<
sort
sorted=($(...))
unset IFS
First, the IFS=$'\n'
This is an important part of our operation that affects the outcome of 2 and 5 in the following way:
Given:
"${array[*]}" expands to every element delimited by the first character of IFS
sorted=() creates elements by splitting on every character of IFS
IFS=$'\n' sets things up so that elements are expanded using a new line as the delimiter, and then later created in a way that each line becomes an element. (i.e. Splitting on a new line.)
Delimiting by a new line is important because that's how sort operates (sorting per line). Splitting by only a new line is not as important, but is needed to preserve elements that contain spaces or tabs.
The default value of IFS is a space, a tab, followed by a new line, and would be unfit for our operation.
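For example, here's a quick demo (my own illustration) of the default IFS breaking an element that contains a space:
array=("a c" b)
sorted=($(sort <<<"${array[*]}"))    ## with default IFS
declare -p sorted
## declare -a sorted=([0]="a" [1]="c" [2]="b")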
Next, the sort <<<"${array[*]}" part
<<<, called a here-string, takes the expansion of "${array[*]}", as explained above, and feeds it into the standard input of sort.
With our example, sort is fed this following string:
a c
b
f
3 5
Since sort sorts, it produces:
3 5
a c
b
f
Next, the sorted=($(...)) part
The $(...) part, called command substitution, causes its content (sort <<<"${array[*]}") to run as a normal command, while taking the resulting standard output as the literal that goes wherever $(...) was.
In our example, this produces something similar to simply writing:
sorted=(3 5
a c
b
f
)
sorted then becomes an array that's created by splitting this literal on every new line.
Finally, the unset IFS
This resets the value of IFS to the default value, and is just good practice.
It's to ensure we don't cause trouble with anything that relies on IFS later in our script. (Otherwise we'd need to remember that we've switched things around--something that might be impractical for complex scripts.)
Original response:
array=(a c b "f f" 3 5)
readarray -t sorted < <(for a in "${array[@]}"; do echo "$a"; done | sort)
output:
$ for a in "${sorted[@]}"; do echo "$a"; done
3
5
a
b
c
f f
Note this version copes with values that contain special characters or whitespace (except newlines)
Note readarray is supported in bash 4+.
Edit: Based on the suggestion by @Dimitre I have updated it to:
readarray -t sorted < <(printf '%s\0' "${array[@]}" | sort -z | xargs -0n1)
which has the benefit of even sorting elements with embedded newline characters correctly. Unfortunately, as correctly signaled by @ruakh, this didn't mean that the result of readarray would be correct, because readarray has no option to use NUL instead of regular newlines as line separators.
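Worth noting: Bash 4.4 later added a -d option to readarray/mapfile, so on a new enough shell the NUL-safe pipeline can be kept end to end, without the xargs round trip:
readarray -td '' sorted < <(printf '%s\0' "${array[@]}" | sort -z)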
If you don't need to handle special shell characters in the array elements:
array=(a c b f 3 5)
sorted=($(printf '%s\n' "${array[@]}"|sort))
With bash you'll need an external sorting program anyway.
With zsh no external programs are needed and special shell characters are easily handled:
% array=('a a' c b f 3 5); printf '%s\n' "${(o)array[@]}"
3
5
a a
b
c
f
ksh has set -s to sort ASCIIbetically.
Here's a pure Bash quicksort implementation:
#!/bin/bash
# quicksorts positional arguments
# return is in array qsort_ret
qsort() {
    local pivot i smaller=() larger=()
    qsort_ret=()
    (($#==0)) && return 0
    pivot=$1
    shift
    for i; do
        # This sorts strings lexicographically.
        if [[ $i < $pivot ]]; then
            smaller+=( "$i" )
        else
            larger+=( "$i" )
        fi
    done
    qsort "${smaller[@]}"
    smaller=( "${qsort_ret[@]}" )
    qsort "${larger[@]}"
    larger=( "${qsort_ret[@]}" )
    qsort_ret=( "${smaller[@]}" "$pivot" "${larger[@]}" )
}
Use as, e.g.,
$ array=(a c b f 3 5)
$ qsort "${array[#]}"
$ declare -p qsort_ret
declare -a qsort_ret='([0]="3" [1]="5" [2]="a" [3]="b" [4]="c" [5]="f")'
This implementation is recursive… so here's an iterative quicksort:
#!/bin/bash
# quicksorts positional arguments
# return is in array qsort_ret
# Note: iterative, NOT recursive! :)
qsort() {
    (($#==0)) && return 0
    local stack=( 0 $(($#-1)) ) beg end i pivot smaller larger
    qsort_ret=("$@")
    while ((${#stack[@]})); do
        beg=${stack[0]}
        end=${stack[1]}
        stack=( "${stack[@]:2}" )
        smaller=() larger=()
        pivot=${qsort_ret[beg]}
        for ((i=beg+1;i<=end;++i)); do
            if [[ "${qsort_ret[i]}" < "$pivot" ]]; then
                smaller+=( "${qsort_ret[i]}" )
            else
                larger+=( "${qsort_ret[i]}" )
            fi
        done
        qsort_ret=( "${qsort_ret[@]:0:beg}" "${smaller[@]}" "$pivot" "${larger[@]}" "${qsort_ret[@]:end+1}" )
        if ((${#smaller[@]}>=2)); then stack+=( "$beg" "$((beg+${#smaller[@]}-1))" ); fi
        if ((${#larger[@]}>=2)); then stack+=( "$((end-${#larger[@]}+1))" "$end" ); fi
    done
}
In both cases, you can change the ordering criterion: I used string comparisons, but you can use arithmetic comparisons, compare with respect to file modification time, etc. Just use the appropriate test. You can even make it more generic and have it take as its first argument the test function to use, e.g.,
#!/bin/bash
# quicksorts positional arguments
# return is in array qsort_ret
# Note: iterative, NOT recursive! :)
# First argument is a function name that takes two arguments and compares them
qsort() {
    (($#<=1)) && return 0
    local compare_fun=$1
    shift
    local stack=( 0 $(($#-1)) ) beg end i pivot smaller larger
    qsort_ret=("$@")
    while ((${#stack[@]})); do
        beg=${stack[0]}
        end=${stack[1]}
        stack=( "${stack[@]:2}" )
        smaller=() larger=()
        pivot=${qsort_ret[beg]}
        for ((i=beg+1;i<=end;++i)); do
            if "$compare_fun" "${qsort_ret[i]}" "$pivot"; then
                smaller+=( "${qsort_ret[i]}" )
            else
                larger+=( "${qsort_ret[i]}" )
            fi
        done
        qsort_ret=( "${qsort_ret[@]:0:beg}" "${smaller[@]}" "$pivot" "${larger[@]}" "${qsort_ret[@]:end+1}" )
        if ((${#smaller[@]}>=2)); then stack+=( "$beg" "$((beg+${#smaller[@]}-1))" ); fi
        if ((${#larger[@]}>=2)); then stack+=( "$((end-${#larger[@]}+1))" "$end" ); fi
    done
}
Then you can have this comparison function:
compare_mtime() { [[ $1 -nt $2 ]]; }
and use:
$ qsort compare_mtime *
$ declare -p qsort_ret
to have the files in current folder sorted by modification time (newest first).
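In the same spirit, a numeric comparison for integer-only arrays could look like this (compare_num is just a hypothetical helper name):
compare_num() { (( $1 < $2 )); }
$ qsort compare_num 10 2 33 4
$ declare -p qsort_ret
declare -a qsort_ret='([0]="2" [1]="4" [2]="10" [3]="33")'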
NOTE. These functions are pure Bash! No external utilities and no subshells! They are safe with respect to any funny symbols you may have (spaces, newline characters, glob characters, etc.).
NOTE2. The test [[ $i < $pivot ]] is correct. It uses the lexicographical string comparison. If your array only contains integers and you want to sort numerically, use ((i < pivot)) instead.
Please don't edit this answer to change that. It has already been edited (and rolled back) a couple of times. The test I gave here is correct and corresponds to the output given in the example: the example uses both strings and numbers, and the purpose is to sort it in lexicographical order. Using ((i < pivot)) in this case is wrong.
tl;dr:
Sort array a_in and store the result in a_out (elements must not have embedded newlines[1]
):
Bash v4+:
readarray -t a_out < <(printf '%s\n' "${a_in[@]}" | sort)
Bash v3:
IFS=$'\n' read -d '' -r -a a_out < <(printf '%s\n' "${a_in[@]}" | sort)
Advantages over antak's solution:
You needn't worry about accidental globbing (accidental interpretation of the array elements as filename patterns), so no extra command is needed to disable globbing (set -f, and set +f to restore it later).
You needn't worry about resetting IFS with unset IFS.[2]
Optional reading: explanation and sample code
The above combines Bash code with external utility sort for a solution that works with arbitrary single-line elements and either lexical or numerical sorting (optionally by field):
Performance: For around 20 elements or more, this will be faster than a pure Bash solution - significantly and increasingly so once you get beyond around 100 elements.
(The exact thresholds will depend on your specific input, machine, and platform.)
The reason it is fast is that it avoids Bash loops.
printf '%s\n' "${a_in[#]}" | sort performs the sorting (lexically, by default - see sort's POSIX spec):
"${a_in[#]}" safely expands to the elements of array a_in as individual arguments, whatever they contain (including whitespace).
printf '%s\n' then prints each argument - i.e., each array element - on its own line, as-is.
Note the use of a process substitution (<(...)) to provide the sorted output as input to read / readarray (via redirection to stdin, <), because read / readarray must run in the current shell (must not run in a subshell) in order for output variable a_out to be visible to the current shell (for the variable to remain defined in the remainder of the script).
Reading sort's output into an array variable:
Bash v4+: readarray -t a_out reads the individual lines output by sort into the elements of array variable a_out, without including the trailing \n in each element (-t).
Bash v3: readarray doesn't exist, so read must be used:
IFS=$'\n' read -d '' -r -a a_out tells read to read into array (-a) variable a_out, reading the entire input, across lines (-d ''), but splitting it into array elements by newlines (IFS=$'\n'; $'\n', which produces a literal newline (LF), is a so-called ANSI C-quoted string).
(-r, an option that should virtually always be used with read, disables unexpected handling of \ characters.)
Annotated sample code:
#!/usr/bin/env bash
# Define input array `a_in`:
# Note the element with embedded whitespace ('a c') and the element that looks like
# a glob ('*'), chosen to demonstrate that elements with line-internal whitespace
# and glob-like contents are correctly preserved.
a_in=( 'a c' b f 5 '*' 10 )
# Sort and store output in array `a_out`
# Saving back into `a_in` is also an option.
IFS=$'\n' read -d '' -r -a a_out < <(printf '%s\n' "${a_in[@]}" | sort)
# Bash 4.x: use the simpler `readarray -t`:
# readarray -t a_out < <(printf '%s\n' "${a_in[@]}" | sort)
# Print sorted output array, line by line:
printf '%s\n' "${a_out[#]}"
Due to use of sort without options, this yields lexical sorting (digits sort before letters, and digit sequences are treated lexically, not as numbers):
*
10
5
a c
b
f
If you wanted numerical sorting by the 1st field, you'd use sort -k1,1n instead of just sort, which yields (non-numbers sort before numbers, and numbers sort correctly):
*
a c
b
f
5
10
[1] To handle elements with embedded newlines, use the following variant (Bash v4+, with GNU sort):
readarray -d '' -t a_out < <(printf '%s\0' "${a_in[@]}" | sort -z).
Michał Górny's helpful answer has a Bash v3 solution.
[2] While IFS is set in the Bash v3 variant, the change is scoped to the command.
By contrast, what follows IFS=$'\n' in antak's answer is an assignment rather than a command, in which case the IFS change is global.
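A quick way to see the scoping difference from [2] (a sketch; tmp is a throwaway name):
IFS=$'\n' read -r -a tmp <<<'x'  # prefix assignment on a command: IFS is unchanged afterwards
printf '%q\n' "$IFS"             # -> $' \t\n' (the default)
IFS=$'\n' tmp=(x y)              # assignment-only line: the IFS change persists
printf '%q\n' "$IFS"             # -> $'\n'
unset IFS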
On the 3-hour train trip from Munich to Frankfurt (which I had trouble reaching because Oktoberfest starts tomorrow) I was thinking about my first post. Employing a global array is a much better idea for a general sort function. The following function handles arbitrary strings (newlines, blanks, etc.):
declare BSORT=()
function bubble_sort()
{ #
  # @param [ARGUMENTS]...
  #
  # Sort all positional arguments and store them in global array BSORT.
  # Without arguments sort this array. Return the number of iterations made.
  #
  # Bubble sorting lets the heaviest element sink to the bottom.
  #
  (($# > 0)) && BSORT=("$@")
  local j=0 ubound=$((${#BSORT[*]} - 1))
  while ((ubound > 0))
  do
    local i=0
    while ((i < ubound))
    do
      if [ "${BSORT[$i]}" \> "${BSORT[$((i + 1))]}" ]
      then
        local t="${BSORT[$i]}"
        BSORT[$i]="${BSORT[$((i + 1))]}"
        BSORT[$((i + 1))]="$t"
      fi
      ((++i))
    done
    ((++j))
    ((--ubound))
  done
  echo $j
}
bubble_sort a c b 'z y' 3 5
echo ${BSORT[@]}
This prints:
3 5 a b c z y
The same output is created from
BSORT=(a c b 'z y' 3 5)
bubble_sort
echo ${BSORT[@]}
Note that Bash probably uses smart pointers internally, so the swap operation could be cheap (although I doubt it). However, bubble_sort demonstrates that more advanced functions like merge_sort are also within the reach of the shell language.
Another solution that uses external sort and copes with any special characters (except for NULs :)). Should work with bash-3.2 and GNU or BSD sort (sadly, POSIX doesn't include -z).
local e new_array=()
while IFS= read -r -d '' e; do
    new_array+=( "${e}" )
done < <(printf "%s\0" "${array[@]}" | LC_ALL=C sort -z)
First look at the input redirection at the end. We're using the printf built-in to write out the array elements, zero-terminated. The quoting makes sure array elements are passed as-is, and a specific feature of shell printf causes it to reuse the last part of the format string for each remaining parameter. That is, it's equivalent to something like:
for e in "${array[#]}"; do
printf "%s\0" "${e}"
done
The null-terminated element list is then passed to sort. The -z option causes it to read null-terminated elements, sort them, and output them null-terminated as well. If you need only the unique elements, you can pass -u, which is more portable than uniq -z. The LC_ALL=C ensures a stable sort order independent of locale — sometimes useful for scripts. If you want the sort to respect locale, remove that.
The <() construct obtains the descriptor to read from the spawned pipeline, and < redirects the standard input of the while loop to it. If you need to access the standard input inside the pipe, you may use another descriptor — exercise for the reader :).
Now, back to the beginning. The read built-in reads output from the redirected stdin. Setting an empty IFS disables word splitting, which is unnecessary here — as a result, read reads the whole 'line' of input into the single provided variable. The -r option disables escape processing, which is undesired here as well. Finally, -d '' sets the line delimiter to NUL — that is, it tells read to read zero-terminated strings.
As a result, the loop is executed once for every successive zero-terminated array element, with the value being stored in e. The example just puts the items in another array but you may prefer to process them directly :).
Of course, that's just one of the many ways of achieving the same goal. As I see it, it is simpler than implementing complete sorting algorithm in bash and in some cases it will be faster. It handles all special characters including newlines and should work on most of the common systems. Most importantly, it may teach you something new and awesome about bash :).
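For instance, a minimal end-to-end run (a sketch, with the snippet wrapped in a function so that local is valid; sort_array_nul is a hypothetical name):
sort_array_nul() {
    local e
    new_array=()
    while IFS= read -r -d '' e; do
        new_array+=( "${e}" )
    done < <(printf "%s\0" "$@" | LC_ALL=C sort -z)
}
sort_array_nul 'b b' $'a\nz' c
printf '<%s>\n' "${new_array[@]}"
## <a
## z>
## <b b>
## <c>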
Keep it simple ;)
In the following example, the array b is the sorted version of the array a!
The second line echoes each item of the array a, then pipes them to the sort command, and the output is used to initialize the array b.
a=(2 3 1)
b=( $( for x in ${a[@]}; do echo $x; done | sort ) )
echo ${b[@]} # output: 1 2 3
min sort:
#!/bin/bash
array=(.....)
index_of_element1=0
while (( ${index_of_element1} < ${#array[@]} )); do
    element_1="${array[${index_of_element1}]}"
    index_of_element2=$((index_of_element1 + 1))
    index_of_min=${index_of_element1}
    min_element="${element_1}"
    for element_2 in "${array[@]:$((index_of_element1 + 1))}"; do
        min_element="$(printf "%s\n%s" "${min_element}" "${element_2}" | sort | head -n 1)"
        if [[ "${min_element}" == "${element_2}" ]]; then
            index_of_min=${index_of_element2}
        fi
        let index_of_element2++
    done
    array[${index_of_element1}]="${min_element}"
    array[${index_of_min}]="${element_1}"
    let index_of_element1++
done
try this:
echo ${array[@]} | awk 'BEGIN{RS=" ";} {print $1}' | sort
Output will be:
3
5
a
b
c
f
Problem solved.
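If you want the sorted result back in an array instead of just on stdout, the same pipeline can be wrapped in a command substitution (a sketch; like the original, this assumes elements contain no whitespace and no glob characters):
sorted=($(echo ${array[@]} | awk 'BEGIN{RS=" ";} {print $1}' | sort))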
If you can compute a unique integer for each element in the array, like this:
tab='0123456789abcdefghijklmnopqrstuvwxyz'
# build the reversed ordinal map
for ((i = 0; i < ${#tab}; i++)); do
declare -g ord_${tab:i:1}=$i
done
function sexy_int() {
local sum=0
local i ch ref
for ((i = 0; i < ${#1}; i++)); do
ch="${1:i:1}"
ref="ord_$ch"
(( sum += ${!ref} ))
done
return $sum
}
sexy_int hello
echo "hello -> $?"
sexy_int world
echo "world -> $?"
then, you can use these integers as array indexes, because Bash always uses sparse arrays, so there is no need to worry about unused indexes:
array=(a c b f 3 5)
for el in "${array[@]}"; do
sexy_int "$el"
sorted[$?]="$el"
done
echo "${sorted[#]}"
Pros. Fast.
Cons. Duplicated elements are merged, and it can be impossible to map contents to 32-bit unique integers.
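The merging is easy to trigger, because the index is just a character sum, so any two anagrams collide:
sexy_int ab; echo "ab -> $?"
sexy_int ba; echo "ba -> $?"
## both print 21, so in the loop above one element would overwrite the other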
array=(a c b f 3 5)
new_array=($(echo "${array[@]}" | sed 's/ /\n/g' | sort))
echo ${new_array[@]}
The contents of new_array will be:
3 5 a b c f
There is a workaround for the usual problem of spaces and newlines:
Use a character that is not in the original array (like $'\1' or $'\4' or similar).
This function gets the job done:
# Sort an Array may have spaces or newlines with a workaround (wa=$'\4')
sortarray(){ local wa=$'\4' IFS=''
if [[ $* =~ [$wa] ]]; then
echo "$0: error: array contains the workaround char" >&2
exit 1
fi
set -f; local IFS=$'\n' x nl=$'\n'
set -- $(printf '%s\n' "${@//$nl/$wa}" | sort -n)
for x
do sorted+=("${x//$wa/$nl}")
done
}
This will sort the array:
$ array=( a b 'c d' $'e\nf' $'g\1h')
$ sortarray "${array[@]}"
$ printf '<%s>\n' "${sorted[@]}"
<a>
<b>
<c d>
<e
f>
<gh>
This will complain that the source array contains the workaround character:
$ array=( a b 'c d' $'e\nf' $'g\4h')
$ sortarray "${array[@]}"
./script: error: array contains the workaround char
Description
We set two local variables: wa (the workaround char) and a null IFS.
Then (with IFS null) we test that the whole array $* does not contain any workaround char: [[ $* =~ [$wa] ]].
If it does, we raise a message and signal an error: exit 1.
We avoid filename expansions: set -f.
We set a new value of IFS (IFS=$'\n'), a loop variable x, and a newline var (nl=$'\n').
We print all values of the arguments received (the input array $@),
but we replace any newline by the workaround char: "${@//$nl/$wa}".
We send those values to be sorted (sort -n),
and place all the sorted values back in the positional arguments: set --.
Then we assign each argument one by one (to preserve newlines)
in a loop (for x)
to a new array: sorted+=(…),
inside quotes to preserve any existing newline,
restoring the workaround char to a newline: "${x//$wa/$nl}".
done
This question looks closely related. And BTW, here's a mergesort in Bash (without external processes):
mergesort() {
    local -n -r input_reference="$1"
    local -n output_reference="$2"
    local -r -i size="${#input_reference[@]}"
    local merge previous
    local -a -i runs indices
    local -i index previous_idx merged_idx \
        run_a_idx run_a_stop \
        run_b_idx run_b_stop
    output_reference=("${input_reference[@]}")
    if ((size == 0)); then return; fi
    previous="${output_reference[0]}"
    runs=(0)
    for ((index = 0;;)) do
        for ((++index;; ++index)); do
            if ((index >= size)); then break 2; fi
            if [[ "${output_reference[index]}" < "$previous" ]]; then break; fi
            previous="${output_reference[index]}"
        done
        previous="${output_reference[index]}"
        runs+=(index)
    done
    runs+=(size)
    while (("${#runs[@]}" > 2)); do
        indices=("${!runs[@]}")
        merge=("${output_reference[@]}")
        for ((index = 0; index < "${#indices[@]}" - 2; index += 2)); do
            merged_idx=runs[indices[index]]
            run_a_idx=merged_idx
            previous_idx=indices[$((index + 1))]
            run_a_stop=runs[previous_idx]
            run_b_idx=runs[previous_idx]
            run_b_stop=runs[indices[$((index + 2))]]
            unset runs[previous_idx]
            while ((run_a_idx < run_a_stop && run_b_idx < run_b_stop)); do
                if [[ "${merge[run_a_idx]}" < "${merge[run_b_idx]}" ]]; then
                    output_reference[merged_idx++]="${merge[run_a_idx++]}"
                else
                    output_reference[merged_idx++]="${merge[run_b_idx++]}"
                fi
            done
            while ((run_a_idx < run_a_stop)); do
                output_reference[merged_idx++]="${merge[run_a_idx++]}"
            done
            while ((run_b_idx < run_b_stop)); do
                output_reference[merged_idx++]="${merge[run_b_idx++]}"
            done
        done
    done
}
declare -ar input=({z..a}{z..a})
declare -a output
mergesort input output
echo "${input[#]}"
echo "${output[#]}"
Many thanks to the people that answered before me. Using their excellent input, the bash documentation, and ideas from other threads, this is what works perfectly for me without changing IFS:
array=("a \n c" b f "3 5")
Using process substitution and readarray in bash > v4.4 WITH the EOL character:
readarray -t sorted < <(sort < <(printf '%s\n' "${array[@]}"))
Using process substitution and readarray in bash > v4.4 WITH the NULL character:
readarray -td '' sorted < <(sort -z < <(printf '%s\0' "${array[@]}"))
Finally we verify with
printf "[%s]\n" "${sorted[#]}"
output is
[3 5]
[a \n c]
[b]
[f]
Please let me know if that is a correct test for embedded \n, as both solutions produce the same result, but the first one is not supposed to work properly with embedded \n.
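One way to make the difference visible is an element that contains a real newline, created with the $'...' syntax, rather than the two literal characters \ and n (a sketch):
array=( $'a\nc' b f "3 5" )
readarray -t sorted1 < <(sort < <(printf '%s\n' "${array[@]}"))
readarray -td '' sorted2 < <(sort -z < <(printf '%s\0' "${array[@]}"))
declare -p sorted1 sorted2
## sorted1 splits the first element in two; sorted2 keeps it intact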
I am not convinced that you'll need an external sorting program in Bash.
Here is my implementation of the simple bubble-sort algorithm.
function bubble_sort()
{ #
  # Sorts all positional arguments and echoes them back.
  #
  # Bubble sorting lets the heaviest (longest) element sink to the bottom.
  #
  local array=($@) max=$(($# - 1))
  while ((max > 0))
  do
    local i=0
    while ((i < max))
    do
      if [ ${array[$i]} \> ${array[$((i + 1))]} ]
      then
        local t=${array[$i]}
        array[$i]=${array[$((i + 1))]}
        array[$((i + 1))]=$t
      fi
      ((i += 1))
    done
    ((max -= 1))
  done
  echo ${array[@]}
}
array=(a c b f 3 5)
echo " input: ${array[#]}"
echo "output: $(bubble_sort ${array[#]})"
This shall print:
input: a c b f 3 5
output: 3 5 a b c f
a=(e b 'c d')
shuf -e "${a[@]}" | sort >/tmp/f
mapfile -t g </tmp/f
Great answers here. Learned a lot. After reading them all, I figured I'd throw my hat into the ring. I think this is the shortest method (and probably faster, as it doesn't do much shell script parsing; there is the matter of spawning sort, but it is only called once, and printf is a builtin), and it handles whitespace in the data:
a=(3 "2 a" 1) # Setup!
IFS=$'\n' b=( $(printf "%s\n" "${a[@]}" | sort) ); unset IFS # Sort!
printf "'%s' " "${b[@]}"; # Success!
Outputs:
'1' '2 a' '3'
Note that the IFS change here is an assignment with no command, so it actually persists and must be undone with unset IFS (see the edit below). If you know that the array has no whitespace in it, you don't need the IFS modification.
Inspiration was from @yas's answer and @Alcamtar's comments.
EDIT
Oh, I somehow missed the actually accepted answer which is even shorter than mine. Doh!
IFS=$'\n' sorted=($(sort <<<"${array[*]}")); unset IFS
Turns out that the unset is required because this is a variable assignment that has no command.
I'd recommend going to that answer because it has some interesting stuff on globbing which could be relevant if the array has wildcards in it. It also has a detailed description as to what is happening.
EDIT 2
GNU has an extension in which sort delimits records using \0, which is good if you have LFs in your data. However, when it gets returned to the shell to be assigned to an array, I don't see a good way to convert it so that the shell will delimit on \0, because even setting IFS=$'\0', the shell doesn't like it and doesn't properly break it up.
array=(z 'b c'); { set "${array[@]}"; printf '%s\n' "$@"; } \
| sort \
| mapfile -t array; declare -p array
declare -a array=([0]="b c" [1]="z")
Open an inline function {...} to get a fresh set of positional arguments (e.g. $1, $2, etc).
Copy the array to the positional arguments. (e.g. set "${array[@]}" will copy the nth array element to the nth positional argument. Note the quotes preserve whitespace that may be contained in an array element).
Print each positional argument (e.g. printf '%s\n' "$@" will print each positional argument on its own line. Again, note the quotes preserve whitespace that may be contained in each positional argument).
Then sort does its thing.
Read the stream into an array with mapfile (e.g. mapfile -t array reads each line into the array variable, and the -t removes the trailing \n from each line).
Dump the array to show it's been sorted.
As a function:
set +m
shopt -s lastpipe
sort_array() {
    declare -n ref=$1
    set "${ref[@]}"
    printf '%s\n' "$@" \
    | sort \
    | mapfile -t $ref
}
then
array=(z y x); sort_array array; declare -p array
declare -a array=([0]="x" [1]="y" [2]="z")
I look forward to being ripped apart by all the UNIX gurus! :)
sorted=($(echo ${array[@]} | tr " " "\n" | sort))
In the spirit of bash / linux, I would pipe the best command-line tool for each step. sort does the main job but needs input separated by newline instead of space, so the very simple pipeline above simply does:
Echo array content --> replace space by newline --> sort
$() is to echo the result
($()) is to put the "echoed result" in an array
Note: as @sorontar mentioned in a comment to a different question:
The sorted=($(...)) part is using the "split and glob" operator. You should turn glob off: set -f or set -o noglob or shopt -op noglob or an element of the array like * will be expanded to a list of files.
I'm taking over a bash script from a colleague that reads a file, processes it, and prints another file based on the line currently being handled in a while loop.
I now need to add some features to it. The one I'm having issues with right now is reading a file and putting each line into an array, except that the 2nd column of a line can be empty, e.g.:
For a text file with \t as separator:
A\tB\tC
A\t\tC
For a CSV file same but with , as separator:
A,B,C
A,,C
Which should then give
["A","B","C"] or ["A", "", "C"]
The code I took over is as follow:
while IFS=$'\t\r' read -r -a col; do
# Process the array, put that into a file
lp -d $printer $file_to_print
done < $input_file
Which works if B is filled, but B sometimes needs to be empty now, so when the input file keeps it empty, the created array, and thus the output file to print, just skips this empty cell (the array is then ["A","C"]).
I tried writing the whole block in awk, but this brought its own set of problems, making it difficult to call the lp command to print.
So my question is, how can I preserve the empty cell from the line into my bash array, so that I can call on it later and use it?
Thank you very much. I know this might be quite confusing, so please ask and I'll clarify.
Edit: After request, here's the awk code I've tried. The issue here is that it only prints the last print request, while I know it loops over the whole file, and the lp command is still in the loop.
awk 'BEGIN {
inputfile="'"${optfile}"'"
outputfile="'"${file_loc}"'"
printer="'"${printer}"'"
while (getline < inputfile){
print "'"${prefix}"'" > outputfile
split($0,ft,"'"${IFSseps}"'");
if (length(ft[2]) == 0){
print "CODEPAGE 1252\nTEXT 465,191,\"ROMAN.TTF\",180,7,7,\""ft[1]"\"" >> outputfile
size_changer = 0
} else {
print "CODEPAGE 1252\nTEXT 465,191,\"ROMAN.TTF\",180,7,7,\""ft[1]"_"ft[2]"\"" >> outputfile
size_changer = 1
}
if ( split($0,ft,"'"${IFSseps}"'") > 6)
maxcounter = 6;
else
maxcounter = split($0,ft,"'"${IFSseps}"'");
for (i = 3; i <= maxcounter; i++){
x=191-(i-2)*33
print "CODEPAGE 1252\nTEXT 465,"x",\"ROMAN.TTF\",180,7,7,\""ft[i]"\"" >> outputfile
}
print "PRINT ""'"${copies}"'"",1" >> outputfile
close(outputfile)
"'"`lp -d ${printer} ${file_loc}`"'"
}
close("'"${file_loc}"'");
}'
EDIT2: Continuing to try to find a solution, I tried the following code without success. This is weird, as just doing printf without putting it in an array keeps the formatting intact.
$ cat testinput | tr '\t' '>'
A>B>C
A>>C
# Should normally be empty on the second output line
$ while read line; do IFS=$'\t' read -ra col < <(printf "$line"); echo ${col[1]}; done < testinput
B
C
For tab, it's complicated.
From 3.5.7 Word Splitting in the manual:
A sequence of IFS whitespace characters is also treated as a delimiter.
Since tab is an "IFS whitespace character", sequences of tabs are treated as a single delimiter
IFS=$'\t' read -ra ary <<<$'A\t\tC'
declare -p ary
declare -a ary=([0]="A" [1]="C")
What you can do is translate tabs to a non-whitespace character, assuming it does not clash with the actual data in the fields:
line=$'A\t\tC'
IFS=, read -ra ary <<<"${line//$'\t'/,}"
declare -p ary
declare -a ary=([0]="A" [1]="" [2]="C")
To avoid the risk of colliding with commas in the data, we can use an unusual ASCII character: FS, octal 034
line=$'A\t\tC'
printf -v FS '\034'
IFS="$FS" read -ra ary <<<"${line//$'\t'/"$FS"}"
# or, without the placeholder variable
IFS=$'\034' read -ra ary <<<"${line//$'\t'/$'\034'}"
declare -p ary
declare -a ary=([0]="A" [1]="" [2]="C")
One bash example using parameter expansion where we convert the delimiter into a \n and let mapfile read in each line as a new array entry ...
For tab-delimited data:
for line in $'A\tB\tC' $'A\t\tC'
do
mapfile -t array <<< "${line//$'\t'/$'\n'}"
echo "############# ${line}"
typeset -p array
done
############# A B C
declare -a array=([0]="A" [1]="B" [2]="C")
############# A C
declare -a array=([0]="A" [1]="" [2]="C")
NOTE: The $'...' construct ensures the \t is treated as a single <tab> character as opposed to the two literal characters \ + t.
For comma-delimited data:
for line in 'A,B,C' 'A,,C'
do
mapfile -t array <<< "${line//,/$'\n'}"
echo "############# ${line}"
typeset -p array
done
############# A,B,C
declare -a array=([0]="A" [1]="B" [2]="C")
############# A,,C
declare -a array=([0]="A" [1]="" [2]="C")
NOTE: This obviously (?) assumes the desired data does not contain a comma (,).
It may just be your # Process the array, put that into a file part.
IFS=, read -ra ray <<< "A,,C"
for e in "${ray[#]}"; do o="$o\"$e\","; done
echo "[${o%,}]"
["A","","C"]
See @Glenn's excellent answer regarding tabs.
My simple data file:
$: cat x # tab delimited, empty field 2 of line 2
a b c
d f
My test:
while IFS=$'\001' read -r a b c; do
echo "a:[$a] b:[$b] c:[$c]"
done < <(tr "\t" "\001"<x)
a:[a] b:[b] c:[c]
a:[d] b:[] c:[f]
Note that I used ^A (a 001 byte) but you might be able to use something as simple as a comma or pipe (|) character. Choose based on your data.
IFS=', ' read -ra a <<<'Los Angeles, United States, North America'; declare -p a;
## declare -a a=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")
2: Even if you were to use this solution with a single-character separator (such as a comma by itself, that is, with no following space or other baggage), if the value of the $string variable happens to contain any LFs, then read will stop processing once it encounters the first LF. The read builtin only processes one line per invocation. This is true even if you are piping or redirecting input only to the read statement, as we are doing in this example with the here-string mechanism, and thus unprocessed input is guaranteed to be lost. The code that powers the read builtin has no knowledge of the data flow within its containing command structure.
You could argue that this is unlikely to cause a problem, but still, it's a subtle hazard that should be avoided if possible. It is caused by the fact that the read builtin actually does two levels of input splitting: first into lines, then into fields. Since the OP only wants one level of splitting, this usage of the read builtin is not appropriate, and we should avoid it.
3: A non-obvious potential issue with this solution is that read always drops the trailing field if it is empty, although it preserves empty fields otherwise. Here's a demo:
string=', , a, , b, c, , , '; IFS=', ' read -ra a <<<"$string"; declare -p a;
## declare -a a=([0]="" [1]="" [2]="a" [3]="" [4]="b" [5]="c" [6]="" [7]="")
Maybe the OP wouldn't care about this, but it's still a limitation worth knowing about. It reduces the robustness and generality of the solution.
This problem can be solved by appending a dummy trailing delimiter to the input string just prior to feeding it to read, as I will demonstrate later.
Wrong answer #2
string="1:2:3:4:5"
set -f # avoid globbing (expansion of *).
array=(${string//:/ })
Similar idea:
t="one,two,three"
a=($(echo $t | tr ',' "\n"))
(Note: I added the missing parentheses around the command substitution which the answerer seems to have omitted.)
Similar idea:
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
These solutions leverage word splitting in an array assignment to split the string into fields. Funnily enough, just like read, general word splitting also uses the $IFS special variable, although in this case it is implied that it is set to its default value of <space><tab><newline>, and therefore any sequence of one or more IFS characters (which are all whitespace characters now) is considered to be a field delimiter.
This solves the problem of two levels of splitting committed by read, since word splitting by itself constitutes only one level of splitting. But just as before, the problem here is that the individual fields in the input string can already contain $IFS characters, and thus they would be improperly split during the word splitting operation. This happens to not be the case for any of the sample input strings provided by these answerers (how convenient...), but of course that doesn't change the fact that any code base that used this idiom would then run the risk of blowing up if this assumption were ever violated at some point down the line. Once again, consider my counterexample of 'Los Angeles, United States, North America' (or 'Los Angeles:United States:North America').
Also, word splitting is normally followed by filename expansion (aka pathname expansion aka globbing), which, if done, would potentially corrupt words containing the characters *, ?, or [ followed by ] (and, if extglob is set, parenthesized fragments preceded by ?, *, +, #, or !) by matching them against file system objects and expanding the words ("globs") accordingly. The first of these three answerers has cleverly undercut this problem by running set -f beforehand to disable globbing. Technically this works (although you should probably add set +f afterward to reenable globbing for subsequent code which may depend on it), but it's undesirable to have to mess with global shell settings in order to hack a basic string-to-array parsing operation in local code.
Another issue with this answer is that all empty fields will be lost. This may or may not be a problem, depending on the application.
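To see the empty-field loss concretely:
string='a::b'; array=(${string//:/ }); declare -p array;
## declare -a array=([0]="a" [1]="b") ## the empty middle field is gone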
Note: If you're going to use this solution, it's better to use the ${string//:/ } "pattern substitution" form of parameter expansion, rather than going to the trouble of invoking a command substitution (which forks the shell), starting up a pipeline, and running an external executable (tr or sed), since parameter expansion is purely a shell-internal operation. (Also, for the tr and sed solutions, the input variable should be double-quoted inside the command substitution; otherwise word splitting would take effect in the echo command and potentially mess with the field values. Also, the $(...) form of command substitution is preferable to the old `...` form since it simplifies nesting of command substitutions and allows for better syntax highlighting by text editors.)
Wrong answer #3
str="a, b, c, d" # assuming there is a space after ',' as in Q
arr=(${str//,/}) # delete all occurrences of ','
This answer is almost the same as #2. The difference is that the answerer has made the assumption that the fields are delimited by two characters, one of which being represented in the default $IFS, and the other not. He has solved this rather specific case by removing the non-IFS-represented character using a pattern substitution expansion and then using word splitting to split the fields on the surviving IFS-represented delimiter character.
This is not a very generic solution. Furthermore, it can be argued that the comma is really the "primary" delimiter character here, and that stripping it and then depending on the space character for field splitting is simply wrong. Once again, consider my counterexample: 'Los Angeles, United States, North America'.
Also, again, filename expansion could corrupt the expanded words, but this can be prevented by temporarily disabling globbing for the assignment with set -f and then set +f.
Also, again, all empty fields will be lost, which may or may not be a problem depending on the application.
Wrong answer #4
string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"
This is similar to #2 and #3 in that it uses word splitting to get the job done, only now the code explicitly sets $IFS to contain only the single-character field delimiter present in the input string. It should be repeated that this cannot work for multicharacter field delimiters such as the OP's comma-space delimiter. But for a single-character delimiter like the LF used in this example, it actually comes close to being perfect. The fields cannot be unintentionally split in the middle as we saw with previous wrong answers, and there is only one level of splitting, as required.
One problem is that filename expansion will corrupt affected words as described earlier, although once again this can be solved by wrapping the critical statement in set -f and set +f.
Another potential problem is that, since LF qualifies as an "IFS whitespace character" as defined earlier, all empty fields will be lost, just as in #2 and #3. This would of course not be a problem if the delimiter happens to be a non-"IFS whitespace character", and depending on the application it may not matter anyway, but it does vitiate the generality of the solution.
So, to sum up, assuming you have a one-character delimiter, and it is either a non-"IFS whitespace character" or you don't care about empty fields, and you wrap the critical statement in set -f and set +f, then this solution works, but otherwise not.
(Also, for information's sake, assigning a LF to a variable in bash can be done more easily with the $'...' syntax, e.g. IFS=$'\n';.)
Wrong answer #5
countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"
Similar idea:
IFS=', ' eval 'array=($string)'
This solution is effectively a cross between #1 (in that it sets $IFS to comma-space) and #2-4 (in that it uses word splitting to split the string into fields). Because of this, it suffers from most of the problems that afflict all of the above wrong answers, sort of like the worst of all worlds.
Also, regarding the second variant, it may seem like the eval call is completely unnecessary, since its argument is a single-quoted string literal, and therefore is statically known. But there's actually a very non-obvious benefit to using eval in this way. Normally, when you run a simple command which consists of a variable assignment only, meaning without an actual command word following it, the assignment takes effect in the shell environment:
IFS=', '; ## changes $IFS in the shell environment
This is true even if the simple command involves multiple variable assignments; again, as long as there's no command word, all variable assignments affect the shell environment:
IFS=', ' array=($countries); ## changes both $IFS and $array in the shell environment
But, if the variable assignment is attached to a command name (I like to call this a "prefix assignment") then it does not affect the shell environment, and instead only affects the environment of the executed command, regardless whether it is a builtin or external:
IFS=', ' :; ## : is a builtin command, the $IFS assignment does not outlive it
IFS=', ' env; ## env is an external command, the $IFS assignment does not outlive it
Relevant quote from the bash manual:
If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.
It is possible to exploit this feature of variable assignment to change $IFS only temporarily, which allows us to avoid the whole save-and-restore gambit like that which is being done with the $OIFS variable in the first variant. But the challenge we face here is that the command we need to run is itself a mere variable assignment, and hence it would not involve a command word to make the $IFS assignment temporary. You might think to yourself, well why not just add a no-op command word to the statement like the : builtin to make the $IFS assignment temporary? This does not work because it would then make the $array assignment temporary as well:
IFS=', ' array=($countries) :; ## fails; new $array value never escapes the : command
So, we're effectively at an impasse, a bit of a catch-22. But, when eval runs its code, it runs it in the shell environment, as if it was normal, static source code, and therefore we can run the $array assignment inside the eval argument to have it take effect in the shell environment, while the $IFS prefix assignment that is prefixed to the eval command will not outlive the eval command. This is exactly the trick that is being used in the second variant of this solution:
IFS=', ' eval 'array=($string)'; ## $IFS does not outlive the eval command, but $array does
So, as you can see, it's actually quite a clever trick, and accomplishes exactly what is required (at least with respect to assignment effectation) in a rather non-obvious way. I'm actually not against this trick in general, despite the involvement of eval; just be careful to single-quote the argument string to guard against security threats.
But again, because of the "worst of all worlds" agglomeration of problems, this is still a wrong answer to the OP's requirement.
Wrong answer #6
IFS=', '; array=(Paris, France, Europe)
IFS=' ';declare -a array=(Paris France Europe)
Um... what? The OP has a string variable that needs to be parsed into an array. This "answer" starts with the verbatim contents of the input string pasted into an array literal. I guess that's one way to do it.
It looks like the answerer may have assumed that the $IFS variable affects all bash parsing in all contexts, which is not true. From the bash manual:
IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline>.
So the $IFS special variable is actually only used in two contexts: (1) word splitting that is performed after expansion (meaning not when parsing bash source code) and (2) for splitting input lines into words by the read builtin.
Let me try to make this clearer. I think it might be good to draw a distinction between parsing and execution. Bash must first parse the source code, which obviously is a parsing event, and then later it executes the code, which is when expansion comes into the picture. Expansion is really an execution event. Furthermore, I take issue with the description of the $IFS variable that I just quoted above; rather than saying that word splitting is performed after expansion, I would say that word splitting is performed during expansion, or, perhaps even more precisely, word splitting is part of the expansion process. The phrase "word splitting" refers only to this step of expansion; it should never be used to refer to the parsing of bash source code, although unfortunately the docs do seem to throw around the words "split" and "words" a lot. Here's a relevant excerpt from the linux.die.net version of the bash manual:
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
You could argue the GNU version of the manual does slightly better, since it opts for the word "tokens" instead of "words" in the first sentence of the Expansion section:
Expansion is performed on the command line after it has been split into tokens.
The important point is, $IFS does not change the way bash parses source code. Parsing of bash source code is actually a very complex process that involves recognition of the various elements of shell grammar, such as command sequences, command lists, pipelines, parameter expansions, arithmetic substitutions, and command substitutions. For the most part, the bash parsing process cannot be altered by user-level actions like variable assignments (actually, there are some minor exceptions to this rule; for example, see the various compatxx shell settings, which can change certain aspects of parsing behavior on-the-fly). The upstream "words"/"tokens" that result from this complex parsing process are then expanded according to the general process of "expansion" as broken down in the above documentation excerpts, where word splitting of the expanded (expanding?) text into downstream words is simply one step of that process. Word splitting only touches text that has been spit out of a preceding expansion step; it does not affect literal text that was parsed right off the source bytestream.
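Here's a small demo of the distinction (just a sketch of the two contexts):
IFS=,;
array=(a,b,c); declare -p array;
## declare -a array=([0]="a,b,c") ## literal source text: $IFS plays no role
var='a,b,c'; array=($var); declare -p array;
## declare -a array=([0]="a" [1]="b" [2]="c") ## word splitting of an expansion: $IFS applies
unset IFS;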
Wrong answer #7
string='first line
second line
third line'
while read -r line; do lines+=("$line"); done <<<"$string"
This is one of the best solutions. Notice that we're back to using read. Didn't I say earlier that read is inappropriate because it performs two levels of splitting, when we only need one? The trick here is that you can call read in such a way that it effectively only does one level of splitting, specifically by splitting off only one field per invocation, which necessitates the cost of having to call it repeatedly in a loop. It's a bit of a sleight of hand, but it works.
But there are problems. First: When you provide at least one NAME argument to read, it automatically ignores leading and trailing whitespace in each field that is split off from the input string. This occurs whether $IFS is set to its default value or not, as described earlier in this post. Now, the OP may not care about this for his specific use-case, and in fact, it may be a desirable feature of the parsing behavior. But not everyone who wants to parse a string into fields will want this. There is a solution, however: A somewhat non-obvious usage of read is to pass zero NAME arguments. In this case, read will store the entire input line that it gets from the input stream in a variable named $REPLY, and, as a bonus, it does not strip leading and trailing whitespace from the value. This is a very robust usage of read which I've exploited frequently in my shell programming career. Here's a demonstration of the difference in behavior:
string=$' a b \n c d \n e f '; ## input string
a=(); while read -r line; do a+=("$line"); done <<<"$string"; declare -p a;
## declare -a a=([0]="a b" [1]="c d" [2]="e f") ## read trimmed surrounding whitespace
a=(); while read -r; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]=" a b " [1]=" c d " [2]=" e f ") ## no trimming
The second issue with this solution is that it does not actually address the case of a custom field separator, such as the OP's comma-space. As before, multicharacter separators are not supported, which is an unfortunate limitation of this solution. We could try to at least split on comma by specifying the separator to the -d option, but look what happens:
string='Paris, France, Europe';
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France")
Predictably, the unaccounted surrounding whitespace got pulled into the field values, and hence this would have to be corrected subsequently through trimming operations (this could also be done directly in the while-loop). But there's another obvious error: Europe is missing! What happened to it? The answer is that read returns a failing return code if it hits end-of-file (in this case we can call it end-of-string) without encountering a final field terminator on the final field. This causes the while-loop to break prematurely and we lose the final field.
Technically this same error afflicted the previous examples as well; the difference there is that the field separator was taken to be LF, which is the default when you don't specify the -d option, and the <<< ("here-string") mechanism automatically appends a LF to the string just before it feeds it as input to the command. Hence, in those cases, we sort of accidentally solved the problem of a dropped final field by unwittingly appending an additional dummy terminator to the input. Let's call this solution the "dummy-terminator" solution. We can apply the dummy-terminator solution manually for any custom delimiter by concatenating it against the input string ourselves when instantiating it in the here-string:
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; declare -p a;
declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")
There, problem solved. Another solution is to only break the while-loop if both (1) read returned failure and (2) $REPLY is empty, meaning read was not able to read any characters prior to hitting end-of-file. Demo:
a=(); while read -rd,|| [[ -n "$REPLY" ]]; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')
This approach also reveals the secretive LF that automatically gets appended to the here-string by the <<< redirection operator. It could of course be stripped off separately through an explicit trimming operation as described a moment ago, but obviously the manual dummy-terminator approach solves it directly, so we could just go with that. The manual dummy-terminator solution is actually quite convenient in that it solves both of these two problems (the dropped-final-field problem and the appended-LF problem) in one go.
So, overall, this is quite a powerful solution. Its only remaining weakness is a lack of support for multicharacter delimiters, which I will address later.
Wrong answer #8
string='first line
second line
third line'
readarray -t lines <<<"$string"
(This is actually from the same post as #7; the answerer provided two solutions in the same post.)
The readarray builtin, which is a synonym for mapfile, is ideal. It's a builtin command which parses a bytestream into an array variable in one shot; no messing with loops, conditionals, substitutions, or anything else. And it doesn't surreptitiously strip any whitespace from the input string. And (if -O is not given) it conveniently clears the target array before assigning to it. But it's still not perfect, hence my criticism of it as a "wrong answer".
First, just to get this out of the way, note that, just like the behavior of read when doing field-parsing, readarray drops the trailing field if it is empty. Again, this is probably not a concern for the OP, but it could be for some use-cases. I'll come back to this in a moment.
Second, as before, it does not support multicharacter delimiters. I'll give a fix for this in a moment as well.
Third, the solution as written does not parse the OP's input string, and in fact, it cannot be used as-is to parse it. I'll expand on this momentarily as well.
For the above reasons, I still consider this to be a "wrong answer" to the OP's question. Below I'll give what I consider to be the right answer.
Right answer
Here's a naïve attempt to make #8 work by just specifying the -d option:
string='Paris, France, Europe';
readarray -td, a <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')
We see the result is identical to the result we got from the double-conditional approach of the looping read solution discussed in #7. We can almost solve this with the manual dummy-terminator trick:
readarray -td, a <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe" [3]=$'\n')
The problem here is that readarray preserved the trailing field, since the <<< redirection operator appended the LF to the input string, and therefore the trailing field was not empty (otherwise it would've been dropped). We can take care of this by explicitly unsetting the final array element after-the-fact:
readarray -td, a <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")
The only two problems that remain, which are actually related, are (1) the extraneous whitespace that needs to be trimmed, and (2) the lack of support for multicharacter delimiters.
The whitespace could of course be trimmed afterward (for example, see How to trim whitespace from a Bash variable?). But if we can hack a multicharacter delimiter, then that would solve both problems in one shot.
Unfortunately, there's no direct way to get a multicharacter delimiter to work. The best solution I've thought of is to preprocess the input string to replace the multicharacter delimiter with a single-character delimiter that will be guaranteed not to collide with the contents of the input string. The only character that has this guarantee is the NUL byte. This is because, in bash (though not in zsh, incidentally), variables cannot contain the NUL byte. This preprocessing step can be done inline in a process substitution. Here's how to do it using awk:
readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; }' <<<"$string, "); unset 'a[-1]';
declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")
There, finally! This solution will not erroneously split fields in the middle, will not cut out prematurely, will not drop empty fields, will not corrupt itself on filename expansions, will not automatically strip leading and trailing whitespace, will not leave a stowaway LF on the end, does not require loops, and does not settle for a single-character delimiter.
Trimming solution
Lastly, I wanted to demonstrate my own fairly intricate trimming solution using the obscure -C callback option of readarray. Unfortunately, I've run out of room against Stack Overflow's draconian 30,000 character post limit, so I won't be able to explain it. I'll leave that as an exercise for the reader.
function mfcb { local val="$4"; "$1"; eval "$2[$3]=\$val;"; };
function val_ltrim { if [[ "$val" =~ ^[[:space:]]+ ]]; then val="${val:${#BASH_REMATCH[0]}}"; fi; };
function val_rtrim { if [[ "$val" =~ [[:space:]]+$ ]]; then val="${val:0:${#val}-${#BASH_REMATCH[0]}}"; fi; };
function val_trim { val_ltrim; val_rtrim; };
readarray -c1 -C 'mfcb val_trim a' -td, <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")
Here is a way without setting IFS:
string="1:2:3:4:5"
set -f # avoid globbing (expansion of *).
array=(${string//:/ })
for i in "${!array[#]}"
do
echo "$i=>${array[i]}"
done
The idea is using string replacement:
${string//substring/replacement}
to replace all matches of $substring with white space and then using the substituted string to initialize an array:
(element1 element2 ... elementN)
Note: this answer makes use of the split+glob operator. Thus, to prevent expansion of some characters (such as *) it is a good idea to pause globbing for this script.
t="one,two,three"
a=($(echo "$t" | tr ',' '\n'))
echo "${a[2]}"
Prints three
Sometimes it happened to me that the method described in the accepted answer didn't work, especially if the separator is a carriage return.
In those cases I solved it this way:
string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"
for line in "${lines[@]}"
do
echo "--> $line"
done
The accepted answer works for values in one line. If the variable has several lines:
string='first line
second line
third line'
We need a very different command to get all lines:
while read -r line; do lines+=("$line"); done <<<"$string"
Or the much simpler bash readarray:
readarray -t lines <<<"$string"
Printing all lines is very easy taking advantage of a printf feature:
printf ">[%s]\n" "${lines[#]}"
>[first line]
>[ second line]
>[ third line]
If you use macOS and can't use readarray, you can simply do this:
MY_STRING="string1 string2 string3"
array=($MY_STRING)
To iterate over the elements:
for element in "${array[@]}"
do
echo $element
done
This works for me on OSX:
string="1 2 3 4 5"
declare -a array=($string)
If your string has a different delimiter, just replace it with spaces first:
string="1,2,3,4,5"
delimiter=","
declare -a array=($(echo $string | tr "$delimiter" " "))
Simple :-)
This is similar to the approach by Jmoney38, but using sed:
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
echo ${array[0]}
Prints 1
The key to splitting your string into an array is the multi-character delimiter of ", ". Any solution using IFS for multi-character delimiters is inherently wrong, since IFS is a set of those characters, not a string.
If you assign IFS=", " then the string will break on EITHER "," OR " " or any combination of them, which is not an accurate representation of the two-character delimiter of ", ".
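A quick demonstration of that mis-split on a field that contains a space (expected result shown as a comment):

string="New York, United Kingdom"
IFS=', ' read -r -a parts <<< "$string"
declare -p parts
# declare -a parts=([0]="New" [1]="York" [2]="United" [3]="Kingdom")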
You can use awk or sed to split the string, with process substitution:
#!/bin/bash
str="Paris, France, Europe"
array=()
while read -r -d $'\0' each; do # use a NUL terminated field separator
array+=("$each")
done < <(printf "%s" "$str" | awk '{ gsub(/,[ ]+|$/,"\0"); print }')
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output
It is more efficient to use a regex directly in Bash:
#!/bin/bash
str="Paris, France, Europe"
array=()
while [[ $str =~ ([^,]+)(,[ ]+|$) ]]; do
array+=("${BASH_REMATCH[1]}") # capture the field
i=${#BASH_REMATCH} # length of field + delimiter
str=${str:i} # advance the string by that length
done # the loop deletes $str, so make a copy if needed
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output...
With the second form, there is no subshell, so it will be inherently faster.
Edit by bgoldst: Here are some benchmarks comparing my readarray solution to dawg's regex solution, and I also included the read solution for the heck of it (note: I slightly modified the regex solution for greater harmony with my solution) (also see my comments below the post):
## competitors
function c_readarray { readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); unset 'a[-1]'; };
function c_read { a=(); local REPLY=''; while read -r -d ''; do a+=("$REPLY"); done < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); };
function c_regex { a=(); local s="$1, "; while [[ $s =~ ([^,]+),\ ]]; do a+=("${BASH_REMATCH[1]}"); s=${s:${#BASH_REMATCH}}; done; };
## helper functions
function rep {
local -i i=-1;
for ((i = 0; i<$1; ++i)); do
printf %s "$2";
done;
}; ## end rep()
function testAll {
local funcs=();
local args=();
local func='';
local -i rc=-1;
while [[ "$1" != ':' ]]; do
func="$1";
if [[ ! "$func" =~ ^[_a-zA-Z][_a-zA-Z0-9]*$ ]]; then
echo "bad function name: $func" >&2;
return 2;
fi;
funcs+=("$func");
shift;
done;
shift;
args=("$#");
for func in "${funcs[#]}"; do
echo -n "$func ";
{ time $func "${args[#]}" >/dev/null 2>&1; } 2>&1| tr '\n' '/';
rc=${PIPESTATUS[0]}; if [[ $rc -ne 0 ]]; then echo "[$rc]"; else echo; fi;
done| column -ts/;
}; ## end testAll()
function makeStringToSplit {
local -i n=$1; ## number of fields
if [[ $n -lt 0 ]]; then echo "bad field count: $n" >&2; return 2; fi;
if [[ $n -eq 0 ]]; then
echo;
elif [[ $n -eq 1 ]]; then
echo 'first field';
elif [[ "$n" -eq 2 ]]; then
echo 'first field, last field';
else
echo "first field, $(rep $[$1-2] 'mid field, ')last field";
fi;
}; ## end makeStringToSplit()
function testAll_splitIntoArray {
local -i n=$1; ## number of fields in input string
local s='';
echo "===== $n field$(if [[ $n -ne 1 ]]; then echo 's'; fi;) =====";
s="$(makeStringToSplit "$n")";
testAll c_readarray c_read c_regex : "$s";
}; ## end testAll_splitIntoArray()
## results
testAll_splitIntoArray 1;
## ===== 1 field =====
## c_readarray real 0m0.067s user 0m0.000s sys 0m0.000s
## c_read real 0m0.064s user 0m0.000s sys 0m0.000s
## c_regex real 0m0.000s user 0m0.000s sys 0m0.000s
##
testAll_splitIntoArray 10;
## ===== 10 fields =====
## c_readarray real 0m0.067s user 0m0.000s sys 0m0.000s
## c_read real 0m0.064s user 0m0.000s sys 0m0.000s
## c_regex real 0m0.001s user 0m0.000s sys 0m0.000s
##
testAll_splitIntoArray 100;
## ===== 100 fields =====
## c_readarray real 0m0.069s user 0m0.000s sys 0m0.062s
## c_read real 0m0.065s user 0m0.000s sys 0m0.046s
## c_regex real 0m0.005s user 0m0.000s sys 0m0.000s
##
testAll_splitIntoArray 1000;
## ===== 1000 fields =====
## c_readarray real 0m0.084s user 0m0.031s sys 0m0.077s
## c_read real 0m0.092s user 0m0.031s sys 0m0.046s
## c_regex real 0m0.125s user 0m0.125s sys 0m0.000s
##
testAll_splitIntoArray 10000;
## ===== 10000 fields =====
## c_readarray real 0m0.209s user 0m0.093s sys 0m0.108s
## c_read real 0m0.333s user 0m0.234s sys 0m0.109s
## c_regex real 0m9.095s user 0m9.078s sys 0m0.000s
##
testAll_splitIntoArray 100000;
## ===== 100000 fields =====
## c_readarray real 0m1.460s user 0m0.326s sys 0m1.124s
## c_read real 0m2.780s user 0m1.686s sys 0m1.092s
## c_regex real 17m38.208s user 15m16.359s sys 2m19.375s
##
Pure bash multi-character delimiter solution.
As others have pointed out in this thread, the OP's question gave an example of a comma delimited string to be parsed into an array, but did not indicate if he/she was only interested in comma delimiters, single character delimiters, or multi-character delimiters.
Since Google tends to rank this answer at or near the top of search results, I wanted to provide readers with a strong answer to the question of multiple character delimiters, since that is also mentioned in at least one response.
If you're in search of a solution to a multi-character delimiter problem, I suggest reviewing Mallikarjun M's post, in particular the response from gniourf_gniourf, who provides this elegant pure Bash solution using parameter expansion:
#!/bin/bash
str="LearnABCtoABCSplitABCaABCString"
delimiter=ABC
s=$str$delimiter
array=();
while [[ $s ]]; do
array+=( "${s%%"$delimiter"*}" );
s=${s#*"$delimiter"};
done;
declare -p array
Link to cited comment/referenced post
Link to cited question: Howto split a string on a multi-character delimiter in bash?
Update 3 Aug 2022
xebeche raised a good point in comments below. After reviewing their suggested edits, I've revised the script provided by gniourf_gniourf and added remarks for ease of understanding what the script is doing. I also changed the double brackets [[ ]] to single brackets, for greater compatibility, since many shell variants do not support double-bracket notation. In this case, for Bash, the logic works inside single or double brackets.
#!/bin/bash
str="LearnABCtoABCSplitABCABCaABCStringABC"
delimiter="ABC"
array=()
while [ "$str" ]; do
# parse next sub-string, left of next delimiter
substring="${str%%"$delimiter"*}"
# when substring = delimiter, truncate leading delimiter
# (i.e. pattern is "$delimiter$delimiter")
[ -z "$substring" ] && str="${str#"$delimiter"}" && continue
# create next array element with parsed substring
array+=( "$substring" )
# remaining string to the right of delimiter becomes next string to be evaluated
str="${str:${#substring}}"
# prevent infinite loop when last substring = delimiter
[ "$str" == "$delimiter" ] && break
done
declare -p array
Without the comments:
#!/bin/bash
str="LearnABCtoABCSplitABCABCaABCStringABC"
delimiter="ABC"
array=()
while [ "$str" ]; do
substring="${str%%"$delimiter"*}"
[ -z "$substring" ] && str="${str#"$delimiter"}" && continue
array+=( "$substring" )
str="${str:${#substring}}"
[ "$str" == "$delimiter" ] && break
done
declare -p array
I was curious about the relative performance of the "Right answer"
in the popular answer by @bgoldst, with its apparent decrying of loops,
so I have done a simple benchmark of it against three pure bash implementations.
In summary, I suggest:
for string length < 4k or so, pure bash is faster than gawk
for delimiter length < 10 and string length < 256k, pure bash is comparable to gawk
for delimiter length >> 10 and string length < 64k or so, pure bash is "acceptable";
and gawk is less than 5x faster
for string length < 512k or so, gawk is "acceptable"
I arbitrarily define "acceptable" as "takes < 0.5s to split the string".
I am taking the problem to be to take a bash string and split it into a bash array, using an arbitrary-length delimiter string (not regex).
# in: $1=delim, $2=string
# out: sets array a
My pure bash implementations are:
# naive approach - slow
split_byStr_bash_naive(){
a=()
local prev=""
local cdr="$2"
[[ -z "${cdr}" ]] && a+=("")
while [[ "$cdr" != "$prev" ]]; do
prev="$cdr"
a+=( "${cdr%%"$1"*}" )
cdr="${cdr#*"$1"}"
done
# echo $( declare -p a | md5sum; declare -p a )
}
# use lengths wherever possible - faster
split_byStr_bash_faster(){
a=()
local car=""
local cdr="$2"
while
car="${cdr%%"$1"*}"
a+=("$car")
cdr="${cdr:${#car}}"
(( ${#cdr} ))
do
cdr="${cdr:${#1}}"
done
# echo $( declare -p a | md5sum; declare -p a )
}
# use pattern substitution and readarray - fastest
split_byStr_bash_sub(){
a=()
local delim="$1" string="$2"
delim="${delim//=/=-}"
delim="${delim//$'\n'/=n}"
string="${string//=/=-}"
string="${string//$'\n'/=n}"
readarray -td $'\n' a <<<"${string//"$delim"/$'\n'}"
local len=${#a[@]} i s
for (( i=0; i<len; i++ )); do
s="${a[$i]//=n/$'\n'}"
a[$i]="${s//=-/=}"
done
# echo $( declare -p a | md5sum; declare -p a )
}
The initial -z test in the naive version handles the case of a zero-length
string being passed. Without the test, the output array is empty;
with it, the array has a single zero-length element.
Replacing readarray with while read gives < 10% slowdown.
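For reference, that while read variant replaces the readarray call with something like this (a sketch using the same escaped $delim and $string as in split_byStr_bash_sub):

a=()
while IFS= read -r line; do
    a+=("$line")
done <<<"${string//"$delim"/$'\n'}"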
This is the gawk implementation I used:
split_byRE_gawk(){
readarray -td '' a < <(awk '{gsub(/'"$1"'/,"\0")}1' <<<"$2$1")
unset 'a[-1]'
# echo $( declare -p a | md5sum; declare -p a )
}
Obviously, in the general case, the delim argument will need to be sanitised,
as gawk expects a regex, and gawk-special characters could cause problems.
Also, as-is, the implementation won't correctly handle newlines in the delimiter.
Since gawk is being used, a generalised version that handles more arbitrary
delimiters could be:
split_byREorStr_gawk(){
local delim=$1
local string=$2
local useRegex=${3:+1} # if set, delimiter is regex
readarray -td '' a < <(
export delim
gawk -v re="$useRegex" '
BEGIN {
RS = FS = "\0"
ORS = ""
d = ENVIRON["delim"]
# cf. https://stackoverflow.com/a/37039138
if (!re) gsub(/[\\.^$(){}\[\]|*+?]/,"\\\\&",d)
}
gsub(d"|\n$","\0")
' <<<"$string"
)
# echo $( declare -p a | md5sum; declare -p a )
}
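A hypothetical invocation (assuming the function above is defined in the current shell; the expected result is shown as a comment):

split_byREorStr_gawk ', ' 'Paris, France, Europe'
declare -p a
# declare -a a=([0]="Paris" [1]="France" [2]="Europe")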
or the same idea in Perl:
split_byREorStr_perl(){
local delim=$1
local string=$2
local regex=$3 # if set, delimiter is regex
readarray -td '' a < <(
export delim regex
perl -0777pe '
$d = $ENV{delim};
$d = "\Q$d\E" if ! $ENV{regex};
s/$d|\n$/\0/g;
' <<<"$string"
)
# echo $( declare -p a | md5sum; declare -p a )
}
The implementations produce identical output, tested by comparing md5sum separately.
Note that if input had been ambiguous ("logically incorrect" as @bgoldst puts it),
behaviour would diverge slightly. For example, with delimiter -- and string a- or a---:
@bgoldst's code returns: declare -a a=([0]="a") or declare -a a=([0]="a" [1]="")
mine return: declare -a a=([0]="a-") or declare -a a=([0]="a" [1]="-")
Arguments were derived with simple Perl scripts from:
delim="-=-="
base="ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
Here are the tables of timing results (in seconds) for 3 different types
of string and delimiter argument.
#s - length of string argument
#d - length of delim argument
= - performance break-even point
! - "acceptable" performance limit (bash) is somewhere around here
!! - "acceptable" performance limit (gawk) is somewhere around here
- - function took too long
<!> - gawk command failed to run
Type 1
d=$(perl -e "print( '$delim' x (7*2**$n) )")
s=$(perl -e "print( '$delim' x (7*2**$n) . '$base' x (7*2**$n) )")
n     #s        #d      gawk     b_sub    b_faster  b_naive
0     252       28      0.002    0.000    0.000     0.000
1     504       56      0.005    0.000    0.000     0.001
2     1008      112     0.005    0.001    0.000     0.003
3     2016      224     0.006    0.001    0.000     0.009
4     4032      448     0.007    0.002    0.001     0.048
=
5     8064      896     0.014    0.008    0.005     0.377
6     16128     1792    0.018    0.029    0.017     (2.214)
7     32256     3584    0.033    0.057    0.039     (15.16)
!
8     64512     7168    0.063    0.214    0.128     -
9     129024    14336   0.111    (0.826)  (0.602)   -
10    258048    28672   0.214    (3.383)  (2.652)   -
!!
11    516096    57344   0.430    (13.46)  (11.00)   -
12    1032192   114688  (0.834)  (58.38)  -         -
13    2064384   229376  <!>      (228.9)  -         -
Type 2
d=$(perl -e "print( '$delim' x ($n) )")
s=$(perl -e "print( ('$delim' x ($n) . '$base' x $n ) x (2**($n-1)) )")
n     #s        #d    gawk     b_sub    b_faster  b_naive
0     0         0     0.003    0.000    0.000     0.000
1     36        4     0.003    0.000    0.000     0.000
2     144       8     0.005    0.000    0.000     0.000
3     432       12    0.005    0.000    0.000     0.000
4     1152      16    0.005    0.001    0.001     0.002
5     2880      20    0.005    0.001    0.002     0.003
6     6912      24    0.006    0.003    0.009     0.014
=
7     16128     28    0.012    0.012    0.037     0.044
8     36864     32    0.023    0.044    0.167     0.187
!
9     82944     36    0.049    0.192    (0.753)   (0.840)
10    184320    40    0.097    (0.925)  (3.682)   (4.016)
11    405504    44    0.204    (4.709)  (18.00)   (19.58)
!!
12    884736    48    0.444    (22.17)  -         -
13    1916928   52    (1.019)  (102.4)  -         -
Type 3
d=$(perl -e "print( '$delim' x (2**($n-1)) )")
s=$(perl -e "print( ('$delim' x (2**($n-1)) . '$base' x (2**($n-1)) ) x ($n) )")
n     #s        #d      gawk     b_sub    b_faster  b_naive
0     0         0       0.000    0.000    0.000     0.000
1     36        4       0.004    0.000    0.000     0.000
2     144       8       0.003    0.000    0.000     0.000
3     432       16      0.003    0.000    0.000     0.000
4     1152      32      0.005    0.001    0.001     0.002
5     2880      64      0.005    0.002    0.001     0.003
6     6912      128     0.006    0.003    0.003     0.014
=
7     16128     256     0.012    0.011    0.010     0.077
8     36864     512     0.023    0.046    0.046     (0.513)
!
9     82944     1024    0.049    0.195    0.197     (3.850)
10    184320    2048    0.103    (0.951)  (1.061)   (31.84)
11    405504    4096    0.222    (4.796)  -         -
!!
12    884736    8192    0.473    (22.88)  -         -
13    1916928   16384   (1.126)  (105.4)  -         -
Summary of delimiters length 1..10
As short delimiters are probably more likely than long,
summarised below are the results of varying delimiter length
between 1 and 10 (results for 2..9 mostly elided as very similar).
s1=$(perl -e "print( '$d' . '$base' x (7*2**$n) )")
s2=$(perl -e "print( ('$d' . '$base' x $n ) x (2**($n-1)) )")
s3=$(perl -e "print( ('$d' . '$base' x (2**($n-1)) ) x ($n) )")
bash_sub < gawk

string  n     #s       #d    gawk     b_sub    b_faster  b_naive
s1      10    229377   1     0.131    0.089    1.709     -
s1      10    229386   10    0.142    0.095    1.907     -
s2      8     32896    1     0.022    0.007    0.148     0.168
s2      8     34048    10    0.021    0.021    0.163     0.179
s3      12    786444   1     0.436    0.468    -         -
s3      12    786456   2     0.434    0.317    -         -
s3      12    786552   10    0.438    0.333    -         -
bash_sub < 0.5s

string  n     #s       #d    gawk     b_sub    b_faster  b_naive
s1      11    458753   1     0.256    0.332    (7.089)   -
s1      11    458762   10    0.269    0.387    (8.003)   -
s2      11    361472   1     0.205    0.283    (14.54)   -
s2      11    363520   3     0.207    0.462    (16.66)   -
s3      12    786444   1     0.436    0.468    -         -
s3      12    786456   2     0.434    0.317    -         -
s3      12    786552   10    0.438    0.333    -         -
gawk < 0.5s

string  n     #s       #d    gawk     b_sub    b_faster  b_naive
s1      11    458753   1     0.256    0.332    (7.089)   -
s1      11    458762   10    0.269    0.387    (8.003)   -
s2      12    788480   1     0.440    (1.252)  -         -
s2      12    806912   10    0.449    (4.968)  -         -
s3      12    786444   1     0.436    0.468    -         -
s3      12    786456   2     0.434    0.317    -         -
s3      12    786552   10    0.438    0.333    -         -
(I'm not entirely sure why bash_sub with s>160k and d=1 was consistently slower than d>1 for s3.)
All tests carried out with bash 5.0.17 on an Intel i7-7500U running xubuntu 20.04.
Try this
IFS=', '; array=(Paris, France, Europe)
for item in ${array[@]}; do echo $item; done
It's simple. If you want, you can also add a declare (and also remove the commas):
IFS=' ';declare -a array=(Paris France Europe)
The IFS assignment is added to undo the change made above, but it works without it in a fresh bash instance.
#!/bin/bash
string="a | b c"
pattern=' | '
# replaces pattern with newlines
splitted="$(sed "s/$pattern/\n/g" <<< "$string")"
# Reads lines and put them in array
readarray -t array2 <<< "$splitted"
# Prints number of elements
echo ${#array2[@]}
# Prints all elements
for a in "${array2[@]}"; do
echo "> '$a'"
done
This solution works for larger delimiters (more than one char).
Doesn't work if you have a newline already in the original string
This works for the given data:
$ aaa='Paris, France, Europe'
$ mapfile -td ',' aaaa < <(echo -n "${aaa//, /,}")
$ declare -p aaaa
Result:
declare -a aaaa=([0]="Paris" [1]="France" [2]="Europe")
And it will also work for extended data with spaces, such as "New York":
$ aaa="New York, Paris, New Jersey, Hampshire"
$ mapfile -td ',' aaaa < <(echo -n "${aaa//, /,}")
$ declare -p aaaa
Result:
declare -a aaaa=([0]="New York" [1]="Paris" [2]="New Jersey" [3]="Hampshire")
Another way to do it without modifying IFS:
read -r -a myarray <<< "${string//, /$IFS}"
Rather than changing IFS to match our desired delimiter, we can replace all occurrences of our desired delimiter ", " with contents of $IFS via "${string//, /$IFS}".
Maybe this will be slow for very large strings though?
This is based on Dennis Williamson's answer.
I came across this post when looking to parse an input like:
word1,word2,...
None of the above helped me. I solved it by using awk. If it helps someone:
STRING="value1,value2,value3"
array=`echo $STRING | awk -F ',' '{ s = $1; for (i = 2; i <= NF; i++) s = s "\n"$i; print s; }'`
for word in ${array}
do
echo "This is the word $word"
done
UPDATE: Don't do this, due to problems with eval.
With slightly less ceremony:
IFS=', ' eval 'array=($string)'
e.g.
string="foo, bar,baz"
IFS=', ' eval 'array=($string)'
echo ${array[1]} # -> bar
Do not change IFS!
Here's a simple bash one-liner:
read -a my_array <<< $(echo ${INPUT_STRING} | tr -d ' ' | tr ',' ' ')
Here's my hack!
Splitting strings by strings is a pretty boring thing to do using bash. What happens is that we have limited approaches that only work in a few cases (split by ";", "/", "." and so on) or we have a variety of side effects in the outputs.
The approach below has required a number of maneuvers, but I believe it will work for most of our needs!
#!/bin/bash
# --------------------------------------
# SPLIT FUNCTION
# ----------------
F_SPLIT_R=()
f_split() {
: 'It does a "split" into a given string and returns an array.
Args:
TARGET_P (str): Target string to "split".
DELIMITER_P (Optional[str]): Delimiter used to "split". If not
informed the split will be done by spaces.
Returns:
F_SPLIT_R (array): Array with the provided string separated by the
informed delimiter.
'
F_SPLIT_R=()
TARGET_P=$1
DELIMITER_P=$2
if [ -z "$DELIMITER_P" ] ; then
DELIMITER_P=" "
fi
REMOVE_N=1
if [ "$DELIMITER_P" == "\n" ] ; then
REMOVE_N=0
fi
# NOTE: This was the only parameter that has been a problem so far!
# By Questor
# [Ref.: https://unix.stackexchange.com/a/390732/61742]
if [ "$DELIMITER_P" == "./" ] ; then
DELIMITER_P="[.]/"
fi
if [ ${REMOVE_N} -eq 1 ] ; then
# NOTE: Due to bash limitations we have some problems getting the
# output of a split by awk inside an array and so we need to use
# "line break" (\n) to succeed. Seen this, we remove the line breaks
# momentarily afterwards we reintegrate them. The problem is that if
# there is a line break in the "string" informed, this line break will
# be lost, that is, it is erroneously removed in the output!
# By Questor
TARGET_P=$(awk 'BEGIN {RS="dn"} {gsub("\n", "3F2C417D448C46918289218B7337FCAF"); printf $0}' <<< "${TARGET_P}")
fi
# NOTE: The replace of "\n" by "3F2C417D448C46918289218B7337FCAF" results
# in more occurrences of "3F2C417D448C46918289218B7337FCAF" than the
# amount of "\n" that there was originally in the string (one more
# occurrence at the end of the string)! We can not explain the reason for
# this side effect. The line below corrects this problem! By Questor
TARGET_P=${TARGET_P%????????????????????????????????}
SPLIT_NOW=$(awk -F"$DELIMITER_P" '{for(i=1; i<=NF; i++){printf "%s\n", $i}}' <<< "${TARGET_P}")
while IFS= read -r LINE_NOW ; do
if [ ${REMOVE_N} -eq 1 ] ; then
# NOTE: We use "'" to prevent blank lines with no other characters
# in the sequence being erroneously removed! We do not know the
# reason for this side effect! By Questor
LN_NOW_WITH_N=$(awk 'BEGIN {RS="dn"} {gsub("3F2C417D448C46918289218B7337FCAF", "\n"); printf $0}' <<< "'${LINE_NOW}'")
# NOTE: We use the commands below to revert the intervention made
# immediately above! By Questor
LN_NOW_WITH_N=${LN_NOW_WITH_N%?}
LN_NOW_WITH_N=${LN_NOW_WITH_N#?}
F_SPLIT_R+=("$LN_NOW_WITH_N")
else
F_SPLIT_R+=("$LINE_NOW")
fi
done <<< "$SPLIT_NOW"
}
# --------------------------------------
# HOW TO USE
# ----------------
STRING_TO_SPLIT="
* How do I list all databases and tables using psql?
\"
sudo -u postgres /usr/pgsql-9.4/bin/psql -c \"\l\"
sudo -u postgres /usr/pgsql-9.4/bin/psql <DB_NAME> -c \"\dt\"
\"
\"
\list or \l: list all databases
\dt: list all tables in the current database
\"
[Ref.: https://dba.stackexchange.com/questions/1285/how-do-i-list-all-databases-and-tables-using-psql]
"
f_split "$STRING_TO_SPLIT" "bin/psql -c"
# --------------------------------------
# OUTPUT AND TEST
# ----------------
ARR_LENGTH=${#F_SPLIT_R[*]}
for (( i=0; i<=$(( $ARR_LENGTH -1 )); i++ )) ; do
echo " > -----------------------------------------"
echo "${F_SPLIT_R[$i]}"
echo " < -----------------------------------------"
done
if [ "$STRING_TO_SPLIT" == "${F_SPLIT_R[0]}bin/psql -c${F_SPLIT_R[1]}" ] ; then
echo " > -----------------------------------------"
echo "The strings are the same!"
echo " < -----------------------------------------"
fi
For multilined elements, why not something like
$ array=($(echo -e $'a a\nb b' | tr ' ' '§')) && array=("${array[@]//§/ }") && echo "${array[@]/%/ INTERELEMENT}"
a a INTERELEMENT b b INTERELEMENT
Since there are so many ways to solve this, let's start by defining what we want to see in our solution.
Bash provides a builtin readarray for this purpose. Let's use it.
Avoid ugly and unnecessary tricks such as changing IFS, looping, using eval, or adding an extra element then removing it.
Find a simple, readable approach that can easily be adapted to similar problems.
The readarray command is easiest to use with newlines as the delimiter. With other delimiters it may add an extra element to the array. The cleanest approach is to first adapt our input into a form that works nicely with readarray before passing it in.
The input in this example does not have a multi-character delimiter. If we apply a little common sense, it's best understood as comma separated input for which each element may need to be trimmed. My solution is to split the input by comma into multiple lines, trim each element, and pass it all to readarray.
string=' Paris,France , All of Europe '
readarray -t foo < <(tr ',' '\n' <<< "$string" |sed 's/^ *//' |sed 's/ *$//')
# Result:
declare -p foo
# declare -a foo='([0]="Paris" [1]="France" [2]="All of Europe")'
EDIT: My solution allows inconsistent spacing around comma separators, while also allowing elements to contain spaces. Few other solutions can handle these special cases.
I also avoid approaches which seem like hacks, such as creating an extra array element and then removing it. If you don't agree it's the best answer here, please leave a comment to explain.
If you'd like to try the same approach purely in Bash and with fewer subshells, it's possible. But the result is harder to read, and this optimization is probably unnecessary.
string=' Paris,France , All of Europe '
foo="${string#"${string%%[![:space:]]*}"}"
foo="${foo%"${foo##*[![:space:]]}"}"
foo="${foo//+([[:space:]]),/,}"
foo="${foo//,+([[:space:]])/,}"
readarray -t foo < <(echo "$foo")
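As a quick check, the result should match the earlier variant (expected output for the same input):

declare -p foo
# declare -a foo='([0]="Paris" [1]="France" [2]="All of Europe")'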
Another way would be:
string="Paris, France, Europe"
IFS=', ' arr=(${string})
Now your elements are stored in "arr" array.
To iterate through the elements:
for i in ${arr[@]}; do echo $i; done
Another approach can be:
str="a, b, c, d" # assuming there is a space after ',' as in Q
arr=(${str//,/}) # delete all occurrences of ','
After this 'arr' is an array with four strings.
This doesn't require dealing with IFS or read or any other special stuff, and hence is much simpler and more direct.
I'm struggling with a project. I am supposed to write a bash script which will work like the tr command. At the beginning I would like to save all command arguments into separate arrays. And in case an argument is a word, I would like to have each char in a separate array field, e.g.
tr_mine AB DC
I would like to have two arrays: a[0] = A, a[1] = B and b[0]=C b[1]=D.
I found a way, but it's not working:
IFS="" read -r -a array <<< "$a"
No sed, no awk, all bash internals.
Assuming that words are always separated with blanks (space and/or tabs),
also assuming that words are given as arguments, and writing for bash only:
#!/bin/bash
blank=$'[ \t]'
varname='A'
n=1
while IFS='' read -r -d '' -N 1 c ; do
if [[ $c =~ $blank ]]; then n=$((n+1)); continue; fi
eval ${varname}${n}'+=("'"$c"'")'
done <<<"$#"
last=$(eval echo \${#${varname}${n}[#]}) ### Find last character index.
unset "${varname}${n}[$last-1]" ### Remove last (trailing) newline.
for ((j=1;j<=$n;j++)); do
k="A$j[#]"
printf '<%s> ' "${!k}"; echo
done
That will set each array A1, A2, A3, etc. ... to the letters of each word.
At the end of the first loop, the value of $n is the count of words processed.
Printing may be a little tricky, that is why the code to access each letter is given above.
Applied to your sample text:
$ script.sh AB DC
<A> <B>
<D> <C>
The script is setting two (array) vars A1 and A2.
And each letter is one array element: A1[0] = A, A1[1] = B and A2[0]=D, A2[1]=C.
You need to set a variable ($k) to the array element to access.
For example, to echo fourth letter (0 based) of second word (1 based) you need to do (that may be changed if needed):
k="A2[3]"; echo "${!k}" ### Indirect addressing.
The script will work as this:
$ script.sh ABCD efghi
<A> <B> <C> <D>
<e> <f> <g> <h> <i>
Caveat: Characters will be split even if quoted. However, quoted arguments are the correct way to use this script to avoid the effect of shell metacharacters ( |,&,;,(,),<,>,space,tab ). Of course, spaces (even if repeated) will split words as defined by the variable $blank:
$ script.sh $'qwer;rttt fgf\ngfg'
<q> <w> <e> <r> <;> <r> <t> <t> <t>
<>
<>
<>
<f> <g> <f> <
> <g> <f> <g>
As the script will accept and correctly process embedded newlines, we need to use: unset "${varname}${n}[$last-1]" to remove the last trailing "newline". If that is not desired, quote the line.
Security Note: The eval is not much of a problem here as it is only processing one character at a time. It would be difficult to create an attack based on just one character. Anyway, the usual warning is valid: Always sanitize your input before using this script. Also, most (not quoted) metacharacters of bash will break this script.
$ script.sh qwer(rttt fgfgfg
bash: syntax error near unexpected token `('
I would strongly suggest to do this in another language if possible, it will be a lot easier.
Now, the closest I come up with is:
#!/bin/bash
sentence="AC DC"
words=`echo "$sentence" | tr " " "\n"`
# final array
declare -A result
# word count
wc=0
for i in $words; do
# letter count in the word
lc=0
for l in `echo "$i" | grep -o .`; do
result["w$wc-l$lc"]=$l
lc=$(($lc+1))
done
wc=$(($wc+1))
done
rLen=${#result[@]}
echo "Result Length $rLen"
for i in "${!result[@]}"
do
echo "$i => ${result[$i]}"
done
The above prints:
Result Length 4
w1-l1 => C
w1-l0 => D
w0-l0 => A
w0-l1 => C
Explanation:
Dynamic variables are not supported in bash (ie create variables using variables) so I am using an associative array instead (result)
Arrays in bash are single dimension. To fake a 2D array I use the indexes: w for words and l for letters. This will make further processing a pain...
Associative arrays are not ordered thus results appear in random order when printing
${!result[@]} is used instead of ${result[@]}. The first iterates keys while the second iterates values
I know this is not exactly what you ask for, but I hope it will point you to the right direction
Try this :
sentence="$#"
read -r -a words <<< "$sentence"
for word in ${words[@]}; do
inc=$(( i++ ))
read -r -a l${inc} <<< $(sed 's/./& /g' <<< $word)
done
echo ${words[1]} # print "CD"
echo ${l1[1]} # print "D"
The first read reads all words; the internal one is for letters.
The sed command adds a space after each letter to make the string splittable by read -a. You can also use this sed command to remove unwanted characters from words (e.g. commas) before splitting.
If special characters are allowed in words, you can use a simple grep instead of the sed command (as suggested in http://www.unixcl.com/2009/07/split-string-to-characters-in-bash.html) :
read -r -a l${inc} <<< $(grep -o . <<< $word)
The word array is ${words}.
The letters arrays are named l# where # is an increment added for each word read.
I have an array in Bash, for example:
array=(a c b f 3 5)
I need to sort the array. Not just displaying the content in a sorted way, but to get a new array with the sorted elements. The new sorted array can be a completely new one or the old one.
You don't really need all that much code:
IFS=$'\n' sorted=($(sort <<<"${array[*]}"))
unset IFS
Supports whitespace in elements (as long as it's not a newline), and works in Bash 3.x.
e.g.:
$ array=("a c" b f "3 5")
$ IFS=$'\n' sorted=($(sort <<<"${array[*]}")); unset IFS
$ printf "[%s]\n" "${sorted[@]}"
[3 5]
[a c]
[b]
[f]
Note: @sorontar has pointed out that care is required if elements contain wildcards such as * or ?:
The sorted=($(...)) part is using the "split and glob" operator. You should turn glob off: set -f or set -o noglob or shopt -op noglob or an element of the array like * will be expanded to a list of files.
What's happening:
The result is a culmination of six things that happen in this order:
IFS=$'\n'
"${array[*]}"
<<<
sort
sorted=($(...))
unset IFS
First, the IFS=$'\n'
This is an important part of our operation that affects the outcome of 2 and 5 in the following way:
Given:
"${array[*]}" expands to every element delimited by the first character of IFS
sorted=() creates elements by splitting on every character of IFS
IFS=$'\n' sets things up so that elements are expanded using a new line as the delimiter, and then later created in a way that each line becomes an element. (i.e. Splitting on a new line.)
Delimiting by a new line is important because that's how sort operates (sorting per line). Splitting by only a new line is not as important, but is needed to preserve elements that contain spaces or tabs.
The default value of IFS is a space, a tab, followed by a new line, and would be unfit for our operation.
Next, the sort <<<"${array[*]}" part
<<<, called a here string, takes the expansion of "${array[*]}", as explained above, and feeds it into the standard input of sort.
With our example, sort is fed this following string:
a c
b
f
3 5
Since sort sorts, it produces:
3 5
a c
b
f
Next, the sorted=($(...)) part
The $(...) part, called command substitution, causes its content (sort <<<"${array[*]}") to run as a normal command, while taking the resulting standard output as the literal that goes wherever $(...) was.
In our example, this produces something similar to simply writing:
sorted=(3 5
a c
b
f
)
sorted then becomes an array that's created by splitting this literal on every new line.
Finally, the unset IFS
This resets the value of IFS to the default value, and is just good practice.
It's to ensure we don't cause trouble with anything that relies on IFS later in our script. (Otherwise we'd need to remember that we've switched things around--something that might be impractical for complex scripts.)
Original response:
array=(a c b "f f" 3 5)
readarray -t sorted < <(for a in "${array[@]}"; do echo "$a"; done | sort)
output:
$ for a in "${sorted[@]}"; do echo "$a"; done
3
5
a
b
c
f f
Note this version copes with values that contains special characters or whitespace (except newlines)
Note readarray is supported in bash 4+.
Edit Based on the suggestion by @Dimitre I had updated it to:
readarray -t sorted < <(printf '%s\0' "${array[@]}" | sort -z | xargs -0n1)
which has the benefit of even understanding sorting elements with newline characters embedded correctly. Unfortunately, as correctly signaled by @ruakh, this didn't mean that the result of readarray would be correct, because readarray has no option to use NUL instead of regular newlines as line-separators.
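(Since Bash 4.4, readarray does accept -d '' for NUL-delimited input, so a sketch of a fully newline-safe variant, as used by later answers here, would be:)

readarray -td '' sorted < <(printf '%s\0' "${array[@]}" | sort -z)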
If you don't need to handle special shell characters in the array elements:
array=(a c b f 3 5)
sorted=($(printf '%s\n' "${array[@]}"|sort))
With bash you'll need an external sorting program anyway.
With zsh no external programs are needed and special shell characters are easily handled:
% array=('a a' c b f 3 5); printf '%s\n' "${(o)array[@]}"
3
5
a a
b
c
f
ksh has set -s to sort ASCIIbetically.
Here's a pure Bash quicksort implementation:
#!/bin/bash
# quicksorts positional arguments
# return is in array qsort_ret
qsort() {
local pivot i smaller=() larger=()
qsort_ret=()
(($#==0)) && return 0
pivot=$1
shift
for i; do
# This sorts strings lexicographically.
if [[ $i < $pivot ]]; then
smaller+=( "$i" )
else
larger+=( "$i" )
fi
done
qsort "${smaller[#]}"
smaller=( "${qsort_ret[#]}" )
qsort "${larger[#]}"
larger=( "${qsort_ret[#]}" )
qsort_ret=( "${smaller[#]}" "$pivot" "${larger[#]}" )
}
Use as, e.g.,
$ array=(a c b f 3 5)
$ qsort "${array[@]}"
$ declare -p qsort_ret
declare -a qsort_ret='([0]="3" [1]="5" [2]="a" [3]="b" [4]="c" [5]="f")'
This implementation is recursive… so here's an iterative quicksort:
#!/bin/bash
# quicksorts positional arguments
# return is in array qsort_ret
# Note: iterative, NOT recursive! :)
qsort() {
(($#==0)) && return 0
local stack=( 0 $(($#-1)) ) beg end i pivot smaller larger
qsort_ret=("$#")
while ((${#stack[#]})); do
beg=${stack[0]}
end=${stack[1]}
stack=( "${stack[#]:2}" )
smaller=() larger=()
pivot=${qsort_ret[beg]}
for ((i=beg+1;i<=end;++i)); do
if [[ "${qsort_ret[i]}" < "$pivot" ]]; then
smaller+=( "${qsort_ret[i]}" )
else
larger+=( "${qsort_ret[i]}" )
fi
done
qsort_ret=( "${qsort_ret[#]:0:beg}" "${smaller[#]}" "$pivot" "${larger[#]}" "${qsort_ret[#]:end+1}" )
if ((${#smaller[#]}>=2)); then stack+=( "$beg" "$((beg+${#smaller[#]}-1))" ); fi
if ((${#larger[#]}>=2)); then stack+=( "$((end-${#larger[#]}+1))" "$end" ); fi
done
}
In both cases, you can change the order you use: I used string comparisons, but you can use arithmetic comparisons, compare wrt file modification time, etc. just use the appropriate test; you can even make it more generic and have it use a first argument that is the test function use, e.g.,
#!/bin/bash
# quicksorts positional arguments
# return is in array qsort_ret
# Note: iterative, NOT recursive! :)
# First argument is a function name that takes two arguments and compares them
qsort() {
(($#<=1)) && return 0
local compare_fun=$1
shift
local stack=( 0 $(($#-1)) ) beg end i pivot smaller larger
qsort_ret=("$#")
while ((${#stack[#]})); do
beg=${stack[0]}
end=${stack[1]}
stack=( "${stack[#]:2}" )
smaller=() larger=()
pivot=${qsort_ret[beg]}
for ((i=beg+1;i<=end;++i)); do
if "$compare_fun" "${qsort_ret[i]}" "$pivot"; then
smaller+=( "${qsort_ret[i]}" )
else
larger+=( "${qsort_ret[i]}" )
fi
done
qsort_ret=( "${qsort_ret[#]:0:beg}" "${smaller[#]}" "$pivot" "${larger[#]}" "${qsort_ret[#]:end+1}" )
if ((${#smaller[#]}>=2)); then stack+=( "$beg" "$((beg+${#smaller[#]}-1))" ); fi
if ((${#larger[#]}>=2)); then stack+=( "$((end-${#larger[#]}+1))" "$end" ); fi
done
}
Then you can have this comparison function:
compare_mtime() { [[ $1 -nt $2 ]]; }
and use:
$ qsort compare_mtime *
$ declare -p qsort_ret
to have the files in current folder sorted by modification time (newest first).
NOTE. These functions are pure Bash! no external utilities, and no subshells! they are safe wrt any funny symbols you may have (spaces, newline characters, glob characters, etc.).
NOTE2. The test [[ $i < $pivot ]] is correct. It uses the lexicographical string comparison. If your array only contains integers and you want to sort numerically, use ((i < pivot)) instead.
Please don't edit this answer to change that. It has already been edited (and rolled back) a couple of times. The test I gave here is correct and corresponds to the output given in the example: the example uses both strings and numbers, and the purpose is to sort it in lexicographical order. Using ((i < pivot)) in this case is wrong.
tl;dr:
Sort array a_in and store the result in a_out (elements must not have embedded newlines[1]
):
Bash v4+:
readarray -t a_out < <(printf '%s\n' "${a_in[@]}" | sort)
Bash v3:
IFS=$'\n' read -d '' -r -a a_out < <(printf '%s\n' "${a_in[@]}" | sort)
Advantages over antak's solution:
You needn't worry about accidental globbing (accidental interpretation of the array elements as filename patterns), so no extra command is needed to disable globbing (set -f, and set +f to restore it later).
You needn't worry about resetting IFS with unset IFS.[2]
Optional reading: explanation and sample code
The above combines Bash code with external utility sort for a solution that works with arbitrary single-line elements and either lexical or numerical sorting (optionally by field):
Performance: For around 20 elements or more, this will be faster than a pure Bash solution - significantly and increasingly so once you get beyond around 100 elements.
(The exact thresholds will depend on your specific input, machine, and platform.)
The reason it is fast is that it avoids Bash loops.
printf '%s\n' "${a_in[#]}" | sort performs the sorting (lexically, by default - see sort's POSIX spec):
"${a_in[#]}" safely expands to the elements of array a_in as individual arguments, whatever they contain (including whitespace).
printf '%s\n' then prints each argument - i.e., each array element - on its own line, as-is.
Note the use of a process substitution (<(...)) to provide the sorted output as input to read / readarray (via redirection to stdin, <), because read / readarray must run in the current shell (must not run in a subshell) in order for output variable a_out to be visible to the current shell (for the variable to remain defined in the remainder of the script).
Reading sort's output into an array variable:
Bash v4+: readarray -t a_out reads the individual lines output by sort into the elements of array variable a_out, without including the trailing \n in each element (-t).
Bash v3: readarray doesn't exist, so read must be used:
IFS=$'\n' read -d '' -r -a a_out tells read to read into array (-a) variable a_out, reading the entire input, across lines (-d ''), but splitting it into array elements by newlines (IFS=$'\n'. $'\n', which produces a literal newline (LF), is a so-called ANSI C-quoted string).
(-r, an option that should virtually always be used with read, disables unexpected handling of \ characters.)
Annotated sample code:
#!/usr/bin/env bash
# Define input array `a_in`:
# Note the element with embedded whitespace ('a c') and the element that looks like
# a glob ('*'), chosen to demonstrate that elements with line-internal whitespace
# and glob-like contents are correctly preserved.
a_in=( 'a c' b f 5 '*' 10 )
# Sort and store output in array `a_out`
# Saving back into `a_in` is also an option.
IFS=$'\n' read -d '' -r -a a_out < <(printf '%s\n' "${a_in[@]}" | sort)
# Bash 4.x: use the simpler `readarray -t`:
# readarray -t a_out < <(printf '%s\n' "${a_in[@]}" | sort)
# Print sorted output array, line by line:
printf '%s\n' "${a_out[@]}"
Due to use of sort without options, this yields lexical sorting (digits sort before letters, and digit sequences are treated lexically, not as numbers):
*
10
5
a c
b
f
If you wanted numerical sorting by the 1st field, you'd use sort -k1,1n instead of just sort, which yields (non-numbers sort before numbers, and numbers sort correctly):
*
a c
b
f
5
10
[1] To handle elements with embedded newlines, use the following variant (Bash v4+, with GNU sort):
readarray -d '' -t a_out < <(printf '%s\0' "${a_in[@]}" | sort -z).
Michał Górny's helpful answer has a Bash v3 solution.
[2] While IFS is set in the Bash v3 variant, the change is scoped to the command.
By contrast, what follows IFS=$'\n' in antak's answer is an assignment rather than a command, in which case the IFS change is global.
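A small demonstration of that scoping difference (a sketch; the tmp variable is just for illustration):

# Scoped: a command (read) follows, so the IFS change applies only to it.
IFS=$'\n' read -d '' -r -a tmp < <(printf '%s\n' 'a b' 'c')
# Global: only an assignment follows, so IFS stays changed afterward.
IFS=$'\n' tmp=($(printf '%s\n' 'a b' 'c'))
unset IFS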
In the 3-hour train trip from Munich to Frankfurt (which I had trouble reaching because Oktoberfest starts tomorrow) I was thinking about my first post. Employing a global array is a much better idea for a general sort function. The following function handles arbitrary strings (newlines, blanks, etc.):
declare BSORT=()
function bubble_sort()
{ #
# #param [ARGUMENTS]...
#
# Sort all positional arguments and store them in global array BSORT.
# Without arguments sort this array. Return the number of iterations made.
#
# Bubble sorting lets the heaviest element sink to the bottom.
#
(($# > 0)) && BSORT=("$@")
local j=0 ubound=$((${#BSORT[*]} - 1))
while ((ubound > 0))
do
local i=0
while ((i < ubound))
do
if [ "${BSORT[$i]}" \> "${BSORT[$((i + 1))]}" ]
then
local t="${BSORT[$i]}"
BSORT[$i]="${BSORT[$((i + 1))]}"
BSORT[$((i + 1))]="$t"
fi
((++i))
done
((++j))
((--ubound))
done
echo $j
}
bubble_sort a c b 'z y' 3 5
echo ${BSORT[@]}
This prints:
3 5 a b c z y
The same output is created from
BSORT=(a c b 'z y' 3 5)
bubble_sort
echo ${BSORT[@]}
Note that probably Bash internally uses smart-pointers, so the swap-operation could be cheap (although I doubt it). However, bubble_sort demonstrates that more advanced functions like merge_sort are also in the reach of the shell language.
Another solution that uses external sort and copes with any special characters (except for NULs :)). Should work with bash-3.2 and GNU or BSD sort (sadly, POSIX doesn't include -z).
local e new_array=()
while IFS= read -r -d '' e; do
new_array+=( "${e}" )
done < <(printf "%s\0" "${array[@]}" | LC_ALL=C sort -z)
First look at the input redirection at the end. We're using printf built-in to write out the array elements, zero-terminated. The quoting makes sure array elements are passed as-is, and specifics of shell printf cause it to reuse the last part of format string for each remaining parameter. That is, it's equivalent to something like:
for e in "${array[@]}"; do
printf "%s\0" "${e}"
done
The null-terminated element list is then passed to sort. The -z option causes it to read null-terminated elements, sort them and output null-terminated as well. If you needed to get only the unique elements, you can pass -u since it is more portable than uniq -z. The LC_ALL=C ensures stable sort order independently of locale — sometimes useful for scripts. If you want the sort to respect locale, remove that.
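For example, the unique-elements variant of that pipeline would be (assuming GNU or BSD sort, as above):

printf "%s\0" "${array[@]}" | LC_ALL=C sort -zu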
The <() construct obtains the descriptor to read from the spawned pipeline, and < redirects the standard input of the while loop to it. If you need to access the standard input inside the pipe, you may use another descriptor — exercise for the reader :).
Now, back to the beginning. The read built-in reads output from the redirected stdin. Setting empty IFS disables word splitting which is unnecessary here — as a result, read reads the whole 'line' of input to the single provided variable. -r option disables escape processing that is undesired here as well. Finally, -d '' sets the line delimiter to NUL — that is, tells read to read zero-terminated strings.
As a result, the loop is executed once for every successive zero-terminated array element, with the value being stored in e. The example just puts the items in another array but you may prefer to process them directly :).
Of course, that's just one of the many ways of achieving the same goal. As I see it, it is simpler than implementing complete sorting algorithm in bash and in some cases it will be faster. It handles all special characters including newlines and should work on most of the common systems. Most importantly, it may teach you something new and awesome about bash :).
Keep it simple ;)
In the following example, the array b is the sorted version of the array a!
The second line echos each item of the array a, then pipes them to the sort command, and the output is used to initiate the array b.
a=(2 3 1)
b=( $( for x in ${a[@]}; do echo $x; done | sort ) )
echo ${b[@]} # output: 1 2 3
min sort:
#!/bin/bash
array=(.....)
index_of_element1=0
while (( ${index_of_element1} < ${#array[@]} )); do
element_1="${array[${index_of_element1}]}"
index_of_element2=$((index_of_element1 + 1))
index_of_min=${index_of_element1}
min_element="${element_1}"
for element_2 in "${array[@]:$((index_of_element1 + 1))}"; do
min_element="`printf "%s\n%s" "${min_element}" "${element_2}" | sort | head -n 1`"
if [[ "${min_element}" == "${element_2}" ]]; then
index_of_min=${index_of_element2}
fi
let index_of_element2++
done
array[${index_of_element1}]="${min_element}"
array[${index_of_min}]="${element_1}"
let index_of_element1++
done
try this:
echo ${array[@]} | awk 'BEGIN{RS=" ";} {print $1}' | sort
Output will be:
3
5
a
b
c
f
Problem solved.
If you can compute a unique integer for each element in the array, like this:
tab='0123456789abcdefghijklmnopqrstuvwxyz'
# build the reversed ordinal map
for ((i = 0; i < ${#tab}; i++)); do
declare -g ord_${tab:i:1}=$i
done
function sexy_int() {
local sum=0
local i ch ref
for ((i = 0; i < ${#1}; i++)); do
ch="${1:i:1}"
ref="ord_$ch"
(( sum += ${!ref} ))
done
return $sum # note: a return status is truncated modulo 256, so collisions are likely
}
sexy_int hello
echo "hello -> $?"
sexy_int world
echo "world -> $?"
then, you can use these integers as array indexes, because Bash always uses sparse arrays, so there is no need to worry about unused indexes:
array=(a c b f 3 5)
for el in "${array[@]}"; do
sexy_int "$el"
sorted[$?]="$el"
done
echo "${sorted[@]}"
Pros. Fast.
Cons. Duplicated elements are merged, and it can be impossible to map contents to 32-bit unique integers.
array=(a c b f 3 5)
new_array=($(echo "${array[@]}" | sed 's/ /\n/g' | sort))
echo ${new_array[@]}
echo contents of new_array will be:
3 5 a b c f
There is a workaround for the usual problem of spaces and newlines:
Use a character that is not in the original array (like $'\1' or $'\4' or similar).
This function gets the job done:
# Sort an Array may have spaces or newlines with a workaround (wa=$'\4')
sortarray(){ local wa=$'\4' IFS=''
if [[ $* =~ [$wa] ]]; then
echo "$0: error: array contains the workaround char" >&2
exit 1
fi
set -f; local IFS=$'\n' x nl=$'\n'
set -- $(printf '%s\n' "${@//$nl/$wa}" | sort -n)
for x
do sorted+=("${x//$wa/$nl}")
done
}
This will sort the array:
$ array=( a b 'c d' $'e\nf' $'g\1h')
$ sortarray "${array[#]}"
$ printf '<%s>\n' "${sorted[#]}"
<a>
<b>
<c d>
<e
f>
<gh>
This will complain that the source array contains the workaround character:
$ array=( a b 'c d' $'e\nf' $'g\4h')
$ sortarray "${array[#]}"
./script: error: array contains the workaround char
description
We set two local variables: wa (the workaround char) and a null IFS.
Then (with IFS null) we test that the whole array $* does not contain any workaround char: [[ $* =~ [$wa] ]].
If it does, raise a message and signal an error: exit 1.
Avoid filename expansions: set -f
Set a new value of IFS (IFS=$'\n') a loop variable x and a newline var (nl=$'\n').
We print all values of the arguments received (the input array $@),
but we replace any new line by the workaround char "${@//$nl/$wa}".
send those values to be sorted sort -n.
and place back all the sorted values in the positional arguments set --.
Then we assign each argument one by one (to preserve newlines).
in a loop for x
to a new array: sorted+=(…)
inside quotes to preserve any existing newline.
restoring the workaround to a newline "${x//$wa/$nl}".
done
This question looks closely related. And BTW, here's a mergesort in Bash (without external processes):
mergesort() {
local -n -r input_reference="$1"
local -n output_reference="$2"
local -r -i size="${#input_reference[@]}"
local merge previous
local -a -i runs indices
local -i index previous_idx merged_idx \
run_a_idx run_a_stop \
run_b_idx run_b_stop
output_reference=("${input_reference[@]}")
if ((size == 0)); then return; fi
previous="${output_reference[0]}"
runs=(0)
for ((index = 0;;)) do
for ((++index;; ++index)); do
if ((index >= size)); then break 2; fi
if [[ "${output_reference[index]}" < "$previous" ]]; then break; fi
previous="${output_reference[index]}"
done
previous="${output_reference[index]}"
runs+=(index)
done
runs+=(size)
while (("${#runs[#]}" > 2)); do
indices=("${!runs[#]}")
merge=("${output_reference[#]}")
for ((index = 0; index < "${#indices[#]}" - 2; index += 2)); do
merged_idx=runs[indices[index]]
run_a_idx=merged_idx
previous_idx=indices[$((index + 1))]
run_a_stop=runs[previous_idx]
run_b_idx=runs[previous_idx]
run_b_stop=runs[indices[$((index + 2))]]
unset runs[previous_idx]
while ((run_a_idx < run_a_stop && run_b_idx < run_b_stop)); do
if [[ "${merge[run_a_idx]}" < "${merge[run_b_idx]}" ]]; then
output_reference[merged_idx++]="${merge[run_a_idx++]}"
else
output_reference[merged_idx++]="${merge[run_b_idx++]}"
fi
done
while ((run_a_idx < run_a_stop)); do
output_reference[merged_idx++]="${merge[run_a_idx++]}"
done
while ((run_b_idx < run_b_stop)); do
output_reference[merged_idx++]="${merge[run_b_idx++]}"
done
done
done
}
declare -ar input=({z..a}{z..a})
declare -a output
mergesort input output
echo "${input[#]}"
echo "${output[#]}"
Many thanks to the people that answered before me. Using their excellent input, the Bash documentation, and ideas from other threads, this is what works perfectly for me without an IFS change:
array=("a \n c" b f "3 5")
Using process substitution and read array in bash > v4.4 WITH EOL character
readarray -t sorted < <(sort < <(printf '%s\n' "${array[@]}"))
Using process substitution and read array in bash > v4.4 WITH NULL character
readarray -td '' sorted < <(sort -z < <(printf '%s\0' "${array[@]}"))
Finally we verify with
printf "[%s]\n" "${sorted[#]}"
output is
[3 5]
[a \n c]
[b]
[f]
Please let me know if that is a correct test for embedded \n, as both solutions produce the same result, but the first one is not supposed to work properly with embedded \n.
I am not convinced that you'll need an external sorting program in Bash.
Here is my implementation for the simple bubble-sort algorithm.
function bubble_sort()
{ #
# Sorts all positional arguments and echoes them back.
#
# Bubble sorting lets the heaviest (longest) element sink to the bottom.
#
local array=($@) max=$(($# - 1))
while ((max > 0))
do
local i=0
while ((i < max))
do
if [ ${array[$i]} \> ${array[$((i + 1))]} ]
then
local t=${array[$i]}
array[$i]=${array[$((i + 1))]}
array[$((i + 1))]=$t
fi
((i += 1))
done
((max -= 1))
done
echo ${array[@]}
}
array=(a c b f 3 5)
echo " input: ${array[#]}"
echo "output: $(bubble_sort ${array[#]})"
This shall print:
input: a c b f 3 5
output: 3 5 a b c f
a=(e b 'c d')
shuf -e "${a[@]}" | sort >/tmp/f
mapfile -t g </tmp/f
Great answers here. Learned a lot. After reading them all, I figure I'd throw my hat into the ring. I think this is the shortest method (and probably faster as it doesn't do much shell script parsing, though there is the matter of the spawning of printf and sort, but they're only called once each) and handles whitespace in the data:
a=(3 "2 a" 1) # Setup!
IFS=$'\n' b=( $(printf "%s\n" "${a[@]}" | sort) ); unset IFS # Sort!
printf "'%s' " "${b[@]}"; # Success!
Outputs:
'1' '2 a' '3'
Note that the IFS change is limited in scope to the line it is on. If you know that the array has no whitespace in it, you don't need the IFS modification.
Inspiration was from @yas's answer and @Alcamtar's comments.
EDIT
Oh, I somehow missed the actually accepted answer which is even shorter than mine. Doh!
IFS=$'\n' sorted=($(sort <<<"${array[*]}")); unset IFS
Turns out that the unset is required because this is a variable assignment that has no command.
I'd recommend going to that answer because it has some interesting stuff on globbing which could be relevant if the array has wildcards in it. It also has a detailed description as to what is happening.
EDIT 2
GNU has an extension in which sort delimits records using \0, which is good if you have LFs in your data. However, when it gets returned to the shell to be assigned to an array, I don't see a good way to convert it so that the shell will delimit on \0, because even when setting IFS=$'\0', the shell doesn't like it and doesn't properly break it up.
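That said, readarray -d '' (Bash 4.4+), used by some other answers here, does split on NUL, so a sketch of the LF-safe variant using this answer's a and b arrays would be:

readarray -td '' b < <(printf '%s\0' "${a[@]}" | sort -z)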
array=(z 'b c'); { set "${array[@]}"; printf '%s\n' "$@"; } \
| sort \
| mapfile -t array; declare -p array
declare -a array=([0]="b c" [1]="z")
Open an inline function {...} to get a fresh set of positional arguments (e.g. $1, $2, etc).
Copy the array to the positional arguments. (e.g. set "${array[@]}" will copy the nth array argument to the nth positional argument. Note the quotes preserve whitespace that may be contained in an array element).
Print each positional argument (e.g. printf '%s\n' "$@" will print each positional argument on its own line. Again, note the quotes preserve whitespace that may be contained in each positional argument).
Then sort does its thing.
Read the stream into an array with mapfile (e.g. mapfile -t array reads each line into the variable array and the -t ignores the \n in each line).
Dump the array to show it's been sorted.
As a function:
set +m
shopt -s lastpipe
sort_array() {
declare -n ref=$1
set "${ref[#]}"
printf '%s\n' "$#"
| sort \
| mapfile -t $ref
}
then
array=(z y x); sort_array array; declare -p array
declare -a array=([0]="x" [1]="y" [2]="z")
I look forward to being ripped apart by all the UNIX gurus! :)
sorted=($(echo ${array[@]} | tr " " "\n" | sort))
In the spirit of bash / linux, I would pipe the best command-line tool for each step. sort does the main job but needs input separated by newline instead of space, so the very simple pipeline above simply does:
Echo array content --> replace space by newline --> sort
$() is to echo the result
($()) is to put the "echoed result" in an array
Note: as @sorontar mentioned in a comment to a different question:
The sorted=($(...)) part is using the "split and glob" operator. You should turn glob off: set -f or set -o noglob or shopt -op noglob or an element of the array like * will be expanded to a list of files.
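A minimal sketch of the same pipeline with globbing paused, per that note:

set -f # noglob: keep elements like '*' literal while the output is re-split
sorted=($(echo ${array[@]} | tr " " "\n" | sort))
set +f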