Bash: "?" appears when an array is created with readarray but not when declared manually

I have a file (my_ID_file) of IDs, one ID per line, with no other excess whitespace. It was created using the cut command from another file. The file looks like 101 lines of this...
PA10
PA102
PA103
PA105
PA107
PA109
I am trying to use these IDs in a for loop to create a directory structure, so I use the readarray builtin to create the array...
readarray TIDs < my_ID_file
and then use a for loop as such to create the directory structure...
for T in "${TIDs[@]}"
do
mkdir "$T"_folder
done
This produces directories named...
PA10?_folder
PA102?_folder
PA103?_folder
PA105?_folder
PA107?_folder
PA109?_folder
If, however, I declare the array manually like...
TIDs=(PA10 PA102 PA103 PA105 PA107 PA109)
and then run the for loop I get the correct directory structure produced like....
PA10_folder
PA102_folder
PA103_folder
PA105_folder
PA107_folder
PA109_folder
Where are these question marks coming from? How can I declare arrays from files like this without the question marks appearing in subsequent uses of the array?
Thanks

First guess: your text file has DOS-style line endings. The "?" appears because ls displays unprintable characters such as carriage returns as "?". Check with od -c my_ID_file
Solution: dos2unix my_ID_file
Take #2: readarray does not strip each line's trailing newline by default. You really want
readarray -t TIDs < my_ID_file
reference: http://www.gnu.org/software/bash/manual/bashref.html#index-mapfile
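A quick way to see what -t changes is to dump the array with declare -p (a hypothetical session; the comments show the kind of output to expect):
readarray TIDs < my_ID_file
declare -p TIDs    # declare -a TIDs=([0]=$'PA10\n' [1]=$'PA102\n' ...)
readarray -t TIDs < my_ID_file
declare -p TIDs    # declare -a TIDs=([0]="PA10" [1]="PA102" ...)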

Related

Add text on certain lines of a file, with the added text depending on the output of a command that takes a substring from the line

I'm trying to make a shell script that takes an input file (thousands of lines) and produces an output file that is the same except that certain lines will have added text. When the text is added to the middle of a line, the exact added text will depend on a substring of that line. The correlation between the substring and the added text is complex and comes from an external program that I can call in the shell. I don't have the source for this converter program nor any control over how the mapping is done.
To explain further...
I have an input file of this general format:
Blah Blah Unimportant
Something Something
FIELD_INFO(field_name_1, output_1),
FIELD_INFO(field_name_2, output_2),
Yadda Yadda
The whole file needs to be copied, with added text, but the only important parts for me are the field names (e.g. field_name_1, field_name_2). I have a command line program called "converter" that can take a file of field names and output a list of corresponding actions. Converter cannot operate directly on the input file. The input to converter needs to be just field names and the output of converter has extra information I don't need:
converter_field_name_1 "action1" /* Use this action for field_name_1 */
converter_field_name_2 "action2" /* use this action for field_name_2 */
The desire is to create a second file that looks like this:
Blah Blah Unimportant
Something Something
FIELD_INFO(field_name_1, action1, output_1),
FIELD_INFO(field_name_2, action2, output_2),
Yadda Yadda
Here is the script I'm working on, but I've hit a wall (or two):
#!/bin/bash
filename="input_file"
# Let's create an array of the field names to feed to the converter program
field_array=($(sed -e '/^\s*FIELD_INFO/ s/FIELD_INFO(\(.*\),.*),/\1/' -e 't' -e 'd' < ${filename}))
# Save the array to a file, to be able to use the converter's file option
printf "%s\n" "${field_array[#]}" > script_field_names.txt
# Use converter on the whole file and extract only the actions into another array
action_array=($(converter -f script_field_names.txt | cut -d'"' -f 2))
# I will make and use an associative array and try to use
# sed to do the substitution
declare -A mapper
for i in ${!field_array[*]}
do
mapper[${field_array[i]}]=${action_array[i]}
done
#Now go back through the file and add action names (source file unchanged)
sed -e "s/FIELD_INFO(\(.*\),\(.*?),\)/FIELD_INFO(\1, ${mapper[\1], \2}/" < ${filename}
I know now that I can't use the sed group capture "\1" as an index into the mapper array like this. It is not working as a key and the output looks like this:
Blah Blah Unimportant
Something Something
FIELD_INFO(field_name_1, , output_1),
FIELD_INFO(field_name_2, , output_2),
Yadda Yadda
My actual script has debug statements scattered throughout, and I know the field array, action array, and mapper array are all getting created correctly. But my idea of using the group-capture substring from sed as the index into the mapper array is not working, because I now know that the shell expands the variables before sed ever runs, so the mapper[] array never sees the substring as an index.
What should I be doing instead? This script may only be used once, but it's too time consuming and error prone to do the addition of the action strings by hand. I want to come up with a way to make this work but I can't tell if I'm close or completely on the wrong path.
sed -e "s/FIELD_INFO(\(.*\),\(.*?),\)/FIELD_INFO(\1, ${mapper[\1], \2}/" < ${filename}
[...]
I now know that the shell expands the variables before sed ever runs, so the mapper[] array never sees the substring as an index.
Good job identifying the problem. Also, the non-greedy quantifier .*? does not work with sed, and ${mapper[\1], \2} should probably be ${mapper[\1]}, \2.
If you want to keep your current approach, I see two options.
Do the replacement line by line in bash, either by creating a giant sed command string that lists the substitution for each field (a sketch follows after the second option), or by executing sed inside a loop for each line, creating the command strings on the fly.
Instead of the array mapper, create a file that lists the actions to be inserted in the order they appear in the input, then use GNU sed's R filename command. This command inserts the next line from filename, so you can use it to insert the correct action each time you come across a field. However, the line break is inserted too, so you would have to fiddle with the hold space to remove those line breaks afterwards.
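For the first option, a minimal sketch (assuming mapper is filled as in the question, and that the field names and actions contain no sed metacharacters):
script=""
for f in "${!mapper[@]}"; do
    # one substitution per field: insert the action right after the field name
    script+="s/FIELD_INFO($f,/FIELD_INFO($f, ${mapper[$f]},/;"
done
sed -e "$script" "$filename"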
Neither option is that great, though. Therefore I'd switch to awk to insert the actions:
sed -En 's/^\s*FIELD_INFO\(([^,]*).*/\1/p' "$filename" > fields
converter -f fields | cut -d\" -f2 > actions
awk '/^\s*FIELD_INFO\(/ {getline a < "actions"; sub(",", ", " a ",")} 1' "$filename"
With GNU grep you can simplify the first line to
grep -Po '^\s*FIELD_INFO\(\K[^,]*' "$filename" > fields
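With the sample input from the question, the two intermediate files would each hold one item per line (assuming converter emits the lines shown earlier):
$ cat fields
field_name_1
field_name_2
$ cat actions
action1
action2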
Why not try,
sed -n -e 's/^[ ]*FIELD_INFO(\(.*\),.*,/\1/p' -- input_file > script_field_names.txt
printf '/^[ ]*FIELD_INFO(%s,/ s/(\\(.[^,]*\\), \\(.[^)]*\\))/(\\1, %s, \\2)/\n' \
$(converter -f script_field_names.txt | cut -d'"' -f 2 |
paste -- script_field_names.txt -) |
sed -f /dev/stdin -- input_file
where
paste emits the map of fields (from file) and actions (from stdin)
printf emits a script read by sed from stdin
each script line becomes: /^[ ]*FIELD_INFO(fieldnameN,/ s/(\(.[^,]*\), \(.[^)]*\))/(\1, actionN, \2)/

Using array from an output file in bash shell

I need to use an array to set variable values, for further manipulation, from an output file.
scenario:
1. Fetch the list from the database.
2. Trim the column using sed into a file named x.txt (keeping only the specific values that are required).
3. This file x.txt then contains:
10000
20000
30000
4. I need to set variables and assign the above values to them:
A=10000
B=20000
C=30000
5. I can then invoke these variables A, B, and C for further manipulations.
Please let me know how to define an array and assign values to it from the output file.
Thanks.
In bash (starting from version 4.x) you can use the mapfile command:
mapfile -t myArray < file.txt
see https://stackoverflow.com/a/30988704/10622916
or another answer for older bash versions: https://stackoverflow.com/a/46225812/10622916
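Using the x.txt from the question, you can then pick the individual values out of the array (a minimal sketch; comments show the expected values):
mapfile -t myArray < x.txt
A=${myArray[0]}    # 10000
B=${myArray[1]}    # 20000
C=${myArray[2]}    # 30000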
I am not a big proponent of using arrays in bash (if your code is complex enough to need an array, it's complex enough to need a more robust language), but you can do:
$ unset a
$ unset i
$ declare -a a
$ while IFS= read -r line; do a[$((i++))]="$line"; done < x.txt
(I've left the interactive prompt in place. Remove the leading $
if you want to put this in a script.)

How to store a directory's contents inside of an array

I want to be able to store a directory's contents inside of an array. I know that you can use:
#!/bin/bash
declare -A array
for i in Directory/*; do
array[$i]=$i
done
to store directory contents in an associative array. But if there are subdirectories inside of a directory, I want to be able to store them and their contents inside of the same array. I also tried using:
declare -A arr1
find Directory -print0 | while read -d $'\0' file; do
arr1[$file]=$file
echo "${arr1[$file]}"
done
but this just runs into the problem where the array contents vanish once the while loop ends due to the subshell being discarded from the pipeline (not sure if I'm describing this correctly).
I even tried the following:
for i in $(find Directory/*); do
arr2[$i]="$i"
echo $i
done
but the output is a total disaster for files containing any spaces.
How can I store both a directory and all of its subdirectories (and their subdirectories if need be) inside of a single array?
So you know, you don't need associative arrays. A simpler way to add an element to a regular indexed array is:
array+=("$value")
Your find|while approach is on the right track. As you've surmised, you need to get rid of the pipeline. You can do that with process substitution:
while IFS= read -r -d '' file; do
arr1+=("$file")
done < <(find Directory -print0)
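With bash 4.4 or newer, readarray/mapfile can do the NUL-delimited splitting itself, so the loop shrinks to one line:
readarray -d '' arr1 < <(find Directory -print0)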
Another way to do this without find is with globbing. If you just want the files under Directory it's as simple as:
array=(Directory/*)
If you want to recurse through all of its subdirectories as well, you can enable globstar and use **:
shopt -s globstar
array=(Directory/**)
The globbing methods are really nice because they automatically handle file names with whitespace and other special characters.
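Whichever method you use, quote the expansion when you read the array back out, for example:
for f in "${array[@]}"; do
    printf '%s\n' "$f"
done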

Can I send a string to grep with commands and file names in a bash script?

Is it possible to send an array variable from the command line
(where argsGrep="$@" and the command-line input is something to the effect of -i Something) to a grep command,
e.g.
result=$(grep $argsGrep ./file)
When $argsGrep has only the term to be searched, it works just fine, but the moment it contains more than the search text and includes a grep option, I can't get it to work at all.
Don't use the intermediate string. It will just break things.
Just expand "$#" at the point you need it.
If you must save the contents of "$@" for some reason then you must use another array.
argsarr=("$@")
result=$(grep "${argsarr[@]}" ./file)
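Put together, a minimal sketch of the whole script (hypothetical name grep_wrapper.sh), invoked as ./grep_wrapper.sh -i Something:
#!/bin/bash
# Pass all command-line arguments straight through to grep
argsarr=("$@")
result=$(grep "${argsarr[@]}" ./file)
printf '%s\n' "$result"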

Bash - reading wildcard patterns from a file into an array

I want to read a list of patterns, which may (and probably do) contain wildcards, from a file.
The patterns might look like this:
/vobs/of_app/unix/*
/vobs/of_app/bin/*
etc
My first attempt was to do this:
old_IFS=$IFS
IFS=$'\n'
array=($(cat $file))
This worked fine when the patterns did not match anything in the filesystem, but when they did match something, they got expanded, so instead of containing the patterns my array contained the directory listings of the specified directories. This was no good.
I next tried quoting like this
array=("$(cat $file)")
But this dumped the entire contents of the file into the 0th element of the array.
How can I prevent it from expanding the wildcards into directory listings while still putting each line of the file into a separate array element?
Bash 4 introduced readarray:
readarray -t array < "$file"
and you're done.
For older bash versions, build the array with a read loop instead:
array=()
while IFS= read -r line; do
array+=("$line")
done < "$file"
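In both cases the patterns stay literal as long as you quote the expansion; leave it unquoted only at the point where you actually want the glob to expand:
printf '%s\n' "${array[@]}"     # prints the patterns themselves
for pattern in "${array[@]}"; do
    ls -d -- $pattern           # deliberately unquoted: expand the pattern here
done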
