Saving part of each line to an array in bash - arrays

I am trying to read from a file that stores user names and addresses, one name:address pair per line, and I want to store only the addresses in an array. Is there any way to do this? My code currently looks like this:
while IFS=: read -r username address; do
    array=${address}
done <userfile.txt
This is only storing the address from the first line in the file and then stopping.

You are almost right! You just need to append to the array using the += operator that bash arrays provide.
declare -a myArray=()
while IFS=: read -r username address; do
    myArray+=("$address")
done < userfile.txt
Doing the above should do the trick. Note that the parentheses are critical here: array+=(something) appends a new element to the array, while array+=something just appends text to the first element of the array. Optionally, to later print the array contents one per line, use printf:
printf "%s\n" "${myArray[#]}"

You can use the array+=("$address") form to add an array element.
array=()
while IFS=: read -r username address; do
    array+=("$address")
done < userfile.txt
echo "${array[@]}"


Bash echo of array element not showing up [duplicate]

I am reading a file that basically contains a list of binary data (fixed at 8 bits wide per line):
01011100
11110001
...
For each line that is read, I need to "remap" the bits in chunks of 4 bits to specific positions. So for example in the 1st line above, bits 1100 and 0101 will each be remapped following this formula: bit 0 goes to the bit 3 position, bit 1 to 2, bit 3 to 1, and lastly bit 2 to 0.
To do this, I coded as follows:
function remap {
    echo "Remapper";
    IFS=
    read -ra din <<< $1;
    echo ${#din};
    echo ${din[1]};
    ## above line is just displaying blank as seen in below result
    echo ${din[*]};
    ## do actual remapping here
};

for line in `cat $infile`;
do
    data0=${line:0:4};
    data1=${line:4:4};
    echo "Read line";
    echo $data0;
    echo $data1;
    remap $data0;
    remap $data1;
done
I don't know why I'm not seeing the echoed array element. This is the output from the 1st read line:
Read line
0101
1100
Remapper
4
0101
Remapper
4
1100
I haven't gotten to coding the actual remapping itself because I couldn't even verify that I'm able to properly split the $1 variable of remap() function into the din array.
Thank you in advance for the help!
Unlike in some other languages, setting IFS to the empty string does not split a string into an array of individual characters. Instead, you can use fold -w1 to put each character on its own line:
remap() {
    echo "Remapper"
    mapfile -t din < <(fold -w1 <<< "$1")
    echo "${din[3]}${din[2]}${din[0]}${din[1]}"
}
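For example, feeding the function the first 4-bit chunk from the question:
remap 0101
# Remapper
# 1001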
As it would be inefficient to invoke the fold command every time, it may be better to say:
remap() {
    echo "Remapper"
    echo "${1:3:1}${1:2:1}${1:0:1}${1:1:1}"
}
As a side note, you don't need to append semicolon after each command.
There are a number of confusions here. The biggest is that read -ra din is not splitting the string into characters. read will split its input into words delimited by the characters in IFS; normally that's whitespace, but since you set IFS to the empty string, there are no delimiters and the string won't be split at all. Anyway, you don't want to split it based on delimiters, you want to split it into characters, so read is the wrong tool.
Another source of confusion is that ${#din} isn't the length of the array, it's the length (in characters) of the first element of the array. ${#din[#]} would get the number of elements in the array, and in this case it'd be 1. More generally, declare -p din would be a better way to see what din is and what it contains; here, it'd print something like declare -a din='([0]="0101")', showing that it's an array with a single four-character element, numbered 0.
What I'd do here is skip trying to split the characters into array elements entirely, and just index them as characters in $1 -- that is, ${1:0:1} will get the first character (character #0) from $1, ${1:1:1} will get the second (#1), etc. So to print the bits in the order third, first, second, fourth, you'd use:
echo "${1:2:1}${1:0:1}${1:1:1}${1:3:1}"
Other recommendations: It's best to double-quote variable expansions (like I did above) to prevent weird parsing problems. for var in $(cat somefile) is a fragile way to read lines from a file; while read var; do ... done <somefile is generally better. I'd recommend remap() { instead of the nonstandard function remap { syntax, and semicolons are redundant at the end of lines (well... with a few weird exceptions). shellcheck.net will point most of these out, and is a good tool to sanity-check your scripts for common mistakes.
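Putting those recommendations together, a minimal sketch of the question's script might look like this (keeping the $infile variable from the question and the bit order used just above; treat it as a starting point rather than a drop-in replacement):

remap() {
    echo "Remapper"
    echo "${1:2:1}${1:0:1}${1:1:1}${1:3:1}"
}

while IFS= read -r line; do
    echo "Read line"
    remap "${line:0:4}"    # first 4-bit chunk
    remap "${line:4:4}"    # second 4-bit chunk
done < "$infile"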

Why does my bash script not work in my C program?

My program takes an arbitrary number of words from the user and stores them in a double pointer **stringArr. These values are then concatenated into a string which is then passed into a bash script I have.
The problem I have is that the bash script doesn't echo the command I want it to, and I am unsure why.
string = malloc(N * sizeof(char));
for (int j = 0; j < N; j++) {
    strcat(string, stringArr[j]);
    strcat(string, " ");
}
puts("\n\nYour input sorted in alphabetical order:");
if (fork() == 0) {
    execl("./sortString.sh", "sortString.sh", string, NULL);
}
#!/bin/bash
for NAME in "$@"
do
    VAR=$VAR" "$NAME
done
echo $VAR | tr ' ' '\n' | sort | tr '\n' ' '
Is there something I am missing?
Note: the bash script is in the same directory as the program, and the program works as far as taking user input and putting it into the string string.
If you want to try out the bash script, an example of a string I have passed to it is: "one two three " (there is a space after 'three').
You cannot use the exec() family of functions to execute a shell script directly; they require the name of a proper executable. If you check the return value of your execl() call, I'm sure you'll see an error (-1), and errno will probably have been set to ENOEXEC.
You could try using system() instead, but it is generally frowned upon because you'd need to build a full (single) command string, and making sure that everything is properly escaped and such is error-prone.
Instead, I'd recommend that you give "/bin/sh" or "/bin/bash" as the first argument to exec(). Then, the args to sh would need to be the path of your script, and then the args that your script will use.
(this is what your shell does automatically when you run a script; it reads the #! line, and executes "/bin/bash your-script your-args...")
Your string allocation is too small. You're allocating space for N chars, but you're strcating N spaces into it, plus whatever is in your stringArr (which I assume is not full of empty strings).
Even then, you will have just one big string of args, but exec() wants them separated. Think of it as if you put quotes around all the args in bash: you get just one big argument, which contains spaces.
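You can see that effect in the shell itself with a tiny throwaway helper (countargs is made up for this illustration, not part of the question's code):

countargs() { echo "got $# argument(s)"; }
countargs one two three       # got 3 argument(s)
countargs "one two three"     # got 1 argument(s)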
After you fix that (so that you have an array of strings rather than just one big one), you will run into problems with execl(). It takes the arguments separately, but you're trying to send one big array of them. You'll want execv() instead, which accepts an array of strings for the argument list. Remember that the first string in that list must be your script path.
Are you sure this wouldn't be easier to do with qsort()?

Read array from file separated with commas and newlines

I have a file that has two different words per line, delimited by a comma and a line break. How can I read this file and store every word in an array? My code doesn't work; I think it only works for a "one line" array.
File Sample:
Each word is separated by a comma and a line break.
Dog,cat
shark,rabbit
mouse,bird
whale,dolphin
Desired result
"${array[0]}" = Dog
"${array[1]}" = cat
"${array[2]}" = shark
"${array[3]}" = rabbit
"${array[4]}" = mouse
"${array[5]}" = bird
"${array[6]}" = whale
"${array[7]}" = dolphin
My Code:
input=$(cat "/path/source_file")
IFS=',' read -r -a array <<< "$input"
IFS=$'\n,' read -d '' -ra array < file
The key is to use IFS to tell read to split the entire input (-d '') into array elements (-a; -r ensures unmodified reading) by both \n and , characters.
For simplicity, I've used file to represent your input file and used it directly as input to read via stdin (<).
If you do have a need to read the entire file into a shell variable first, the following form is slightly more efficient in Bash (but is not POSIX-compliant):
input=$(< "/path/source_file")

Bash: replace text in nth line of file with nth index of array

Trying to replace MAC Addresses in a flat file. In the code below, the addresses are successfully mapping to the array. I've tried to use a counter to increment the array index on each loop, with the intent of replacing the address on line n with the nth address in the array.
The sed block effectively replaces the addresses, but only with the entry at array index 0.
mapfile -t Arr1 < <(text processing commands)
i=0
while read line
do
    sed -E "s/([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}/${Arr1[$i]}/"
    ((i++))
done < $macFile
The problem is that sed is reading from standard input, so instead of reading the contents of the $line variable, it's reading the contents of the file designated by $macFile (except the first line, which has already been grabbed by read).
To fix this, add <<< "$line" to the end of your sed command.
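A minimal sketch of the corrected loop (same Arr1 and $macFile as in the question; I've also added -r to read and quoted the expansions, which is generally safer):

i=0
while read -r line; do
    sed -E "s/([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}/${Arr1[$i]}/" <<< "$line"
    ((i++))
done < "$macFile"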

How to read and split comma separated file in a bash shell script?

I want to read a file line by line, split each line by comma (,) and store the result in an array. How to do this in a bash shell script?
Sample line in my comma separated file
123,2014-07-21 10:01:44,123|8119|769.00||456|S
This should be the output after splitting:
arr[0]=123
arr[1]=2014-07-21 10:01:44
arr[2]=123|8119|769.00||456|S
Use read -a to split each line read into an array, based on IFS.
while IFS=, read -ra arr; do
    ## Do something with ${arr[0]}, ${arr[1]} and ${arr[2]}
    ...
done < file
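For instance, a quick sketch using the sample line from the question (the declare -p output shows how the fields end up in the array):

line='123,2014-07-21 10:01:44,123|8119|769.00||456|S'
IFS=, read -ra arr <<< "$line"
declare -p arr
# declare -a arr=([0]="123" [1]="2014-07-21 10:01:44" [2]="123|8119|769.00||456|S")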
If the third field can also contain commas, you can prevent it from being split by reading into a fixed number of ordinary (non-array) variables; any leftover fields are assigned to the last variable:
while IFS=, read -r a b c; do
    ## Do something with $a, $b and $c
    ...
done < file
From help read:
Reads a single line from the standard input, or from file descriptor FD
if the -u option is supplied. The line is split into fields as with word
splitting, and the first word is assigned to the first NAME, the second
word to the second NAME, and so on, with any leftover words assigned to
the last NAME. Only the characters found in $IFS are recognized as word
delimiters.
-a array assign the words read to sequential indices of the array
variable ARRAY, starting at zero
