Bash Script Not Adding to Arrays

So for some reason my script isn't adding the extra data to the arrays inside the while loops. The data is there and matches correctly, but it is not being appended to the arrays.
The input file contents are:
name:passwordhash
while read -r lines;
do
    HASH_COMMON=$(echo -n "$lines" | sha256sum | awk '{print $1}');
    while read -r line ;
    do
        # change separator to colon
        IFS=: read -r NAME PASSWORD <<< "$line"
        # compare hashed password against hashed password list
        if [ "$HASH_COMMON" == "$PASSWORD" ] ;
        then
            CRACKED_RESULT+=("$lines")
            HASH_RESULT+=("$HASH_COMMON")
            NAME_RESULT+=("$NAME")
            PASSWORD_RESULT+=("$PASSWORD")
        fi
    done < "$PASSED_FILE"
done < "$COMMON_PW"
The output, if I print the arrays outside the loop with a for loop, is just a single pass of data:
for i in "${NAME_RESULT[@]}"
do
    echo "$i"
    echo ""
done
Then I am comparing the arrays later with a for loop as well:
for isCracked in "${CRACKED_RESULT[@]}";
do
    for checkCrack in "${PASSWORD_RESULT[@]}";
    do
        one="${HASH_RESULT[isCracked]}"
        two="${PASSWORD_RESULT[checkCracked]}"
        if [ "$one" == "$two" ];
        then
            RESULT_ARRAY+=("USERNAME: ${NAME_RESULT[checkCracked]} PASSWORD: ${CRACKED_RESULT[checkCracked]}")
        fi
        checkCrack=+1
    done
    isCracked=+1
done
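For reference, a minimal sketch of an index-based walk over the parallel arrays, using the array names from the script above; this is only an illustration of the technique, not a tested fix:
for ((i = 0; i < ${#HASH_RESULT[@]}; i++)); do
    # compare entries at the same position in the two parallel arrays
    if [ "${HASH_RESULT[i]}" == "${PASSWORD_RESULT[i]}" ]; then
        RESULT_ARRAY+=("USERNAME: ${NAME_RESULT[i]} PASSWORD: ${CRACKED_RESULT[i]}")
    fi
done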

Related

bash: reading list of values from variable in file [duplicate]

STATUS QUO
I have an external properties file where a couple of variables are stored. One of these holds a list of values (min 1 value, max X values).
When declaring this list directly within the shell script, it would look like this:
NODES=(
"node"
"node-2"
"node-3"
)
I read the values from the properties file like this:
# checks, if file exists and is readable
file="./properties.credo"
if [ ! -r "$file" ]
then
echo "fatal: properties file $file not found / readable."
exit 2
fi
# loads properties into variables
. $file
[[ -z "$VALUEA" ]] && echo "fatal: VALUEA not specified in .credo" && exit 2
...
PROBLEM
When defining the NODES values in the properties like this:
NODES=node,node-2,node-3
... and reading it like this:
...
[[ -z "$NODES" ]] && echo "fatal: NODES not specified in .credo" && exit 2
...
... it will be read from the file as a single string node,node-2,node-3, but not as a list or one-dimensional array.
Beware! Sourcing the properties file like this is dangerous, as the config file can contain PATH=/some/path and the script can then execute commands over which you have no control.
You can use read -a to populate an array. Set IFS to the separator and send the values read from the file to read's standard input.
#! /bin/bash
file=$1
IFS== read var values < "$file"
IFS=, read -a $var <<< "$values"
echo "${NODES[#]}"
For a multiline config, I tried the following:
nodes=node1,node2,node3
servers=server1,server2
and modified the script to loop over the input:
#! /bin/bash
file=$1
while IFS== read var values ; do
IFS=, read -a $var <<< "$values"
done < "$file"
echo "${nodes[#]}"
echo "${servers[#]}"
You might need to skip over lines that don't follow the var=val1,val2,... pattern.
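For instance, here is one rough way to do that skip, a sketch that assumes the variable names consist only of letters, digits and underscores:
#! /bin/bash
file=$1
while IFS== read -r var values ; do
    # skip comments, blank lines and anything not shaped like var=val1,val2,...
    case $var in
        ''|\#*|*[!A-Za-z0-9_]*) continue ;;
    esac
    IFS=, read -r -a "$var" <<< "$values"
done < "$file"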

Split two numbers in two arrays

I need to split 2 numbers in the form (they are from a text file):
Num1:Num2
Num3:Num4
And store Num1 into array X and Num2 into array Y, then Num3 into array X and Num4 into array Y.
With bash:
mapfile -t X < <(cut -d : -f 1 file) # read only first column
mapfile -t Y < <(cut -d : -f 2 file) # read only second column
declare -p X Y
Output:
declare -a X='([0]="num1" [1]="num3")'
declare -a Y='([0]="num2" [1]="num4")'
Disadvantage: The file is read twice.
You could perform the following steps:
Create empty destination arrays
Read the file line by line, with a classic while read ... < file loop
Split each line on :, again using read
Append the values to the arrays
For example:
arr_x=()
arr_y=()
while IFS= read line || [ -n "$line" ]; do
IFS=: read x y <<< "$line"
arr_x+=("$x")
arr_y+=("$y")
done < data.txt
echo "content of arr_x:"
for v in "${arr_x[#]}"; do
echo "$v"
done
echo "content of arr_y:"
for v in "${arr_y[#]}"; do
echo "$v"
done
Here is a quick bash solution:
c=0
while IFS=: read a b ;do
x[$c]="$a"
y[$c]="$b"
c=$((c+1))
done < input.txt
We feed input.txt to a while loop, set the input field separator to :, and read the first number of each line as $a and the second as $b. Then we add them to the arrays as you specified, using a counter $c to track the position in the arrays.
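As a side note (not part of the original answer), the explicit counter isn't strictly required: x+=("$a") and y+=("$b") append at the next free index, exactly as in the previous answer. Either way, declare -p is a handy way to check what ended up in the arrays:
declare -p x y   # e.g. declare -a x=([0]="Num1" [1]="Num3")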
Using the =~ operator, which stores the pair of numbers in the BASH_REMATCH array:
$ cat file
123:456
789:012
$ while read -r line
do
[[ $line =~ ([^:]*):(.*) ]] && echo ${BASH_REMATCH[1]} ${BASH_REMATCH[2]}
# do something else with numbers as they will be replaced on the next iteration
done < file
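If the numbers should end up in the two arrays the question asks for, rather than just being echoed, the captures can be appended inside the same loop. This is a small extension of the snippet above, not part of the original answer:
$ X=() Y=()
$ while read -r line
do
[[ $line =~ ([^:]*):(.*) ]] && { X+=("${BASH_REMATCH[1]}"); Y+=("${BASH_REMATCH[2]}"); }
done < file
$ declare -p X Y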

Fetching data into an array

I have a file like this below:
-bash-4.2$ cat a1.txt
0 10.95.187.87 5444 up 0.333333 primary 0 false 0
1 10.95.187.88 5444 up 0.333333 standby 1 true 0
2 10.95.187.89 5444 up 0.333333 standby 0 false 0
I want to fetch the data from the above file into a 2D array.
Can you please help me with a suitable way to put it into an array?
Also, after populating it, we need a condition to check whether the value in the 4th column is up or down. If it's up then it's OK; if it's down then the command below needs to be executed.
-bash-4.2$ pcp_attach_node -w -U pcpuser -h localhost -p 9898 0
(The value at the end is fetched from the 1st column.)
You could try something like this:
while read -r line; do
    declare -a array=( $line )   # relies on word splitting (IFS)
    echo "${array[0]}"
    echo "${array[1]}"           # and so on
    if [[ "${array[3]}" == "down" ]]; then
        echo execute command...
    fi
done < a1.txt
Or:
while read -r -a array; do
    if [[ "${array[3]}" == "down" ]]; then
        echo execute command...
    fi
done < a1.txt
This works only if the fields are space separated (any kind of whitespace).
You could probably mix that with regexp if you need more precise control of the format.
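For example, a rough sketch that combines the read -a loop with the up/down check described in the question. It assumes the fields are whitespace separated, that the status is always the 4th column, and it simply reuses the pcp_attach_node invocation from the question:
while read -r -a array; do
    # when the 4th column says "down", attach the node whose id is in the 1st column
    if [[ "${array[3]}" == "down" ]]; then
        pcp_attach_node -w -U pcpuser -h localhost -p 9898 "${array[0]}"
    fi
done < a1.txt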
Firstly, I don't think you can have 2D arrays in bash. But you can however store lines into a 1-D array.
Here is a script, parse1a.sh, to demonstrate emulation of 2D arrays for the type of data you included:
#!/bin/bash
function get_element () {
    line=${ARRAY[$1]}
    echo $line | awk "{print \$$(($2+1))}"   # +1 since awk is one-based
}
function set_element () {
    line=${ARRAY[$1]}
    declare -a SUBARRAY=($line)
    SUBARRAY[$(($2))]=$3
    ARRAY[$1]="${SUBARRAY[@]}"
}
ARRAY=()
while IFS='' read -r line || [[ -n "$line" ]]; do
    #echo $line
    ARRAY+=("$line")
done < "$1"
echo "Full array contents printout:"
printf "%s\n" "${ARRAY[@]}"   # Full array contents printout.
for line in "${ARRAY[@]}"; do
    #echo $line
    if [ "$(echo $line | awk '{print $4}')" == "down" ]; then
        echo "Replace this with what to do for down"
    else
        echo "...and any action for up - if required"
    fi
done
echo "Element access of [2,3]:"
echo "get_element 2 3 : "
get_element 2 3
echo "set_element 2 3 left: "
set_element 2 3 left
echo "get_element 2 3 : "
get_element 2 3
echo "Full array contents printout:"
printf "%s\n" "${ARRAY[@]}"   # Full array contents printout.
It can be executed by:
./parse1a.sh a1.txt
Hope this is close to what you are looking for. Note that this code will lose all indenting spaces during manipulation, but a formatted update of the lines could solve that.
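For instance, an untested variant of set_element that rebuilds the line with printf and fixed column widths; the widths here are made up to roughly fit the sample data:
function set_element () {
    line=${ARRAY[$1]}
    declare -a SUBARRAY=($line)
    SUBARRAY[$(($2))]=$3
    # hypothetical fixed-width layout; adjust the widths to your real columns
    ARRAY[$1]=$(printf '%-2s %-14s %-5s %-5s %-9s %-8s %-2s %-6s %s' "${SUBARRAY[@]}")
}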

Problems with traversing Array in csv file in bash script

So what I'm trying to do in my code is basically read in a spreadsheet that has this format:
username, lastname, firstname, x1, x2, x3, x4
user1, dudette, mary, 7, 2, 4
user2, dude, john, 6, 2, 4,
user3, dudest, rad,
user4, dudaa, pad, 3, 3, 5, 9
basically, it has usernames, the names those usernames correspond to, and values for each x. What I want to do is read in this from a csv file and then find all of the blank spaces and fill them in with 5s. My approach to doing this was to read in the whole array and then substitute all null spaces with 0s. This is the code so far...
#!/bin/bash
while IFS=$'\t' read -r -a myarray
do
echo $myarray
done < something.csv
for e in ${myarray[@]}
do
echo 'Can you see me #1?'
if [[-z $e]]
echo 'Can you see me #2?'
sed 's//0'
fi
done
The code isn't really changing my csv file at all. EDITED NOTE: the data is all comma separated.
What I've figured out so far:
Okay, the 'Can you see me' and the echo myarray are test code. I wanted to see if the whole csv file was being read in from echo myarray (which according to the output of the code seems to be the case). It doesn't seem, however, that the code is running through the for loop at all...which I can't seem to understand.
Help is much appreciated! :)
The format of your .csv file is not comma separated; it's left aligned with a non-constant number of whitespace characters separating each field. This makes it difficult to be accurate when trying to find and replace empty columns which are followed by non-empty columns.
Here is a Bash only solution that would be entirely accurate if the fields were comma separated.
#!/bin/bash
n=5
while IFS=, read username lastname firstname x1 x2 x3 x4; do
    ! [[ $x1 ]] && x1=$n
    ! [[ $x2 ]] && x2=$n
    ! [[ $x3 ]] && x3=$n
    ! [[ $x4 ]] && x4=$n
    echo $username,$lastname,$firstname,$x1,$x2,$x3,$x4
done < something.csv > newfile.csv && mv newfile.csv something.csv
Output:
username,lastname,firstname,x1,x2,x3,x4
user1,dudette,mary,7,2,5,4
user2,dude,john,6,2,4,5
user3,dudest,rad,5,5,5,5
user4,dudaa,pad,3,3,5,9
I realize you asked for bash, but if you don't mind perl in lieu of bash, perl is a great tool for record-oriented files.
#!/usr/bin/perl
open (FILE, 'something.csv');
open (OUTFILE, '>outdata.txt');
while(<FILE>) {
    chomp;
    ($username,$lastname,$firstname,$x1,$x2,$x3,$x4) = split("\t");
    $x1 = 5 if $x1 eq "";
    $x2 = 5 if $x2 eq "";
    $x3 = 5 if $x3 eq "";
    $x4 = 5 if $x4 eq "";
    print OUTFILE "$username\t$lastname\t$firstname\t$x1\t$x2\t$x3\t$x4\n";
}
close (FILE);
close (OUTFILE);
exit;
This reads your infile, something.csv which is assumed to have tab-separated fields, and writes a new file outdata.txt with the re-written records.
I'm sure there's a better or more idiomatic solution, but this works:
#!/bin/bash
infile=bashcsv.csv # Input filename
declare -i i # Iteration variable
declare -i defval=5 # Default value for missing cells
declare -i n_cells=7 # Total number of cells per line
declare -i i_start=3 # Starting index for numeric cells
declare -a cells # Array variable for cells
# We'd usually save/restore the old value of IFS, but there's no need here:
IFS=','
# Convenience function to bail/bug out on error:
bail () {
    echo "$@" >&2
    exit 1
}
# Strip whitespace and replace empty cells with `$defval`:
sed -s 's/[[:space:]]//g' $infile | while read -a cells; do
    # Skip empty/malformed lines:
    if [ ${#cells[*]} -lt $i_start ]; then
        continue
    fi
    # If there are fewer cells than $n_cells, pad to $n_cells
    # with $defval; if there are more, bail:
    if [ ${#cells[*]} -lt $n_cells ]; then
        for ((i=${#cells[*]}; $i<$n_cells; i++)); do
            cells[$i]=$defval
        done
    elif [ ${#cells[*]} -gt $n_cells ]; then
        bail "Too many cells."
    fi
    # Replace empty cells with default value:
    for ((i=$i_start; $i<$n_cells; i++)); do
        if [ -z "${cells[$i]}" ]; then
            cells[$i]=$defval
        fi
    done
    # Print out whole line, interpolating commas back in:
    echo "${cells[*]}"
done
Here's a gratuitous awk one-liner that gets the job done:
awk -F'[[:space:]]*,[[:space:]]*' 'BEGIN{OFS=","} /,/ {NF=7; for(i=4;i<=7;i++) if($i=="") $i=5; print}' infile.csv

Is there a way to search an entire array inside of an argument?

Posted my code below, wondering if I can search one array for a match... or if there's a way I can search a Unix file inside of an argument.
#!/bin/bash
# store words in file
cat $1 | ispell -l > file
# move words in file into array
array=($(< file))
# remove temp file
rm file
# move already checked words into array
checked=($(< .spelled))
# print out words & ask for corrections
for ((i=0; i<${#array[@]}; i++ ))
do
if [[ ! ${array[i]} = ${checked[@]} ]]; then
read -p "' ${array[i]} ' is mispelled. Press "Enter" to keep
this spelling, or type a correction here: " input
if [[ ! $input = "" ]]; then
correction[i]=$input
else
echo ${array[i]} >> .spelled
fi
fi
done
echo "MISPELLED: CORRECTIONS:"
for ((i=0; i<${#correction[@]}; i++ ))
do
echo ${array[i]} ${correction[i]}
done
Otherwise, I would need to write a for loop to check each array index, and then somehow make a decision about whether to go through the loop and print/take input.
The usual shell incantation to do this is:
cat $1 | ispell -l | while read -r ln
do
    # read the correction from the terminal, since stdin here is the ispell pipe
    read -p "$ln is misspelled. Enter correction: " corrected < /dev/tty
    if [ ! "x$corrected" = x ] ; then
        ln=$corrected
    fi
    echo $ln
done > correctedwords.txt
The while;do;done is kind of like a function and you can pipe data into and out of it.
P.S. I didn't test the above code so there may be syntax errors
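If you do want to keep the already-checked words from .spelled in an array, as in the question, one alternative is to test membership explicitly before prompting. This is an untested sketch; contains is a hypothetical helper, not a standard command:
# return 0 if the first argument occurs among the remaining arguments
contains () {
    local word=$1 w
    shift
    for w in "$@"; do
        [ "$w" = "$word" ] && return 0
    done
    return 1
}

checked=($(< .spelled))
for ((i=0; i<${#array[@]}; i++ )); do
    if ! contains "${array[i]}" "${checked[@]}"; then
        read -p "' ${array[i]} ' is misspelled. Press Enter to keep this spelling, or type a correction here: " input
        if [ -n "$input" ]; then
            correction[i]=$input
        else
            echo "${array[i]}" >> .spelled
        fi
    fi
done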
