KSH Verifying if a number exists in a list - arrays

I have a list of numbers, around 200, and at the beginning of my ksh script I want to verify whether parameter 1 is one of these numbers.
I solved this with a big if, but I think a more elegant solution must exist.
For example, something like this, but in ksh:
if $1 in (50, 28, 500, 700, 1, 47) then
do what I want
else
exit
end if
Any ideas to get me started?
Thanks.
Luis

I found the solution
case $1 in ( 50 | 28 | 500 | 700 | 1 | 47 )
echo "Found!"
;;
*)
echo "NOT found!"
;;
esac
Thanks!

The case statement works for short lists; if the list changes or is long, that can get ugly in a hurry. Another idea is to use an associative array. I set up a list of 100 random numbers in the file rand.txt and ran this script to check for numbers on the list:
typeset -A numList
for num in $( < rand.txt )
do
numList[$num]=$num
done
if [[ -n ${numList[$1]} ]]
then
echo "do what I want"
else
echo 'not interesting'
fi
If you don't want a separate file with the numbers, this also works:
typeset -A numList
while read num
do
numList[$num]=$num
done <<EOF
72
107
104
82
20
21
EOF
if [[ -n ${numList[$1]} ]]
then
echo "do what I want"
else
echo 'not interesting'
fi
These also work in bash. Note that the here-document is redirected into the loop rather than piped into it: in bash, each stage of a pipeline runs in a subshell, so a piped while loop would lose the assignments to numList.
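For a short fixed list, the lookup table can also be built inline, with no file or here-document at all. A minimal bash sketch (the function name check is illustrative, not from the original post):

```shell
#!/usr/bin/env bash
# Build a lookup table from an inline list of numbers.
declare -A numList
for num in 50 28 500 700 1 47; do
    numList[$num]=$num
done

# Membership test: the key exists iff the stored value is non-empty.
check() {
    if [[ -n ${numList[$1]} ]]; then
        echo "Found!"
    else
        echo "NOT found!"
    fi
}

check 500   # Found!
check 999   # NOT found!
```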

Related

Unix command to add 2 numbers from 2 separate files and write it to a 3rd file

I have 2 files. I need to add the row counts of both and write the sum to a 3rd file.
If the value in the 3rd file is >25, I need to create error.txt, and if <=25, create success.txt.
Scenario:
file 1(p.csv) , count: 20
file 2 (m.csv), count : 5
file 3 , count 25
--->should print error.txt
I have tried the below code, but file 3 is not getting generated with expected output.
file_1=$(cat p.csv | wc -l)
echo $file_1
file_2=$(cat m.csv | wc -l)
echo $file_2
file_3 = $(`expr $(file_1) + $(file_2)`)
echo $file_3 > file_3.txt
if [ $file_3 -gt 25 ]; then
touch error.txt
else
touch success.txt
fi
Error message:
20
5
count_test.sh: line 16: file_1: command not found
count_test.sh: line 16: file_2: command not found
expr: syntax error
count_test.sh: line 16: file_3: command not found
count_test.sh: line 20: [: -gt: unary operator expected
Some fixes are required, here is my version:
#!/bin/bash
file_1=$(wc -l p.csv | cut -d' ' -f1)
echo "file_1=$file_1"
file_2=$(wc -l m.csv | cut -d' ' -f1)
echo "file_2=$file_2"
file_3=$(( file_1 + file_2 ))
echo "file_3=$file_3"
if [[ $file_3 -gt 25 ]]
then
echo "ERROR"
touch error.txt
else
echo "Success"
touch success.txt
fi
The arithmetic line was modified to use the $(( )) syntax.
You do not need file_3.txt for the if. If you require it for some other reason, you can put back the echo "$file_3" > file_3.txt line.
I added a couple of echo statements for debugging purposes.
Some errors you made:
echo $file_1
# Not really wrong, but use quotes
echo "$file_1"
Don't use spaces around = in assignments, don't use backticks, and use {} around variables:
file_3 = $(`expr $(file_1) + $(file_2)`)
# change this to
file_3=$(expr ${file_1} + ${file_2})
# and consider using (( a = b + c )) for calculations
When you only want the final result, consider:
if (( $(cat [pm].csv | wc -l) > 25 )); then
touch error.txt
else
touch success.txt
fi
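The fixed pipeline can be sketched end to end with generated sample files standing in for p.csv and m.csv (the generated row counts match the question's scenario; everything else follows the corrected script above):

```shell
#!/usr/bin/env bash
# Sample stand-ins for p.csv (20 rows) and m.csv (5 rows).
printf '%s\n' row{1..20} > p.csv
printf '%s\n' row{1..5}  > m.csv

# Redirecting into wc -l keeps the filename out of the output,
# so no cut post-processing is needed.
file_1=$(wc -l < p.csv)
file_2=$(wc -l < m.csv)
file_3=$(( file_1 + file_2 ))
echo "file_3=$file_3"

if (( file_3 > 25 )); then
    touch error.txt
else
    touch success.txt
fi
```

With 20 + 5 = 25 rows this takes the success branch, matching the -gt 25 test in the answer; the question's scenario text treats 25 as an error, so the comparison may need to be >= 25 depending on the intended rule.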

Read lines looping over array which contains filenames in bash

In bash, I would like to loop over a previously defined array which contains filenames. In turn, each file in the array must be read and processed dynamically (while read line...).
This is an example of what the files of the array contains:
_VALUE1_,_VALUE1_,1,Name 1
_VALUE2_,_VALUE2_,1,Name 2
_VALUE3_,_VALUE3_,1,Name 3
_VALUE4_,_VALUE4_,1,Name 4
_VALUE5_,_VALUE5_,1,Name 5
This is what I've tested with no luck.
#!/bin/bash
. functions.sh
GEN_ARQ_ARRAY=("./cfg_file.txt" "./euro_file.txt" "./zl_file.txt")
WB_ARQ_ARRAY=("./rn_cfg_wb_file.txt" "./rn_eur_wb_file.txt" "./rn_zl_wb_file.txt")
BN_ARQ_ARRAY=("./rn_cfg_bn_file.txt" "./rn_eur_bn_file.txt" "./rn_zl_bn_file.txt")
AM_ARQ_ARRAY=("./rn_cfg_am_file.txt" "./rn_eur_am_file.txt" "./rn_zl_am_file.txt")
STATUS_BOOL=true
for i in "${!GEN_ARQ_ARRAY[@]}"; do
while IFS=$'\r' read -r line || [[ -n "$line" ]];do
STACK_NAME=${line%%,*} # Gets the first substring of a string divided by ','
STACK_STATUS=$(curl -su "${USERNAME}":"${PASSWORD}" -X GET http://"${SERVER_NAME}":9100/api/stacks/"${STACK_NAME}"/state | ./jq-linux64 -cr '.result.value')
if [[ $(echo "$STACK_STATUS" | tr -d '\r') == "$STATUS_BOOL" ]]; then
echo "${line}" >> "${GEN_ARQ_ARRAY[i]}"
case ${line} in
*"ARQBS"*|*"ARCBS"*|*"ARQWB"*|*"ARCWB"*) echo "${line}" >> "${WB_ARQ_ARRAY[$i]}";;
*"ARQOF"*|*"ARCOF"*|*"ARQBN"*|*"ARCBN"*) echo "${line}" >> "${BN_ARQ_ARRAY[$i]}";;
*"ARQAM"*|*"ARCAM"*) echo "${line}" >> "${AM_ARQ_ARRAY[$i]}";;
*) echo "$(logWarn) No matches -- ${STACK_NAME}" | tee -a "$LOGFILE";;
esac
else
echo "$(logInfo) ${STACK_NAME} is not running" | tee -a "$LOGFILE"
fi
done < "${GEN_ARQ_ARRAY[i]}"
done
The problem here is that the for loop starts, detects the array content, gets the first value of the array, enters the while loop, and then loops on the first position of the array forever, even when the end of the file is reached. I can't find a way to exit the while loop and continue with the next array position.
I'm pretty sure there is a better way to implement this.
Looking forward to your ideas!
Edit:
Solved by removing the line echo "${line}" >> "${GEN_ARQ_ARRAY[i]}", which was filling up the file from inside the loop that reads it. Once I did, the code worked flawlessly.

Value not getting assigned to array element Bash script

I am working on the below script.
I don't understand why I am unable to assign values to the array: when I print the array elements I only get arr[0], and the rest of the elements are empty.
Is there a more efficient way to match l_num between the fields rt_h and rt_f extracted from the rt_facts file?
$ bash -version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
$ cat rt_facts
T5 1 2:8 47:9 44
T14 2 48:8 93:9 44
T15 3 94:8 96:9 1
here is the filtered file:
$cat filtered_file
12 4046580009982686 05072021 24692161126100379438583 44442
54 4046580009982686 05072021 24692161126100379438583 44442
95 4046580009982686 05072021 24692161126100379438583 44442
In this script, I am trying to match a string in the filtered file using line number from the original file.
bash-4.1$ vi comb_rt_ct.ksh
#!/bin/bash
rows=1
cols=$(($(cat rt_facts | wc -l) + 1))
declare -a arr #=( $(for i in {1..$cols}; do echo 0; done) )
echo "cols " $cols
for l_num in $(grep '4046580009982686 05072021 24692161126100379438583 44442' filtered_file | cut -d" " -f 1)
do
arr[0]="4046580009982686 05072021 24692161126100379438583 44442"
echo "l_num " $l_num
cat rt_facts | while IFS=" " read -r lineStr
do
line=( $lineStr )
#echo ${line[*]}
rt_h=$(echo ${line[2]} | cut -d":" -f 1)
rt_f=$(echo ${line[3]} | cut -d":" -f 1)
if (( l_num > $rt_h && l_num < $rt_f )); then
echo "rt_h rt_f " $rt_h " " $rt_f
echo "line[*] " ${line[*]}
i=${line[1]}
echo "i " $i
if [[ -z "${arr[$i]}" ]]; then
echo "empty"
arr[$i]=0
fi
(( arr[$i]++ ))
echo "arr[$i] "${arr[$i]}
#echo ${line[0]}
break
fi
done
echo
done
echo ${arr[@]}
echo ${arr[*]}
echo ${arr[2]}
Here is the output when I run the script:
bash-4.1$ sh comb_rt_ct.ksh
cols 4
l_num 12
rt_h rt_f 2 47
line[*] T5 1 2:8 47:9 44
i 1
empty
arr[1] 1
l_num 54
rt_h rt_f 48 93
line[*] T14 2 48:8 93:9 44
i 2
empty
arr[2] 1
l_num 95
rt_h rt_f 94 96
line[*] T15 3 94:8 96:9 1
i 3
empty
arr[3] 1
4046580009982686 05072021 24692161126100379438583 44442
4046580009982686 05072021 24692161126100379438583 44442
bash-4.1$
As @glennjackman has pointed out, the question needs to be updated to provide a minimal, reproducible example; so I'm skipping trying to reverse engineer the intent of the script (or figure out the content of the file named filtered_file) and instead focus on one item of interest:
cat rt_facts | while IFS=" " read
...
<assign_values_to_array>
...
done
The | while ... invokes a subshell; new values are assigned to the arr[] array within this subshell.
In ksh, when exiting the subshell, the newly updated arr[] values are (effectively) passed to the calling/parent process.
In bash, when exiting the subshell, the newly updated arr[] values are discarded when returning to the calling/parent process.
Because OP's code is running under bash (#!/bin/bash), the changes to arr[] are lost upon exiting the subshell; this in turn means the calling/parent process 'still' has the same initial value in arr[], namely:
arr[0]="4046580009982686 05072021 24692161126100379438583 44442"
One method to address this issue, which should work in ksh and bash:
while IFS=" " read
...
<assign_value_to_array>
...
done < rt_facts
This eliminates the subshell invocation and therefore allows the new array assignments to be made in the main process.
Again, I haven't attempted to understand the overall logic of the script, but fwiw ... I'd suggest OP make the suggested code change to the while loop (to eliminate the subshell invocation) to see if the arr[] array is populated with the desired results.
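The difference between the piped and redirected forms can be demonstrated with a small bash sketch (array and variable names here are illustrative):

```shell
#!/usr/bin/env bash
arr=()

# Piped form: the while loop runs in a subshell in bash,
# so the appends to arr are lost when the pipeline ends.
printf '%s\n' a b c | while read -r x; do
    arr+=("$x")
done
n_after_pipe=${#arr[@]}
echo "after pipe: $n_after_pipe elements"         # 0 in bash

# Redirected form: the loop runs in the current shell,
# so the appends survive.
while read -r x; do
    arr+=("$x")
done < <(printf '%s\n' a b c)
echo "after redirection: ${#arr[@]} elements"     # 3
```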

Fetching data into an array

I have a file like this below:
-bash-4.2$ cat a1.txt
0 10.95.187.87 5444 up 0.333333 primary 0 false 0
1 10.95.187.88 5444 up 0.333333 standby 1 true 0
2 10.95.187.89 5444 up 0.333333 standby 0 false 0
I want to fetch the data from the above file into a 2D array.
Can you please help me with a suitable way to put it into an array.
After populating it, we also need a condition to check whether the value in the 4th column is up or down. If it's up then OK; if it's down then the below command needs to be executed.
-bash-4.2$ pcp_attach_node -w -U pcpuser -h localhost -p 9898 0
(The value at the end is fetched from the 1st column.)
You could try something like this:
while read -r line; do
declare -a array=( $line ) # use IFS
echo "${array[0]}"
echo "${array[1]}" # and so on
if [[ "${array[3]}" == "down" ]]; then
echo execute command...
fi
done < a1.txt
Or:
while read -r -a array; do
if [[ "${array[3]}" == "down" ]]; then
echo execute command...
fi
done < a1.txt
This only works if the fields are space-separated (any kind of whitespace).
You could probably mix that with regexp if you need more precise control of the format.
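A runnable sketch of the second variant against sample data in the a1.txt layout; the real pcp_attach_node call is replaced with an echo placeholder, and the down_nodes array is an addition for collecting the matched node ids:

```shell
#!/usr/bin/env bash
# Sample data in the same layout as a1.txt.
cat > a1.txt <<'EOF'
0 10.95.187.87 5444 up 0.333333 primary 0 false 0
1 10.95.187.88 5444 down 0.333333 standby 1 true 0
2 10.95.187.89 5444 up 0.333333 standby 0 false 0
EOF

down_nodes=()
while read -r -a fields; do
    # fields[3] is the up/down column, fields[0] the node id.
    if [[ ${fields[3]} == "down" ]]; then
        down_nodes+=("${fields[0]}")
        echo "would run: pcp_attach_node -w -U pcpuser -h localhost -p 9898 ${fields[0]}"
    fi
done < a1.txt
```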
Firstly, I don't think you can have 2D arrays in bash, but you can store lines in a 1-D array.
Here is a script, parse1a.sh, to demonstrate emulation of 2D arrays for the type of data you included:
#!/bin/bash
function get_element () {
line=${ARRAY[$1]}
echo $line | awk "{print \$$(($2+1))}" #+1 since awk is one-based
}
function set_element () {
line=${ARRAY[$1]}
declare -a SUBARRAY=($line)
SUBARRAY[$(($2))]=$3
ARRAY[$1]="${SUBARRAY[*]}"
}
ARRAY=()
while IFS='' read -r line || [[ -n "$line" ]]; do
#echo $line
ARRAY+=("$line")
done < "$1"
echo "Full array contents printout:"
printf "%s\n" "${ARRAY[@]}" # Full array contents printout.
for line in "${ARRAY[@]}"; do
#echo $line
if [ "$(echo $line | awk '{print $4}')" == "down" ]; then
echo "Replace this with what to do for down"
else
echo "...and any action for up - if required"
fi
done
echo "Element access of [2,3]:"
echo "get_element 2 3 : "
get_element 2 3
echo "set_element 2 3 left: "
set_element 2 3 left
echo "get_element 2 3 : "
get_element 2 3
echo "Full array contents printout:"
printf "%s\n" "${ARRAY[@]}" # Full array contents printout.
It can be executed by:
./parse1a.sh a1.txt
Hope this is close to what you are looking for. Note that this code will lose all indenting spaces during manipulation, but a formatted update of the lines could solve that.

Bash function with array won't work

I am trying to write a function in bash but it won't work. The function is as follows, it gets a file in the format of:
1 2 first 3
4 5 second 6
...
I'm trying to access only the 3rd word in every line and fill the array "arr" with those strings, without repeating identical ones.
When I activated the "echo" command right after the for loop, it printed only the first string in every iteration (in the above case, "first").
Thank you!
function storeDevNames {
n=0
b=0
while read line; do
line=$line
tempArr=( $line )
name=${tempArr[2]}
for i in $arr ; do
#echo ${arr[i]}
if [ "${arr[i]}" == "$name" ]; then
b=1
break
fi
done
if [ "$b" -eq 0 ]; then
arr[n]=$name
n=$(($n+1))
fi
b=0
done < $1
}
The following line seems suspicious
for i in $arr ; do
I changed it as follows and it works for me:
#! /bin/bash
function storeDevNames {
n=0
b=0
while read line; do
# line=$line # ?!
tempArr=( $line )
name=${tempArr[2]}
for i in "${arr[@]}" ; do
if [ "$i" == "$name" ]; then
b=1
break
fi
done
if [ "$b" -eq 0 ]; then
arr[n]=$name
(( n++ ))
fi
b=0
done
}
storeDevNames < <(cat <<EOF
1 2 first 3
4 5 second 6
7 8 first 9
10 11 third 12
13 14 second 15
EOF
)
echo "${arr[@]}"
You can replace all of your read block with:
arr=( $(awk '{print $3}' <"$1" | sort | uniq) )
This will fill arr with only the unique names from the 3rd word, such as first, second, ... This reduces the entire function to:
function storeDevNames {
arr=( $(awk '{print $3}' <"$1" | sort | uniq) )
}
Note: this will provide a list of all unique device names in sorted order. Removing duplicates this way also destroys the original order. If you need to preserve the order in which names first appear, see 4ae1e1's alternative.
You're using the wrong tool. awk is designed for this kind of job.
awk '{ if (!seen[$3]++) print $3 }' <"$1"
This one-liner prints the third column of each line, removing duplicates along the way while preserving the order of lines (only the first occurrence of each unique string is printed). sort | uniq, on the other hand, breaks the original order of lines. This one-liner is also faster than using sort | uniq (for large files, which doesn't seem to be applicable in OP's case), since this one-liner linearly scans the file once, while sort is obviously much more expensive.
As an example, for an input file with contents
1 2 first 3
4 5 second 6
7 8 third 9
10 11 second 12
13 14 fourth 15
the above awk one-liner gives you
first
second
third
fourth
To put the results in an array:
arr=( $(awk '{ if (!seen[$3]++) print $3 }' <"$1") )
Then echo "${arr[@]}" will give you first second third fourth.
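To avoid word-splitting and glob surprises when filling the array from command output, bash 4's mapfile can be used instead of the bare arr=( $(...) ) form. A sketch with inline sample data (the filename devlist.txt is illustrative):

```shell
#!/usr/bin/env bash
# Sample input in the question's format.
cat > devlist.txt <<'EOF'
1 2 first 3
4 5 second 6
7 8 first 9
10 11 third 12
EOF

# mapfile reads one array element per input line, preserving order;
# the awk filter keeps only the first occurrence of each 3rd field.
mapfile -t arr < <(awk '!seen[$3]++ { print $3 }' devlist.txt)
echo "${arr[@]}"   # first second third
```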
