I am trying to write a bash script that generates UUIDs and writes them to a file as a JSON array. I have written a simple script that generates the IDs and writes them to a file, but I am running into some issues. Here is my own implementation:
function uud {
  for ((i=1; i<=$1; i++)); do
    echo "`uuidgen`", >> $file
  done
}

for file in file1.json file2.json; do
  $( uud $1 )
done
I am having trouble converting the IDs into strings and converting the results into an array. Currently my solution prints the IDs into the file in this format:
caca8fef-42d6-4b21-9d6b-0e40d348bd53,
e6e12fb5-4304-4ba9-b895-931bb4b58fbf,
df699ecd-d887-413e-8383-2a98ac2fa22f,
and this is what I want to achieve. How can I go about getting this result?
[
"caca8fef-42d6-4b21-9d6b-0e40d348bd53",
"e6e12fb5-4304-4ba9-b895-931bb4b58fbf",
"df699ecd-d887-413e-8383-2a98ac2fa22f"
]
Use jq to generate the JSON.
uud () {
  for ((i=0; i<$1; i++)); do
    uuidgen
  done
}
for file in file1.json file2.json; do
  uud 5 | jq -Rs 'rtrimstr("\n") | split("\n")' > "$file"
done
-Rs reads the entire input into a single JSON string, which you then split (after removing the trailing newline) on newlines to produce the desired array.
One blogger suggests using two jq processes, which is more expensive but arguably simpler:
for file in file1.json file2.json; do
  uud 5 | jq -R . | jq -s . > "$file"
done
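If jq is not available, the same array can also be assembled in pure bash. This is just a sketch, not the answer's method; `join_json` is a made-up helper name, and it performs no JSON escaping, which is fine for UUIDs (hex digits and dashes only):

```shell
# join_json: print its arguments as a JSON array of strings.
# No escaping is done, so arguments must not contain quotes or backslashes.
join_json() {
  local out="[" sep=""
  for item in "$@"; do
    out+="${sep}\"${item}\""
    sep=","
  done
  printf '%s]\n' "$out"
}

# Collect the UUIDs into a bash array first, then join once.
# (Falls back to placeholder strings if uuidgen is absent.)
ids=()
for ((i = 0; i < 3; i++)); do
  ids+=("$(uuidgen 2>/dev/null || echo "id-$i")")
done
join_json "${ids[@]}"
```

Collecting first and joining once avoids the trailing-comma problem that the append-in-a-loop approach runs into.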
I have slightly adjusted your script; try it:
function uud {
  echo "[" > "$file"
  for ((i=1; i<$1; i++)); do
    echo "\"$(uuidgen)\"," >> "$file"
  done
  echo "\"$(uuidgen)\"" >> "$file"  # last item in array without comma
  echo "]" >> "$file"
}

for file in file1.json file2.json; do
  uud "$1"
done
After you've run your existing loop...
#!/bin/sh
# Quote the heredoc delimiter so ed's $ addresses are not expanded by the shell.
cat > ed1 <<'EOF'
%s/^/"/
%s/,$/",/
$s/,$//
1i
[
.
$a
]
.
wq
EOF
ed -s file < ed1
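The ed approach above can also be sketched as a single GNU sed call that produces the bracketed array directly (the one-line `i\` and `a\` forms are GNU extensions; sample UUIDs taken from the question):

```shell
# Quote each line, keep the trailing comma outside the quotes,
# drop the comma on the last line, and wrap the whole thing in brackets.
printf '%s\n' 'caca8fef-42d6-4b21-9d6b-0e40d348bd53,' \
              'e6e12fb5-4304-4ba9-b895-931bb4b58fbf,' \
              'df699ecd-d887-413e-8383-2a98ac2fa22f,' > file  # sample input
sed -e 's/^/"/' -e 's/,$/",/' -e '$s/,$//' -e '1i\[' -e '$a\]' file
```

Unlike ed, sed does not modify the file in place here, so redirect the output wherever you need it.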
I want a script that reads each row of a CSV file called sample.csv, counts the number of fields in each row, and, if the count is more than a threshold (here, 14), stores either the whole line or just two of its fields in another file (Hello.bsd). The script I wrote is below:
while read -r line
do
  echo "$line" > tmp.kk
  count= $(awk -F, '{ print NF; exit }' ~/tmp.kk)
  if [ "$count" -gt 14 ]; then
    field1=$(echo "$line" | awk -F',' '{printf "%s", $1}' | tr -d ',')
    field2=$(echo "$line" | awk -F',' '{printf "%s", $2}' | tr -d ',')
    echo "$field1 $field2" >> Hello.bsd
  fi
done < ~/sample.csv
There is no output from the above code.
I would be so grateful if you could help me with this.
Best regards,
sina
FOR JUST THE FIRST 2 FIELDS
< sample.csv |
mawk 'NF=(_=(+__<NF))+_' FS=',' __="14" # enter constant or shell variable
SAMPLE OUTPUT
echo "${a}"
04z,Y7N,=TT,WLq,n54,cb8,qfy,LLG,ria,hIQ,Mmd,8N2,FK=,7a9,
us6,ck6,LvI,tnY,CQm,wBp,gPH,8ly,JAH,Phv,uwm,x1r,MF1,ide,
03I,GEs,Mok,BxK,z2D,IUH,VWn,Zb7,TkP,Ddt,RE9,mv2,XyD,tr5,
A2t,u0z,MLi,3RF,es1,goz,G0S,l=h,8Ka,coN,vHP,snk,tTV,xNF,
RiU,yBI,QrS,N6D,fWG,oOr,CwZ,9lb,f8h,g5I,c1u,D3X,kOo,lKG,
CSj,da4,Y54,S7R,AEj,Vqx,Fem,sqn,l4Z,YEA,OKe,6Bu,0xU,hGc,
1X8,jUD,XZM,pMc,Q6V,piz,6jp,SJp,E3W,zgJ,BuW,5wd,qVg,wBy,
TQC,O9k,RJ9,fie,2AV,XZ4,meR,tEC,U7v,JWH,LTs,ngF,3A3,ZPa,
ONJ,Phw,jrp,UvY,9Kb,qxf,57f,yHo,a0Q,2S=,=Ob,l1b,XjC
echo "${a}" | mawk 'NF=(_=(+__<NF))+_' FS=',' __="14"
04z Y7N
us6 ck6
03I GEs
A2t u0z
RiU yBI
CSj da4
1X8 jUD
TQC O9k
Note that the last line didn't print because it didn't meet the NF threshold.
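The compact mawk one-liner above can be spelled in more conventional awk. A sketch with made-up sample data (the threshold of 14 fields written out literally; only the first two fields of qualifying rows are printed):

```shell
# Print the first two fields of every row with more than 14 CSV fields.
printf '%s\n' \
  'a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15' \
  'b1,b2,b3' > sample.csv                     # sample input
awk -F',' 'NF > 14 { print $1, $2 }' sample.csv
```

Redirect with `>> Hello.bsd` to store the result, as in the question, or drop the action to print whole qualifying lines instead.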
In a text file, every line contains some number of words. It looks like:
split time not big
every cash flu green big
numer note word
swing crash car out fly sweet
How can I split those lines and store them in an array? I need to do something like this with the array:
for i in $file
do
echo "$array[0]"
echo "$array[2]"
done
Can anyone help?
You can read the file line by line with read and assign the line to an array. It's rather fragile and might break depending on the file content.
while read line; do
  array=( $line )
  echo "${array[0]}"
  echo "${array[2]}"
done < file
A better way to parse a text file is to use awk:
awk '{print $1; print $3}' file
I don't see why you need an array, then. You could just do this:
while IFS= read -r line; do
  read -r item1 item2 item3 <<< "$line"
  printf '%s\n%s\n' "$item1" "$item3"
done < "$file"
But if you want to, you can make read give you an array, too:
read -ra array <<< "$line"
printf '%s\n%s\n' "${array[0]}" "${array[2]}"
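Put together with the sample lines from the question, the `read -ra` variant can be sketched end to end (the choice of indices 0 and 2 matches the question's loop):

```shell
# Read each line, split it into an array, print the 1st and 3rd words.
printf '%s\n' 'split time not big' 'numer note word' > file  # sample input
while IFS= read -r line; do
  read -ra array <<< "$line"
  printf '%s\n%s\n' "${array[0]}" "${array[2]}"
done < file
```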
The below script:
#!/bin/bash
otscurrent="
AAA,33854,4528,38382,12
BBB,83917,12296,96213,13
CCC,20399,5396,25795,21
DDD,27198,4884,32082,15
EEE,2472,981,3453,28
FFF,3207,851,4058,21
GGG,30621,4595,35216,13
HHH,8450,1504,9954,15
III,4963,2157,7120,30
JJJ,51,59,110,54
KKK,87,123,210,59
LLL,573,144,717,20
MMM,617,1841,2458,75
NNN,234,76,310,25
OOO,12433,1908,14341,13
PPP,10627,1428,12055,12
QQQ,510,514,1024,50
RRR,1361,687,2048,34
SSS,1,24,25,96
TTT,0,5,5,100
UUU,294,1606,1900,85
"
IFS="," array1=(${otscurrent})
echo ${array1[4]}
Prints:
$ ./test.sh
12
BBB
I'm trying to get it to just print 12... and I'm not even sure how to print just row 5, column 4.
The variable is the output of a SQL query that has been parsed with several sed commands to convert the formatting to CSV.
otscurrent="$(sqlplus64 user/password@dbserverip/db as sysdba @query.sql |
sed '1,11d; /^-/d; s/[[:space:]]\{1,\}/,/g; $d' |
sed '$d'|sed '$d'|sed '$d' | sed '$d' |
sed 's/Used,MB/Used MB/g' |
sed 's/Free,MB/Free MB/g' |
sed 's/Total,MB/Total MB/g' |
sed 's/Pct.,Free/Pct. Free/g' |
sed '1b;/^Name/d' |
sed '/^$/d'
)"
Ultimately I would like to be able to call on a row and column and run statements on the values.
Initially I was piping that into:
awk -F "," 'NR>1{ if($5 < 10) { printf "%-30s%-10s%-10s%-10s%-10s\n", $1,$2,$3,$4,$5"%"; } else { echo "Nothing to do" } }')"
Which works, but I couldn't run commands from the if/else, or at least I didn't know how.
If you have bash 4.0 or newer, an associative array is an appropriate way to store data in this kind of form.
otscurrent=${otscurrent#$'\n'} # strip leading newline present in your sample data
declare -A data=( )
row=0
while IFS=, read -r -a line; do
  for idx in "${!line[@]}"; do
    data["$row,$idx"]=${line[$idx]}
  done
  (( row += 1 ))
done <<<"$otscurrent"
This lets you access each individual item:
echo "${data[0,0]}" # first field of first line
echo "${data[9,0]}" # first field of tenth line
echo "${data[9,1]}" # second field of tenth line
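Since the goal is to run statements on the values, here is a self-contained sketch building on the `data` array above (two sample rows inlined so it runs standalone; the "at least 50%" check is just an illustrative assumption):

```shell
# Populate data[row,col] from CSV text, as in the answer above.
otscurrent=$'AAA,33854,4528,38382,12\nSSS,1,24,25,96'
declare -A data=( )
row=0
while IFS=, read -r -a line; do
  for idx in "${!line[@]}"; do
    data["$row,$idx"]=${line[$idx]}
  done
  (( row += 1 ))
done <<<"$otscurrent"

# Run a statement per row: flag rows whose 5th column (index 4) is >= 50.
for ((r = 0; r < row; r++)); do
  if [ "${data[$r,4]}" -ge 50 ]; then
    echo "${data[$r,0]} is at ${data[$r,4]}%"
  fi
done
```

The same loop shape works for any per-row logic, since `row` records how many rows were read.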
"I'm trying to get it to just print 12..."
The issue is that IFS="," splits on commas and there is no comma between 12 and BBB. If you want those to be separate elements, add a newline to IFS. Thus, replace:
IFS="," array1=(${otscurrent})
With:
IFS=$',\n' array1=(${otscurrent})
Output:
$ bash test.sh
12
All you need to print the value of the 4th column on the 5th row is:
$ awk -F, 'NR==5{print $4}' <<< "$otscurrent"
3453
and just remember that in awk row (record) and column (field) numbers start at 1, not 0. Some more examples:
$ awk -F, 'NR==1{print $5}' <<< "$otscurrent"
12
$ awk -F, 'NR==2{print $1}' <<< "$otscurrent"
BBB
$ awk -F, '$5 > 50' <<< "$otscurrent"
JJJ,51,59,110,54
KKK,87,123,210,59
MMM,617,1841,2458,75
SSS,1,24,25,96
TTT,0,5,5,100
UUU,294,1606,1900,85
If you'd like to avoid all of the complexity and simply parse your SQL output to produce what you want without 20 sed commands in between, post a new question showing the raw sqlplus output as the input and what you want finally output and someone will post a brief, clear, simple, efficient awk script to do it all at one time, or maybe 2 commands if you still want an intermediate CSV for some reason.
I'm trying to create an array in bash from a file with the following sample format:
data, data, interesting
data, data, more interesting
The way I'm populating the arrays:
read -r -a DATA1 <<< $(cat $FILE | awk -F, '{ print $1 }')
read -r -a DATA2 <<< $(cat $FILE | awk -F, '{ print $2 }')
read -r -a DATA3 <<< $(cat $FILE | awk -F, '{ print $3 }')
When I examine the array DATA3, there are 3 elements:
interesting
more
interesting
I need it to show only 2 elements like:
interesting
more interesting
How can I preserve the white space in field 3 so when I call that element from the array, it appears as "more interesting"? Is there a better way to handle this?
The key is to use the IFS (internal field separators) variable together with read.
read can read words directly into an array using the -a option. Set IFS=, and it will split at comma:
while read f; do
  echo "$f" | (IFS=, && read -a arr && echo "${arr[2]}")
done
will echo
interesting
more interesting
You can also read directly into variable names:
IFS=, && read -r f1 f2 f3
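For example, with a line in the question's format (a sketch: IFS is set to comma-plus-space so the blank after each comma is stripped, and the last variable absorbs the rest of the line, spaces included):

```shell
# Split one line from the question into three named variables.
IFS=', ' read -r f1 f2 f3 <<< 'data, data, more interesting'
echo "$f3"   # prints: more interesting
```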
EDIT: I can recommend reading Advanced Bash Scripting Guide.
Use cut. Example:
read -r -a DATA1 <<< $(cut -d, -f 1 $FILE)
read -r -a DATA2 <<< $(cut -d, -f 2 $FILE)
read -r -a DATA3 <<< $(cut -d, -f 3 $FILE)
IFS=','
while read -a array
do
  echo "${array[2]}"
done < data.csv
>> Is there a better way to handle this? - Tee Bone
$ awk -F", " '{print $3}' file
interesting
more interesting