awk split string on commas ignore if inside double quotes - arrays

I know it may sound like there are 2000 answers to this question online, but I found none for this specific case (e.g. the -vFPAT approach in this and other answers) because I need to do it with split. I have to split a CSV file with awk in which some values may be inside double quotes. I need to tell the split function to ignore , if it is inside "" in order to get an array of the elements.
Here is what I tried, based on other answers, as an example:
cat try.txt
Hi,I,"am,your",father
maybe,you,knew,it
but,"I,wanted",to,"be,sure"
cat tst.awk
BEGIN {}
{
    n_a = split($0,a,/([^,]*)|("[^"]+")/);
    for (i=1; i<=n_a; i++) {
        collecter[NR][i]=a[i];
    }
}
END {
    for (i=1; i<=length(collecter); i++)
    {
        for (z=1; z<=length(collecter[i]); z++)
        {
            printf "%s\n", collecter[i][z];
        }
    }
}
but no luck:
awk -f tst.awk try.txt
,
,
,
,
,
,
,
,
,
I tried other regex expressions based on other similar answers, but none work for this particular case.
Please note: double-quoted fields may or may not be present, there may be more than one, and they have no fixed position/length!
Thanks in advance for any help!

GNU awk has a function called patsplit that lets you do a split using an FPAT pattern:
$ awk '{ print "RECORD " NR ":"; n=patsplit($0, a, "([^,]*)|(\"[^\"]+\")"); for (i=1;i<=n;++i) {print i, "|" a[i] "|"}}' file
RECORD 1:
1 |Hi|
2 |I|
3 |"am,your"|
4 |father|
RECORD 2:
1 |maybe|
2 |you|
3 |knew|
4 |it|
RECORD 3:
1 |but|
2 |"I,wanted"|
3 |to|
4 |"be,sure"|

If Python is an alternative, here is a solution:
try.txt:
Hi,I,"am,your",father
maybe,you,knew,it
but,"I,wanted",to,"be,sure"
Python snippet:
import csv

with open('try.txt') as f:
    reader = csv.reader(f, quoting=csv.QUOTE_ALL)
    for row in reader:
        print(row)
The code snippet above will result in:
['Hi', 'I', 'am,your', 'father']
['maybe', 'you', 'knew', 'it']
['but', 'I,wanted', 'to', 'be,sure']

Related

awk calculate euclidean distance results in wrong output

I have this small geo location dataset.
37.9636140,23.7261360
37.9440840,23.7001760
37.9637190,23.7258230
37.9901450,23.7298770
From a random location, for example this one: 37.97570, 23.66721
I need to create a bash command with awk that returns the distances using simple Euclidean distance.
This is the command I use:
awk -v OFMT=%.17g -F',' -v long=37.97570 -v lat=23.66721 '{for (i=1;i<=NR;i++) distances[i]=sqrt(($1 - long)^2 + ($2 - lat)^2 ); a[i]=$1; b[i]=$2} END {for (i in distances) print distances[i], a[i], b[i]}' filename
When I run this command I get this weird result, which is not correct. Could someone explain to me what I am doing wrong?
➜ awk -v OFMT=%.17g -F',' -v long=37.97570 -v lat=23.66721 '{for (i=1;i<=NR;i++) distances[i]=sqrt(($1 - long)^2 + ($2 - lat)^2 ); a[i]=$1; b[i]=$2} END {for (i in distances) print distances[i], a[i], b[i]}' filename
44,746962127881936 37.9440840 23.7001760
44,746962127881936 37.9901450 23.7298770
44,746962127881936 37.9636140 23.7261360
44,746962127881936
44,746962127881936 37.9637190 23.7258230
Update:
I ran the command that #jas provided, and I included od -c output as #mark-fuso suggested.
The issue now is that I get different results from #jas.
Command output which showcases the new issue:
awk -v OFMT=%.17g -F, -v long=37.97570 -v lat=23.66721 '
{distance=sqrt(($1 - long)^2 + ($2 - lat)^2 ); print distance, $1, $2}
' file
1,1820150904705098 37.9636140 23.7261360
1,1820150904705098 37.9440840 23.7001760
1,1820150904705098 37.9637190 23.7258230
1,1820150904705098 37.9901450 23.7298770
od -c output showing the content of the input file:
od -c file
0000000 3 7 . 9 6 3 6 1 4 0 , 2 3 . 7 2
0000020 6 1 3 6 0 \n 3 7 . 9 4 4 0 8 4 0
0000040 , 2 3 . 7 0 0 1 7 6 0 \n 3 7 . 9
0000060 6 3 7 1 9 0 , 2 3 . 7 2 5 8 2 3
0000100 0 \n 3 7 . 9 9 0 1 4 5 0 , 2 3 .
0000120 7 2 9 8 7 7 0 \n
0000130
While #jas has provided a 'fix' for the problem, I thought I'd throw in a few comments about what OP's code is doing ...
Some basics ...
the awk program ({for (i=1;i<=NR;i++) ... ; b[i]=$2}) is applied against each row of the input file
as each row is read from the input file the awk variable NR keeps track of the row number (ie, NR=1 for the first row, NR=2 for the second row, etc)
on the last pass through the for loop the counter (i in this case) will have a value of NR+1 (ie, the i++ is applied on the last pass through the loop thus leaving i=NR+1)
unless there are conditional checks for each line of input the awk program will apply against every line from the input file (including blank lines - more on this below)
for (i in distances)... isn't guaranteed to process the array indices in numerical order (see the sketch after this list)
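For example, one portable way to force the END output into row order, rather than relying on for (i in distances), is to walk the indices explicitly (a sketch; it only assumes the array is indexed 1..NR):
END {
    # visit indices in numeric order instead of awk's unspecified "in" order
    for (i=1; i<=NR; i++)
        if (i in distances)
            print distances[i], a[i], b[i]
}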
The awk/for loop is doing the following:
for the 1st input row (NR=1) we get for (i=1;i<=1;i++) ...
for the 2nd input row (NR=2) we get for (i=1;i<=2;i++) ...
for the 3rd input row (NR=3) we get for (i=1;i<=3;i++) ...
for the 4th input row (NR=4) we get for (i=1;i<=4;i++) ...
For each row processed by awk the program will overwrite all previous entries in the distances[] array; the net result is that the last row (NR=4) places the same value in all 4 entries of the distances[] array.
The a[i]=$1; b[i]=$2 array assignments occur outside the scope of the for loop, so these will be assigned once per input row (ie, will not be overwritten); however, the assignments are made with i=NR+1; the net result is that the contents of the 1st row (NR=1) are stored in array entries a[2] and b[2], the contents of the 2nd row (NR=2) are stored in array entries a[3] and b[3], etc.
Modifying OP's code with print i, distances[i], a[i], b[i] and running against the 4-line input file I get:
1 0.064310270672728084 # no data for 2nd/3rd columns because a[1] and b[1] are never set
2 0.064310270672728084 37.9636140 23.7261360 # 2nd/3rd columns are from 1st row of input
3 0.064310270672728084 37.9440840 23.7001760 # 2nd/3rd columns are from 2nd row of input
4 0.064310270672728084 37.9637190 23.7258230 # 2nd/3rd columns are from 3rd row of input
From this we can see the first column of output is the same (ie, distance[1]=distance[2]=distance[3]=distance[4]), while the 2nd and 3rd columns are the same as the input columns except they are shifted 'down' by one row.
That leaves us with two outstanding issues ...
why does OP show 5 lines of output?
why does the first column consist of the garbage value 44,746962127881936?
I was able to reproduce this issue by adding a blank line on the end of my input file:
$ cat geo.dat
37.9636140,23.7261360
37.9440840,23.7001760
37.9637190,23.7258230
37.9901450,23.7298770
<<=== blank line !!
Which generates the following with OP's awk code:
44.746962127881936
44.746962127881936 37.9636140 23.7261360
44.746962127881936 37.9440840 23.7001760
44.746962127881936 37.9637190 23.7258230
44.746962127881936 37.9901450 23.7298770
NOTES:
this order is different from OP's sample output and is likely due to OP's awk version not processing for (i in distances)... in numerical order; OP can try something like for (i=1;i<=NR;i++)... or for (i=1;i in distances;i++)... (though the latter will not work correctly for a sparsely populated array)
OP's output (in the question; in a comment to #jas' answer) shows a comma (,) in place of the period (.) for the first column, so I'm guessing OP's env is using a locale that swaps the comma/period as thousands/decimal delimiters (though the input data is based on an 'opposite' locale)
Notice we finally get to see the data from the 4th line of input (shifted 'down' and displayed on line 5) but the first column has what appears to be a nonsensical value ... which can be tracked back to applying the following against a blank line:
sqrt(($1 - long)^2 + ($2 - lat)^2 )
sqrt(( - long)^2 + ( - lat)^2 ) # empty line => $1 = $2 = undefined/empty
sqrt(( - 37.97570)^2 + ( - 23.66721)^2 )
sqrt( 1442.153790 + 560.136829 )
sqrt( 2002.290619 )
44.746962... # contents of 1st column
To 'fix' this issue the OP can either a) remove the blank line from the input file or b) add some logic to the awk script to only perform calculations if the input line has (numeric) values in fields #1 & #2 (ie, $1 and $2 are not empty); it's up to the coder to decide on how much validation to apply (eg, are the fields numeric, are the fields within the bounds of legitimate long/lat values, etc).
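A minimal sketch of option b), using the same arrayless form that jas' answer (below) uses; the only assumption here is that a valid row has exactly two non-empty comma-separated fields:
$ awk -v OFMT=%.17g -F, -v long=37.97570 -v lat=23.66721 '
    NF == 2 && $1 != "" && $2 != "" {     # skip blank or malformed lines
        print sqrt(($1 - long)^2 + ($2 - lat)^2), $1, $2
    }' geo.dat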
One last design-related comment ... as demonstrated in jas' answer, there is no need for any of the arrays (which in turn reduces memory usage) when all desired output can be generated 'on-the-fly' while processing each line of the input file.
Awk takes care of the looping for you. The code will be run in turn for each line of the input file:
$ awk -v OFMT=%.17g -F, -v long=37.97570 -v lat=23.66721 '
{distance=sqrt(($1 - long)^2 + ($2 - lat)^2 ); print distance, $1, $2}
' file
0.060152679674309095 37.9636140 23.7261360
0.045676346307474212 37.9440840 23.7001760
0.059824979147508742 37.9637190 23.7258230
0.064310270672728084 37.9901450 23.7298770
EDIT:
OP is getting different results. I notice in OP's output that there are commas instead of decimal points when printing the distance. This points to a possible issue with the locale setting.
OP confirms that the locale was set to Greek, causing the difference in output.
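If the comma decimals themselves are unwanted, one possible workaround (not part of jas' original answer, and only relevant for awk builds that honour the locale) is to force the C locale for the awk invocation so numeric output uses periods:
$ LC_ALL=C awk -v OFMT=%.17g -F, -v long=37.97570 -v lat=23.66721 '
    {distance=sqrt(($1 - long)^2 + ($2 - lat)^2 ); print distance, $1, $2}
    ' file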

Merging CSV file lines with the same initial fields and sorting them by their length

I have a huge csv file with 4 fields for each line in this format (ID1, ID2, score, elem):
HELLO, WORLD, 2323, elem1
GOODBYE, BLUESKY, 3232, elem2
HELLO, WORLD, 421, elem3
GOODBYE, BLUESKY, 41134, elem4
ETC...
I would like to merge all lines which have the same ID1,ID2 fields onto one line, eliminating the score field, resulting in:
HELLO, WORLD, elem1, elem3.....
GOODBYE, BLUESKY, elem2, elem4.....
ETC...
where each elem comes from a different line with the same ID1,ID2.
After that I would like to sort the lines on the basis of their length.
I have tried coding it in Java but it is super slow. I have read online about AWK, but I can't really find a good spot where I can understand its syntax for CSV files.
I used this command; how can I adapt it to my needs?
awk -F',' 'NF>1{a[$1] = a[$1]","$2}END{for(i in a){print i""a[i]}}' finale.txt > finale2.txt
Your key should be composite; also, the delimiter needs to be set to accommodate the comma and spaces.
$ awk -F', *' -v OFS=', ' '{k=$1 OFS $2; a[k]=k in a?a[k] OFS $4:$4}
END{for(k in a) print k, a[k]}' file
GOODBYE, BLUESKY, elem2, elem4
HELLO, WORLD, elem1, elem3
Explanation
Set the field separator (FS) to a comma followed by one or more spaces, and the output field separator (OFS) to the normalized form (comma plus one space). Create a composite key from the first two fields, separated with OFS (since we're going to use it in the output). Append the fourth field to the array element indexed by the key (treat the first element specially since we don't want to start with OFS). When all records are done (END block), print all keys and values.
To add the length, keep a parallel counter and increment it each time you append for a key (c[k]++), then use it when printing. That is,
$ awk -F', *' -v OFS=', ' '{k=$1 OFS $2; c[k]++; a[k]=k in a?a[k] OFS $4:$4}
END{for(k in a) print k, c[k], a[k]}' file |
sort -t, -k3n
GOODBYE, BLUESKY, 2, elem2, elem4
HELLO, WORLD, 2, elem1, elem3
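If the requirement is literally to sort by the length of the merged lines rather than by the element count, one hedged variation is to prefix each line with its length, sort numerically, and strip the prefix again:
$ awk -F', *' -v OFS=', ' '{k=$1 OFS $2; a[k]=k in a?a[k] OFS $4:$4}
       END{for(k in a) print k, a[k]}' file |
  awk '{print length($0), $0}' | sort -n | cut -d' ' -f2-
HELLO, WORLD, elem1, elem3
GOODBYE, BLUESKY, elem2, elem4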

Counting pattern matches for elements in awk array

I need some help in fixing my code to process a tab-delimited data set. Example data is:
#ID type
A 3
A Ct
A Ct
A chloroplast
B Ct
B Ct
B chloroplast
B chloroplast
B 4
C Ct
C Ct
C chloroplast
For each unique element in column #1, I would like to count the elements that match the pattern "Ct" and those that don't match.
So the expected output is:
#ID count_for_matches count_for_unmatched
A 2 2
B 2 3
C 2 1
I can get the counts of pattern matches with this:
awk '$2~/Ct/{x++};$2!~/Ct/{y++}END{print x,y}'
And I know I could do the processing for each item by defining column#1 as an array like
awk '{a[$1]++}END{for (i in a) print i}'
But I cannot combine both pieces into functional code. I tried some combinations like
awk '{a[$1]++}END{for (i in a){$2~/Ct/{x++};$2!~/Ctt/{y++}}END{print i,x,y}}}'
But I am obviously making some errors and I cannot figure out, based on forum answers, how to fix this. Perhaps the $2 values should be stored with a[$1]? I would appreciate it if someone could point out the errors!
$ cat tst.awk
BEGIN { FS=OFS="\t" }
NR==1 { next }
!seen[$1]++ { keys[++numKeys] = $1 }
$2=="Ct" { matches[$1]++; next }
{ unmatched[$1]++ }
END {
print "#ID", "count_for_matches", "count_for_unmatched"
for (keyNr=1; keyNr<=numKeys; keyNr++) {
key = keys[keyNr]
print key, matches[key]+0, unmatched[key]+0
}
}
$ awk -f tst.awk file
#ID count_for_matches count_for_unmatched
A 2 2
B 2 3
C 2 1
Here is another minimalist version:
$ awk 'NR==1{print $1,"count_for_matches","count_for_unmatches";next}
$2=="Ct"{m[$1]++}
{a[$1]++}
END{for(k in a) print k, m[k], a[k]-m[k]}' file |
column -t
#ID count_for_matches count_for_unmatches
A 2 2
B 2 3
C 2 1

Bash formatting text file into columns

I have a text file with data in it which is set up like a table, but separated with commas, e.g.:
Name, Age, FavColor, Address
Bob, 18, blue, 1 Smith Street
Julie, 17, yellow, 4 John Street
Firstly, I have tried using a for loop and placing each 'column', with all its values, into a separate array.
E.g. 'nameArray' would contain bob, julie.
Here is the code from my actual script; there are 12 columns, hence why c should not be greater than 12.
declare -A Array
for ((c = 1; c <= 12; c++))
{
    for ((i = 1; i <= $total_lines; i++))
    {
        record=$(cat $FILE | awk -F "," 'NR=='$i'{print $'$c';exit}' | tr -d ,)
        Array[$c,$i]=$record
    }
}
From here I then use the 'printf' function to format each array and print them as columns. The issue with this is that I have more than 3 arrays; in my actual code they're all in the same 'printf' line, which I don't like, and I know it is a silly way to do it.
for ((i = 1; i <= $total_lines; i++))
{
    printf "%0s %-10s %-10s...etc \n" "${Array[1,$i]}" "${Array[2,$i]}" "${Array[3,$i]}" ...etc
}
This does, however, give me the desired output.
I would like to figure out how to do this another way that doesn't require a massive print statement. Also, the first time I call the for loop I get an error with 'awk'.
Any advice would be appreciated, I have looked through multiple threads and posts to try and find a suitable solution but haven't found something that would be useful.
Try the column command, like:
column -t -s','
This is what I could come up with quickly. See the man page for details.
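Applied to the sample file above, the call might look like this (data.txt is an assumed filename; the exact padding depends on your column implementation and on the spaces that follow each comma):
$ column -t -s',' data.txt
Name    Age   FavColor   Address
Bob     18    blue       1 Smith Street
Julie   17    yellow     4 John Street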

Compare file with variable list AWK

I'm stumbling over myself trying to get a seemingly simple thing accomplished. I have one file, and one newline-delimited string list.
File:
Dat1 Loc1
Dat2 Loc1
Dat3 Loc1
Dat4 Loc2
Dat5 Loc2
My list is something like this:
Dat1
Dat2
Dat3
Dat4
What I am trying to do is compare the list with the data file and count the number of unique Locs that appear. I am interested only in the largest count. In the example above, when comparing the list to the file, I want essentially:
Dat1 MATCHED Loc1Count = 1
Dat2 MATCHED Loc1Count = 2
Dat3 MATCHED Loc1Count = 3
Dat4 MATCHED Loc2Count = 1
Return: Loc1 if Loc1Count/Length of List > 50%
Now,
I know that awk 1 file will read a file line by line. Furthermore, I know that echo "$LIST" | awk '/search for a line that contains this/' will return the line that matches that internal string. I haven't been able to combine these ideas successfully as nested awks, though, much less figure out how to count "Loc1" vs "Loc2" (which, by the way, are going to be random strings, not form-standard).
I feel like this is simple, but I'm banging my head against a wall. Any ideas? Is this sufficiently clear?
list="Dat1 Dat2 Dat3 Dat4"
awk -vli="$list" 'BEGIN{
    # here the list from the shell is converted to the awk array "list"
    m=split(li,list," ")
}
{
    # go through the list
    for(i=1;i<=m;i++){
        if($1 == list[i]){
            # if Dat? is found in the list, print it and, at the same time,
            # increment the running count for $2 (the ++data[$2] shown in the output)
            print $1" matched Locount="$2" "++data[$2]
            loc[$2]++   # keep a separate per-Loc count for the END block
        }
    }
}
END{
    # here return the Loc1 count
    loc1count=loc["Loc1"]
    if(( loc1count / m * 100 ) > 50) {
        print "Loc1 count: "loc1count
    }
}' file
output
$ ./shell.sh
Dat1 matched Locount=Loc1 1
Dat2 matched Locount=Loc1 2
Dat3 matched Locount=Loc1 3
Dat4 matched Locount=Loc2 1
Loc1 count: 3
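Since the OP says the Loc values are arbitrary strings, the hard-coded "Loc1" in the END block can be replaced by a scan for the largest count; a sketch of just that END block (ties are broken arbitrarily):
END{
    # find the location with the largest count instead of assuming "Loc1"
    for (l in loc) {
        if (loc[l] > best) { best = loc[l]; bestloc = l }
    }
    if (( best / m * 100 ) > 50) {
        print bestloc " count: " best
    }
}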
