awk array that overtypes itself when printed - arrays

This is my first question, so please let me know if I miss anything.
This is an awk script that uses arrays to make key-value pairs.
I have a file with header information separated by colons; the data below it is separated by colons as well. My goal is to make key-value pairs and print them to a new file. I have everything set to be placed in arrays, and it prints out almost perfectly.
Here is the input:
...:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
...:iqn.1111-00.com.abc:2222.blah01.blah01node00::11BLAH00:::
Here is the code:
#!/bin/awk -f
BEGIN {
    FS = ":"
}
{
    x = 1
    if (NR == 1) {
        num_fields = NF   ### This is done in case there are uneven header fields to data fields ###
        while (x <= num_fields) {
            head[x] = $x
            x++
        }
    }
    y = 2
    while (y <= NR) {
        if (NR == y) {
            x = 1
            while (x <= num_fields) {
                data[x] = $x
                x++
            }
            x = 1
            while (x <= num_fields) {
                print head[x] "=" data[x]
                x++
            }
        }
        y++
    }
}
END {
    print "This is the end of the arrays and the beginning of the test"
    print head[16]
    print "I am head[16]-" head[16] "- and now I'm going to overwrite everything"
    print "I am data[16]-" data[16] "- and I will not overwrite everything, also there isn't any data in data[16]"
}
Here is the output:
...
iscsi_name=iqn.1111-00.com.abc
iscsi_alias=2222.blah01.blah01node00
panel_name=
enclosure_id=11BLAH00
canister_id=
=nclosure_serial_number ### Here is my issue ###
This is the end of the arrays and the beginning of the test
enclosure_serial_number
- and now I'm going to overwrite everything
I am data[16]-- and I will not overwrite everything, also there isn't any data in data[16]
NOTE: data[16] is not at the end of a line; for some reason there is an extra colon on the data lines, hence the num_fields note above.
Why does head[16] overwrite itself? Is it that there is a newline (\n) at the end of the field? If so, how do I get rid of it? I have tried stripping the last character, no luck. I have tried limiting the number of characters the array can take in on that field, no luck. I have tried many more ideas, no luck.
Full Disclosure: I am relatively new to all of this, I might have messed up these previous fixes!
Does anyone have any ideas as to why this is happening?
Thanks!
-cheezter88

Your script is unnecessarily complex. If you want to size the record from the first row, just do so.
(I replaced the "..." prefix with "x".)
awk -F: 'NR==1 {n=split($0,h); next} # populate header fields and record size
NR==2 {for(i=1;i<=n;i++) # do the assignment up to header size
print h[i]"="$i}' file
x=x
iscsi_name=iqn.1111-00.com.abc
iscsi_alias=2222.blah01.blah01node00
panel_name=
enclosure_id=11BLAH00
canister_id=
enclosure_serial_number=
If you want to do this for the rest of the records, remove the NR==2 condition.
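For what it's worth, the "overwriting" behaviour in the question is the classic signature of DOS/Windows line endings: the last field of each line carries an invisible carriage return (\r), and printing it moves the cursor back to column 1, so the next characters overtype the line. A minimal sketch of the fix, assuming the input really is CRLF-terminated (the sample line here is made up):

```shell
# Simulate a CRLF-terminated header line with printf; without the sub()
# the last field would contain "c\r" and overtype itself when printed.
printf 'a:b:c\r\n' |
awk -F: '{ sub(/\r$/, "")                  # strip the carriage return, if any
           print "last field is [" $NF "]" }'
```

With the sub() in place the last field prints cleanly as [c].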


How to get a particular block in an array copied in perl

I have details like below in an array. There will be plenty of testbed details in the actual case. I want to grep a particular testbed (TESTBED = vApp_eprapot_icr), and the information like below should get copied to another array. How can I do it using Perl? The end of the testbed info is marked by a closing curly bracket }.
TESTBED = vApp_eprapot_icr {
DEVICE = vApp_eprapot_icr-ipos1
DEVICE = vApp_eprapot_icr-ipos2
DEVICE = vApp_eprapot_icr-ipos3
DEVICE = vApp_eprapot_icr-ipos5
CARDS=1GIGE,ETHFAST
CARDS=3GIGE,ETHFAST
CARDS=10PGIGE,ETHFAST
CARDS=20PGIGE,ETHFAST
CARDS=40PGIGE,ETHFAST
CARDS=ETHFAST,ETHFAST
CARDS=10GIGE,ETHFAST
CARDS=ETH,ETHFAST
CARDS=10P10GIGE,ETHFAST
CARDS=PPA2GIGE,ETHFAST
CARDS=ETH,ETHFAST,ETHGIGE
}
I will make it simpler; please see the below array:
@array("
student=Amit {
Age=20
sex=male
rollno=201
}
student=Akshaya {
Age=24
phone:88665544
sex=female
rollno=407
}
student=Akash {
Age=23
sex=male
rollno=356
address=na
phone=88456789
}
");
Consider an array like this, where such entries are plenty. I need to grep, for example, student=Akshaya's data: from the opening '{' to the closing '}', all info should get copied to another array. This is what I'm looking for.
while (<>) {
    print if /TESTBED = vApp_eprapot_icr/../\}/;
}
As a side note, <> will read from the filename you use on the command line. So if the data is stored in a file, you will run from the command line:
perl scriptname.pl filename.txt
Ok. We finally have enough information to come up with an answer. Or, at least, to produce two answers which will work on slightly different versions of your input file.
In a comment you say that you are creating your array like this:
@array = `cat $file`;
That's not a very good idea for a couple of reasons. Firstly, why run an external command like cat when Perl will read the file for you? And secondly, this gives you one element in your array for each line in your input file. Things become far easier if you arrange it so that each of your TESTBED = foo { ... } records is a single array element.
Let's get rid of the cat first. The easiest way to read a single file into an array is to use the file input operator - <>. That will read data from the file whose name is given on the command line. So if you call your program filter_records, you can call it like this:
$ ./filter_records your_input_data.txt
And then read it into an array like this:
@array = <>;
That's good, but we still have each line of the input file in its own array element. How we fix that depends on the exact format of your input file. It's easiest if there's a blank line between each record in the input file, so it looks like this:
student=Amit {
Age=20
sex=male
rollno=201
}
student=Akshaya {
Age=24
phone:88665544
sex=female
rollno=407
}
student=Akash {
Age=23
sex=male
rollno=356
address=na
phone=88456789
}
Perl has a special variable called $/ which controls how it reads records from input files. If we set it to be an empty string then Perl goes into "paragraph" mode and it uses blank lines to delimit records. So we can write code like this:
{
    local $/ = '';
    @array = <>;
}
Note that it's always a good idea to localise changes to Perl's special variables, which is why I have enclosed the whole thing in a naked block.
If there are no blank lines, then things get slightly harder. We'll read the whole file in and then split it.
Here's our example file with no blank lines:
student=Amit {
Age=20
sex=male
rollno=201
}
student=Akshaya {
Age=24
phone:88665544
sex=female
rollno=407
}
student=Akash {
Age=23
sex=male
rollno=356
address=na
phone=88456789
}
And here's the code we use to read that data into an array.
{
    local $/;
    $data = <>;
}
@array = split /(?<=^})\n/m, $data;
This time, we've set $/ to undef which means that all of the data has been read from the file. We then split the data wherever we find a newline that is preceded by a } on a line by itself.
Whichever of the two solutions above that we use, we end up with an array which (for our sample data) has three elements - one for each of the records in our data file. It's then simple to use Perl's grep to filter that array in various ways:
# All students whose names start with 'Ak'
@filtered_array = grep { /student=Ak/ } @array;
If you use similar techniques on your original data file, then you can get the records that you are interested in with code like this:
@filtered_array = grep { /TESTBED = vApp_eprapot_icr/ } @array;

How to keep track of printed items in a for loop?

I was recently dealing with a hash that I wanted to print in a nice manner.
To simplify, it is just an array with two fields, a["name"]="me" and a["age"]=77, and I want to print the data like key1=value1,key2=value2,... ending with a newline. That is:
name=me,age=77
Since it is not an array whose indices are auto-incremented values, I do not know how to loop through it and know when I am processing the last element.
This is important because it allows using a different separator for the last one: a different character (a newline) can be printed in that case instead of the one printed after the rest of the fields (a comma).
I ended up using a counter to compare to the length of the array:
awk 'BEGIN {
    a["name"] = "me"; a["age"] = 77;
    n = length(a);
    for (i in a) {
        count++;
        printf "%s=%s%s", i, a[i], (count<n?",":ORS)
    }
}'
This works well. However, is there any other better way to handle this? I don't like the fact of adding an extra count++.
In general when you know the end point of the loop you put the OFS or ORS after each field:
for (i=1; i<=n; i++) {
    printf "%s%s", $i, (i<n?OFS:ORS)
}
but if you don't then you put the OFS before the second and subsequent fields and print the ORS after the loop:
for (idx in array) {
    printf "%s%s", (++i>1?OFS:""), array[idx]
}
print ""
I do like the:
n = length(array)
for (idx in array) {
    printf "%s%s", array[idx], (++i<n?OFS:ORS)
}
idea to get the end of the loop too, but length(array) is gawk-specific and the resulting code isn't any more concise or efficient than the 2nd loop above:
$ cat tst.awk
BEGIN {
    OFS = ","
    array["name"] = "me"
    array["age"] = 77
    for (idx in array) {
        printf "%s%s=%s", (++i>1?OFS:""), idx, array[idx]
    }
    print ""
}
vs
$ cat tst.awk
BEGIN {
    OFS = ","
    array["name"] = "me"
    array["age"] = 77
    n = length(array)   # or non-gawk: for (idx in array) n++
    for (idx in array) {
        printf "%s=%s%s", idx, array[idx], (++i<n?OFS:ORS)
    }
}
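For the record, the second variant runs on a POSIX awk if you swap length(array) for the manual count; a sketch (traversal order of for (idx in array) is unspecified, so either pair may come first):

```shell
# Portable "know the loop end" version: count the elements first,
# then switch from OFS to ORS on the last iteration.
awk 'BEGIN {
    OFS = ","
    array["name"] = "me"
    array["age"] = 77
    for (idx in array) n++             # portable length(array)
    for (idx in array)
        printf "%s=%s%s", idx, array[idx], (++i < n ? OFS : ORS)
}'
```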

Find and replace in AIX 5.3

I am running AIX 5.3.
I have two flat text files.
One is a "master" list of network devices, along with their communication settings (CLLIFile.tbl).
The other is a list of specific network devices that need to have one setting changed (specifically, cn to le) within the main file. The list file is called DDM2000-030215.txt.
I have gotten as far as looping through DDM2000-030215.txt, pulling the lines I need to change with grep from CLLIFile.tbl, changing cn to le with sed, and sending the output to a file.
The trouble is, all I get are the changed lines. I need to make the changes inside CLLIFile.tbl, because I cannot disturb the formatting or structure.
Here's what we tried, so far:
for i in `cat DDM2000-030215.txt`
do
    grep -p $i CLLIFile.tbl | sed s/cn/le/g >> CLLIFileNew.tbl
done
Basically, I need to replace all instances of 'le' with 'cn', within 'CLLIFile.tbl', that are on lines that contain a network element name from 'DDM2000-030215.txt'.
Your sed (on AIX) will not have a -i option (edit the input file), and you do not want to use a temporary file.
You can try a here-document with vi:
vi CLLIFile.tbl >/dev/null <<END
:1,$ s/cn/le/g
:wq
END
You don't want grep here, because, as you've observed, it only outputs the matching lines. You want to just use sed and have it do the replacement only on the lines that match while passing the other lines through unchanged.
So instead of this:
grep 'pattern' | sed 's/old/new/'
just do this:
sed '/pattern/s/old/new/'
You will have to send the output into a new file, and then move that new file into place to replace the old CLLIfile.tbl. Something like this:
cp CLLIfile.tbl CLLIfile.tbl.bak # make a backup in case something goes awry
sed '/pattern/s/old/new/' CLLIfile.tbl >newclli && mv newclli CLLIfile.tbl
EDIT: Entirely new question, I see. For this, I would use awk:
awk 'NR == FNR { a[++n] = $0; next } { for(i = 1; i <= n; ++i) { if($0 ~ a[i]) { gsub(/cn/, "le"); break } } print }' DDM2000-030215.txt CLLIFile.txt
This works as follows:
NR == FNR {                    # when processing the first file
                               # (DDM2000-030215.txt)
    a[++n] = $0                # remember the tokens. This assumes that every
                               # full line of the file is a search token.
    next                       # That is all.
}
{                              # when processing the second file (CLLIFile.tbl)
    for (i = 1; i <= n; ++i) { # check all remembered tokens
        if ($0 ~ a[i]) {       # if the line matches one
            gsub(/cn/, "le")   # replace cn with le
            break              # and break out of the loop, because that only
                               # needs to be done once.
        }
    }
    print                      # print the line, whether it was changed or not.
}
Note that if the contents of DDM2000-030215.txt are to be interpreted as fixed strings rather than regexes, you should use index($0, a[i]) instead of $0 ~ a[i] in the check.
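A runnable sketch of the two-file awk approach, with tiny made-up stand-ins for DDM2000-030215.txt and CLLIFile.tbl, and index() for fixed-string matching:

```shell
# list.txt names the elements to change; only master.tbl lines that
# contain one of those names get their cn replaced by le.
cat > list.txt <<'EOF'
DEV2
EOF
cat > master.tbl <<'EOF'
DEV1:cn:keep
DEV2:cn:change
EOF
awk 'NR == FNR { a[++n] = $0; next }
     { for (i = 1; i <= n; ++i)
           if (index($0, a[i])) { gsub(/cn/, "le"); break }
       print }' list.txt master.tbl
```

Only the DEV2 line comes out changed; the DEV1 line passes through untouched.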

matrix from data with awk

Warning: not an awk programmer.
I have a file, let's call it file.txt. It has a list of numbers which I will use to find the information I need from the rest of the directory (which is full of *.asc files). The remaining files do not all have the same length, but since I will be drawing data based on file.txt, the matrix I will be building will have the same number of rows. All files DO, however, contain the same number of columns, 3. The first column will be compared to file.txt, and the second column of each *.asc file will be used to build the matrix. Here is what I have so far:
awk '
NR==FNR {
    A[$1];
    next
}
$1 in A {
    print $2 >> "data.txt";
}' file.txt *.asc
This, however, prints the information from each file below the previous file. I want the information side by side, like a matrix. I looked up paste, but it seems to be called before awk, and all the examples used only a couple of files. I also tried it in place of print, and it did not work.
If anyone could help me out, this would be the last piece to my project. Thanks so much!
You could try:
awk -f ext.awk file.txt *.asc > data.txt
where ext.awk is
NR==FNR {
    A[$1]++
    next
}
FNR==1 {
    if (ARGIND > 2)
        print ""
}
$1 in A {
    printf "%s ", $2
}
END {
    print ""
}
Update
If you do not have GNU Awk, the ARGIND variable is not available. You could then try:
NR==FNR {
    A[$1]++
    next
}
FNR==1 {
    if (++ai > 1)
        print ""
}
$1 in A {
    printf "%s ", $2
}
END {
    print ""
}
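To see the row-building in action, here is a sketch with made-up stand-ins for file.txt and the *.asc files (two files, two matching keys each):

```shell
# keys.txt selects rows by first column; each .asc file contributes its
# matching second-column values on one line of the output matrix.
cat > keys.txt <<'EOF'
1
2
EOF
cat > a1.asc <<'EOF'
1 10 x
2 20 x
EOF
cat > a2.asc <<'EOF'
1 30 x
2 40 x
EOF
awk 'NR==FNR { A[$1]; next }
     FNR==1 { if (++ai > 1) print "" }
     $1 in A { printf "%s ", $2 }
     END { print "" }' keys.txt a1.asc a2.asc
```

Each .asc file ends up as its own row: 10 20 on the first line, 30 40 on the second.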

Awk command to get search records with number of occurrences of search pattern

awk 'FNR==NR { ! a[$0]++ ; next }
     { b[$0]++ }
     END {
         for (i in a) {
             for (k in b) {
                 if (a[i]==1 && i ~ k) { print i }
             }
         }
     }' file1 file2
The above awk script helped me get the search criteria from one file, and with that search pattern I am able to get the matching records from the other file. But the script takes only unique search records: if the same content exists twice in the file, it is searched and printed only once. I want the repeated records as well, along with the count of occurrences of each record in the file.
From your post I gather that the array 'a' is storing all the records and array 'b' is storing all the regular expression search patterns.
Just change your if statement to:
if ( i ~ k ) { print i, a[i] }   # a[i] prints the count of the record
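A quick sketch of the suggested change, with made-up file1/file2 contents (a duplicated record and a single pattern):

```shell
# a[] counts each record from the first file; the END loop prints every
# record matching a pattern from the second file together with its count.
cat > f1.txt <<'EOF'
apple pie
apple pie
banana
EOF
cat > f2.txt <<'EOF'
apple
EOF
awk 'FNR == NR { a[$0]++; next }
     { b[$0]++ }
     END {
         for (i in a)
             for (k in b)
                 if (i ~ k) print i, a[i]
     }' f1.txt f2.txt
```

The duplicated record now shows up with its count: apple pie 2.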
