Hi folks.
Currently I'm trying to parse lsblk output into an array for further analysis in bash. The only problem I have is the empty columns, which cause read -a to shift the following values into the places of the elements that are supposed to be empty. Here is an example:
# lsblk -rno MOUNTPOINT,FSTYPE,TYPE,DISC-GRAN
/usr/portage ext2 loop 4K
disk 512B
part 512B
ext2 part 512B
/ ext4 part 512B
/mnt/extended ext4 part 512B
The MOUNTPOINT and FSTYPE of the second and third lines are empty, as is the MOUNTPOINT of the fourth line, but a 'traditional' read like this:
while read -r LINE; do
    IFS=' ' read -r -a ARRAY <<< "$LINE"
    echo "${ARRAY[0]};${ARRAY[1]};${ARRAY[2]};${ARRAY[3]}"
done < <(lsblk -rno MOUNTPOINT,FSTYPE,TYPE,DISC-GRAN)
will produce an incorrect result with shifted columns, with the 0th and 1st elements filled instead of the 2nd and 3rd:
/usr/portage;ext2;loop;4K
disk;512B;;
part;512B;;
ext2;part;512B;
/;ext4;part;512B
/mnt/extended;ext4;part;512B
So, I wonder if there is some 'elegant' solution to this case, or should I just do it the old way, like I used to, with the help of string.h in trusty C?
To be clear, I need these values as array elements in a for loop for analysis, not just to print them. The echoing is done only as an example of the misbehavior.
Why don't you specify the -P option to lsblk, so that it outputs key="value" pairs, and read that output?
Then you can say something like:
while read -ra a; do
    for ((i=0; i<${#a[@]}; i++)); do
        a[i]=${a[i]#*=}      # extract rvalue
        a[i]=${a[i]//\"/}    # remove the quotes
    done
    (IFS=';'; echo "${a[*]}")
done < <(lsblk -Pno MOUNTPOINT,FSTYPE,TYPE,DISC-GRAN)
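Since the goal is analysis rather than printing, the same loop can hand the cleaned values to named variables. A minimal sketch along the lines of the above; it assumes no value (e.g. a mountpoint) contains whitespace, because read splits the key="value" pairs on spaces:

# Sketch: keep the parsed fields for analysis instead of echoing them.
while read -ra a; do
    for ((i = 0; i < ${#a[@]}; i++)); do
        a[i]=${a[i]#*=}      # drop the KEY= prefix
        a[i]=${a[i]//\"/}    # drop the quotes
    done
    mountpoint=${a[0]} fstype=${a[1]} type=${a[2]} gran=${a[3]}
    # ... analysis of the four fields goes here ...
    printf '%s;%s;%s;%s\n' "$mountpoint" "$fstype" "$type" "$gran"
done < <(lsblk -Pno MOUNTPOINT,FSTYPE,TYPE,DISC-GRAN)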
Use awk rather than bash for this. Use NF to assign the variables depending on how many fields are in the line.
awk -v OFS=';' 'NF == 4 { mountpoint = $1; fstype = $2; type = $3; gran = $4 }
                NF == 3 { mountpoint = ""; fstype = $1; type = $2; gran = $3 }
                NF == 2 { mountpoint = ""; fstype = ""; type = $1; gran = $2 }
                { print mountpoint, fstype, type, gran }' < <(lsblk -rno MOUNTPOINT,FSTYPE,TYPE,DISC-GRAN)
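If the values are needed back in bash for the analysis loop, the ';'-joined awk output can be split again with read. A minimal sketch, assuming no mountpoint contains a ';' or a newline:

# Sketch: one bash variable per column, empty where lsblk printed nothing.
while IFS=';' read -r mountpoint fstype type gran; do
    # ... analysis of the four fields goes here ...
    echo "type=$type gran=$gran mountpoint=$mountpoint fstype=$fstype"
done < <(lsblk -rno MOUNTPOINT,FSTYPE,TYPE,DISC-GRAN | awk -v OFS=';' '
    NF == 4 { print $1, $2, $3, $4 }
    NF == 3 { print "", $1, $2, $3 }
    NF == 2 { print "", "", $1, $2 }')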
I have a small problem here with bash.
I fill an array in a simple function with the read command, and I need to return it as an array and then call the function somehow.
function myData {
    echo 'Enter the serial number of your items : '
    read -a sn
    return ${sn[@]}
}
For example, like this?
$ ./myapp.sh
Enter the serial number of your items : 92467 90218 94320 94382
myData
echo ${?[@]}
Why don't we have return values here like in other languages?
thanks for your help...
As others mention, the builtin command return is intended to send the exit status to the caller.
If you want to pass the result of processing in the function back to the caller, there are several ways:
Use standard output
If you write something to standard output within a function, the output goes to the caller. Standard output is just an unstructured stream of bytes. If you want it to carry a special meaning, such as an array, you need to define the structure by choosing some character(s) as a delimiter. If you are sure no element contains a space, tab, or newline, you can rely on the default value of IFS:
myfunc() {
    echo "92467 90218 94320 94382"
}

ary=( $(myfunc) )
for i in "${ary[@]}"; do
    echo "$i"
done
If the elements of the array may contain whitespace or other special characters and you need to preserve them (such as when you are handling filenames), you can use the null character as the delimiter:
myfunc() {
    local -a a=("some" "elements" "contain whitespace" $'or \nnewline')
    printf "%s\0" "${a[@]}"
}

mapfile -d "" -t ary < <(myfunc)
for i in "${ary[@]}"; do
    echo ">$i"   # the leading ">" just marks the start of each element
done
Pass by reference
Like other languages, bash >= 4.3 has a mechanism to pass a variable by reference, or by name:
myfunc() {
    local -n p="$1"   # now p refers to the variable named by the value of $1
    for (( i=0; i<${#p[@]}; i++ )); do
        ((p[i]++))    # increment each value
    done
}
ary=(0 1 2)
myfunc "ary"
echo "${ary[#]}" # array elements are modified
Use the array as a global variable
Its usage and pros/cons should need no explanation.
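For completeness, a minimal sketch of that last option, reusing the question's myData: the function fills an array that it does not declare local, so the caller can read it afterwards.

myData() {
    echo 'Enter the serial number of your items : '
    read -r -a sn        # sn is not declared local, so it stays global
}

myData
echo "${sn[@]}"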
Hope this helps.
This is my first question, so please let me know if I missed anything.
This is an awk script that uses arrays to make key-value pairs.
I have a file that has header information separated by colons. The data is below it and is separated by colons as well. My goal is to make key-value pairs that print out to a new file. I have everything set up to be placed in arrays, and it prints out almost perfectly.
Here is the input:
...:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
...:iqn.1111-00.com.abc:2222.blah01.blah01node00::11BLAH00:::
Here is the code:
#!/bin/awk -f
BEGIN {
    FS = ":"
}
{
    x = 1
    if (NR == 1) {
        num_fields = NF   ### This is done in case there are uneven header fields vs. data fields ###
        while (x <= num_fields) {
            head[x] = $x
            x++
        }
    }
    y = 2
    while (y <= NR) {
        if (NR == y) {
            x = 1
            while (x <= num_fields) {
                data[x] = $x
                x++
            }
            x = 1
            while (x <= num_fields) {
                print head[x] "=" data[x]
                x++
            }
        }
        y++
    }
}
END {
    print "This is the end of the arrays and the beginning of the test"
    print head[16]
    print "I am head[16]-"head[16]"- and now I'm going to overwrite everything"
    print "I am data[16]-"data[16]"- and I will not overwrite everything, also there isn't any data in data[16]"
}
Here is the output:
...
iscsi_name=iqn.1111-00.com.abc
iscsi_alias=2222.blah01.blah01node00
panel_name=
enclosure_id=11BLAH00
canister_id=
=nclosure_serial_number ### Here is my issue ###
This is the end of the arrays and the beginning of the test
enclosure_serial_number
- and now I'm going to overwrite everything
I am data[16]-- and I will not overwrite everything, also there isn't any data in data[16]
NOTE: data[16] is not at the end of a line; for some reason there is an extra colon on the data lines, hence the num_fields note above.
Why does head[16] overwrite itself? Is it that there is a newline (\n) at the end of the field? If so, how do I get rid of it? I have tried subtracting the last character, no luck. I have tried limiting the number of characters the array can take in on that field, no luck. I have tried many more ideas, no luck.
Full Disclosure: I am relatively new to all of this, I might have messed up these previous fixes!
Does anyone have any ideas as to why this is happening?
Thanks!
-cheezter88
Your script is unnecessarily complex. If you want to set the record size from the first row, just do that.
(I replaced the "..." prefix with "x".)
awk -F: 'NR==1 {n=split($0,h); next}    # populate header fields and record size
         NR==2 {for(i=1;i<=n;i++)       # do the assignment up to header size
                    print h[i]"="$i}' file
x=x
iscsi_name=iqn.1111-00.com.abc
iscsi_alias=2222.blah01.blah01node00
panel_name=
enclosure_id=11BLAH00
canister_id=
enclosure_serial_number=
If you want to do this for the rest of the records, remove the NR==2 condition.
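If the last pair still comes out looking garbled, the usual culprit is a trailing carriage return from Windows line endings, which is exactly what makes head[16] appear to overwrite itself. A sketch of the same one-liner with the CR stripped first (an assumption, since the raw file isn't shown):

awk -F: '{ sub(/\r$/, "") }                    # drop a trailing CR, if any
         NR==1 { n = split($0, h); next }      # header fields and record size
         NR==2 { for (i = 1; i <= n; i++)
                     print h[i] "=" $i }' file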
I am running AIX 5.3.
I have two flat text files.
One is a "master" list of network devices, along with their communication settings(CLLIFile.tbl).
The other is a list of specific network devices that need to have one setting changed, within the main file(specifically, cn to le). The list file is called DDM2000-030215.txt.
I have gotten as far as looping through DDM2000-030215.txt, pulling the lines I need to change with grep from CLLIFile.tbl, changing cn to le with sed, and sending the output to a file.
The trouble is, all I get are the changed lines. I need to make the changes inside CLLIFile.tbl, because I cannot disturb the formatting or structure.
Here's what we tried, so far:
for i in $(cat DDM2000-030215.txt)
do
    grep -p "$i" CLLIFile.tbl | sed 's/cn/le/g' >> CLLIFileNew.tbl
done
Basically, I need to replace all instances of 'cn' with 'le' within CLLIFile.tbl, on the lines that contain a network element name from DDM2000-030215.txt.
Your sed (on AIX) will not have an -i option (edit the input file),
and you do not want to use a temporary file.
You can try a here construction with vi:
vi CLLIFile.tbl >/dev/null <<END
:1,$ s/cn/le/g
:wq
END
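ex is the non-visual, script-oriented form of vi and should also be available on AIX; a hedged equivalent (it assumes at least one line actually contains "cn", since a failed substitute can abort the batch run):

ex -s CLLIFile.tbl <<'END'
1,$ s/cn/le/g
wq
END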
You don't want grep here, because, as you've observed, it only outputs the matching lines. You want to just use sed and have it do the replacement only on the lines that match while passing the other lines through unchanged.
So instead of this:
grep 'pattern' | sed 's/old/new/'
just do this:
sed '/pattern/s/old/new/'
You will have to send the output into a new file, and then move that new file into place to replace the old CLLIFile.tbl. Something like this:
cp CLLIFile.tbl CLLIFile.tbl.bak # make a backup in case something goes awry
sed '/pattern/s/old/new/' CLLIFile.tbl >newclli && mv newclli CLLIFile.tbl
EDIT: Entirely new question, I see. For this, I would use awk:
awk 'NR == FNR { a[++n] = $0; next } { for(i = 1; i <= n; ++i) { if($0 ~ a[i]) { gsub(/cn/, "le"); break } } print }' DDM2000-030215.txt CLLIFile.tbl
This works as follows:
NR == FNR {                     # when processing the first file
                                # (DDM2000-030215.txt)
    a[++n] = $0                 # remember the tokens. This assumes that every
                                # full line of the file is a search token.
    next                        # That is all.
}
{                               # when processing the second file (CLLIFile.tbl)
    for(i = 1; i <= n; ++i) {   # check all remembered tokens
        if($0 ~ a[i]) {         # if the line matches one
            gsub(/cn/, "le")    # replace cn with le
            break               # and break out of the loop, because that only
                                # needs to be done once.
        }
    }
    print                       # print the line, whether it was changed or not.
}
Note that if the contents of DDM2000-030215.txt are to be interpreted as fixed strings rather than regexes, you should use index($0, a[i]) instead of $0 ~ a[i] in the check.
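Putting the pieces together (the backup idea from above plus the awk filter), a sketch of a complete run; CLLIFile.new is just an arbitrary temporary name:

cp CLLIFile.tbl CLLIFile.tbl.bak
awk 'NR == FNR { a[++n] = $0; next }
     { for(i = 1; i <= n; ++i)
           if($0 ~ a[i]) { gsub(/cn/, "le"); break }
       print }' DDM2000-030215.txt CLLIFile.tbl > CLLIFile.new &&
mv CLLIFile.new CLLIFile.tbl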
I'm very new to Perl. I'm running into a very annoying problem when trying to concatenate two variables.
When I do it in a simple script like:
$arg1 = "hello";
$arg2 = " world";
print $arg1.$arg2;
It seems to work fine.
But when I try to make it a bit more complex (reading a file into an array and then appending the variable), instead of appending the second variable it seems to replace the first characters of the first variable.
Here's the code:
#!/usr/bin/perl -w
use LWP::Simple;
use Parallel::ForkManager;
use vars qw( $PROG );
( $PROG = $0 ) =~ s/^.*[\/\\]//;
if ( @ARGV == 0 ) {
    print "Usage: ./$PROG [TARGET] [THREADS] [LIST] [TIMEOUT]\n";
    exit;
}
my $host = $ARGV[0];
my $threads = $ARGV[1];
my $weblist = $ARGV[2];
my $timeout = $ARGV[3];
my $pm = Parallel::ForkManager->new($threads);
alarm($timeout);
open(my $handle, "<", $weblist);
chomp(my @webservers = <$handle>);
close $handle;
repeat:
for $target (@webservers) {
    my $pid = $pm->start and next;
    print "$target$host\n";
    get($target.$host);
    $pm->finish;
}
$pm->wait_all_children;
goto repeat;
So if the text file (list) would look like this:
www.site.com?url=
www.site.net?url=
etc
And the host variable is domain.com.
So instead of having something like: www.site.com?url=domain.com, I keep having this: domain.comsite.com?url=.
It's replacing the first characters of the first variable with the second variable. I can't get my head around it.
Any kind of help would be appreciated, as I'm sure I'm just missing a small thing that will make me feel stupid later.
Thanks ahead, have a good day!
Your input file probably contains carriage return (\r) characters. You can remove them with s/\r//g, or convert the input from MSWin to *nix with dos2unix or fromdos.
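For example, the list file could be normalised once before the script runs; "weblist.txt" here is just a placeholder for whatever file is passed as the third argument:

dos2unix weblist.txt        # in-place conversion, if the tool is available
# or, with plain POSIX tools:
tr -d '\r' < weblist.txt > weblist.clean && mv weblist.clean weblist.txt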
I have a file like this:
c
a
b<
d
f
I need to get the index number of the letter which has < as a suffix, in a bash script. I thought of reading the file into an array and then matching it against the regex .<$. But how do I get the index number of the element which matches this regex?
I need the index number because I want to modify this file: find the letter which is pointed to, move the < to the next line, and if it is at the last line, shuffle the order of the lines and place the < after the first line.
you need awk '/<$/ { print NR; }' <your-file>
Grep could be used also:
grep -n \< infile
Then:
grep -n \< infile|cut -d : -f 1
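If the index really has to end up in a bash variable (the question reads the file into an array), a minimal pure-bash sketch:

mapfile -t lines < infile
for i in "${!lines[@]}"; do
    [[ ${lines[i]} == *'<' ]] && { idx=$i; break; }   # zero-based index
done
echo "index: $idx"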
So I built the source file:
$ cat file
c
a
b<
d
f<
With the awk below, it will move the < to the next line, but if it is on the last line, the < will be moved to the first line.
awk '{ if (/</) a[NR]
       sub(/</, "")
       b[NR] = $0 }
     END { for (i in a) {
               if (i == NR) { b[1] = b[1] "<" }
               else         { b[i+1] = b[i+1] "<" }
           }
           for (i = 1; i <= NR; i++) print b[i]
     }' file
c<
a
b
d<
f