I want to print a line if a condition is true, but I want to loop through several variations. This works as a batch script:
Awk "($9==1) {print $0}" file > out.txt & type out.txt >> all_out.txt
Awk "($9==3) {print $0}" file > out.txt & type out.txt >> all_out.txt
Awk "($9==5) {print $0}" file > out.txt & type out.txt >> all_out.txt
But it's not very elegant. Can this be written as a for loop or if statement? I'm using GNU awk on Windows with Cygwin installed.
Any help would be greatly appreciated.
Try:
awk '($9==1) || ($9==3) || ($9==5)' Input_file > all_out.txt
Or, if matching any odd value in $9 is acceptable:
awk '($9%2 != 0)' Input_file > all_out.txt
This assumes you want the output of all conditions in a single file; let me know if that's not the case.
Try this -
$ awk '($9 ~ /^1$|^3$|^5$/) {print $0}' file
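To answer the loop part of the question directly: under bash (e.g. from Cygwin), the repeated awk calls can be collapsed into a for loop over the values, passing each one in with -v. A sketch, with made-up sample data:

```shell
# Hypothetical sample data: 9 space-separated fields per line
printf 'a b c d e f g h 1\na b c d e f g h 2\na b c d e f g h 5\n' > file
: > all_out.txt                                  # truncate the combined output
for v in 1 3 5; do
  awk -v v="$v" '$9 == v' file >> all_out.txt    # append matches for this value
done
```

Note that each pass scans the whole file, so matches are grouped by value rather than appearing in file order.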
I've just started with bash scripting and I'm trying to write a for loop over the arguments passed to the script (2 or more files). Using ">>", it should append the first n-1 files passed as arguments to the last file (n), writing the input files in order from right to left.
for example :
myscript.sh file1 file2 file3 file4
file4 will contain, in sequence, file3, file2, file1.
arr=($#)
j=$#-1
for i in { j-1..0..1 }
do
cat arr[i] >> $[j]
done
I tried to do it this way, but it doesn't work. Can someone help me?
Try:
#!/bin/bash
n=$#
for i in $(seq $((n - 1)) -1 1) ; do
cat "${@:$i:1}" >> "${@:$n:1}"
done
Explanation:
seq $((n - 1)) -1 1 generates the numbers from n - 1 down to 1, in reverse order.
${@:$i:1} gives element $i of the script's argument list.
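As a quick end-to-end check of this approach, the loop can be wrapped in a function (so "$@" refers to the function's arguments) and run on throwaway files made up for the example:

```shell
# Wrap the loop in a function so "$@" are the function's arguments
concat_reverse() {
  local n=$# i
  for i in $(seq $((n - 1)) -1 1); do
    cat "${@:$i:1}" >> "${@:$n:1}"   # append argument $i onto the last argument
  done
}

# Throwaway input files; f4 is the destination
printf '1\n' > f1; printf '2\n' > f2; printf '3\n' > f3; : > f4
concat_reverse f1 f2 f3 f4
```

After running, f4 contains the lines 3, 2, 1, i.e. the input files from right to left.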
Or, without seq:
#!/bin/bash
n=$#
for ((i = n - 1; i; i--)) ; do
cat "${@:$i:1}" >> "${@:$n:1}"
done
Or even shorter, with a while loop instead of a for loop, and maybe off topic for this reason:
#!/bin/bash
i=${#}
while ((--i)) ; do
cat "${@:$i:1}" >> "${@:$#:1}"
done
Not sure what you mean by "hang", but this concatenates the first files into the last filename:
cat "${@:1:$#-1}" > "${@: -1:1}"
This slices the argument list.
You could print all arguments except the last one on separate lines, reverse the files order and then cat all of them into the file that is the last argument:
printf "%s\n" "${@:1:$#-1}" | tac | xargs cat > "${@: -1}"
The same using zero as the stream separator:
printf "%s\0" "${@:1:$#-1}" | tac -s '' | xargs -0 cat > "${@: -1}"
It would be very advisable to guard against an argument count of less than 2, because when $# is 0, ${@: -1} expands to $0. $0 could be your shell with its full path, e.g. /bin/bash, so the > redirection could overwrite your shell's executable file!
So do:
if (($# >= 2)); then printf "%s\0" "${@:1:$#-1}" | tac -s '' | xargs -0 cat > "${@: -1}"; fi
I am using the following command to split one file into two.
currentCouponFile="$DIR/kozo_agency_current_coupon_ts-"$HMD_EFF_DT".csv"
agencySpreadFile="$DIR/kozo_agency_spread_ts-"$HMD_EFF_DT".csv"
awk -F '|' 'BEGIN{OFS=","};{$1=$1; if(NF == 2){print > "$currentCouponFile"}else{ print > "$agencySpreadFile"}}' $fileName
Doing echo on currentCouponFile and agencySpreadFile gives me the desired filename.
However, the resultant files get created with the literal names $currentCouponFile and $agencySpreadFile, not as the desired .csv files.
Any solution?
Do not use shell variables inside the single-quoted awk program; awk never sees their values. Pass them in with -v instead:
currentCouponFile="$DIR/kozo_agency_current_coupon_ts-"$HMD_EFF_DT".csv"
agencySpreadFile="$DIR/kozo_agency_spread_ts-"$HMD_EFF_DT".csv"
awk -F '|' -v current="$currentCouponFile" -v agency="$agencySpreadFile" '
BEGIN{OFS=","}
{
$1=$1;
if(NF == 2)
{print > current}
else
{print > agency}
}' "$fileName"
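A minimal run of this pattern, with made-up sample data and output names:

```shell
# Made-up pipe-delimited input: one 2-field row, one 3-field row
printf 'a|b\nc|d|e\n' > input.txt
awk -F '|' -v current="current.csv" -v agency="agency.csv" '
BEGIN{OFS=","}
{
  $1=$1                          # force the record to be rebuilt with OFS
  if (NF == 2) print > current
  else         print > agency
}' input.txt
```

Afterwards current.csv holds "a,b" and agency.csv holds "c,d,e", i.e. each record lands in the file named by the awk variable, rewritten with comma separators.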
I have an awk command as below:
awk 'FNR==NR{a[$1$2]=$3;next} ($1$2 in a) {print$1,$2,a[$1$2],$3}' ""each line of file 1"" >./awkfile/12as_132l.txt
and f1.txt's content is:
1sas.txt 12ds.txt
13sa.txt 21sa.txt
I want my script to read each line of f1.txt and substitute its contents into this awk command, in place of ""each line of file1"", executing commands like the ones below:
awk 'FNR==NR{a[$1$2]=$3;next} ($1$2 in a) {print$1,$2,a[$1$2],$3}' 1sas.txt 12ds.txt >./awkfile/12as_132l.txt
awk 'FNR==NR{a[$1$2]=$3;next} ($1$2 in a) {print$1,$2,a[$1$2],$3}' 13sa.txt 21sa.txt >./awkfile/12as_132l.txt
I need a loop, but it's still a little unfamiliar to me.
Here is a snippet which reads 2 fields line by line from f1.txt, puts them into variables, and uses them in awk:
#!/usr/bin/env bash
while read -r col1file col2file; do
# your command goes here
# this awk does nothing, replace it with your command
awk '{ }' "$col1file" "$col2file" > someoutfile
done < "f1.txt"
Test Results:
akshay@db-3325:/tmp$ cat f1.txt
1sas.txt 12ds.txt
13sa.txt 21sa.txt
akshay@db-3325:/tmp$ cat test.sh
while read -r col1file col2file; do
echo "col1 : $col1file"
echo "col2 : $col2file"
# your command goes here
# this awk does nothing, replace it with your command
# awk '{ }' "$col1file" "$col2file" > someoutfile
done < "f1.txt"
akshay@db-3325:/tmp$ bash test.sh
col1 : 1sas.txt
col2 : 12ds.txt
col1 : 13sa.txt
col2 : 21sa.txt
Alternatively, change the IFS to IFS=' ' and use a loop.
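Putting the asker's awk command into that loop looks like this (sample files made up here; note the >> so the second pair doesn't overwrite the first pair's output, unlike the > in the question):

```shell
# Made-up pair of input files plus the driving list f1.txt
printf 'a.txt b.txt\n' > f1.txt
printf 'k1 k2 10\n' > a.txt
printf 'k1 k2 20\n' > b.txt
mkdir -p awkfile
: > awkfile/out.txt
while read -r col1file col2file; do
  awk 'FNR==NR{a[$1$2]=$3;next} ($1$2 in a) {print $1,$2,a[$1$2],$3}' \
      "$col1file" "$col2file" >> awkfile/out.txt
done < f1.txt
```

For the sample pair, the key "k1k2" exists in both files, so out.txt gets "k1 k2 10 20".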
I am trying to create a new file (say file3) by stacking two files (file1 and file2) together. I need the entirety of file1 but want to exclude the first row of file2. What I have now is a two step process using head and sed.
cat file1.csv file2.csv > file3.csv
sed -i 1001d file3.csv
The last step requires me to know the length of file1 in advance (1000 lines here) so that the deleted line corresponds to the first line of file2. How do I combine these two steps into a single line of code? I tried this and it failed:
cat file1.csv sed 1d file2.csv > file3.csv
You can use a compound statement like this:
{ cat file1.csv; sed '1d' file2.csv; } > file3.csv
You can use process substitution, like:
cat file1.csv <(tail -n +2 file2.csv) > file3.csv
One way is cat file1.csv > file3.csv; tail -n +2 file2.csv >> file3.csv.
Another one is tail -n +2 file2.csv | cat file1.csv - > file3.csv
You can use your second try, but with process substitution:
cat file1.csv <(sed '1d' file2.csv) > file3.csv
Your command failed because sed was treated as a filename that cat tried to access.
If your shell doesn't support process substitution, you can use this instead:
sed '1d' file2.csv | cat file1.csv - > file3.csv
Alternatively, you can use awk:
awk 'FNR!=NR && FNR==1 {next} 1' file{1,2}.csv > file3.csv
FNR!=NR is true if we're not in the first file (file record number is not equal to overall record number), and FNR==1 is true on the first line of each file. Together, the condition is true on the first line of each but the first file; next skips that line. 1 gets all other lines printed.
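A quick check of the awk one-liner with two made-up two-line CSVs:

```shell
printf 'header1\nrow1\n' > file1.csv
printf 'header2\nrow2\n' > file2.csv
# keep everything from file1, skip only file2's first line
awk 'FNR!=NR && FNR==1 {next} 1' file1.csv file2.csv > file3.csv
```

file3.csv then contains header1, row1, row2: file1 is kept whole and only file2's header is dropped.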
I have awk file:
#!/bin/awk -f
BEGIN {
}
{
filetime[$'$colnumber']++;
}
END {
for (i in filetime) {
print filetime[i],i;
}
}
And bash script:
#!/bin/bash
var1=$1
awk -f myawk.awk
When I run:
ls -la | ./countPar.sh 5
I receive error:
ls -la | ./countPar.sh 5
awk: myawk.awk:6: filetime[$'$colnumber']++;
awk: myawk.awk:6: ^ invalid char ''' in expression
Why? $colnumber should be replaced with 5, so awk should read the 5th column of the ls output.
Thanks.
You can pass variables to your awk script directly from the command line.
Change this line:
filetime[$'$colnumber']++;
To:
filetime[colnumber]++;
And run:
ls -al | awk -f ./myawk.awk -v colnumber=5
If you really want to use a bash wrapper:
#!/bin/bash
var1=$1
awk -f myawk.awk colnumber="$var1"
(with the same change in your script as above.)
If you want to use environment variables use:
#!/bin/bash
export var1=$1
awk -f myawk.awk
and:
filetime[ENVIRON["var1"]]++;
(I really don't understand what the purpose of your awk script is though. The last part could be simplified to:
END { print filetime[colnumber],colnumber; }
and parsing the output of ls is generally a bad idea.)
The easiest way to do it:
#!/bin/bash
var=$1
awk -v colnumber="${var}" -f /your/script
But within your awk script, drop the shell-style quoting around colnumber and reference it as a plain awk variable (e.g. $colnumber to select that field).
HTH
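For completeness: one reading of the original script is that it tallies the values found in the chosen column (which is what the $'$colnumber' hack was aiming at). In that case the field is selected with $colnumber inside awk. A sketch with made-up sample data:

```shell
# Made-up input: column 2 holds the values to tally
printf 'x 3\nx 3\ny 7\n' > sample.txt
awk -v colnumber=2 '
{ filetime[$colnumber]++ }                      # count each value seen in the column
END { for (i in filetime) print filetime[i], i }' sample.txt | sort
```

The order of "for (i in filetime)" is unspecified in awk, hence the sort at the end to get stable output.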
Passing 3 variables to script myscript.sh:
var1 is the column number on which the condition is set.
var2 and var3 are the input and temp files.
#!/bin/ksh
var1=$1
var2=$2
var3=$3
awk -v col="${var1}" -f awkscript.awk "${var2}" > "$var3"
mv ${var3} ${var2}
Execute it like below:
./myscript.sh 2 file.txt temp.txt
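The contents of awkscript.awk aren't shown; as a hypothetical stand-in, a script that keeps rows whose chosen column matches a value would fit this wrapper's shape:

```shell
# Hypothetical stand-in for awkscript.awk: filter on column "col"
printf 'a keep\nb drop\nc keep\n' > file.txt
awk -v col=2 '$col == "keep"' file.txt > temp.txt
mv temp.txt file.txt            # same move-back step as in the wrapper
```

After running, file.txt holds only the rows whose second column is "keep", mirroring the filter-then-replace flow of the ksh script.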