Awk command in Tcl foreach loop - loops

I need to pass a variable value to an awk command used in Tcl.
Goal: I want to extract the first field of the file clk_gate_names, one line at a time, in a loop. This small example shows the concept that has been used in one of my scripts.
One test case below:
set a 0
incr a
foreach i {0.. 11} { # i variable is used for another thing
set test [exec awk {NR==${a}{print $1}} clk_gate_names]
}
I'm getting the error below:
awk: NR==${a}{print $1}
awk: ^ syntax error
Please throw some light here. Thanks in advance :)

The problem is that you want to mix variables from Tcl and variables from Awk. That's a bit complicated, as they are necessarily separate processes.
One option is to use Tcl's format command to build the Awk script-let.
set a 0
incr a
foreach i {0.. 11} { # i variable is used for another thing
set awkcode [format {NR==%d {print $1}} $a]
set test [exec awk $awkcode clk_gate_names]
}
However, a better approach is to pass the variable in with the -v option. That has the advantage of not being easily confused by any weird quoting going on, and Tcl doesn't split words unexpectedly (except, in exec, when a word looks like a file/pipe redirection; that's a wart).
set a 0
incr a
foreach i {0.. 11} { # i variable is used for another thing
set test [exec awk -v a=$a {NR==a {print $1}} clk_gate_names]
}
The third way is to use an environment variable. That has the advantage of not being susceptible to the problems with redirections (so you can pass in values containing > for example).
set a 0
incr a
foreach i {0.. 11} { # i variable is used for another thing
set env(A_FROM_TCL) $a
set test [exec awk {NR==ENVIRON["A_FROM_TCL"] {print $1}} clk_gate_names]
}
Notes:
Remember with awk and Tcl, don't use single quotes. Tcl doesn't use the same syntax as bash. Put the script in braces always or you'll get very confused.
I assume you know about the syntax of foreach? For the purposes of our discussion here it's not important, but you'd probably be having problems from it if the code you posted is exactly what you are using… (see the sketch below)
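For reference, the literal {0.. 11} above is a two-element list (the words "0.." and "11"), so that loop body only runs twice. A minimal sketch of a loop that really visits lines 1 through 12, combined with the -v approach shown above:
for {set a 1} {$a <= 12} {incr a} {
    set test [exec awk -v a=$a {NR==a {print $1}} clk_gate_names]
}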

Try the logic below -
awk -v a=1 'NR==a {print $1}' f
As per your code -
set a 0
incr a
foreach i {0.. 11} { # i variable is used for another thing
set test [exec awk -v a=$a {NR==a {print $1}} clk_gate_names]
}
Let me know if it satisfies your requirement, and share the output (desired output/error).

awk -v a=${a} "NR==a{print \$1}" filename
Check:
http://phaseit.net/claird/comp.lang.tcl/fmm.html
"AWK: who | awk '{ print $1 }' works from the command line, but my Tcl interpreter rejects exec who | awk '{ print $1 }'." I often hear that from people who haven't yet learned to ask tclsh to interpret exec who | awk {{ print $1 }}. Notice that the 's aren't "about" awk; they're just conventional quoting in the shells from which one most often accesses awk. exec doesn't use the shells. {} performs an analogous quoting function for Tcl. More examples appear in this Google thread. Alexandre Ferrieux gives a correct and concise explanation in another thread. The problem at hand for him happened to do with sed, but that's essentially inconsequential; the command-line syntaxes of awk and sed exhibit identical symptoms.
Briefly, as the error messages which arise themselves say, the single quotes (') are the problem. These are shell markup for strings which shall not be substituted; /bin/sh and others remove this markup before calling awk. In Tcl, in contrast, braces {} serve the same purpose. Braces have the advantage that they can be nested. One conclusion: brace the braces that awk expects: awk -F " " {{print $1}}. Note that, in this last, it is not necessary to escape the dollar sign, because the outer braces protect it, too.
Much the same has been written several times by Richard Suchenwirth,
Mark Janssen, Kevin Kenny, and others; it's unlikely any particular
expression was invented in isolation.

Related

Capture Alias Command Names into Array. Then Execute each in a For Loop

Trying to dynamically capture a list of specific alias commands, and store them for execution later in the script. I want to capture dynamically, since this script will run against multiple users on multiple servers with different commands. E.g.: server1 might have 'statcmd1', 2 and 3, while server2 will only have 'statcmd2'. There is more to it, but this is where I am stuck. I'm guessing something isn't right with how the array is being set? Maybe there is another way to do this.
Test:
#!/bin/bash
source $HOME/.bash_profile #Aliases are set in here
# Capture the set aliases into array. It will omit the actual command and only pick up the alias name.
ALL_STAT=($(alias | grep "control.sh" | awk '{print $2}'| awk -F '=' '{print $1}' | grep -F "stat"))
#Execute the status of all elements listed in the array
for i in ${ALL_STAT[@]}; do $i; done
Execution:
[user@server]$ ./test.sh
./test.sh: line 16: statcmd1: command not found
./test.sh: line 16: statcmd2: command not found
./test.sh: line 16: statcmd3: command not found
Execute Alias Commands Outside of Script Works:
[user@server]$ statcmd1
RUNNING
[user@server]$ statcmd2
RUNNING
[user@server]$ statcmd3
RUNNING
From the bash manpage:
Aliases are not expanded when the shell is not interactive, unless
the expand_aliases shell option is set using shopt (see the
description of shopt under SHELL BUILTIN COMMANDS below).
Execute
shopt -s expand_aliases
before executing the commands - afterwards, all aliases are also available in the script.
Furthermore, due to the fact that alias expansion takes place before variable expansion, the line has to be evaluated twice with the help of eval:
#!/bin/bash
source $HOME/.bash_profile #Aliases are set in here
# Capture the set aliases into array. It will omit the actual command and only pick up the alias name.
ALL_STAT=($(alias | grep "control.sh" | awk '{print $2}'| awk -F '=' '{print $1}' | grep -F "stat"))
shopt -s expand_aliases
#Execute the status of all elements listed in the array
for i in ${ALL_STAT[@]}
do
eval "$i"
done
According to the Bash Reference Manual, alias expansion is performed before variable expansion. Therefore the expanded value of $i is never even considered for alias expansion.
You can use functions instead. Command/function execution is performed after variable expansion. As a matter of fact, the manual also says:
For almost every purpose, shell functions are preferred over aliases.
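For illustration, a minimal sketch of the function-based alternative; the control.sh path and command name are hypothetical stand-ins matching the question:
# in .bash_profile, instead of: alias statcmd1='~/bin/control.sh stat'
statcmd1() { ~/bin/control.sh stat; }
# a function is looked up after variable expansion, so no shopt or eval is needed:
for i in "${ALL_STAT[@]}"; do "$i"; done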

Bash array indirection in a function [duplicate]

Bash script to create multiple arrays from csv with unknown columns.
I am trying to write a script to compare two csv files with similar columns. I need it to locate the matching column from the other csv and compare any differences. The kicker is that I would like the script to be dynamic, allowing any number of columns to be entered while still being able to function. I thought I had a good plan to solve this, but it turns out I'm running into syntax errors. Here is a sample of a csv I need to compare.
IP address, Notes, Nmap-SSH, Nmap-SMTP, Nmap-HTTP, Nmap-HTTPS,
10.0.0.1, , open, closed, open, open,
10.0.0.2, , closed, open, closed, closed,
When I read the csv file I was planning to look for "IF column == open; then; populate this column's array with the IP address" This would have given me 4 lists in this scenario with the IPs that were listening on said port. I could then compare that to my security device configuration to make sure it was configured properly. Finally to the meat, here is what I thought would accomplish creating the arrays for me to search later. However I ran into a snag when I tried to use a variable inside an array name. Can my syntax be corrected or is there just a better way to do this sort of thing?
#!/bin/bash
#
#
# This script compares config_cleaned_<ip>.txt output against ext_web_env.csv and outputs the differences
#
#
# Read from ext_web_env.csv file and create Array
#
FILENAME=./tmp/ext_web_env.csv
#
index=0
#
while read line
do
# How many columns are in the .csv?
varEnvCol=$(echo $line | awk -F, '{print NF}')
echo "columns = $varEnvCol"
# While loop to create array for each column
while [ $varEnvCol != 2 ]
do
# Checks to see if port is open; if so then add IP address to array
varPortCon=$(echo $line | awk -F, -v i=$varEnvCol '{print $i}')
if [ $varPortCon = "open" ]
then
arr$varEnvCol[$index]="$(echo $line | awk -F, '{print $1}')"
# I get this error message "line29 : arr8[194]=10.0.0.194: command not found"
fi
echo "arrEnv$varEnvCol is: ${arr$varEnvCol[#]}"
# Another error but not as important since I am using this to debug "line31: arr$varEnvCol is: ${arr$varEnvCol[#]}: bad substitution"
varEnvCol=$(($varEnvCol - 1))
done
index=$(($index + 1 ))
done < $FILENAME
UPDATE
I also tried using the eval command, since all the data will be populated by other scripts,
but am getting this error message:
./compare.sh: line 41: arr8[83]=10.0.0.83: command not found
Here is my new code for this example:
if [[ $varPortCon = *'open'* ]]
then
eval arr\$varEnvCol[$index]=$(echo $line | awk -F, '{print $1}')
fi
arr$varEnvCol[$index]="$(...)"
doesn't work the way you expect it to - you cannot assign to shell variables indirectly - via an expression that expands to the variable name - this way.
Your attempted workaround with eval is also flawed - see below.
tl;dr
If you use bash 4.3 or above:
declare -n targetArray="arr$varEnvCol"
targetArray[index]=$(echo $line | awk -F, '{print $1}')
bash 4.2 or earlier:
declare "arr$varEnvCol"[index]="$(echo $line | awk -F, '{print $1}')"
Caveat: This will work in your particular situation, but may fail subtly in others; read on for details, including a more robust, but cumbersome alternative based on read.
The eval-based solution mentioned by @shellter in a since-deleted comment is problematic not only for security reasons (as they mentioned), but also because it can get quite tricky with respect to quoting; for completeness, here's the eval-based solution:
eval "arr$varEnvCol[index]"='$(echo $line | awk -F, '\''{print $1}'\'')'
See below for an explanation.
Assign to a bash array variable indirectly:
bash 4.3+: use declare -n to effectively create an alias ('nameref') of another variable
This is by far the best option, if available:
declare -n targetArray="arr$varEnvCol"
targetArray[index]=$(echo $line | awk -F, '{print $1}')
declare -n effectively allows you to refer to a variable by another name (whether that variable is an array or not), and the name to create an alias for can be the result of an expression (an expanded string), as demonstrated.
bash 4.2-: there are several options, each with tradeoffs
NOTE: With non-array variables, the best approach is to use printf -v. Since this question is about array variables, this approach is not discussed further.
[most robust, but cumbersome]: use read:
IFS=$'\n' read -r -d '' "arr$varEnvCol"[index] <<<"$(echo $line | awk -F, '{print $1}')"
IFS=$'\n' ensures that leading and trailing whitespace in each input line is left intact.
-r prevents interpretation of \ chars. in the input.
-d '' ensures that ALL input is captured, even multi-line.
Note, however, that any trailing \n chars. are stripped.
If you're only interested in the first line of input, omit -d ''
"arr$varEnvCol"[index] expands to the variable - array element, in this case - to assign to; note that referring to variable index inside an array subscript does NOT need the $ prefix, because subscripts are evaluated in arithmetic context, where the prefix is optional.
<<< - a so-called here-string - sends its argument to stdin, where read takes its input from.
[simplest, but may break]: use declare:
declare "arr$varEnvCol"[index]="$(echo $line | awk -F, '{print $1}')"
(This is slightly counter-intuitive, in that declare is meant to declare, not modify a variable, but it works in bash 3.x and 4.x, with the constraints noted below.)
Works fine OUTSIDE a FUNCTION - whether the array was explicitly declared with declare or not.
Caveat: INSIDE a function, only works with LOCAL variables - you cannot reference shell-global variables (variables declared outside the function) from inside a function that way. Attempting to do so invariably creates a LOCAL variable ECLIPSING the shell-global variable.
[insecure and tricky]: use eval:
eval "arr$varEnvCol[index]"='$(echo $line | awk -F, '\''{print $1}'\'')'
CAVEAT: Only use eval if you fully control the contents of the string being evaluated; eval will execute any command contained in a string, with potentially unwanted results.
Understanding what variable references/command substitutions get expanded when is nontrivial - the safest approach is to delay expansion so that they happen when eval executes rather than immediate expansion that happens when arguments are passed to eval.
For a variable assignment statement to succeed, the RHS (right-hand side) must eventually evaluate to a single token - either unquoted without whitespace or quoted (optionally with whitespace).
The above example uses single quotes to delay expansion; thus, the string passed mustn't contain single quotes directly and thus is broken into multiple parts with literal ' chars. spliced in as \'.
Also note that the LHS (left-hand side) of the assignment statement passed to eval must be a double-quoted string - using an unquoted string with selective quoting of $ won't work, curiously:
OK: eval "arr$varEnvCol[index]"=...
FAILS: eval arr\$varEnvCol[index]=...

shell script string substitution [duplicate]

I want to pipe the output of a "template" file into MySQL, the file having variables like ${dbName} interspersed. What is the command line utility to replace these instances and dump the output to standard output?
The input file is considered to be safe, but faulty substitution definitions could exist. Performing the replacement should avoid performing unintended code execution.
Update
Here is a solution from yottatsa on a similar question that only does replacement for variables like $VAR or ${VAR}, and is a brief one-liner
i=32 word=foo envsubst < template.txt
Of course if i and word are in your environment, then it is just
envsubst < template.txt
On my Mac it looks like it was installed as part of gettext and from MacGPG2
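GNU envsubst can also be restricted to a given set of variables by passing a SHELL-FORMAT argument; only the variables referenced there are replaced, and any other $... text in the template is left untouched:
i=32 word=foo envsubst '${i} ${word}' < template.txt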
Old Answer
Here is an improvement to the solution from mogsie on a similar question. My solution does not require you to escape double quotes; mogsie's does, but his is a one-liner!
eval "cat <<EOF
$(<template.txt)
EOF
" 2> /dev/null
The power of these two solutions is that you only get a few types of shell expansion that don't normally occur: $((...)), `...`, and $(...). Backslash is an escape character here, but you don't have to worry that the parsing has a bug, and multi-line input is handled just fine.
Sed!
Given template.txt:
The number is ${i}
The word is ${word}
we just have to say:
sed -e "s/\${i}/1/" -e "s/\${word}/dog/" template.txt
Thanks to Jonathan Leffler for the tip to pass multiple -e arguments to the same sed invocation.
Use /bin/sh. Create a small shell script that sets the variables, and then parse the template using the shell itself. Like so (edit to handle newlines correctly):
File template.txt:
the number is ${i}
the word is ${word}
File script.sh:
#!/bin/sh
#Set variables
i=1
word="dog"
#Read in template one line at a time, and replace variables (more
#natural (and efficient) way, thanks to Jonathan Leffler).
while read line
do
eval echo "$line"
done < "./template.txt"
Output:
$ sh script.sh
the number is 1
the word is dog
I was thinking about this again, given the recent interest, and I think that the tool that I was originally thinking of was m4, the macro processor for autotools. So instead of the variable I originally specified, you'd use:
$ echo 'I am a DBNAME' | m4 -DDBNAME="database name"
Create rendertemplate.sh:
#!/usr/bin/env bash
eval "echo \"$(cat $1)\""
And template.tmpl:
Hello, ${WORLD}
Goodbye, ${CHEESE}
Render the template:
$ export WORLD=Foo
$ CHEESE=Bar ./rendertemplate.sh template.tmpl
Hello, Foo
Goodbye, Bar
template.txt
Variable 1 value: ${var1}
Variable 2 value: ${var2}
data.sh
#!/usr/bin/env bash
declare var1="value 1"
declare var2="value 2"
parser.sh
#!/usr/bin/env bash
# args
declare file_data=$1
declare file_input=$2
declare file_output=$3
source $file_data
eval "echo \"$(< $file_input)\"" > $file_output
./parser.sh data.sh template.txt parsed_file.txt
parsed_file.txt
Variable 1 value: value 1
Variable 2 value: value 2
Here's a robust Bash function that - despite using eval - should be safe to use.
All ${varName} variable references in the input text are expanded based on the calling shell's variables.
Nothing else is expanded: neither variable references whose names are not enclosed in {...} (such as $varName), nor command substitutions ($(...) and legacy syntax `...`), nor arithmetic substitutions ($((...)) and legacy syntax $[...]).
To treat a $ as a literal, \-escape it; e.g.: \${HOME}
Note that input is only accepted via stdin.
Example:
$ expandVarsStrict <<<'$HOME is "${HOME}"; `date` and \$(ls)' # only ${HOME} is expanded
$HOME is "/Users/jdoe"; `date` and $(ls)
Function source code:
expandVarsStrict(){
local line lineEscaped
while IFS= read -r line || [[ -n $line ]]; do # the `||` clause ensures that the last line is read even if it doesn't end with \n
# Escape ALL chars. that could trigger an expansion..
IFS= read -r -d '' lineEscaped < <(printf %s "$line" | tr '`([$' '\1\2\3\4')
# ... then selectively reenable ${ references
lineEscaped=${lineEscaped//$'\4'{/\${}
# Finally, escape embedded double quotes to preserve them.
lineEscaped=${lineEscaped//\"/\\\"}
eval "printf '%s\n' \"$lineEscaped\"" | tr '\1\2\3\4' '`([$'
done
}
The function assumes that no 0x1, 0x2, 0x3, and 0x4 control characters are present in the input, because those chars. are used internally - since the function processes text, that should be a safe assumption.
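If you would rather enforce that assumption than merely rely on it, a small guard at the top of the loop could reject such input (a sketch, not part of the original function):
[[ $line == *[$'\1\2\3\4']* ]] && { printf 'error: input contains reserved control chars.\n' >&2; return 1; }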
Here's my solution with perl, based on a former answer; it replaces environment variables:
perl -p -e 's/\$\{(\w+)\}/(exists $ENV{$1}?$ENV{$1}:"missing variable $1")/eg' < infile > outfile
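For example, assuming a template file containing ${dbName} (template.sql and out.sql are placeholder names), the variable has to be exported so that perl's %ENV can see it:
export dbName=testdb
perl -p -e 's/\$\{(\w+)\}/(exists $ENV{$1}?$ENV{$1}:"missing variable $1")/eg' < template.sql > out.sql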
I would suggest using something like Sigil:
https://github.com/gliderlabs/sigil
It is compiled to a single binary, so it's extremely easy to install on systems.
Then you can do a simple one-liner like the following:
cat my-file.conf.template | sigil -p $(env) > my-file.conf
This is much safer than eval, and easier than using regex or sed.
Here is a way to get the shell to do the substitution for you, as if the contents of the file were instead typed between double quotes.
Using the example of template.txt with contents:
The number is ${i}
The word is ${word}
The following line will cause the shell to interpolate the contents of template.txt and write the result to standard out.
i='1' word='dog' sh -c 'echo "'"$(cat template.txt)"'"'
Explanation:
i and word are passed as environment variables scoped to the execution of sh.
sh executes the contents of the string it is passed.
Strings written next to one another become one string, that string is:
'echo "' + "$(cat template.txt)" + '"'
Since the substitution is between ", "$(cat template.txt)" becomes the output of cat template.txt.
So the command executed by sh -c becomes:
echo "The number is ${i}\nThe word is ${word}",
where i and word are the specified environment variables.
If you are open to using Perl, that would be my suggestion, although there are probably some sed and/or AWK experts who know how to do this much more easily. If you have a more complex mapping with more than just dbName for your replacements, you could extend this pretty easily, but you might just as well put it into a standard Perl script at that point.
perl -p -e 's/\$\{dbName\}/testdb/s' yourfile | mysql
A short Perl script to do something slightly more complicated (handle multiple keys):
#!/usr/bin/env perl
my %replace = ( 'dbName' => 'testdb', 'somethingElse' => 'fooBar' );
undef $/;
my $buf = <STDIN>;
$buf =~ s/\$\{$_\}/$replace{$_}/g for keys %replace;
print $buf;
If you name the above script as replace-script, it could then be used as follows:
replace-script < yourfile | mysql
file.tpl:
The following bash function should only replace ${var1} syntax and ignore
other shell special chars such as `backticks` or $var2 or "double quotes".
If I have missed anything - let me know.
script.sh:
template(){
# usage: template file.tpl
while read -r line ; do
line=${line//\"/\\\"}
line=${line//\`/\\\`}
line=${line//\$/\\\$}
line=${line//\\\${/\${}
eval "echo \"$line\"";
done < ${1}
}
var1="*replaced*"
var2="*not replaced*"
template file.tpl > result.txt
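With the file.tpl shown above, result.txt should come out as follows - the ${var1} reference is expanded, while the backticks, $var2, and the double quotes survive literally:
The following bash function should only replace *replaced* syntax and ignore
other shell special chars such as `backticks` or $var2 or "double quotes".
If I have missed anything - let me know.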
I found this thread while wondering the same thing. It inspired me to this (careful with the backticks)
$ echo $MYTEST
pass!
$ cat FILE
hello $MYTEST world
$ eval echo `cat FILE`
hello pass! world
Lots of choices here, but figured I'd toss mine on the heap. It is perl based, only targets variables of the form ${...}, takes the file to process as an argument and outputs the converted file on stdout:
use Env;
Env::import();
while(<>) { $_ =~ s/(\${\w+})/$1/eeg; $text .= $_; }
print "$text";
Of course I'm not really a perl person, so there could easily be a fatal flaw (works for me though).
It can be done in bash itself if you have control of the configuration file format. You just need to source (".") the configuration file rather than subshell it. That ensures the variables are created in the context of the current shell (and continue to exist) rather than in the subshell (where the variables disappear when the subshell exits).
$ cat config.data
export parm_jdbc=jdbc:db2://box7.co.uk:5000/INSTA
export parm_user=pax
export parm_pwd=never_you_mind
$ cat go.bash
. config.data
echo "JDBC string is " $parm_jdbc
echo "Username is " $parm_user
echo "Password is " $parm_pwd
$ bash go.bash
JDBC string is jdbc:db2://box7.co.uk:5000/INSTA
Username is pax
Password is never_you_mind
If your config file cannot be a shell script, you can just 'compile' it before executing thus (the compilation depends on your input format).
$ cat config.data
parm_jdbc=jdbc:db2://box7.co.uk:5000/INSTA # JDBC URL
parm_user=pax # user name
parm_pwd=never_you_mind # password
$ cat go.bash
cat config.data \
    | sed 's/#.*$//' \
    | sed 's/[ \t]*$//' \
    | sed 's/^[ \t]*//' \
    | grep -v '^$' \
    | sed 's/^/export /' \
    > config.data-compiled
. config.data-compiled
echo "JDBC string is " $parm_jdbc
echo "Username is " $parm_user
echo "Password is " $parm_pwd
$ bash go.bash
JDBC string is jdbc:db2://box7.co.uk:5000/INSTA
Username is pax
Password is never_you_mind
In your specific case, you could use something like:
$ cat config.data
export p_p1=val1
export p_p2=val2
$ cat go.bash
. ./config.data
echo "select * from dbtable where p1 = '$p_p1' and p2 like '$p_p2%' order by p1"
$ bash go.bash
select * from dbtable where p1 = 'val1' and p2 like 'val2%' order by p1
Then pipe the output of go.bash into MySQL and voila, hopefully you won't destroy your database :-).
In-place perl editing of potentially multiple files, with backups.
perl -e 's/\$\{([^}]+)\}/defined $ENV{$1} ? $ENV{$1} : ""/eg' \
-i.orig \
-p config/test/*
I created a shell templating script named shtpl. My shtpl uses a jinja-like syntax which, now that I use ansible a lot, I'm pretty familiar with:
$ cat /tmp/test
{{ aux=4 }}
{{ myarray=( a b c d ) }}
{{ A_RANDOM=$RANDOM }}
$A_RANDOM
{% if $(( $A_RANDOM%2 )) == 0 %}
$A_RANDOM is even
{% else %}
$A_RANDOM is odd
{% endif %}
{% if $(( $A_RANDOM%2 )) == 0 %}
{% for n in 1 2 3 $aux %}
\$myarray[$((n-1))]: ${myarray[$((n-1))]}
/etc/passwd field #$n: $(grep $USER /etc/passwd | cut -d: -f$n)
{% endfor %}
{% else %}
{% for n in {1..4} %}
\$myarray[$((n-1))]: ${myarray[$((n-1))]}
/etc/group field #$n: $(grep ^$USER /etc/group | cut -d: -f$n)
{% endfor %}
{% endif %}
$ ./shtpl < /tmp/test
6535
6535 is odd
$myarray[0]: a
/etc/group field #1: myusername
$myarray[1]: b
/etc/group field #2: x
$myarray[2]: c
/etc/group field #3: 1001
$myarray[3]: d
/etc/group field #4:
More info on my github
To me this is the easiest and most powerful solution; you can even include other templates using the same command eval echo "$(<template.txt)":
Example with nested template
create the template files; the variables are in regular bash syntax ${VARIABLE_NAME} or $VARIABLE_NAME
you have to escape special characters with \ in your templates, otherwise they will be interpreted by eval.
template.txt
Hello ${name}!
eval echo $(<nested-template.txt)
nested-template.txt
Nice to have you here ${name} :\)
create source file
template.source
declare name=royman
parse the template
source template.source && eval echo "$(<template.txt)"
the output
Hello royman!
Nice to have you here royman :)
envsubst
please don't use anything else (i.e. don't eval)

Run a bash array with pipes

How can I run a command line from a bash array containing a pipeline?
For example, I want to run ls | grep x by means of:
$ declare -a pipeline
$ pipeline=(ls)
$ pipeline+=("|")
$ pipeline+=(grep x)
$ "${pipeline[#]}"
But I get this:
ls: cannot access |: No such file or directory
ls: cannot access grep: No such file or directory
ls: cannot access x: No such file or directory
Short form: You can't (without writing some code), and it's a feature, not a bug.
If you're doing things in a safe way, you're protecting your data from being parsed as code (syntax). What you explicitly want here, however, is to treat data as code, but only in a controlled way.
What you can do is iterate over the elements, use printf '%q ' "$element" to get a safely quoted string for each one that isn't a pipe symbol, and leave the pipe symbols unsubstituted.
After doing that, and ONLY after doing that, you can safely eval the output string.
eval_args() {
local outstr=''
while (( $# )); do
if [[ $1 = '|' ]]; then
outstr+="| "
else
printf -v outstr '%s%q ' "$outstr" "$1"
fi
shift
done
eval "$outstr"
}
eval_args "${pipeline[#]}"
By the way -- it's much safer NOT TO DO THIS. Think about the case where you're processing a list of files, and one of them is named |; this strategy could be used by an attacker to inject code. Using separate lists for the before and after arrays, or making only one side of the pipeline an array and hardcoding the other, is far better practice.
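A minimal sketch of that safer shape, with the pipe hardcoded as syntax and only the two command vectors kept as data:
pre=(ls)       # left-hand command and its arguments
post=(grep x)  # right-hand command and its arguments
"${pre[@]}" | "${post[@]}"   # the | is fixed syntax here, never user data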
Close - just add eval:
$ eval ${pipeline[@]}
This works for me:
bash -c "${pipeline[*]}"

Is it possible to combine bash and awk script files?

I have a bash script where I get the value of a variable that I would like to use in awk.
Is it possible to include a whole awk file in bash (like it is possible with bash script files), e.g.:
#!/bin/sh
var1=$1
source myawk.sh
and myawk.sh:
print $1;
Bash and awk are different languages, each with their own interpreter of the same name. The tiny sample you show is stripped down too far to make much sense:
You've marked both files as shell scripts; one using the shebang #!/bin/sh and the other using the extension .sh. Obviously the shell can read shell script, and the command to do so is called . in Bourne shell (or source in csh and bash).
The shell script assigns a variable, but you're not using it anywhere. Did you mean passing it on to the awk script?
Both the awk and shell script use $1, which has different meanings for them (in bash, it's from the command line or a set command; in awk, it's from a parsed input line).
The two tools are often used in tandem, as the shell is better at combining separate programs and awk is better at reformatting tabular or structured text. It was so common that a whole language evolved to combine the tasks; Perl's roots are as a combination of shell, awk and sed.
If you just wanted to pass a variable from the shell script into an awk script, use -v. The man page is your friend.
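A minimal sketch of that, handing the shell variable to awk with -v (the file name is a made-up example):
#!/bin/bash
var1=$1
awk -v v1="$var1" '{ print v1, $1 }' input.txt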
First of all, if you're writing bash, don't use #!/bin/sh: that will put you in compatibility mode, which is only necessary if you're writing for portability (and then you have to adhere to the POSIX standard).
Now, regarding your question: you just have to run awk from inside your bash script, like this:
#!/bin/bash
var1=$1
awk -f myawk.sh
Also, you should use .awk as the extension, I guess.
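Note that a standalone awk program file needs the pattern { action } structure; a bare print $1; at the top level is a syntax error. A minimal sketch of what myawk.awk could contain, with the bash variable passed in via -v (v1 is an assumed name):
# myawk.awk
{ print v1, $1 }
called as:
awk -v v1="$var1" -f myawk.awk inputfile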
Or, many people do something like this:
#!/usr/bin/env bash
#Bash things start
...
var1=$1
#Bash things stop
#Awk things start,
#may use pipes or variable to interact with bash
awk -v V1="$var1" '
#AWK program, can even include awk scripts here.
'
#Bash things
I suggest this page here by Bruce Barnett:
http://www.grymoire.com/Unix/Awk.html#uh-3
You can also use double quotes to make use of the shell's expansion, but it is confusing.
Personally I just try to avoid those fancy gnu additions of bash or awk and make my scripts ksh+(n)awk compatible.
As a hardcore AWK user, I soon realized that doing the following was really a huge help:
Defining and exporting an AWK_REPO variable in my bashrc
#Content of bashrc
export AWK_REPO=~/bin/AWK
Storing there every AWK script I write using the .awk extension.
You can then call it from anywhere like this:
awk -f $AWK_REPO/myScript.awk $file
or even, using Shebangs and adding AWK_REPO to PATH (with export PATH=${AWK_REPO}:${PATH})
myScript.awk $file
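For the shebang variant, a minimal sketch of such a self-contained script (the awk path may differ on your system):
#!/usr/bin/awk -f
# myScript.awk - print the first field of every input line
{ print $1 }
After chmod +x $AWK_REPO/myScript.awk, and with AWK_REPO on PATH, it can be invoked as myScript.awk $file.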
