Get exit code for eval command in sh

I'm trying to create a function in a shell script that takes a command and executes it using eval, and then does some post-processing based on the success of the command. Unfortunately the code is not behaving as I would expect. Here's what I have:
#!/bin/sh
...
function run_cmd()
{
# $1 = build cmd
typeset cmd="$1"
typeset ret_code
eval $cmd
ret_code=$?
if [ $ret_code == 0 ]
then
# Process Success
else
# Process Failure
fi
}
run_cmd "xcodebuild -target \"blah\" -configuration Debug"
When the command ($cmd) succeeds, it works fine. When the command fails (a compilation error, for instance), the script automatically exits before I can process the failure. Is there a way I can prevent eval from exiting, or is there a different approach I can take that will allow me to achieve my desired behavior?

The script should only exit if you have set -e somewhere in the script, so I'll assume that is the case. A simpler way to write the function which will prevent set -e from triggering an automatic exit is to do:
run_cmd() {
if eval "$#"; then
# Process Success
else
# Process Failure
fi
}
Note that the function keyword is non-portable when defining a function, and redundant when () is also used.
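For completeness, here's a minimal, self-contained sketch of how this plays out under set -e (the failing ls path below is just a hypothetical stand-in for a failing build command):
#!/bin/sh
set -e   # without the if-wrapper, a failing eval would abort the script here

run_cmd() {
    # A command run as the condition of an if is exempt from set -e,
    # so a failure lands in the else branch instead of exiting the script.
    if eval "$@"; then
        echo "Success: do post-processing here"
    else
        echo "Failure (exit $?): do error handling here"
    fi
}

run_cmd "ls /nonexistent-path"   # hypothetical failing command
echo "still running after the failure"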

Related

How to write bash function to print and run command when the command has arguments with spaces or things to be expanded

In Bash scripts, I frequently find this pattern useful, where I first print the command I'm about to execute, then I execute the command:
echo 'Running this cmd: ls -1 "$HOME/temp/some folder with spaces"'
ls -1 "$HOME/temp/some folder with spaces"
echo 'Running this cmd: df -h'
df -h
# etc.
Notice the single quotes in the echo command to prevent variable expansion there! The idea is that I want to print the cmd I'm running, exactly as I will type and run the command, then run it!
How do I wrap this up into a function?
Wrapping the command up into a standard bash array, and then printing and calling it, like this, sort-of works:
# Print and run the passed-in command
# USAGE:
# cmd_array=(ls -a -l -F /)
# print_and_run_cmd cmd_array
# See:
# 1. My answer on how to pass regular "indexed" and associative arrays by reference:
# https://stackoverflow.com/a/71060036/4561887 and
# 1. My answer on how to pass associative arrays: https://stackoverflow.com/a/71060913/4561887
print_and_run_cmd() {
local -n array_reference="$1"
echo "Running cmd: ${cmd_array[#]}"
# run the command by calling all elements of the command array at once
${cmd_array[#]}
}
For simple commands like this it works fine:
Usage:
cmd_array=(ls -a -l -F /)
print_and_run_cmd cmd_array
Output:
Running cmd: ls -a -l -F /
(all output of that cmd is here)
But for more-complicated commands it is broken!:
Usage:
cmd_array=(ls -1 "$HOME/temp/some folder with spaces")
print_and_run_cmd cmd_array
Desired output:
Running cmd: ls -1 "$HOME/temp/some folder with spaces"
(all output of that command should be here)
Actual Output:
Running cmd: ls -1 /home/gabriel/temp/some folder with spaces
ls: cannot access '/home/gabriel/temp/some': No such file or directory
ls: cannot access 'folder': No such file or directory
ls: cannot access 'with': No such file or directory
ls: cannot access 'spaces': No such file or directory
The first problem, as you can see, is that $HOME got expanded in the Running cmd: line when it shouldn't have, and the double quotes around that path argument were removed. The second problem is that the command doesn't actually run.
How do I fix these 2 problems?
References:
my bash demo program where I have this print_and_run_cmd function: https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/bash/argument_parsing__3_advanced__gen_prog_template.sh
where I first documented how to pass bash arrays by reference, as I do in that function:
Passing arrays as parameters in bash
How to pass an associative array as argument to a function in Bash?
Follow-up question:
Bash: how to print and run a cmd array which has the pipe operator, |, in it
If you've got Bash version 4.4 or later, this function may do what you want:
function print_and_run_cmd
{
local PS4='Running cmd: '
local -
set -o xtrace
"$#"
}
For example, running
print_and_run_cmd echo 'Hello World!'
outputs
Running cmd: echo 'Hello World!'
Hello World!
local PS4='Running cmd: ' sets a prefix for commands printed by the shell when the xtrace option is on. The default is +. Localizing it means that the previous value of PS4 is automatically restored when the function returns.
local - causes any changes to shell options to be reverted automatically when the function returns. In particular, it causes the set -o xtrace on the next line to be automatically undone when the function returns. Support for local - was added in Bash 4.4.
From man bash, under the local [option] [name[=value] ... | - ] section (emphasis added):
If name is -, the set of shell options is made local to the function in which local is invoked: shell options changed using the set builtin inside the function are restored to their original values when the function returns.
set -o xtrace (which is equivalent to set -x) causes the shell to print commands, preceded by the expanded value of PS4, before running them.
See help set.
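If you're stuck on a Bash older than 4.4 (so no local -), a rough sketch of the same idea that saves and restores the xtrace state by hand (this is my own workaround, not part of the answer above):
print_and_run_cmd() {
    local PS4='Running cmd: '
    # Remember whether xtrace was already on so we can restore it later.
    local had_xtrace=0
    [[ $- == *x* ]] && had_xtrace=1
    set -o xtrace
    "$@"
    # Capture the command's status, then turn xtrace back off quietly.
    { local status=$?
      if [[ $had_xtrace -eq 0 ]]; then set +o xtrace; fi
    } 2> /dev/null
    return "$status"
}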
Check your scripts with shellcheck:
Line 2:
local -n array_reference="$1"
^-- SC2034 (warning): array_reference appears unused. Verify use (or export if used externally).
Line 3:
echo "Running cmd: ${cmd_array[#]}"
^-- SC2145 (error): Argument mixes string and array. Use * or separate argument.
^-- SC2154 (warning): cmd_array is referenced but not assigned.
Line 5:
${cmd_array[@]}
^-- SC2068 (error): Double quote array expansions to avoid re-splitting elements.
You might want to read https://github.com/koalaman/shellcheck/wiki/SC2068 for the details. Fixing all the reported problems gives:
print_and_run_cmd() {
local -n array_reference="$1"
echo "Running cmd: ${array_reference[*]}"
# run the command by calling all elements of the command array at once
"${array_reference[#]}"
}
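Usage stays the same as in the question (using the question's example path), e.g.:
cmd_array=(ls -1 "$HOME/temp/some folder with spaces")
print_and_run_cmd cmd_array
# Prints the expanded, unquoted command line, e.g.:
#   Running cmd: ls -1 /home/gabriel/temp/some folder with spaces
# but now actually runs it correctly, because "${array_reference[@]}"
# keeps the path as a single argument.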
For me it's odd to pass an array by reference in this case. I would pass the actual values. I often do:
prun() {
# in the style of set -x
# print to stderr, so output can be captured
echo "+ $*" >&2
# or echo "+ ${*#Q}" >&2
# or echo "+$(printf " %q" "$#")" >&2
# or echo "+$(/bin/printf " %q" "$#")" >&2
"$#"
}
prun "${cmd_array[#]}"
How do I fix these 2 problems?
Incorporate into your workflow linters, formatters and static analysis tools, like shellcheck, and check the problems they point out.
And quote variable expansion. It's "${array[@]}".
You can achieve what you want with a DEBUG trap:
#!/bin/bash
set -T
trap 'test "$FUNCNAME" = print_and_run_cmd || trap_saved_command="${BASH_COMMAND}"' DEBUG
print_and_run_cmd(){
echo "Running this cmd: ${trap_saved_command#* }"
"$#"
}
outer(){
print_and_run_cmd ls -1 "$HOME/temp/some folder with spaces"
}
outer
# output ->
# Running this cmd: ls -1 "$HOME/temp/some folder with spaces"
# ...
I really like @pjh's answer, so I've marked it as correct. It doesn't fully answer my original question though, so if another answer comes along that does, I may have to change that. Anyway, see @pjh's answer for a full explanation of how the code below works and what all those lines mean. I've helped edit that answer with some of the sources from man bash and help set.
I'd like to change the formatting and provide some more examples, however, to show that variable expansion does take place within the command. I'd also like to provide one version which passes by reference, and one which does not, so you can choose the call style which you like best.
Here are my examples, showing both call styles (print_and_run1 cmd_array and print_and_run2 "${cmd_array[@]}"):
#!/usr/bin/env bash
# Print and run the passed-in command, which is passed in as an
# array **by reference**.
# See here for a full explanation: https://stackoverflow.com/a/71151669/4561887
# USAGE:
# cmd_array=(ls -a -l -F /)
# print_and_run1 cmd_array
print_and_run1() {
local -n array_reference="$1"
local PS4='Running cmd: '
local -
set -o xtrace
# Call the cmd
"${array_reference[#]}"
}
# Print and run the passed-in command, which is passed in as members
# of an array **by value**.
# See here for a full explanation: https://stackoverflow.com/a/71151669/4561887
# USAGE:
# cmd_array=(ls -a -l -F /)
# print_and_run2 "${cmd_array[@]}"
print_and_run2() {
local PS4='Running cmd: '
local -
set -o xtrace
# Call the cmd
"$#"
}
cmd_array=(ls -1 "$HOME/temp/some folder with spaces")
print_and_run1 cmd_array
echo ""
print_and_run2 "${cmd_array[#]}"
echo ""
Sample run and output:
eRCaGuy_hello_world/bash$ ./print_and_run.sh
Running cmd: ls -1 '/home/gabriel/temp/some folder with spaces'
file1.txt
file2.txt
Running cmd: ls -1 '/home/gabriel/temp/some folder with spaces'
file1.txt
file2.txt
This seems to work too:
print_and_run_cmd() {
echo "Running cmd: $1"
eval "$cmd"
}
cmd='ls -1 "$HOME/temp/some folder with spaces"'
print_and_run_cmd "$cmd"
Output:
Running cmd: ls -1 "$HOME/temp/some folder with spaces"
(result of running the cmd is here)
But now the problem is that if I also want to print an expanded version of the cmd, to verify that the expansion worked properly, I can't, or at least I don't know how.
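One possible sketch (my own, not from the answers above) that prints both the literal string and its expanded form before running it, by re-parsing the string into positional parameters:
print_and_run_cmd() {
    echo "Running cmd: $1"
    # Re-parse the string so we can show what it expands to;
    # for a simple command this performs the same expansions that eval "$1" would.
    eval "set -- $1"
    printf 'Expanded cmd:'
    printf ' %q' "$@"
    printf '\n'
    "$@"
}

cmd='ls -1 "$HOME/temp/some folder with spaces"'
print_and_run_cmd "$cmd"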

appending return status to associative array in bash

How do I append the return status of a command to an associative array, then loop through the array and print the results? It seems simple and works with the sample code below, but not in a function.
I tried quoting the keys and values when pushing them to the array, but the values are not printed. I did check the return status after appending to the array and it reports success; however, the values never end up in the associative array.
Because the array is not getting the values assigned, nothing is printed as expected. The sample code itself works in my bash version.
#!/bin/bash
declare -A combo
combo+=(['foo']='bar')
combo+=(['hello']='world')
for window in "${!combo[@]}"
do
echo "The key: ${window}" # foo
echo "The value: ${combo[${window}]}" # bar
done
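For reference, that sample prints the two pairs (associative arrays have no guaranteed iteration order, so they may appear in either order):
The key: foo
The value: bar
The key: hello
The value: world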
How do I capture the return status (0 when the maven build succeeds, 1 on failure), assign it to the associative array, and print the results from the associative array after the build has run for all the repositories?
#!/bin/bash
declare -A GITARRAY=(
[git_token]=user-devops:ggc4ktalwfbf5jiqdsdhmmgj2jvvhj3ltfzdujxzxnmhj45qk525kq
[git_branch]='development'
)
declare -A GITREPOS=(
[repository1]=org.gitrepo.com/LIBS/_git/repository1
[repository2]=org.gitrepo.com/LIBS/_git/repository2
)
declare -A BUILDSTATUS
target_dir=$HOME/womsrc
BASEDIR=`dirname "$(readlink -f "$0")"`
HELLOTO=`whoami`
maven_goal=compile
# repositories modified to dummy values
REPOS=(repository1 repository2)
main() {
echo "In main fuction: maven goal: $maven_goal"
if [ -n "$maven_goal" ] ; then
for REPO in "${REPOS[@]}"
do
if [ -d "$target_dir/$REPO/.git" ] ; then
echo "[INFO] invoking compilation for service in $REPO ..."
service_dir="$target_dir/$REPO/$REPO"
build_status=$(buildService $REPO $maven_goal $service_dir)
echo "[STATUS] mvn:$maven_goal $REPO: $build_status"
fi
done
fi ## mavengoal end here
echo "printing build status"
for bt in "${!BUILDSTATUS[@]}"
do
echo "key: ${bt} result: ${BUILDSTATUS[${bt}]}"
done
}
buildService() {
servicename="$1"
mavengoal="$2"
servicedir="$3"
cd "${servicedir}" || echo "[ERROR] cd to $servicename failed with $?"
echo "[INFO] starting compilation in `pwd` ..."
LOG_FILE="$COMPILE_LOGPATH/project-$servicename.log"
mvn $mavengoal -l $LOG_FILE
if [ "$?" -eq 0 ] ; then
buildstatus=success
else
buildstatus=failure
fi
BUILDSTATUS+=([${servicename}]=${buildstatus})
echo "Adding BUILDSTATUS return value: $?"
echo "${BUILDSTATUS[${servicename}]}"
}
main
Your call of the function buildService() via command substitution ($()) is executed in a child process and cannot affect the BUILDSTATUS variable in the parent process, or as the bash(1) manual page puts it:
Command substitution, [...] are invoked in a subshell environment that is a duplicate of the shell environment [...] Changes made to the subshell environment cannot affect the shell's execution environment.
I would suggest outputting only the build status of the current maven call from buildService() and adding it to BUILDSTATUS in the calling process. You currently have several other echo statements in the function; redirect them somewhere else (stderr, for example), or they will end up in your build_status assignment in the calling process.
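A minimal sketch of that suggestion, keeping the question's variable names: buildService() writes only the status word to stdout and everything informational to stderr, and the caller stores the captured word in BUILDSTATUS in the parent shell:
buildService() {
    servicename="$1"
    mavengoal="$2"
    servicedir="$3"
    cd "${servicedir}" || { echo "[ERROR] cd to $servicename failed" >&2; echo failure; return; }
    echo "[INFO] starting compilation in $(pwd) ..." >&2
    # send mvn's own stdout to stderr as well, so only the status word below
    # ends up in the command substitution
    if mvn "$mavengoal" -l "$COMPILE_LOGPATH/project-$servicename.log" >&2; then
        echo success
    else
        echo failure
    fi
}

# in main():
build_status=$(buildService "$REPO" "$maven_goal" "$service_dir")
BUILDSTATUS[$REPO]=$build_status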

Bash parameter expansion, indirect reference, and backgrounding

After struggling with this issue for several hours and searching here and failing to come up with a matching solution, it's time to ask:
In bash (4.3) I'm attempting to do a combination of the following:
Create an array
For loop through the values of the array with a command that isn't super fast (a curl to a web server to get a value), backgrounding each iteration to parallelize everything and speed it up.
Use each name in the array as a variable name, assigning it the value redirected to it from that command via "read".
Background each iteration and collect its PID into a regular array, and associate each PID with the related array value in an associative array, so I have key=value pairs of array value name to PID.
Use "wait" to wait for each PID to exit 0, or throw an error telling us which value name(s) in the array failed to exit with 0, by referencing the associative array.
I need to be able to export all of the VAR names in the original array and their now-associated values (from the curl command results), because I'm sourcing this script from another bash script that will use the resulting exported VARs/values.
The reason I'm using "read" instead of just "export" with export var=$(command) or similar is that when I background the command and grab the PID to use with "wait" in the next loop, I (incorrectly) get the PID of the "export" command, which always exits 0, so I never detect an error. When I use read with the redirect to set the value of the VAR (named in the array) and background that, I get the PID of the actual command and catch any errors in the next loop with the "wait" command.
So, basically, this mostly appears to work, except that "read" doesn't actually appear to be assigning the redirected command output to the variable named by the array value. Or maybe the command is just entirely wrong, and I'm not correctly redirecting the result of my command into the VAR name I'm attempting to set.
For what it's worth, when I run the curl | python command by hand (to pull the value and then parse the JSON output), it definitely succeeds, so I know that part is working; I just can't get the redirect to send the resulting output to the VAR name.
Here's a example of what I'm trying to do:
In parent script:
# Source the child script that has the functions I need
source functions.sh
# Create the array
VALUES=(
VALUE_A
VALUE_B
VALUE_C
)
# Call the function sourced from the script above, which will use the above defined array
function_getvalues
In child (sourced) script:
function_getvalues()
{
curl_pids=( )
declare -A value_pids
for value in "${VALUES[#]}"; do
read ${value} < <(curl -f -s -X GET http://path/to/json/value | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['value'])") & curl_pids+=( $! ) value_pids+=([$!]=${value})
done
for pid in "${curl_pids[@]}"; do
wait "$pid" && echo "Successfully retrieved value ${value_pids[$pid]} from Webserver." || { echo "Something went wrong retrieving value ${value_pids[$pid]}, so we couldn't get the output data needed from Webserver. Exiting." ; exit 1 ; }
done
}
The problem is that read, when run in the background, isn't connected to standard input. Consider this simplified, working example, with a comment showing how to break it:
VALUES=( VALUE_A VALUE_B )
for value in "${VALUES[@]}"; do
read ${value} < <(echo ${RANDOM}) # add "&" and it stops working
done
echo "VALUE_A=${VALUE_A}"
echo "VALUE_B=${VALUE_B}"
You might be able to do this with coproc, or using read -u with automatic file descriptor allocation, but really this is a job for temporary files:
tmpdir=$(mktemp -d)
VALUES=( VALUE_A VALUE_B )
for value in "${VALUES[@]}"; do
(sleep 1; echo ${RANDOM} > "${tmpdir}"/"${value}") &
done
for value in "${VALUES[@]}"; do
wait_file "${tmpdir}"/"${value}" && {
read -r ${value} < "${tmpdir}"/"${value}";
}
done
echo "VALUE_A=${VALUE_A}"
echo "VALUE_B=${VALUE_B}"
rm -r "${tmpdir}"
This example uses a wait_file helper, but you could use inotifywait instead if you don't mind an extra dependency on the OS.
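wait_file isn't a built-in; a rough polling implementation (my assumption of what it might look like, with an arbitrary ~10-second timeout) could be:
# Wait for a file to appear, polling every 0.1 s; fail after ~10 s.
wait_file() {
    local file="$1" tries=100
    until [ -e "$file" ] || [ "$tries" -le 0 ]; do
        tries=$((tries - 1))
        sleep 0.1
    done
    [ -e "$file" ]
}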

Shell script for checking if array is empty and restarting program if so

I want to make a shell script that keeps running to check if my two lightweight web servers are still running, and restarts them if one is not.
I can use the command pgrep -f thin to get an array (?) of pids of my server called thin.
When this returned array has a count of zero I want to run a command which starts both servers:
cd [path_to_app] && bundle exec thin -C app_config.yml start
pgrep -f thin returns all the pids of the servers that are running. For example:
23542
23425
I am new to shell scripting and don't know how to store the results of pgrep -f thin in an array. E.g.,
#!/bin/sh
while true
do
arr=$(pgrep -f thin) # /edited and now THIS WORKS!
#Then I want to check the length of the array and when it is empty run the above
#command, e.g.,
if [ ${#arr[@]} == 0 ]; then
cd [path_to_app] && bundle exec thin -C app_config.yml start
fi
#wait a bit before checking again
sleep 30
done
The first problem I have is that I cannot store the pgrep values in an array, and I am not sure if I can check against zero values. After that I am not sure if there are problems with the other code. I hope someone can help me!
You forgot to execute the command:
arr=($(pgrep -f thin))
[...] when it is empty
If you only check for emptiness, you can directly use the exit status of grep.
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected.
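Applied to the loop from the question, that boils down to something like this sketch (the bracketed path is the question's placeholder; pgrep's output is discarded and only its exit status is tested):
#!/bin/sh
while true
do
    if ! pgrep -f thin > /dev/null; then
        cd [path_to_app] && bundle exec thin -C app_config.yml start
    fi
    sleep 30
done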

Executing shell script with system() returns 256. What does that mean?

I've written a shell script to soft-restart HAProxy (reverse proxy). Executing the script from the shell works. But I want a daemon to execute the script. That doesn't work. system() returns 256. I have no clue what that might mean.
#!/bin/sh
# save previous state
mv /home/haproxy/haproxy.cfg /home/haproxy/haproxy.cfg.old
mv /var/run/haproxy.pid /var/run/haproxy.pid.old
cp /tmp/haproxy.cfg.new /home/haproxy/haproxy.cfg
kill -TTOU $(cat /var/run/haproxy.pid.old)
if haproxy -p /var/run/haproxy.pid -f /home/haproxy/haproxy.cfg; then
kill -USR1 $(cat /var/run/haproxy.pid.old)
rm -f /var/run/haproxy.pid.old
exit 1
else
kill -TTIN $(cat /var/run/haproxy.pid.old)
rm -f /var/run/haproxy.pid
mv /var/run/haproxy.pid.old /var/run/haproxy.pid
mv /home/haproxy/haproxy.cfg /home/haproxy/haproxy.cfg.err
mv /home/haproxy/haproxy.cfg.old /home/haproxy/haproxy.cfg
exit 0
fi
HAProxy is executed with user haproxy. My daemon has its own user too. Both run with sudo.
Any hints?
According to this and that, Perl's system() returns exit values multiplied by 256. So it's actually exiting with 1. It seems this happens in C too.
Unless system returns -1, its return value is of the same format as the status value from the wait family of system calls (man 2 wait). There are macros to help you interpret this status; man 3 wait lists these macros and what they tell you.
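In shell terms, a wait-style status can be unpacked with ordinary arithmetic, which is a quick sanity check for values like 256 (a sketch assuming the usual encoding: exit code in the high byte, terminating signal in the low bits):
status=256
echo "exit code: $(( (status >> 8) & 0xff ))"   # prints 1
echo "signal:    $(( status & 0x7f ))"          # prints 0 (not killed by a signal)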
A code of 256 probably means that the system command cannot locate the binary to run it. Remember that it may not be calling bash and that it may not have paths setup. Try again with full paths to the binaries!
I had the same problem when calling a script that contains a `kill' command from a daemon.
The daemon must have closed stdout and stderr...
Using something like system("script.sh > /dev/null") should work.

Resources