Shell script for checking if array is empty and restarting program if so - arrays

I want to make a shell script that keeps running to check if my two lightweight web servers are still running, and restarts them if one is not.
I can use the command pgrep -f thin to get an array (?) of the PIDs of my server called thin.
When this returned array has a count of zero I want to run a command which starts both servers:
cd [path_to_app] && bundle exec thin -C app_config.yml start
pgrep -f thin returns all the pids of the servers that are running. For example:
23542
23425
I am new to shell scripting and don't know how to store the results of pgrep -f thin in an array. E.g.,
#!/bin/sh
while true
do
arr=$(pgrep -f thin) # edited, and now THIS WORKS!
#Then I want to check the length of the array and when it is empty run the above
#command, e.g.,
if [ ${#arr[@]} == 0 ]; then
cd [path_to_app] && bundle exec thin -C app_config.yml start
fi
#wait a bit before checking again
sleep 30
done
The first problem I have is that I cannot store the pgrep values in an array, and I am not sure if I can check against zero values. After that I am not sure if there are problems with the other code. I hope someone can help me!

You forgot to execute the command:
arr=($(pgrep -f thin))
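With that fix (and a bash shebang, since plain sh may not support arrays), the length check from the question works. A sketch, keeping the placeholder path from the question:
#!/bin/bash
while true
do
    arr=($(pgrep -f thin))
    # ${#arr[@]} is the number of elements, i.e. the number of matching PIDs
    if [ "${#arr[@]}" -eq 0 ]; then
        cd [path_to_app] && bundle exec thin -C app_config.yml start
    fi
    sleep 30
done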
[...] when it is empty
If you only check for emptiness, you can directly use the exit status: like grep, pgrep exits with zero status only when a match is found. From the grep(1) man page:
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected.
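A minimal sketch of that exit-status approach, again keeping the placeholder path from the question:
#!/bin/sh
while true
do
    # discard the PID list; only the exit status matters here
    if ! pgrep -f thin > /dev/null; then
        cd [path_to_app] && bundle exec thin -C app_config.yml start
    fi
    sleep 30
done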

Related

Stop a running c program when this script detects a change in the file

Does anyone know how I could edit this bash script so that it stops the currently running C program and restarts it even if the C program is still doing something? Currently, if the program has finished, the script recompiles and reruns the C program, but if the program is still doing something when I save the source code, it won't stop it and restart it.
I'm on Linux Mint, if that changes anything.
#!/bin/bash
read -p 'Which c file to watch?: ' file_name
old_file_sig=$(stat -c %Z $file_name)
while true
do
new_file_sig=$(stat -c %Z "$file_name")
if [[ "$new_file_sig" != "$old_file_sig" ]]; then
gcc $file_name -o $file_name.o && ./$file_name.o
old_file_sig=$new_file_sig
fi
sleep 1
done
Keeping the same polling method (checking every second whether the source code has changed, and if so recompiling and re-executing), I would just:
run the program in the background, so your script can continue polling for changes while the program runs;
kill every background job (from this script only, of course) before starting a new one.
#!/bin/bash
read -p 'Which c file to watch?: ' file_name
old_file_sig=$(stat -c %Z $file_name)
while true
do
new_file_sig=$(stat -c %Z "$file_name")
if [[ "$new_file_sig" != "$old_file_sig" ]]; then
expid=$(jobs -p -r)
[[ "$expid" ]] && kill $expid
gcc $file_name -o $file_name.o && { ./$file_name.o & }
old_file_sig=$new_file_sig
fi
sleep 1
done
In the (unlikely) event that the program has terminated between the moment when jobs -p -r gets its PID and the moment when kill tries to kill it, you may get an error message from kill.
You could also replace those two lines with a silent kill:
#!/bin/bash
if [[ "$new_file_sig" != "$old_file_sig" ]]; then
kill $(jobs -p -r) 2> /dev/null
gcc $file_name -o $file_name.o && { ./$file_name.o & }
old_file_sig=$new_file_sig
fi
It kills the background jobs every time, even when there are none, which would make kill complain; we just redirect the complaints to /dev/null.
Note that I tried to stay as close to your initial solution as I could. That doesn't mean I find this solution optimal.
Firstly, it is quite strange to name an executable file.o as you seem to do. .o files are object files produced by separate compilation; they are meant to be linked afterwards to form an executable (usually .exe on Windows, and without any extension on Linux, though that is just a convention, and nothing forbids you from calling executable files file.o or file.png if you wish).
Secondly, polling is, generally speaking, a bad idea. It's one I often use myself, out of convenience or because I consider the wasted CPU time negligible, but it is always worth asking whether it can be avoided.
Here, for example, a solution based on inotifywait avoids it:
#!/bin/bash
read -p 'Which c file to watch?: ' file_name
while true
do
gcc $file_name -o $file_name.o && { ./$file_name.o & }
inotifywait -qq -e modify $file_name
kill $(jobs -p -r) 2> /dev/null
done
Explanation
I start the process first, so that when the script starts, the program runs without waiting for a file modification.
Then, regardless of whether the background program has finished or not, inotifywait stays blocked until $file_name has been modified.
When it is, kill kills the background job if there is one (that is, if $file_name was modified before the program run finished); otherwise kill complains silently into /dev/null.
A new loop iteration then compiles and starts the program again, and so on.

Bash parameter expansion, indirect reference, and backgrounding

After struggling with this issue for several hours and searching here and failing to come up with a matching solution, it's time to ask:
In bash (4.3) I'm attempting to do a combination of the following:
Create an array
For loop through the values of the array with a command that isn't super fast (curl to a web server to get a value), so we background each loop to parallelize everything to speed it up.
Use "read" to turn each name in the array into a variable, assigning it the value redirected to it from a command
Background each loop and get their PID into a regular array, and associate each PID with the related array value in an associative array so I have key=value pairs of array value name to PID
Use "wait" to wait for each PID to exit 0 or throw an error telling us which value name(s) in the array failed to exit with 0 by referencing the associative array
I need to be able to export all of the VAR names in the original array and their now-associated values (from the curl command results), because I'm sourcing this script from another bash script that will use the resulting exported VARs/values.
The reason I'm using "read" instead of just "export" with "export var=$(command)" or similar, is because when I background and get the PID to use "wait" with in the next for loop, I actually (incorrectly) get the PID of the "export" command which always exits 0, so I don't detect an error. When I use read with the redirect to set the value of the VAR (from name in the array) and background, it actually gets the PID of the command and I catch any errors in the next loop with the "wait" command.
So, basically, this mostly appears to work, except I realized the "read" command doesn't actually appear to be substituting the variable to the array name value properly in a way that the redirected command sends its output to that name in order to set the substituted VAR name to a value. Or, maybe the command is just entirely wrong so I'm not correctly redirecting the result of my command to a VAR name I'm attempting to set.
For what it's worth, when I run the curl | python command by hand (to pull the value and then parse the JSON output) it is definitely succeeding, so I know that's working, I just can't get the redirect to send the resulting output to the VAR name.
Here's a example of what I'm trying to do:
In parent script:
# Source the child script that has the functions I need
source functions.sh
# Create the array
VALUES=(
VALUE_A
VALUE_B
VALUE_C
)
# Call the function sourced from the script above, which will use the above defined array
function_getvalues
In child (sourced) script:
function_getvalues()
{
curl_pids=( )
declare -A value_pids
for value in "${VALUES[#]}"; do
read ${value} < <(curl -f -s -X GET http://path/to/json/value | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['value'])") & curl_pids+=( $! ) value_pids+=([$!]=${value})
done
for pid in "${curl_pids[#]}"; do
wait "$pid" && echo "Successfully retrieved value ${value_pids[$pid]} from Webserver." || { echo "Something went wrong retrieving value ${value_pids[$pid]}, so we couldn't get the output data needed from Webserver. Exiting." ; exit 1 ; }
done
}
The problem is that read, when run in the background, executes in a subshell: whatever variable it sets vanishes when that subshell exits, so it never reaches your script. Consider this simplified, working example, with a comment showing how to cripple it:
VALUES=( VALUE_A VALUE_B )
for value in "${VALUES[#]}"; do
read ${value} < <(echo ${RANDOM}) # add "&" and it stops working
done
echo "VALUE_A=${VALUE_A}"
echo "VALUE_B=${VALUE_B}"
You might be able to do this with coproc, or using read -u with automatic file descriptor allocation, but really this is a job for temporary files:
tmpdir=$(mktemp -d)
VALUES=( VALUE_A VALUE_B )
for value in "${VALUES[#]}"; do
(sleep 1; echo ${RANDOM} > "${tmpdir}"/"${value}") &
done
for value in "${VALUES[#]}"; do
wait_file "${tmpdir}"/"${value}" && {
read -r ${value} < "${tmpdir}"/"${value}";
}
done
echo "VALUE_A=${VALUE_A}"
echo "VALUE_B=${VALUE_B}"
rm -r "${tmpdir}"
This example uses a wait_file helper, but you might use inotifywait if you don't mind some dependency on the OS.
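The wait_file helper isn't defined above; a minimal sketch of what it could look like (the name and the optional timeout argument are assumptions, not part of the original answer):
wait_file() {
    # Usage: wait_file <path> [timeout_seconds]
    # Poll once per second until the file exists or the timeout runs out;
    # the exit status tells the caller whether the file appeared.
    local file="$1"
    local timeout="${2:-10}"
    until [ -e "$file" ] || [ "$timeout" -le 0 ]; do
        sleep 1
        timeout=$((timeout - 1))
    done
    [ -e "$file" ]
}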

Catch invalid password on Sudo

Is there a way to trap/catch an invalid password when you use sudo? Basically I want to return a specific exit code if the sudo password is invalid. I don't want to avoid sudo or get around it, I just want to close/exit a script in a manner of my choosing.
Based on the man page of sudo(8), there is no easy way to evaluate the exact reason for a failure:
Exit Value
Upon successful execution of a program, the exit status from sudo will
simply be the exit status of the program that was executed.
Otherwise, sudo exits with a value of 1 if there is a
configuration/permission problem or if sudo cannot execute the given
command. In the latter case the error string is printed to the
standard error. If sudo cannot stat(2) one or more entries in the
user's PATH, an error is printed on stderr. (If the directory does not
exist or if it is not really a directory, the entry is ignored and no
error is printed.) This should not happen under normal circumstances.
The most common reason for stat(2) to return ''permission denied'' is
if you are running an automounter and one of the directories in your
PATH is on a machine that is currently unreachable.
The only "ugly" approach, which comes to my mind is to parse the result of stderr to determine the error reason:
#!/bin/bash
tmpfile=$(mktemp)
sudo echo "dummy" 2> "$tmpfile"
if [ $? -eq 1 ]; then
if [ "$(grep -cx "sudo.*incorrect password attempts" "$tmpfile")" -eq 1 ]; then
# exit due to failed password attempts
echo "too many failed password attempts"
else
# other reason, for instance configuration
echo "other reason"
fi
fi
rm "$tmpfile"
Note, however, that this approach is neither upgrade-safe nor language-independent: if a patch to sudo changes the text shown to the user in case of a wrong password, or the user logs on in a different language, this code will not handle it properly.
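One way to reduce (though not remove) the language dependence is to pin the locale when invoking sudo, so its diagnostics come out in English regardless of the user's settings. A rough sketch, assuming sudo honours the invoking locale for its own messages; the 42 is only a placeholder for whatever exit code you want to return:
#!/bin/bash
tmpfile=$(mktemp)
# Force the C locale so sudo's error text is predictable
LC_ALL=C sudo echo "dummy" 2> "$tmpfile"
if [ $? -eq 1 ]; then
    if grep -q "incorrect password attempts" "$tmpfile"; then
        rm "$tmpfile"
        exit 42    # placeholder exit code for a failed password
    fi
fi
rm "$tmpfile"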

How to create file and put string on it using shellscript?

I want to create a file in /usr/share/applications/ and put a string on it.
What I have so far:
sudo touch /usr/share/applications/test.desktop
dentry="testing"
sudo echo $dentry >> /usr/share/applications/test.desktop
But this raises a Permission Denied error. What should I do to make it work?
You should create the file using your own permissions, then sudo cp it into place.
The reason the second command doesn't work is that the redirection is set up by your shell, before sudo even runs. You could work around this by running sudo sh -c 'echo stuff >>file' but this is vastly more risk-prone than a simple sudo cp, and additionally has a race condition (if you run two concurrent instances of this script, they could end up writing the information twice to the file).
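A minimal sketch of the create-then-copy approach, using the same target path as the question:
dentry="testing"
tmpfile=$(mktemp)                  # created with your own permissions
echo "$dentry" > "$tmpfile"        # the redirection runs as you, so no permission problem
sudo cp "$tmpfile" /usr/share/applications/test.desktop
rm "$tmpfile"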

Executing shell script with system() returns 256. What does that mean?

I've written a shell script to soft-restart HAProxy (reverse proxy). Executing the script from the shell works. But I want a daemon to execute the script. That doesn't work. system() returns 256. I have no clue what that might mean.
#!/bin/sh
# save previous state
mv /home/haproxy/haproxy.cfg /home/haproxy/haproxy.cfg.old
mv /var/run/haproxy.pid /var/run/haproxy.pid.old
cp /tmp/haproxy.cfg.new /home/haproxy/haproxy.cfg
kill -TTOU $(cat /var/run/haproxy.pid.old)
if haproxy -p /var/run/haproxy.pid -f /home/haproxy/haproxy.cfg; then
kill -USR1 $(cat /var/run/haproxy.pid.old)
rm -f /var/run/haproxy.pid.old
exit 1
else
kill -TTIN $(cat /var/run/haproxy.pid.old)
rm -f /var/run/haproxy.pid
mv /var/run/haproxy.pid.old /var/run/haproxy.pid
mv /home/haproxy/haproxy.cfg /home/haproxy/haproxy.cfg.err
mv /home/haproxy/haproxy.cfg.old /home/haproxy/haproxy.cfg
exit 0
fi
HAProxy is executed as user haproxy. My daemon has its own user too. Both run with sudo.
Any hints?
According to this and that, Perl's system() returns exit values multiplied by 256. So it's actually exiting with 1. It seems this happens in C too.
Unless system returns -1, its return value has the same format as the status value from the wait family of system calls (man 2 wait). There are macros to help you interpret this status; man 3 wait lists them and what they tell you. Here, WEXITSTATUS(256) is 1: the child's exit code is stored in the high byte of the status, so your script really is exiting with 1.
A code of 256 probably means that the system command cannot locate the binary to run. Remember that it may not be calling bash and that it may not have paths set up. Try again with full paths to the binaries!
I had the same problem when calling a script that contains a kill command from a daemon.
The daemon must have closed stdout and stderr...
Using something like system("script.sh > /dev/null") should work.

Resources