I am trying to write a shell script to check database connectivity. Within my script I am using the command
sqlplus uid/pwd@database-schemaname
to connect to my Oracle database.
Now I want to save the output generated by this command (before it drops to the SQL prompt) in a temp file, and then grep that file for the string "Connected to" to see whether the connectivity is fine or not.
Can anyone please help me capture the output, get out of that prompt, and test whether connectivity is fine?
Use a script like this:
#!/bin/sh
echo "exit" | sqlplus -L uid/pwd#dbname | grep Connected > /dev/null
if [ $? -eq 0 ]
then
echo "OK"
else
echo "NOT OK"
fi
echo "exit" assures that your program exits immediately (this gets piped to sqlplus).
-L assures that sqlplus won't ask for password if credentials are not ok (which would make it get stuck as well).
(> /dev/null just hides output from grep, which we don't need because the results are accessed via $? in this case)
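The question also asked to keep the output in a temp file and grep it afterwards; a minimal sketch of that variant (the mktemp-based file name and cleanup are my own choices):
#!/bin/sh
tmpfile=$(mktemp)                                        # temp file for the sqlplus output
echo "exit" | sqlplus -L uid/pwd@dbname > "$tmpfile" 2>&1
if grep -q "Connected to" "$tmpfile"
then
echo "OK"
else
echo "NOT OK"
fi
rm -f "$tmpfile"                                         # clean up the temp file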
You can avoid the SQL prompt by doing:
sqlplus uid/pwd@database-schemaname < /dev/null
SQL*Plus exits immediately.
Now just grep the output of the above as:
if sqlplus uid/pwd@database-schemaname < /dev/null | grep 'Connected to'; then
# have connectivity to Oracle
else
# No connectivity
fi
#! /bin/sh
if echo "exit;" | sqlplus UID/PWD#database-schemaname 2>&1 | grep -q "Connected to"
then echo connected OK
else echo connection FAIL
fi
Not knowing whether the "Connected to" message is put to standard output or standard error, this checks both. "grep -q" instead of "grep ... >/dev/null" assumes Linux.
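If your grep doesn't support -q, the same check works with the redirect the answer mentions; for example:
#!/bin/sh
if echo "exit;" | sqlplus UID/PWD@database-schemaname 2>&1 | grep "Connected to" > /dev/null
then echo connected OK
else echo connection FAIL
fi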
#!/bin/bash
output=`sqlplus -s "user/pass@POLIGON.TEST" <<EOF
set heading off feedback off verify off
select distinct machine from v\\$session;
exit
EOF
`
echo "$output"
if [[ $output =~ ERROR ]]; then
echo "ERROR"
else
echo "OK"
fi
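A failed logon is normally reported by sqlplus with an ORA- code (for example ORA-01017) after the word ERROR, and SQL*Plus's own errors use an SP2- prefix, so you could widen the test slightly if you want; a sketch of that variant (the extra prefixes are my own addition):
# match the generic ERROR line as well as ORA-/SP2- error codes
re='ERROR|ORA-|SP2-'
if [[ $output =~ $re ]]; then
echo "ERROR"
else
echo "OK"
fi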
Here's a good option which does not expose the password on the command line:
#!/bin/bash
CONNECT_STRING=<USERNAME>/<PASS>@<SID>
sqlplus -s -L /NOLOG <<EOF
whenever sqlerror exit 1
whenever oserror exit 1
CONNECT $CONNECT_STRING
exit
EOF
SQLPLUS_RC=$?
echo "RC=$SQLPLUS_RC"
[ $SQLPLUS_RC -eq 0 ] && echo "Connected successfully"
[ $SQLPLUS_RC -ne 0 ] && echo "Failed to connect"
exit $SQLPLUS_RC
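To also keep the password out of the script file itself, the connect string could come from the environment instead of being hard-coded; a small sketch (the ORACLE_CONNECT_STRING variable name is my own):
# exported beforehand, e.g.: export ORACLE_CONNECT_STRING='<USERNAME>/<PASS>@<SID>'
CONNECT_STRING=${ORACLE_CONNECT_STRING:?ORACLE_CONNECT_STRING is not set}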
None of the proposed solutions works for me: my script is executed on machines in several countries, with different locales, so I can't simply check for one string, because on another machine that string may be translated into a different language. As a solution I'm using SQLcl
https://www.oracle.com/database/technologies/appdev/sqlcl.html
which is compatible with all SQL*Plus scripts and allows you to test the database connectivity like this:
echo "disconnect" | sql -L $DB_CONNECTION_STRING > /dev/null || fail "cannot check connectivity with the database, check your settings"
#!/bin/sh
echo "exit" | sqlplus -S -L uid/pwd#dbname
if [ $? -eq 0 ]
then
echo "OK"
else
echo "NOT OK"
fi
For connection validation -S would be sufficient.
The "silent" mode doesn't prevent terminal output. All it does is:
-S Sets silent mode which suppresses the display of
the SQL*Plus banner, prompts, and echoing of
commands.
If you want to suppress all terminal output, then you'll need to do something like:
sqlplus ... > /dev/null 2>&1
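So if you only care about the exit status and want the check to be completely quiet, a sketch combining the two (relying on sqlplus returning a non-zero status when the logon fails, as the script above already does):
#!/bin/sh
if echo "exit" | sqlplus -S -L uid/pwd@dbname > /dev/null 2>&1
then
echo "OK"
else
echo "NOT OK"
fi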
This was my one-liner for docker container to wait until DB is ready:
until sqlplus -s sys/Oracle18@oracledbxe/XE as sysdba <<< "SELECT 13376411 FROM DUAL; exit;" | grep "13376411"; do echo "Could not connect to oracle... sleep for a while"; sleep 3; done
And the same in multiple lines:
until sqlplus -s sys/Oracle18@oracledbxe/XE as sysdba <<< "SELECT 13376411 FROM DUAL; exit;" | grep "13376411";
do
echo "Could not connect to oracle... sleep for a while";
sleep 3;
done
So it basically runs a SELECT with a magic number and checks that the correct number was actually returned.
I'm writing a Unix script to check for database connectivity on a server. When my database connection errors out, or when there is a delay in connecting to the database, I want the output to be "Not connected". In case it gets connected, my output should be "Connected". It is an Oracle database.
When there is a delay in database connectivity, my code does not work and my script hangs. What changes should I make to my code so that it can handle both conditions (when I get an error connecting to the database, and when there is a delay in connecting to the database)?
if sqlplus $DB_USER/$DB_PASS@$DB_INSTANCE < /dev/null | grep 'Connected to'; then
echo "Connectivity is OK"
else
echo "No Connectivity"
fi
The first thing to add to your code is a timeout. Checking database connectivity is not easy and there can be all kinds of problems in the various layers that your connection passes through. A timeout gives you the option to break out of a hanging session and continue the task, reporting that the connection failed.
A bit of Google-fu gave me a few nice examples:
Timeout a command in bash without unnecessary delay
If you are using Linux, you can use the timeout command to do what you want. So the following will have three outcomes, setting the variable RC as follows:
"Connected to" successful: RC set to 0
"Connected to" not found: RC set to 1
sqlplus command timed out after 5 minutes: RC set to 124
WAIT_MINUTES=5
SP_OUTPUT=$(timeout ${WAIT_MINUTES}m sqlplus $DB_USER/$DB_PASS@$DB_INSTANCE < /dev/null )
CMD_RC=$?
if [ $CMD_RC -eq 124 ]
then
ERR_MSG="Connection attempt timed out after $WAIT_MINUES minutes"
RC=$CMD_RC
else
echo "$SP_OUTPUT" | grep -q 'Connected to'
GREP_RC=$?
if [ $GREP_RC -eq 0 ]
then
echo "Connectivity is OK"
RC=0
else
ERR_MSG="Connectivity or user information is bad"
RC=1
fi
fi
if [ $RC -gt 0 ]
then
# Add code to send email with subject of $ERR_MSG and body of $SP_OUTPUT
echo Need to email someone about $ERR_MSG
fi
exit $RC
I'm sure there are several improvements to this, but this will get you started.
Briefly, we use the timeout command to wait the specified time for the sqlplus command to run. I separated the grep into its own command to allow the use of timeout and to give more flexibility in checking for additional text messages.
There are several examples on StackOverflow on sending email from a Linux script.
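For the email placeholder in the script above, plain mail (or mailx) is usually enough on Linux; a sketch (the recipient address is of course a placeholder):
if [ $RC -gt 0 ]
then
# subject is the error message, body is the captured sqlplus output
echo "$SP_OUTPUT" | mail -s "$ERR_MSG" dba-team@example.com
fi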
#!/bin/bash
count2=1
declare -a input
input=( "$#" )
echo " "
echo " Hostname passed by user is " ${input[0]}
HOST="${input[0]}"
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@$HOST /bin/bash << ENDSSH
echo " Connected "
echo $count2
echo $input
pwd
echo $count2: ${input[$count2]}
nic=${input[$count2]}
echo $nic
echo $(ethtool "${nic}" |& grep 'Link' | awk '{print $3}')
ENDSSH
So actually I want to pass the variables 'count2' and 'input' to the remote SSH session and execute the commands there.
But unfortunately they are not getting passed. It is not echoing anything after SSH.
Need help with this!
I have sshpass installed on the server.
code output:
[user@l07 ~]$ ./check.sh <hostname> eno6
Hostname passed by user is <hostname>
Connected
After SSH it only echoes "Connected". I'm not sure why $count2 and $input are not echoed.
I tried with a backslash ('\$count2') but that is also not working. I have tried all possible combinations, even quoting and unquoting ENDSSH. Please help.
Any help will be really appreciated!!
You basically want to supply to your remote bash a HERE-document to be executed. This is tricky, since you need to "compose" the full text of this document before you can supply it to ssh. I would therefore separate the task into two parts:
Creating the HERE-document
Running it on ssh
This makes it easy for debugging to output the document between steps 1 and 2 and to visually inspect its contents for correctness. Don't forget that once this code runs on the remote host, it can't access any of your variables anymore, unless you have "promoted" them to the remote side using the means provided by ssh.
Hence you could start like this:
# Create the parameters you want to use
nic=${input[$count2]}
# Create a variable holding the content of the remote script,
# which interpolates your parameters
read -r -d '' remote_script << ENDSSH
echo "Connected to host \$(hostname)"
echo "Running bash version: \$BASH_VERSION"
....
ethtool "$nic" |& grep Link | awk '{ print $3 }'
ENDSSH
# Print your script for verification
echo "$remote_script"
# Submit it to the host
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" /bin/bash <<<"$remote_script"
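An alternative to interpolating values into the document is to quote the heredoc delimiter (so nothing expands locally) and hand the values to the remote bash as positional arguments; a sketch of that approach (assuming the NIC name contains no spaces):
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" /bin/bash -s -- "$nic" <<'ENDSSH'
# inside the quoted heredoc nothing is expanded locally; $1 is the argument passed on the ssh line above
nic=$1
echo "Connected to host $(hostname)"
ethtool "$nic" |& grep Link | awk '{ print $3 }'
ENDSSH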
You have to add escapes (\) here:
...
echo \$nic
...
echo \$(ethtool "\${nic}" |& grep 'Link' | awk '{print \$3}')
...
But why echo this? Try it without echo:
...
ethtool "\${nic}" |& grep -i 'Link' | awk '{print \$3}'
...
#!/bin/bash
count2=1
declare -a input
input=( "$#" )
echo " Hostname passed by user is " "${input[0]}"
HOST="${input[0]}"
while [ $# -gt $count2 ]
do
sed -i 's/VALUE/'"${input[$count2]}"'/g' ./check.sh
sshpass -p '<pass>' scp ./check.sh user@"$HOST":/home/user/check.sh
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@"$HOST" "sh /home/user/check.sh && rm -rf /home/user/check.sh"
sed -i 's/'"${input[$count2]}"'/VALUE/g' ./check.sh
((count2++))
done
Found another solution to this issue; it is working for me now!
I wrote the entire logic that needs to be executed remotely in the check.sh file. Now I replace (store) the user input in this check.sh file, copy the file to the remote server via scp, execute it remotely, remove it from the remote server after successful execution, and, after the ssh, change the user input back to its original value on the local server using the sed command.
I made this a dynamic script so that it works for multiple servers.
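A variation that avoids editing check.sh with sed at all is to stream the local script to the remote bash and hand the value over as an argument; a sketch of the idea (check.sh would then read its value from $1):
while [ $# -gt $count2 ]
do
# stdin of ssh is the local check.sh; the remote bash reads it and gets the NIC name as $1
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@"$HOST" 'bash -s' -- "${input[$count2]}" < ./check.sh
((count2++))
done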
I am trying to figure out how to get my bash script to work. I have a the following command:
curl http://192.168.1.2/api/queue | grep -q test
I need it to repeat until the first command in the pipeline succeeds (meaning the server responds) and the second command fails (meaning the pattern is not found or the queue is empty). I've tried a number of combinations but just can't seem to get it. I looked at using $PIPESTATUS but can't get it to function in a loop the way I want. I've tried all kinds of variations but can't get it to work. This is what I am currently trying:
while [[ "${PIPESTATUS[0]}" -eq 0 && "${PIPESTATUS[1]}" -eq 1 ]]
do curl http://192.168.1.2 | grep -q regular
echo "The exit status of first command ${PIPESTATUS[0]}, and the second command ${PIPESTATUS[1]}"
sleep 5
done
Although it's not really clear what kind of output is returned by the curl call, maybe you are looking for something like this:
while read -r line; do
echo "$line" | grep -q regular || { err="true"; break; }
done < <(curl --silent http://192.168.1.2)
if [ -z "$err" ]; then
echo "..All lines OK"
else
echo "..Abend on line: '$line'" >&2
fi
Figured it out. Just had to re-conceptualize it. I couldn't figure it out strictly with a while or until loop, but creating an infinite loop and breaking out of it when the condition is met worked.
while true
do curl http://192.168.1.2/api/queue | grep -q test
case ${PIPESTATUS[*]} in
"0 1")
echo "Item is no longer in the queue."
break;;
"0 0")
echo "Item is still in the queue. Trying again in 5 minutes."
sleep 5m;;
"7 1")
echo "Server is unreachable. Trying again in 5 minutes."
sleep 5m;;
esac
done
My nagios bash script works fine from the client's command line.
When I execute the same script through check_nrpe from the nagios server it returns the following message "CHECK_NRPE: No output returned from daemon."
Seems like a command in the bash script is not being executed.
arrVars=(`/usr/bin/ipmitool sensor | grep "<System sensor>"`)
#echo "Hello World!!"
myOPString=""
<Process array and determine string to echo along with exit code>
echo $myOPString
if [[ $flag == "False" ]]; then
exit 1
else
exit 0
fi
"Hello World" shows up on the nagios monitoring screen if I uncomment the echo statement.
I am new to Linux, but it seems like the nagios user isn't able to execute ipmitool.
arrVars=(`/usr/bin/ipmitool sensor | grep "<System sensor>"`)
Check the output of the above. You can echo it and check the values. If it still does not work, have this script call another script to get the output and assign it to a variable.
exit 1
This refers to the severity, so you would have to define different conditions for where the severity changes.
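For reference, Nagios plugins conventionally use exit codes 0 (OK), 1 (WARNING), 2 (CRITICAL) and 3 (UNKNOWN); a sketch of the tail of the script with that mapping (keeping the existing $flag logic, and treating a failed check as CRITICAL, which is my own choice):
echo "$myOPString"
if [[ $flag == "False" ]]; then
exit 2    # CRITICAL
else
exit 0    # OK
fi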
Add this line to the sudoers file:
nagios ALL=(root) NOPASSWD: /usr/bin/ipmitool
Then use "sudo /usr/bin/ipmitool" in your script
Here's my full bash script:
#!/bin/bash
logs="$HOME/sitedb_backups/log"
mysql_user="user"
mysql_password="pass"
mysql=/usr/bin/mysql
mysqldump=/usr/bin/mysqldump
tbackups="$HOME/sitedb_backups/today"
ybackups="$HOME/sitedb_backups/yesterday"
echo "`date`" > $logs/backups.log
rm $ybackups/* >> $logs/backups.log
mv $tbackups/* $ybackups/ >> $logs/backups.log
databases=`$mysql --user=$mysql_user -p$mysql_password -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases ; do
$mysqldump --force --opt --user=$mysql_user -p$mysql_password --databases $db | gzip > "$tbackups/$db.gz"
echo -e "\r\nBackup of $db successfull" >> $logs/backups.log
done
mail -s "Your DB backups is ready!" yourmail#gmail.com <<< "Today: "`date`"
DB backups of every site is ready."
exit 0
The problem is that when I try to import it with mysql I am getting error 1044, error connecting to oldname_db. When I opened the sql file I noticed a CREATE command on the first line, so it tries to create that database with the old name. How can I solve that problem?
SOLVED.
Using the --databases parameter is not necessary in my case, and because of --databases it was generating CREATE and USE statements at the beginning of the sql file. Hope it helps somebody else.
Use the --no-create-db option of mysqldump.
From man mysqldump:
--no-create-db, -n
This option suppresses the CREATE DATABASE statements that are
otherwise included in the output if the --databases or --all-databases
option is given.
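Applied to the backup loop in the script above: the asker's fix is to simply drop --databases, while --no-create-db is the alternative if you want to keep it (note the man page says it suppresses only the CREATE DATABASE statement). For example, the changed mysqldump line:
# without --databases, no CREATE DATABASE or USE statements are written to the dump
$mysqldump --force --opt --user=$mysql_user -p$mysql_password $db | gzip > "$tbackups/$db.gz"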