Debian package: drop database on purge

I am creating a deb package for Ubuntu.
In my postinst script I use:
# Configure database
dbc_mysql_createdb_encoding="UTF8"
if ! dbc_go portal3 "$@" ; then
    echo 'Automatic configuration using dbconfig-common failed!'
fi
to create the database, which works fine.
In the postrm file I have:
echo "Remove database"
if [ -f /usr/share/debconf/confmodule ]; then
    . /usr/share/debconf/confmodule
fi
if [ -f /usr/share/dbconfig-common/dpkg/postrm ]; then
    . /usr/share/dbconfig-common/dpkg/postrm
    if ! dbc_go portal3 "$@" ; then
        echo 'Automatic configuration using dbconfig-common failed!'
    fi
fi
but this doesn't drop the user or database that was created.
There is no output on the console or anything else that would help me debug the issue.
Does anyone have an idea how to drop the database and user that were created during installation?

It also requires a prerm script like:
#!/bin/sh
set -e
. /usr/share/debconf/confmodule
. /usr/share/dbconfig-common/dpkg/prerm.mysql
if ! dbc_go portal3 "$@" ; then
    echo 'Automatic configuration using dbconfig-common failed!'
fi
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
exit 0
Otherwise dbconfig-common doesn't know which database needs to be dropped.
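For completeness, a minimal postrm sketch along the same lines (this assumes the MySQL variant of dbconfig-common; adjust the sourced file to your database type and verify the paths against your dbconfig-common version):
#!/bin/sh
set -e
# Load debconf and the dbconfig-common postrm hooks if they are still present
if [ -f /usr/share/debconf/confmodule ]; then
    . /usr/share/debconf/confmodule
fi
if [ -f /usr/share/dbconfig-common/dpkg/postrm.mysql ]; then
    . /usr/share/dbconfig-common/dpkg/postrm.mysql
    # On purge, dbconfig-common can remove the database (and user) it created,
    # driven by the usual debconf questions
    if ! dbc_go portal3 "$@" ; then
        echo 'Automatic configuration using dbconfig-common failed!'
    fi
fi
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
exit 0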

Related

How can I cycle through an array in bash while also passing an argument to the script?

I have the following bash script that I want to use as my "standard browser" with xdg-open.
It should prompt dmenu for me to choose the browser to open the URL in.
Now xdg-open passes the URL as an argument to the program (I suppose), and since I'm cycling through an array of browsers using the @ symbol, it confuses this with the argument (the URL) and errors on the dmenu command.
Is there a workaround to this problem, or am I doing something completely wrong? (This part has since been solved.)
#!/usr/bin/env bash
set -euo pipefail

_path="$(cd "$(dirname "${BASH_SOURCE[0]}")" && cd "$(dirname "$(readlink "${BASH_SOURCE[0]}" || echo ".")")" && pwd)"

if [[ -f "${_path}/_dm-helper.sh" ]]; then
    # shellcheck disable=SC1090,SC1091
    source "${_path}/_dm-helper.sh"
else
    # shellcheck disable=SC1090
    echo "No helper-script found"
fi

# script will not hit this if there is no config-file to load
# shellcheck disable=SC1090
source "$(get_config)"

main() {
    if [ -t 0 ]; then
        _url=$1
    else
        read -r _url
    fi

    _browser=$(printf '%s\n' "${!browsers[@]}" | sort | ${DMENU} 'Select browser: ') # "$@") ## Thx to @jhnc
    _command=${browsers[${_browser}]}

    if [[ -n ${_url} ]]; then
        $_command "$_url"
    fi
}

[[ "${BASH_SOURCE[0]}" == "${0}" ]] && main "$@"
get_config loads the dmenu command:
DMENU="dmenu -i -l 20 -p"
as well as the array of browsers:
declare -A browsers
browsers[brave]="brave-browser"
browsers[firefox]="firefox"
browsers[opera]="opera"
browsers[badwolf]="badwolf"
from my config file.
Originally, if I ran xdg-open "https://" or clicked on a URL in some other program, Brave opened on that site.
Now, after xdg-settings set default-web-browser dmenu-script.desktop with the following .desktop file:
[Desktop Entry]
Version=1.0
Name=Dmenu Browser Script
GenericName=Web Browser
# Gnome and KDE 3 uses Comment.
Comment=Access the Internet
Exec=$HOME/.local/bin/dmenu-browser %U
StartupNotify=true
Terminal=false
Icon=brave-browser
Type=Application
Categories=Network;WebBrowser;
MimeType=application/pdf;application/rdf+xml;application/rss+xml;application/xhtml+xml;application/xhtml_xml;application/xml;image/gif;image/jpeg;image/png;image/webp;text/html;text/xml;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ipfs;x-scheme-handler/ipns;
Actions=new-window;new-private-window;
[Desktop Action new-window]
Name=New Window
Exec=$HOME/.local/bin/dmenu-browser
[Desktop Action new-private-window]
Name=New Incognito Window
Exec=$HOME/.local/bin/dmenu-browser --incognito
It only works if I execute xdg-open from my command line. (I modified the .desktop file of brave-browser, because I had no clue how to write one.)
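For reference, the two expansions that got mixed up here are easy to confuse; a minimal illustration (names are made up for the example):
#!/usr/bin/env bash
# "$@"                -> the arguments given to the script or function (here: the URL from xdg-open)
# "${!browsers[@]}"   -> the keys of the associative array "browsers"
declare -A browsers=( [firefox]="firefox" [brave]="brave-browser" )

main() {
    echo "script argument (url): ${1:-<none>}"
    printf 'browser key: %s\n' "${!browsers[@]}"
}

main "$@"    # forwards the URL that xdg-open passes to the script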

In a Bash script, trying to pass local variables to SSH and then execute other commands

#!/bin/bash
count2=1
declare -a input
input=( "$#" )
echo " "
echo " Hostname passed by user is " ${input[0]}
HOST="${input[0]}"
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@$HOST /bin/bash << ENDSSH
echo " Connected "
echo $count2
echo $input
pwd
echo $count2: ${input[$count2]}
nic=${input[$count2]}
echo $nic
echo $(ethtool "${nic}" |& grep 'Link' | awk '{print $3}')
ENDSSH
So actually I want to pass the variables 'count2' and 'input' to the remote SSH session and execute the commands there.
But unfortunately they are not getting passed; nothing is echoed after the SSH.
I need help with this!
I have sshpass installed on the server.
Code output:
[user@l07 ~]$ ./check.sh <hostname> eno6
 Hostname passed by user is  <hostname>
 Connected
After the SSH it only echoes "Connected". I'm not sure why $count2 and $input are not echoed.
I tried with a backslash ('\$count2'), but that is not working either. I tried every possible combination, even quoting and unquoting ENDSSH. Please help.
Any help will be really appreciated!
You basically want to supply to your remote bash a HERE-document to be executed. This is tricky, since you need to "compose" the full text of this document before you can supply it to ssh. I would therefore separate the task into two parts:
Creating the HERE-document
Running it on ssh
This makes debugging easier: you can print the document between steps 1 and 2 and visually inspect its contents for correctness. Don't forget that once this code runs on the remote host, it can no longer access any of your local variables, unless you have "promoted" them to the remote side by the means ssh provides.
Hence you could start like this:
# Create the parameters you want to use
nic=${input[$count2]}
# Create a variable holding the content of the remote script,
# which interpolates your parameters
read -r -d '' remote_script << ENDSSH
echo "Connected to host \$(hostname)"
echo "Running bash version: \$BASH_VERSION"
....
ethtool "$nic" |& grep Link | awk '{ print $3 }'
ENDSSH
# Print your script for verification
echo "$remote_script"
# Submit it to the host
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user#$HOST" /bin/bash <<<"$remote_script"
You have to add escapes (\) here:
...
echo \$nic
...
echo \$(ethtool "\${nic}" |& grep 'Link' | awk '{print \$3}')
...
But why echo this at all? Try it without echo:
...
ethtool "\${nic}" |& grep -i 'Link' | awk '{print \$3}'
...
#!/bin/bash
count2=1
declare -a input
input=( "$#" )
echo " Hostname passed by user is " "${input[0]}"
HOST="${input[0]}"
while [ $# -gt $count2 ]
do
    sed -i 's/VALUE/'"${input[$count2]}"'/g' ./check.sh
    sshpass -p '<pass>' scp ./check.sh user@"$HOST":/home/user/check.sh
    sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@"$HOST" "sh /home/user/check.sh && rm -rf /home/user/check.sh"
    sed -i 's/'"${input[$count2]}"'/VALUE/g' ./check.sh
    ((count2++))
done
I found another solution to this issue; it is working for me now!
I wrote the entire logic that needs to be executed remotely in a check.sh file. The script substitutes the user input into check.sh with sed, copies the file to the remote server via scp, executes it remotely, removes it from the remote server after successful execution, and after the ssh call restores the placeholder value in the local copy using sed again.
This makes it a dynamic script that works for multiple servers.
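Yet another common pattern, for reference (a sketch, not taken from the answers above): pass the local values as positional parameters to the remote bash, so nothing inside the quoted here-document needs escaping:
# 'ENDSSH' is quoted, so nothing in the here-document expands locally;
# the remote bash receives the values as $1 and $2 via "bash -s --".
# This works for simple values without spaces or shell metacharacters.
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" 'bash -s' -- "$count2" "${input[$count2]}" << 'ENDSSH'
echo "Connected to $(hostname)"
echo "$1: $2"
ethtool "$2" |& grep Link | awk '{print $3}'
ENDSSH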

Nomad task getting killed

I have two tasks in a task group:
1) a db task to bring up a database, and
2) the app that needs the db to be up.
Both start in parallel; the db task takes a little time, but by then the app recognizes that the db is not up and kills the db task. Any solutions? Please advise.
It's somewhat common to have an entrypoint script that checks whether the db is healthy. Here's a script I've used before:
#!/bin/sh
set -e
cmd="$*"
postgres_ready() {
if test -z "${NO_DB}"
then
PGPASSWORD="${RDS_PASSWORD}" psql -h "${RDS_HOSTNAME}" -U "${RDS_USERNAME}" -d "${RDS_DB_NAME}" -c '\l'
return $?
else
echo "NO_DB Postgres will pretend to be up"
return 0
fi
}
until postgres_ready
do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - continuing..."
exec "${cmd}"
You could save it as entrypoint.sh and run it with your application start command as the argument, e.g. entrypoint.sh python main.py.
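A usage sketch with illustrative values (the RDS_* and NO_DB variables are the ones the script above reads; hostnames and credentials here are placeholders):
# wait for Postgres, then start the app
RDS_HOSTNAME=db.internal RDS_USERNAME=app RDS_PASSWORD=secret RDS_DB_NAME=appdb \
    ./entrypoint.sh python main.py

# skip the check entirely, e.g. for local runs without a database
NO_DB=1 ./entrypoint.sh python main.py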

mysqldump puts CREATE on the first line of the dump

Here's my full bash script:
#!/bin/bash
logs="$HOME/sitedb_backups/log"
mysql_user="user"
mysql_password="pass"
mysql=/usr/bin/mysql
mysqldump=/usr/bin/mysqldump
tbackups="$HOME/sitedb_backups/today"
ybackups="$HOME/sitedb_backups/yesterday"
echo "`date`" > $logs/backups.log
rm $ybackups/* >> $logs/backups.log
mv $tbackups/* $ybackups/ >> $logs/backups.log
databases=`$mysql --user=$mysql_user -p$mysql_password -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases ; do
$mysqldump --force --opt --user=$mysql_user -p$mysql_password --databases $db | gzip > "$tbackups/$db.gz"
echo -e "\r\nBackup of $db successfull" >> $logs/backups.log
done
mail -s "Your DB backups is ready!" yourmail#gmail.com <<< "Today: "`date`"
DB backups of every site is ready."
exit 0
The problem is that when I try to import it with mysql I get error 1044 connecting to oldname_db. When I opened the SQL file I noticed a CREATE command on the first line, so it tries to create that database with the old name. How can I solve that problem?
SOLVED.
Using the --databases parameter is not necessary in my case; because of --databases, mysqldump was generating CREATE and USE statements at the beginning of the SQL file. Hope it helps somebody else.
Use the --no-create-db option of mysqldump.
From man mysqldump:
--no-create-db, -n
This option suppresses the CREATE DATABASE statements that are
otherwise included in the output if the --databases or --all-databases
option is given.
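Applied to the loop in the script above, the dump line would look roughly like one of these (a sketch; pick one depending on whether you still want the USE statement):
# without --databases: no CREATE DATABASE and no USE statement in the dump
$mysqldump --force --opt --user=$mysql_user -p$mysql_password $db | gzip > "$tbackups/$db.gz"

# keep --databases but suppress CREATE DATABASE; note that a USE statement may still be emitted
$mysqldump --force --opt --no-create-db --user=$mysql_user -p$mysql_password --databases $db | gzip > "$tbackups/$db.gz"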

Script to change ownership of folders in Plesk vhosts

I'm looking for some help in creating a shell script in Linux to perform a batch ownership change for certain folders in a Plesk environment where the owner:group is apache:apache.
I want to change the owner:group to :psacln.
The FTP user can be ascertained by looking at the owner of the httpdocs folder.
^ This is the part I'm having trouble with.
If I was to set all owners to be the same, I could do a one-line:
find /var/www/vhosts/*/httpdocs -user apache -group apache -exec chown user:psacln {} \;
Can anyone help plug the user into this command?
Thanks
Figured it out... for those who may want to use it in the future:
for dir in /var/www/vhosts/*
do
    dir=${dir%/}
    permissions=$(stat -c '%U' "$dir/httpdocs")
    find "$dir/httpdocs" -user apache -group apache -exec chown "$permissions":psacln {} \;
done
Since stat doesn't work the same way on all unices, I thought I would share my script to set the ownership of all websites to the correct owners in Plesk (tested on Plesk 11, 11.5, 12 and 12.5):
cd /var/www/vhosts/
for f in *; do
    if [[ -d "$f" && ! -L "$f" ]]; then
        # Get necessary variables
        FOLDERROOT="/var/www/vhosts/"
        FOLDERPATH="/var/www/vhosts/$f/"
        FTPUSER="$(ls -ld /var/www/vhosts/$f/ | awk '{print $3}')"
        # Set correct rights for current website, if website has hosting!
        cd $FOLDERPATH
        if [ -d "$FOLDERPATH/httpdocs" ]; then
            chown -R $FTPUSER:psacln httpdocs
            chmod -R g+w httpdocs
            find httpdocs -type d -exec chmod g+s {} \;
            # Print success message
            echo "Done... $FTPUSER is now correct owner of $FOLDERPATH."
        fi
        # Make sure we are back at the root, so we can continue looping
        cd $FOLDERROOT
    fi
done
Explanation of code:
Go to the vhosts folder
Loop through the websites
Store the vhosts path, because we are using cd in a loop
If an httpdocs folder exists for the current website, then set the correct rights on httpdocs and all underlying folders
Show a success message
cd back to the vhosts folder, so we can continue looping
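To verify the result afterwards, something like this should do (GNU stat assumed, as in the scripts above):
# list owner:group of every httpdocs folder to confirm the chown took effect
stat -c '%U:%G %n' /var/www/vhosts/*/httpdocs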
