A DB administrator has given me the following commands to stop and then start an Oracle DB running on Debian 10:
stop db:
sudo su - <DBadminName>
lsnrctl status
lsnrctl stop
sqlplus / as sysdba
shut immediate;
start db:
sqlplus / as sysdba
startup;
lsnrctl start
We manage all of the servers in this infrastructure with Ansible, but so far we have not done any direct interaction with the Oracle DB from Ansible.
We are being asked for additional automation and this stop/start is currently being done manually.
Can this db stop/start process be automated with Ansible?
You will find real-world code examples on GitHub. Make sure you have an account.
# github code search function - 'ghc'
declare -f ghc
ghc ()
{
    args=("$@");
    SEARCH_STRING_PLUSSIGN=$(printf '%s' "${args[@]/%/+}");
    open "$(echo "https://github.com/search?q=${SEARCH_STRING_PLUSSIGN%?}&type=code")"
}
# fire it off and the matching code will be nicely highlighted in your browser
ghc ansible oracle shutdown immediate
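If you want a starting point rather than just search results: a minimal, untested playbook along these lines should map one-to-one onto the manual steps. The oracle_db_servers group and the db_admin_name variable are placeholders, and it assumes ORACLE_HOME and PATH are already set for that account (otherwise add an environment: block):
# stop_oracle.yml - untested sketch; adjust names to your environment
- hosts: oracle_db_servers
  become: yes
  become_user: "{{ db_admin_name }}"   # the <DBadminName> account from the manual procedure
  tasks:
    - name: Stop the listener
      shell: lsnrctl stop

    - name: Shut the database down
      shell: echo "shutdown immediate;" | sqlplus -S / as sysdba
The start side is the mirror image (startup; first, then lsnrctl start); keep it in a separate playbook or tag the tasks in one.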
Best of luck!
I am trying to migrate a series of Trac projects originally hosted on CloudForge onto a new Bitnami virtual machine (Debian with the Trac stack installed).
The documentation on the Trac wiki regarding restoring from a backup is a little vague for me, but suggests that I should be able to set up a new project
$ sudo trac-admin PROJECT_PATH initenv
stop the services from running
$ sudo /opt/bitnami/ctlscript.sh stop
copy the snapshot from the backup into the new project path and restart the services
$ sudo /opt/bitnami/ctlscript.sh start
and I should be good to go.
Having done this (and worked through quite a few issues on the way) I have now got to the point where the browser page shows
Trac Error
TracError: Unable to check for upgrade of trac.db.api.DatabaseManager: TimeoutError: Unable to get database connection within 0 seconds. (OperationalError: unable to open database file)
When I set up the new project I note that I left the default (unedited) database string, but I have no idea what database type was used for the original CloudForge Trac project, i.e. is there an additional step to restore the database?
Any help would be greatly appreciated, thanks.
Edit
Just to add, CloudForge was using Trac 0.12.5, while the new VM uses Trac 1.5.1. Not sure if this will be an issue?
Edit
More investigation and I'm now pretty sure that the CloudForge snapshot is not an SQLite (or other) database file - it looks like maybe a query-type response, as it starts and ends with:
BEGIN TRANSACTION;
...
COMMIT;
Thanks to anyone taking the time to read this but I think I'm sorted now.
After learning more about SQLite I discovered that the file sent by CloudForge was an SQLite dump of the database, and it was easy enough to migrate to a new database instance using the command line
$ sqlite3 location_of/new_database.db < dump_file.db
I think I also needed another prior step of removing the contents of the original new_database.db using the sqlite3 command line (just type sqlite3 in a terminal; the following are entered at the sqlite> prompt)
sqlite> .open location_of/new_database.db
sqlite> BEGIN TRANSACTION;
sqlite> DELETE FROM each_table_in_database;
sqlite> COMMIT;
sqlite> .exit
I then had some issue with credentials on the Bitnami VM, so I needed to retrieve these (as per the Bitnami documentation) using
$ sudo cat /home/bitnami/bitnami_credentials
and add this USER_NAME as a TRAC_ADMIN using
$ trac-admin path/to/project/ permission add USER_NAME TRAC_ADMIN
NOTE that before and after this operation, be sure to stop and re-start the Bitnami services using
$ sudo /opt/bitnami/ctlscript.sh stop
$ sudo /opt/bitnami/ctlscript.sh start
I am the guy from Trac Users. You need to understand that the user isn't really stored in the db. You have some tables with columns holding the username, but there is no table for a user. Looking at your post I think your setup used htdigest, and then your user info is in that credential file. If you cat it you should see something like
username:realmname:pwhash
I think the hash is MD5, but it doesn't really matter for your problem. So if you want to make a new user you have to use
htdigest [ -c ] passwdfile realm username
Then you should use trac-admin to give the permission, and at that point your user should be able to log in.
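For example (untested; the password file path, realm and username are only placeholders - the realm has to match the one your web server config and existing credential file already use, and only pass -c when creating a brand-new password file, since it overwrites an existing one):
$ htdigest /path/to/passwdfile YourRealm newuser
$ trac-admin /path/to/project permission add newuser TRAC_ADMIN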
Cheers
Markus
I am new to the Docker concept. I have followed this to start, and was able to install DB2 under a Docker environment.
My questions are
1) I need to load data into this Docker-based DB2. The data dump is from a DB2 instance present remotely on Linux machines. How can I load the dump from the Docker DB2 console?
2) Every time I start the Docker Quickstart Terminal I need to repeat the whole procedure from the above link. Can't I just run the DB2 container and go to the DB2 console?
3) su - db2inst1 (from the link) is asking for a password, and none of my attempts at that password succeed (I have tried my machine admin password, the db2 password, etc.). I need to restart or start the process again to get into the DB2 container.
In an initialization script, I want to initialize a PostgreSQL directory, but don't need (and don't want) a running PostgreSQL server at this stage.
This would be a no-brainer if I just had to create the cluster (as user postgres):
initdb -D ...
However, I also need to create the PostgreSQL role, create the database and add some extensions (also as user postgres):
createuser someuser
createdb -O someuser somedb
echo 'CREATE EXTENSION xyz;' | psql somedb
The latter commands require a running PostgreSQL server. So this whole thing becomes quite messy:
initdb -D ...
# Start PostgreSQL server in background
... &
# Wait in a loop until PostgreSQL server is up and running
while ! psql -f /dev/null template1; do
sleep 0.5
done
createuser someuser
createdb -O someuser somedb
echo 'CREATE EXTENSION xyz;' | psql somedb
# Kill PostgreSQL server
kill ...
# Wait until the process is really killed
sleep 2
Especially the part that is waiting for the PostgreSQL server is never 100% reliable. I tried lots of variants and each of them failed in roughly 1 of 20 runs. Also, killing that process may not be 100% reliable in a simple shell script, let alone ensuring that it has stopped correctly.
I believe this is a standard problem that occurs in all use cases involving bootstrapping a server or preparing a VM image. So one would expect that in the year 2016, there should be some existing, reliable tooling for that. So my questions are:
Is there a simpler and more reliable way to achieve this?
For example, is there a way to run a PostgreSQL server in some special mode, where it just starts up, executes certain SQL commands, and quits immediately after the last SQL command has finished?
As a rough idea, is there something from the internal PostgreSQL test suite that can be reused for this purpose?
You are looking for single-user mode.
If you start PostgreSQL like that, you are in a session connected as superuser that waits for SQL statements on standard input. As soon as you disconnect (with end-of-file), the server process is stopped.
So you could do it like this (with bash):
postgres --single -D /usr/local/pgsql/data postgres <<-"EOF"
CREATE USER ...;
CREATE DATABASE somedb ...;
EOF
postgres --single -D /usr/local/pgsql/data somedb <<-"EOF"
CREATE EXTENSION ...;
EOF
I need to implement an automatic transfer of daily backups from one DB to another DB. Both DBs and apps are hosted on Heroku.
I know this is possible if I do it manually from my local machine with the command:
heroku pgbackups:restore DATABASE `heroku pgbackups:url --app production-app` --app staging-app
But this process should be automated and not run from a local machine.
I have an idea to write a rake task which will execute this command, and to run this task daily with the help of the Heroku Scheduler add-on.
Any ideas how it is better to do this? Or maybe there is a better way for this task?
Thanks in advance.
I managed to solve the issue myself. It turned out not to be so complex. Here is the solution; maybe it'll be useful to somebody else:
1. I wrote a script which copies the latest dump from a certain server to the DB of the current server:
namespace :backup do
  desc "copy the latest dump from a certain server to the DB of the current server"
  task restore_last_write_dump: :environment do
    last_dump_url = %x(heroku pgbackups:url --app [source_app_name])
    system("heroku pgbackups:restore [DB_to_target_app] '#{last_dump_url}' -a [target_app_name] --confirm [target_app_name]")
    puts "Restored dump: #{last_dump_url}"
  end
end
2. To avoid authentication upon each request to the servers, create a .netrc file in the app root (see details here: https://devcenter.heroku.com/articles/authentication#usage-examples); a rough sample is shown after this list.
3. Set up the Scheduler add-on for Heroku and add our rake task along with the frequency of its running.
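The .netrc entries referenced in step 2 look roughly like this (the e-mail address and API token are placeholders; the linked Heroku article has the authoritative format):
machine api.heroku.com
  login my-email@example.com
  password my-heroku-api-token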
That is all.
I'm writing a Perl script in which I have to shut down my MSSQL server, do some operation, and then restart it. I know one way is to use net stop to stop the service, but I can't use that. So I tried installing the DBI and DBD::ODBC modules.
More info here: Shutdown MSSQL server from perl script DBI
But when I try to shut down my server using this command
$dbh->prepare("SHUTDOWN WITH NOWAIT ");
it's not working for me.
I got this response from the community
SHUTDOWN permissions are assigned to members of the sysadmin and serveradmin fixed server roles, and they are not transferable. I'd consider it unlikely (hopefully) that Perl is run with these rights.
So please tell me, is there a way to run the above command as one of these users? Or what can I do other than this? Note that I have the constraint that I can't simply stop it as a Windows service.
If the scripts are executed through a web browser then the user executing the scripts will be defined by the web server. It will probably not be a good idea to fiddle with this user. Just leave things as they are.
What you can do is create a Perl script that is run by a privileged user on a regular basis with CRON.
This script, run by CRON, can check for specific content, such as a file that has been written by another script whose executing user has lesser privileges.
So the way it could work is as follows:
You execute browser.cgi through a browser to do a specific task.
browser.cgi writes instructions to a file.
Every minute, privileged.cgi executes via CRON. (The root user could execute privileged.cgi.)
privileged.cgi reads the file browser.cgi has written for instructions, and starts or stops services according to those instructions.
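A rough, untested sketch of what the privileged side could look like (the instruction-file path, the ODBC DSN and the credentials are all placeholders; the essential point is that this script runs under a login that is in the sysadmin or serveradmin role):
# privileged.cgi - run periodically by CRON (or Task Scheduler) under a privileged account
use strict;
use warnings;
use DBI;

my $instruction_file = '/path/to/instructions.txt';   # placeholder: written by browser.cgi
exit 0 unless -e $instruction_file;

open my $fh, '<', $instruction_file or die "cannot read $instruction_file: $!";
chomp(my $instruction = <$fh> // '');
close $fh;
unlink $instruction_file;   # consume the instruction so it is only acted on once

if ($instruction eq 'shutdown') {
    # connect as a login that belongs to the sysadmin or serveradmin role
    my $dbh = DBI->connect('dbi:ODBC:MSSQL_DSN', 'privileged_login', 'password',
                           { PrintError => 1 });
    # do() prepares and executes the statement (prepare() alone never runs it)
    $dbh->do('SHUTDOWN WITH NOWAIT') if $dbh;
}
browser.cgi then only has to write the word shutdown (or whatever other instruction you define) into that file, and never needs elevated rights itself.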