How to initialize a PostgreSQL database without running the PostgreSQL server

In an initialization script, I want to initialize a PostgreSQL data directory, but I don't need (and don't want) a running PostgreSQL server at this stage.
This would be a no-brainer if I just had to create the cluster (as user postgres):
initdb -D ...
However, I also need to create the PostgreSQL role, create the database and add some extensions (also as user postgres):
createuser someuser
createdb -O someuser somedb
echo 'CREATE EXTENSION xyz;' | psql somedb
The latter commands require a running PostgreSQL server. So this whole thing becomes quite messy:
initdb -D ...
# Start PostgreSQL server in background
... &
# Wait in a loop until PostgreSQL server is up and running
while ! psql -f /dev/null template1; do
sleep 0.5
done
createuser someuser
createdb -O someuser somedb
echo 'CREATE EXTENSION xyz;' | psql somedb
# Kill PostgreSQL server
kill ...
# Wait until the process is really killed
sleep 2
The part that waits for the PostgreSQL server, especially, is never 100% reliable. I tried lots of variants, and each of them failed in roughly 1 of 20 runs. Killing that process may not be 100% reliable in a simple shell script either, let alone ensuring that it has stopped correctly.
I believe this is a standard problem that occurs in all use cases involving bootstrapping a server or preparing a VM image. So one would expect that in the year 2016 there should be some existing, reliable tooling for that. So my questions are:
Is there a simpler and more reliable way to achieve this?
For example, is there a way to run a PostgreSQL server in some special mode, where it just starts up, executes certain SQL commands, and quits immediately after the last SQL command finishes?
As a rough idea, is there something from the internal PostgreSQL test suite that can be reused for this purpose?

You are looking for single-user mode.
If you start PostgreSQL like that, you are in a session connected as superuser that waits for SQL statements on standard input. As soon as you disconnect (with end-of-file), the server process stops.
So you could do it like this (with bash):
postgres --single -D /usr/local/pgsql/data postgres <<-"EOF"
CREATE USER ...;
CREATE DATABASE somedb ...;
EOF
postgres --single -D /usr/local/pgsql/data somedb <<-"EOF"
CREATE EXTENSION ...;
EOF
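Put together with the steps from your question, the whole bootstrap then shrinks to something like this sketch (untested as a whole; the data directory path and the someuser/somedb/xyz names are just the placeholders from above):
#!/bin/bash
set -e
PGDATA=/usr/local/pgsql/data
initdb -D "$PGDATA"
# Single-user mode reads SQL on standard input and shuts the backend
# down cleanly at end-of-file, so there is no background server to
# wait for or kill.
postgres --single -D "$PGDATA" postgres <<"EOF"
CREATE USER someuser;
CREATE DATABASE somedb OWNER someuser;
EOF
postgres --single -D "$PGDATA" somedb <<"EOF"
CREATE EXTENSION xyz;
EOF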

Related

Ansible shutdown and startup of Oracle DB on Debian host

A DB administrator has given me the following commands to stop and then start an Oracle DB running on Debian (10):
stop db:
sudo su - <DBadminName>
lsnrctl status
lsnrctl stop
sqlplus / as sysdba
shut immediate;
start db:
sqlplus / as sysdba
startup;
lsnrctl start
We manage all of the servers in this infrastructure with Ansible, but we have thus far not done any direct interactions with the Oracle DB from Ansible.
We are being asked for additional automation and this stop/start is currently being done manually.
Can this db stop/start process be automated with Ansible?
You will find real-world code examples on GitHub. Make sure you have an account.
# github code search function - 'ghc'
declare -f ghc
ghc ()
{
args=("$#");
SEARCH_STRING_PLUSSIGN=$(printf '%s' "${args[#]/%/+}");
open "$(echo "https://github.com/search?q=${SEARCH_STRING_PLUSSIGN%?}&type=code")"
}
# fire it and the code will be nicely highlighted in your browser
ghc ansible oracle shutdown immediate
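As a rough idea of where that search leads, here is an untested playbook sketch for the stop half; the host group, the db_admin_name variable, and the assumption that the admin user's login environment provides ORACLE_HOME and PATH are all mine, not from your setup:
- name: Stop Oracle listener and database
  hosts: oracle_db_servers
  become: true
  become_user: "{{ db_admin_name }}"   # the <DBadminName> account
  tasks:
    - name: Stop the listener
      ansible.builtin.command: lsnrctl stop
    - name: Shut down the database
      ansible.builtin.shell: |
        sqlplus -S / as sysdba <<'EOF'
        shutdown immediate;
        exit;
        EOF
The start playbook would mirror it, running startup and then lsnrctl start in the reverse order.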
Best of luck!

SQL*Plus: defining CONNECT_IDENTIFIER in the glogin.sql file

I am connecting to a remote Oracle DB with SQL*Plus. I use a command line like this to achieve it:
sqlplus user/password@1.1.1.1/orcl
Is it possible to store the login parameters in the glogin.sql file and just run a plain sqlplus?
SQL*Plus executes the script in the glogin.sql file automatically, but I am having a hard time setting the login parameters there, i.e.:
USER="user";
PASSWORD="password";
SERVER_HOST="1.1.1.1";
SERVICE_NAME="orcl";
As described in the documentation, glogin.sql is a site profile, so you only want it to contain commands that apply to all users; but there is a login.sql user profile for commands specific to a single user.
Anyway, I thought this might be possible (but inadvisable) by having the following line at the start of your login.sql:
connect user/password@1.1.1.1:1521/orcl
and then launching SQL*Plus with:
sqlplus /nolog
but as the docs also say, the profile files are run both when /nolog is used and after a connection, so this just causes a loop:
$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 22 09:20:41 2019
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
Connected.
SP2-0309: SQL*Plus command procedures may only be nested to a depth of 20.
Connected.
SP2-0309: SQL*Plus command procedures may only be nested to a depth of 20.
SP2-0309: SQL*Plus command procedures may only be nested to a depth of 20.
SQL>
You are actually connected successfully at that point:
...
Connected.
SP2-0309: SQL*Plus command procedures may only be nested to a depth of 20.
Connected.
SP2-0309: SQL*Plus command procedures may only be nested to a depth of 20.
SP2-0309: SQL*Plus command procedures may only be nested to a depth of 20.
SQL> select sysdate from dual;
SYSDATE
------------------
22-AUG-19
SQL>
but that is quite unpleasant.
A simpler alternative is to have a shell script wrapper around SQL*Plus, e.g. sql.sh:
USER="user";
PASSWORD="password";
SERVER_HOST="1.1.1.1";
SERVER_PORT="1521";
SERVICE_NAME="orcl";
/path/to/sqlplus ${USER}/${PASSWORD}#//${SERVER_HOST}:${SERVER_PORT}/${SERVICE_NAME}
and run that shell script instead.
But storing plain-text credentials in files is a bad idea, however well-protected you think the files are; and providing the password on the SQL*Plus command line means it is visible to other OS users via ps. You should at least remove the password from the script; SQL*Plus will then prompt the user for it at login time. (Assuming the script will be run interactively, of course.)
You might also want to look at Oracle Wallet, which can store the credentials securely so that they do not have to appear in scripts at all.
And you could store the database settings - host, port and service name - in tnsnames.ora so you can connect more simply with a TNS alias (with or preferably without the password on the command line):
sqlplus user@my_alias
If you can't modify the global tnsnames.ora, you can create your own version anywhere and set the TNS_ADMIN environment variable to point to the directory it is in, so that your file is picked up instead of the global one.
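For illustration, a minimal tnsnames.ora entry for such an alias could look like this (host, port and service name taken from your question; the alias name is arbitrary):
my_alias =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 1.1.1.1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )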

Postgres: script to copy schema from internal server to deployment server, without entering passwords at every step

I want to copy my database schema (just the schema, not the data) from an internal server to an external server.
The problem I am facing is entering passwords at every step. Even though the steps to copy are pretty simple, I have not been able to write a script that automates the whole process.
What I have till now:
on internal server:
pg_dump -C -s --file=schema.txt {name}
scp schema.txt prakhar#{external server}:/home/prakhar
on external server:
dropdb {name}
createdb {name}
psql --file=schema.txt {name}
At each step I am prompted for a password.
I want to do two things:
1: Run the script from the external server to fetch the schema from the internal one, or the other way around.
2: Incorporate the passwords for both the internal and external servers in a way that the script takes care of them for me.
I would recommend wrapping those commands in bash scripts, and in each one, prior to running the commands, add the following line:
export PGPASSWORD=<password>
where <password> is the password you want to use. This will export it as an environment variable which is available to the Postgres commands.
The PostgreSQL documentation describes other methods, besides PGPASSWORD, of specifying the password, such as the ~/.pgpass file.
For *nix commands like scp, there are other options. One is sshpass. That would work well if you wanted to keep this all as a shell script.
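For example, the internal-server half could be wrapped roughly like this (an untested sketch; the database name, the external host and both passwords are placeholders):
#!/bin/bash
set -e
# Password for the internal Postgres server (placeholder)
export PGPASSWORD='internal-db-password'
pg_dump -C -s --file=schema.txt somedb
# sshpass supplies the scp password non-interactively (install it first)
sshpass -p 'ssh-password' scp schema.txt prakhar@external.example.com:/home/prakhar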
Another option, and the one I would probably use for this sort of thing, would be to scrap the shell script wrapper and instead use something like Python's Fabric.
With Fabric you can run commands using sudo, commands on remote machines, and shell commands like the Postgres utility programs (you would want to set PGPASSWORD in the environment hash within Fabric for that).
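As a sketch of that approach, assuming the Fabric 2.x API (the host name and password are placeholders):
from fabric import Connection

internal = Connection("internal.example.com", user="prakhar")
# Run pg_dump remotely with PGPASSWORD set in the command's environment,
# then fetch the resulting schema file to the local machine
internal.run("pg_dump -C -s --file=schema.txt somedb",
             env={"PGPASSWORD": "internal-db-password"})
internal.get("schema.txt", local="schema.txt")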

Log Batch File Status/Timings to SQL Server

I would like to enhance an existing bat file to log execution duration to a SQL Server table. The current bat file has a single line that calls a command line utility.
I thought I would leverage something like this, SQL Statements in a Windows Batch File. Pseudo code:
StartTime = Now()
hyperioncommandlineshell.cmd /a:parm1 /b:parm2 /c:parm3
sqlcmd.exe -b -S myhost -E -d mydatabase -Q "Insert Into MyTable Values (Current_Timestamp, 'MyProcess', Now() - StartTime)" -W
Some questions:
The server that this bat file runs on doesn't have the SQL tools, and I see from this post that it does require an installation (you can't just copy over the sqlcmd.exe file). This will meet with resistance. Is there another way to execute a SQL statement from a batch file without having to install software?
I don't have experience with BAT files. Can someone provide guidance on how to get the duration of a process (like grabbing the start time, and calculating the difference at the end)?
I would probably try using another tool I'm more familiar with, but I'm trying to do this in bat so that the change only affects one existing object, and doesn't require additional objects.
Windows computers come with ODBC drivers already installed, so you likely have an ODBC driver for SQL Server. If so, you might be able to get Microsoft's osql utility to run T-SQL statements from DOS. Here are the docs for it on MSDN:
http://msdn.microsoft.com/en-us/library/aa214012(v=SQL.80).aspx
It was designed for SQL Server 2000, so there may be some issues connecting to later versions of SQL Server, but it is worth a try. If it works, you won't have to install anything special to connect to your SQL Server (though you may need to create an ODBC data source name for the server). On Windows Vista and later, click Start and type ODBC to open the ODBC Data Source Administrator.
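If osql does work against your server, the call would mirror the sqlcmd line from your pseudo-code, e.g. (untested, reusing your placeholder names):
osql -S myhost -E -d mydatabase -Q "Insert Into MyTable Values (Current_Timestamp, 'MyProcess')"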
Using SQLCMD will require that you install the Native Client, or at least SNAC (discussion thread: http://us.generation-nt.com/answer/how-install-only-sqlcmd-exe-utility-help-87134732.html), to run SQLCMD without installing the entire Native Client (though SNAC itself still needs to be installed). I hadn't heard of SNAC before, so that will take a bit of research. I assume installing anything will be met with the same resistance, so if you can overcome that, installing the Native Client is probably your best bet.
As for the elapsed time: you can use %DATE% %TIME% to get the current date/time. So you could use something like the following to capture the start time, run your process, and then capture the end time, posting them all to the database:
set StartTime=%DATE% %TIME%
hyperioncommandlineshell.cmd /a:parm1 /b:parm2 /c:parm3
set EndTime=%DATE% %TIME%
sqlcmd.exe -b -S myhost -E -d mydatabase -Q "Insert Into MyTable Values ('%StartTime%', 'MyProcess', '%EndTime%')" -W
You won't be able to do the StartTime - EndTime computation with DOS itself, but you can store both the start and end times in the table and use SQL to do it.
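For example, assuming the two values land in columns named StartTime and EndTime (hypothetical names, as is ProcessName; adjust to your table), something along these lines would compute the duration afterwards:
Select DateDiff(second, Convert(datetime, StartTime), Convert(datetime, EndTime)) As DurationSeconds
From MyTable
Where ProcessName = 'MyProcess'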
The format of %DATE% and %TIME% is based on the format the machine is set up to use. You can type echo %DATE% %TIME% at a DOS prompt to see how it is formatted for you. You will likely have to store these values in varchar fields, since the format may not automatically convert to a datetime value. If it does automatically convert, then you could do the computation in the SQL statement from DOS, like this:
sqlcmd.exe -b -S myhost -E -d mydatabase -Q "Insert Into MyTable Values ('%EndTime%' - '%StartTime%', 'MyProcess')" -W
(FYI - I used your pseudo-code for all examples, so nothing is tested.)

How to execute the same SP in 2 different connections

How can I execute the same SP in 2 different connections?
Ex: ALTER PROCEDURE test
...
....
I want to execute this SP in a DB called "database1" on 192.168.1.100 and the same on 192.168.1.102.
I want this to be done using a script, not the Change Connection window.
You can use SQLCMD to run a .sql file against multiple server connections.
sqlcmd -S <ComputerName>\<InstanceName> -i <MyScript.sql> -d <database_name> -E
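For the two servers from the question, a small batch file could simply loop over them, e.g. (an untested sketch that assumes Windows authentication and the default instance):
:: Run the same script against both servers
for %%S in (192.168.1.100 192.168.1.102) do (
  sqlcmd -S %%S -d database1 -E -i MyScript.sql
)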
You can also do that in SSMS Tools Pack, using one of its features called "Run one script on multiple databases". It is a tiny, free add-in for SQL Server Management Studio that you would find extremely useful.
